1 - 7 of 7
1.
Animals (Basel) ; 14(8)2024 Apr 19.
Article En | MEDLINE | ID: mdl-38672375

An accurate pig inventory is a crucial component of precise, large-scale farming. In complex pigsty environments, pigs' stress reactions and frequent occlusion make it challenging to count them accurately and automatically. This difficulty contrasts with most current deep learning studies, which rely on overhead views or static images for counting. This research proposes a video-based dynamic counting method that combines YOLOv7 with DeepSORT. Building on the YOLOv7 network structure, the second and third 3 × 3 convolution operations in the ELAN-W modules of the head network are replaced with PConv, reducing the computational demand and improving the inference speed without sacrificing accuracy. To ensure that the network acquires accurate positional information at oblique angles and extracts rich semantic information, the coordinate attention (CA) mechanism is introduced before the three re-parameterization convolution (RepConv) paths in the head network, enhancing robustness in complex scenarios. Experimental results show that, compared to the original model, the improved model increases the mAP by 3.24, 0.05, and 1.00 percentage points on the oblique-view, overhead-view, and combined pig-counting datasets, respectively, while reducing the computational cost by 3.6 GFLOPS. The enhanced YOLOv7 outperforms YOLOv5, YOLOv4, YOLOv3, Faster R-CNN, and SSD in target detection, with mAP improvements of 2.07, 5.20, 2.16, 7.05, and 19.73 percentage points, respectively. In dynamic counting experiments, the improved YOLOv7 combined with DeepSORT was tested on videos with total pig counts of 144, 201, 285, and 295, yielding errors of -3, -3, -4, and -26, respectively, with an average counting accuracy of 96.58% at 22 FPS. This demonstrates the model's capability to count pigs in real time across various scenes, providing data and references for automated pig counting research.
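
Below is a minimal PyTorch sketch of a coordinate attention (CA) block of the kind referenced in this abstract. The exact configuration used in the improved YOLOv7 head (placement before the RepConv paths, channel counts, reduction ratio) is not given in the abstract, so the values here are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention block: global pooling is factorized into
    height-wise and width-wise pooling so the attention maps retain
    positional information along both axes."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)          # reduction ratio is an assumption
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                          # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (N, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)              # (N, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)                 # (N, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))         # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w))         # attention along width
        return x * a_h * a_w                          # broadcast over H and W
```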

2.
Sensors (Basel) ; 24(5)2024 Mar 03.
Article En | MEDLINE | ID: mdl-38475189

Wheat seed detection has important applications in calculating thousand-grain weight and in crop breeding. To address seed accumulation, adhesion, and occlusion, which lower counting accuracy, while keeping detection both fast and accurate, a wheat seed counting method is proposed to provide technical support for the development of an embedded seed-counter platform. This study proposes a lightweight real-time wheat seed detection model, YOLOv8-HD, based on YOLOv8. Firstly, shared convolutional layers are introduced into the YOLOv8 detection head, reducing the number of parameters and yielding a lightweight design with faster runtime. Secondly, a Vision Transformer with a Deformable Attention mechanism is incorporated into the C2f module of the backbone network to enhance the network's feature extraction capability and improve detection accuracy. The results show that in stacked scenes with impurities (severe seed adhesion), the YOLOv8-HD model achieves a mean average precision (mAP) of 77.6%, which is 9.1% higher than YOLOv8. Across all scenes, YOLOv8-HD achieves an mAP of 99.3%, which is 16.8% higher than YOLOv8. The memory size of the YOLOv8-HD model is 6.35 MB, approximately four-fifths that of YOLOv8, and its GFLOPs are 16% lower. The inference time of YOLOv8-HD is 2.86 ms (on GPU), lower than that of YOLOv8. Finally, extensive experiments showed that YOLOv8-HD outperforms other mainstream networks in terms of mAP, speed, and model size. Therefore, YOLOv8-HD can efficiently detect wheat seeds in various scenarios, providing technical support for the development of seed counting instruments.


Plant Breeding; Triticum; Semen Analysis; Cell Count; Seeds
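
The abstract above does not publish the YOLOv8-HD head, so the following is only a hedged sketch of the shared-convolution idea it describes: one stack of convolutions is built once and reused across all pyramid levels, so head parameters do not grow with the number of levels. Channel counts, feature-map sizes, and the number of outputs are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SharedConvHead(nn.Module):
    """Illustrative detection head in which one pair of 3x3 conv layers
    is shared across all pyramid levels instead of being duplicated
    per level (the lightweighting idea described in the abstract)."""
    def __init__(self, in_channels, num_outputs):
        super().__init__()
        # the shared stack is built once and reused for every level
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
        )
        self.predict = nn.Conv2d(in_channels, num_outputs, 1)

    def forward(self, features):
        # features: list of maps from P3/P4/P5, each (N, C, Hi, Wi)
        return [self.predict(self.shared(f)) for f in features]

# usage on dummy pyramid features (channel count and sizes are assumptions)
feats = [torch.randn(1, 128, s, s) for s in (80, 40, 20)]
outs = SharedConvHead(128, num_outputs=5)(feats)
print([o.shape for o in outs])
```
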
3.
Animals (Basel) ; 13(13)2023 Jul 03.
Article En | MEDLINE | ID: mdl-37443979

In the pig farming environment, complex factors such as pig adhesion, occlusion, and changes in body posture pose significant challenges for segmenting multiple target pigs. To address these challenges, this study collected video data using a horizontal angle of view and a non-fixed lens. Specifically, a total of 45 pigs aged 20-105 days in 8 pens were selected as research subjects, yielding 1917 labeled images that were divided into 959 for training, 192 for validation, and 766 for testing. A grouped attention module was employed in the feature pyramid network to fuse the feature maps from deep and shallow layers. The grouped attention module consists of a channel attention branch and a spatial attention branch. The channel attention branch models dependencies between channels to strengthen feature mapping between related channels and improve semantic feature representation. The spatial attention branch establishes pixel-level dependencies by applying the response values of all pixels in a single-channel feature map to the target pixel, guiding the original feature map to filter spatial location information and generate context-related outputs. The grouped attention, along with data augmentation strategies, was incorporated into the Mask R-CNN and Cascade Mask R-CNN task networks to explore their impact on pig segmentation. The experiments showed that introducing data augmentation strategies improved the segmentation performance of the model to a certain extent. Taking Mask R-CNN as an example, under the same experimental conditions, the introduction of data augmentation strategies resulted in improvements of 1.5%, 0.7%, 0.4%, and 0.5% in the metrics AP50, AP75, APL, and AP, respectively. Furthermore, the grouped attention module achieved the best performance: with Mask R-CNN, it outperformed the existing attention module CBAM by 1.0%, 0.3%, 1.1%, and 1.2% on AP50, AP75, APL, and AP, respectively. The impact of the number of groups in the grouped attention on the final segmentation results was also studied. Additionally, visualized predictions on third-party data collected with a top-down acquisition method, which was not involved in model training, showed that the proposed model still achieved good segmentation results, demonstrating the transferability and robustness of the grouped attention. Through comprehensive analysis, we found that grouped attention is beneficial for achieving high-precision segmentation of individual pigs across different scenes, ages, and time periods. The research results can provide references for subsequent applications such as pig identification and behavior analysis in mobile settings.
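
The abstract does not give the internal design of the grouped attention branches, so the sketch below is an interpretation: channels are split into groups, and each group receives a squeeze-and-excitation-style channel gate and a CBAM-style spatial gate as stand-ins for the channel and spatial branches described above. Group count, reduction ratio, and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class GroupedAttention(nn.Module):
    """Sketch of a grouped attention block: channels are split into groups,
    and each group is refined by a channel gate and a spatial gate before
    the groups are merged back together."""
    def __init__(self, channels, groups=2, reduction=16):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        gc = channels // groups
        mid = max(1, gc // reduction)
        # channel branch: squeeze-and-excitation style gating per group
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(gc, mid, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, gc, 1),
            nn.Sigmoid(),
        )
        # spatial branch: pixel-wise map from pooled channel statistics
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, h, w = x.shape
        g = self.groups
        x = x.reshape(n * g, c // g, h, w)            # treat each group separately
        x = x * self.channel_gate(x)                  # channel attention
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        x = x * self.spatial_gate(torch.cat([avg, mx], dim=1))  # spatial attention
        return x.reshape(n, c, h, w)
```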

4.
Animals (Basel) ; 13(9)2023 May 06.
Article En | MEDLINE | ID: mdl-37174592

To explore the application of a traditional machine learning model in the intelligent management of pigs, this paper studies the influence of principal component analysis (PCA) pre-treatment on pig face identification with a random forest (RF) classifier. Through testing, the parameters of the two schemes, one adopting RF alone and the other adopting RF + PCA, were determined to be 65 and 70, respectively. In individual identification tests carried out on 10 pigs, accuracy, recall, and F1-score increased by 2.66, 2.76, and 2.81 percentage points, respectively. Apart from a slight increase in training time, the test time was reduced to 75% of that of the original scheme, greatly improving the efficiency of the optimized scheme. These results indicate that PCA pre-treatment improves the efficiency of individual pig identification with RF and provide experimental support for deploying RF classifiers on mobile terminals and embedded devices.
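
A minimal scikit-learn sketch of the two schemes compared above (RF alone vs. PCA followed by RF). The dataset, the number of PCA components, and the number of trees are placeholders, not the values tuned in the study.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.datasets import load_digits  # stand-in for extracted pig-face features

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# scheme 1: RF alone; scheme 2: PCA pre-treatment followed by RF
rf_only = RandomForestClassifier(n_estimators=100, random_state=0)
pca_rf = make_pipeline(PCA(n_components=30),
                       RandomForestClassifier(n_estimators=100, random_state=0))

for name, model in [("RF", rf_only), ("PCA+RF", pca_rf)]:
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), digits=3))
```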

5.
Plants (Basel) ; 12(9)2023 Apr 26.
Article En | MEDLINE | ID: mdl-37176827

Intelligent detection is vital for achieving intelligent picking of daylily, but complex field environments pose challenges due to branch occlusion, overlapping plants, and uneven lighting. To address these challenges, this study built an intelligent daylily detection model based on YOLOv5s: the depth and width parameters of the network were optimized, and the lightweight Ghost, Transformer, and MobileNetv3 networks were used to improve the CSPDarknet backbone, progressively improving the model's performance. The experimental results show that the original YOLOv5s model improved mean average precision (mAP) by 49%, 44%, and 24.9% over the YOLOv4, SSD, and Faster R-CNN models, respectively; optimizing the depth and width parameters of the network raised the mAP of the original YOLOv5s model by a further 7.7%; and the YOLOv5s model with a Transformer backbone increased mAP by 0.2% and inference speed by 69% compared with the model after network parameter optimization. The final optimized YOLOv5s model achieved a precision of 81.4%, a recall of 74.4%, an mAP of 78.1%, and an inference speed of 93 frames per second (FPS), enabling accurate and fast detection of daylily in complex field environments. The research results can provide data and experimental references for developing intelligent picking equipment for daylily.
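
Of the three lightweight backbones mentioned (Ghost, Transformer, MobileNetv3), the Ghost module lends itself to a short sketch. The PyTorch code below follows the commonly published GhostNet formulation, not the authors' code; the ratio and kernel size are assumptions.

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of the Ghost idea used to lighten a backbone: a small
    ordinary convolution produces a few 'intrinsic' feature maps, and
    cheap depthwise convolutions generate the remaining 'ghost' maps."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = math.ceil(out_ch / ratio)
        new_ch = init_ch * (ratio - 1)
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, new_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),   # depthwise: one cheap filter per map
            nn.BatchNorm2d(new_ch),
            nn.ReLU(inplace=True),
        )
        self.out_ch = out_ch

    def forward(self, x):
        y1 = self.primary(x)
        y2 = self.cheap(y1)
        return torch.cat([y1, y2], dim=1)[:, :self.out_ch]

# usage on a dummy feature map (shapes are assumptions)
print(GhostModule(64, 128)(torch.randn(1, 64, 40, 40)).shape)
```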

6.
Cell Physiol Biochem ; 43(2): 636-643, 2017.
Article En | MEDLINE | ID: mdl-28942448

BACKGROUND: MiR-134 is enriched in dendrites of hippocampal neurons and plays a crucial role in the progression of epilepsy. The present study aims to investigate the effects of antagomirs targeting microRNA-134 (Ant-134) on limk1 expression and on the binding of miR-134 to limk1 in experimental seizures. METHODS: A status epilepticus (SE) rat model was established by lithium chloride-pilocarpine injection and treated with Ant-134 by intracerebroventricular injection. Low Mg2+-exposed primary neurons were used as an in vitro model of SE. The expression of miR-134 was determined using real-time PCR. Protein expression of limk1 and cofilin was determined by Western blotting. A luciferase reporter assay was used to examine the binding between miR-134 and the limk1 3'-untranslated region (3'-UTR). RESULTS: The expression of miR-134 was markedly enhanced in the hippocampus of SE rats and in low Mg2+-exposed neurons. Ant-134 increased the expression of limk1 and reduced the expression of cofilin in the SE hippocampus and in low Mg2+-exposed neurons. In addition, the luciferase reporter assay confirmed that miR-134 binds the limk1 3'-UTR. MiR-134 overexpression inhibited limk1 mRNA and protein expression in neurons. CONCLUSION: Blockade of miR-134 upregulates limk1 expression and downregulates cofilin expression in the hippocampus of SE rats. This mechanism may contribute to the neuroprotective effects of Ant-134.


Antagomirs/therapeutic use; Lim Kinases/genetics; MicroRNAs/genetics; Seizures/therapy; Status Epilepticus/therapy; Up-Regulation; Animals; Cells, Cultured; Genetic Therapy; Hippocampus/metabolism; Hippocampus/pathology; Male; Neurons/metabolism; Neurons/pathology; Rats, Sprague-Dawley; Seizures/genetics; Seizures/pathology; Status Epilepticus/genetics; Status Epilepticus/pathology
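
The abstract above states that miR-134 expression was determined by real-time PCR but does not give the quantification method; the snippet below shows the standard 2^-ΔΔCt fold-change calculation as an assumed illustration, with invented Ct values purely for demonstration.

```python
# Hedged illustration: the 2^-ΔΔCt method is the usual way relative
# expression is derived from real-time PCR Ct values; it is not stated
# in the abstract, and the numbers here are invented for demonstration.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method."""
    delta_treated = ct_target_treated - ct_ref_treated   # ΔCt in treated sample
    delta_control = ct_target_control - ct_ref_control   # ΔCt in control sample
    return 2 ** -(delta_treated - delta_control)         # 2^-ΔΔCt

# example: a lower Ct for the target in SE tissue implies higher expression
print(fold_change(22.0, 18.0, 25.0, 18.0))  # -> 8.0 (eight-fold increase)
```
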
7.
Front Pharmacol ; 8: 524, 2017.
Article En | MEDLINE | ID: mdl-28848439

The effects of existing anti-epileptic drugs are unsatisfactory in almost one third of epileptic patients. MiR-134 antagomirs prevent pilocarpine-induced status epilepticus. In this study, a lithium chloride-pilocarpine-induced status epilepticus model was established and treated with intracerebroventricular injection of antagomirs targeting miR-134 (Ant-134). The Ant-134 treatment significantly improved the performance of rats in Morris water maze tests, inhibited mossy fiber sprouting in the dentate gyrus, and increased the number of surviving neurons in the hippocampal CA1 region. Silencing of miR-134 markedly decreased malondialdehyde and 4-hydroxynonenal levels and increased superoxide dismutase activity in the hippocampus. The Ant-134 treatment also significantly increased ATP production and the activities of the mitochondrial respiratory enzyme complexes, and significantly decreased reactive oxygen species generation in the hippocampus compared with status epilepticus rats. Finally, the Ant-134 treatment markedly downregulated the hippocampal expression of the autophagy-associated proteins Atg5, beclin-1, and light chain 3B. In conclusion, Ant-134 attenuates epilepsy by inhibiting oxidative stress, improving mitochondrial function, and regulating autophagy in the hippocampus.

...