ABSTRACT
Pig counting is an important task in pig sales and breeding supervision. Currently, manual counting is inefficient and costly and presents challenges for statistical analysis. To address the difficulties of pig part feature detection, tracking loss due to rapid movement, and large counting deviations in pig video tracking and counting research, this paper proposes an improved pig counting algorithm, the Mobile Pig Counting Algorithm with YOLOv5xpig and DeepSORTPig (MPC-YD), based on the YOLOv5 + DeepSORT model. The algorithm improves the detection rate of pig body parts by adding two SPP networks of different sizes and replacing MaxPool with SoftPool operations in YOLOv5x. In addition, the algorithm adds a pig re-identification network, a pig-tracking method based on spatial state correction, and a pig counting method based on frame-number judgment to the DeepSORT algorithm to improve tracking accuracy. Experimental analysis shows that the MPC-YD algorithm achieves an average precision of 99.24% in pig object detection and an accuracy of 85.32% in multi-target pig tracking. In the aisle environment of a slaughterhouse, the MPC-YD algorithm achieves a coefficient of determination (R²) of 98.14% in counting pigs from video, and it counts pigs stably in a breeding environment. The algorithm has broad application prospects.
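The SoftPool substitution described above can be illustrated with a minimal sketch. This shows the general SoftPool idea (a softmax-weighted average over each pooling window), not the paper's YOLOv5x implementation; the 2 × 2 window and the toy feature map are assumptions for illustration:

```python
import math

def softpool2x2(x):
    """SoftPool over non-overlapping 2x2 windows of a 2D feature map.

    Each output is the softmax-weighted average of its window:
    w_i = exp(a_i) / sum_j exp(a_j);  out = sum_i w_i * a_i.
    Unlike MaxPool, every activation contributes, weighted by its magnitude,
    so fine detail is preserved while strong responses still dominate.
    """
    out = []
    for i in range(0, len(x), 2):
        row = []
        for j in range(0, len(x[0]), 2):
            window = [x[i + di][j + dj] for di in (0, 1) for dj in (0, 1)]
            weights = [math.exp(a) for a in window]
            total = sum(weights)
            row.append(sum(w * a for w, a in zip(weights, window)) / total)
        out.append(row)
    return out

fmap = [[1.0, 2.0, 0.0, 0.0],
        [3.0, 4.0, 0.0, 0.0],
        [1.0, 1.0, 5.0, 5.0],
        [1.0, 1.0, 5.0, 5.0]]
pooled = softpool2x2(fmap)  # 2x2 output; top-left value lies between the
                            # window mean (2.5) and its max (4.0)
```

MaxPool would return 4.0 for the top-left window and discard the other three activations entirely; SoftPool keeps their contribution, which is why it helps with small part features.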
Subjects
Slaughterhouses, Algorithms, Swine, Animals, Commerce, Judgment

ABSTRACT
To address the problem that duck egg mortality is not easily detected at mid-incubation, this paper explored a method to detect mid-incubation egg activity information based on temperature drop curve (TDC) features. We used a thermal infrared camera to obtain continuous thermal images of dead fertilized duck eggs (DFDE) on the 16th day of incubation and alive fertilized duck eggs (AFDE) incubated for 16-19 days in a 20 °C environment. By observing the temperature drop curve of the egg surface, we extracted and visualized five features that reflect the activity information of duck eggs. We then used K-Nearest Neighbor (KNN), Naive Bayes (NB), and Support Vector Machine (SVM) classifiers to establish activity information detection models for different incubation days. The results showed that KNN better distinguished egg activity on the 16th and 17th days of incubation, with F1-scores of 85.43% and 85.98%, respectively. The SVM showed better results on the 18th and 19th days of incubation, with F1-scores of 90.57% and 96.30%, respectively. The experimental results demonstrated that the activity detection method based on temperature drop curve features can efficiently and nondestructively detect the activity information of mid-incubation duck eggs, providing a technical foundation for detecting duck egg activity at mid-incubation.
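The abstract does not enumerate the five temperature-drop-curve features, so the sketch below uses illustrative stand-ins (start and end temperature, total drop, mean drop rate, and a fitted cooling slope); the sampling interval and temperature values are hypothetical:

```python
def tdc_features(temps, dt=1.0):
    """Illustrative features from an egg-surface temperature drop curve.

    temps: surface temperatures (deg C) sampled at a fixed interval dt (min).
    A live embryo's metabolic heat slows cooling, so the drop magnitude and
    rate tend to separate live eggs from dead ones.
    """
    n = len(temps)
    xs = [i * dt for i in range(n)]
    mx, my = sum(xs) / n, sum(temps) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, temps))
             / sum((x - mx) ** 2 for x in xs))  # least-squares cooling slope
    return {
        "start_temp": temps[0],
        "end_temp": temps[-1],
        "total_drop": temps[0] - temps[-1],
        "mean_drop_rate": (temps[0] - temps[-1]) / ((n - 1) * dt),
        "fit_slope": slope,
    }

dead = [37.5, 35.0, 32.8, 30.9, 29.2]   # cools quickly toward 20 C ambient
alive = [37.5, 36.4, 35.5, 34.7, 34.0]  # metabolic heat slows the drop
```

Feature vectors like these would then be fed to the KNN, NB, or SVM classifiers described above.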
Subjects
Ducks, Eggs, Animals, Temperature, Bayes Theorem, Zygote

ABSTRACT
Pig tracking provides strong support for refined management in pig farms. However, long, continuous multi-pig tracking is still extremely challenging due to occlusion, distortion, and motion blur in real farming scenarios. This study proposes a long-term video tracking method for group-housed pigs based on improved StrongSORT, which can significantly improve the performance of pig tracking in production scenarios. In addition, this research constructs a 24 h pig tracking video dataset, providing a basis for exploring the effectiveness of long-term tracking algorithms. For object detection, a lightweight pig detection network, YOLO v7-tiny_Pig, improved from YOLO v7-tiny, is proposed to reduce model parameters and improve detection speed. To address the target association problem, the trajectory management of StrongSORT is optimized according to the characteristics of the pig tracking task to reduce tracking identity (ID) switches and improve the stability of the algorithm. The experimental results show that YOLO v7-tiny_Pig maintains detection applicability while reducing parameters by 36.7% compared to YOLO v7-tiny and achieving an average video detection speed of 435 frames per second. For pig tracking, Higher-Order Tracking Accuracy (HOTA), Multi-Object Tracking Precision (MOTP), and Identification F1 (IDF1) scores reach 83.16%, 97.6%, and 91.42%, respectively. Compared with the original StrongSORT algorithm, HOTA and IDF1 improve by 6.19% and 10.89%, respectively, and Identity Switches (IDSW) are reduced by 69%. Our algorithm can track pigs continuously in real scenarios for up to 24 h. This method provides technical support for non-contact automatic pig monitoring.
ABSTRACT
Smart farming technologies that track and analyze pig behaviors in natural environments are critical for monitoring the health status and welfare of pigs. This study aimed to develop a robust multi-object tracking (MOT) approach, YOLOv8 + OC-SORT (V8-Sort), for the automatic monitoring of the different behaviors of group-housed pigs. We addressed common challenges such as variable lighting, occlusion, and clustering between pigs, which often lead to significant errors in long-term behavioral monitoring. Our approach offers a reliable solution for real-time behavior tracking, contributing to improved health and welfare management in smart farming systems. First, YOLOv8 is employed for the real-time detection and behavior classification of pigs under variable lighting and occlusion. Second, OC-SORT is used to track each pig, reducing the impact of pigs clustering together and of occlusion; when a target is lost during tracking, OC-SORT can recover the lost trajectory and re-track the target. Finally, to implement automatic long-term behavior monitoring for each pig, we created an automatic behavior analysis algorithm that integrates the behavioral information from detection with the tracking results from OC-SORT. On one-minute video datasets for pig tracking, the proposed MOT method outperforms JDE, TrackFormer, and TransTrack, achieving the highest HOTA, MOTA, and IDF1 scores of 82.0%, 96.3%, and 96.8%, respectively. It achieved 69.0% HOTA, 99.7% MOTA, and 75.1% IDF1 on sixty-minute video datasets. For pig behavior analysis, the proposed algorithm records the duration of four types of behaviors for each pig in each pen based on behavior classification and ID information, representing the pigs' health status and welfare.
These results demonstrate that the proposed method exhibits excellent performance in behavior recognition and tracking, providing technical support for prompt anomaly detection and health status monitoring for pig farming managers.
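The final step described above, fusing per-frame behavior labels with tracker IDs to accumulate per-pig behavior durations, can be sketched as follows. The data layout and frame rate are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def behavior_durations(frames, fps=25):
    """Fuse per-frame behavior labels with tracker IDs into durations.

    frames: one list per video frame of (track_id, behavior) tuples,
            as a detector + tracker pipeline might emit.
    Returns {track_id: {behavior: seconds}} so each pig's time budget
    (e.g. lying vs. feeding) can be monitored over long periods.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for detections in frames:
        for track_id, behavior in detections:
            counts[track_id][behavior] += 1           # one frame observed
    return {tid: {b: n / fps for b, n in bs.items()}  # frames -> seconds
            for tid, bs in counts.items()}

# three frames of a two-pig pen at 25 fps
frames = [[(1, "lying"), (2, "feeding")],
          [(1, "lying"), (2, "feeding")],
          [(1, "standing"), (2, "feeding")]]
durations = behavior_durations(frames)
```

Because the durations are keyed by track ID, the quality of this analysis depends directly on how few ID switches the tracker produces.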
ABSTRACT
The body mass of pigs is an essential indicator of their growth and health. Recently, contactless pig body mass estimation methods based on computer vision have gained attention thanks to their potential to improve animal welfare and ensure breeders' safety. Nonetheless, current methods require pigs to be restrained in a confinement pen, and no study has been conducted in an unconstrained environment. In this study, we develop a deep-learning pig mass estimation model capable of estimating body mass without constraints. Our model comprises a Mask R-CNN-based pig instance segmentation algorithm, a Keypoint R-CNN-based pig keypoint detection algorithm, and an improved ResNet-based pig mass estimation algorithm that includes multi-branch convolution, depthwise convolution, and an inverted bottleneck to improve accuracy. We constructed a dataset using images and body mass data from 117 pigs. Our model achieved an RMSE of 3.52 kg on the test set, lower than that of pig body mass estimation algorithms with ResNet or ConvNeXt as the backbone network, with an average estimation speed of 0.339 s·frame⁻¹. Our model can evaluate the body mass of pigs in real time to provide data support for grading and adjusting breeding plans, and it has broad application prospects.
ABSTRACT
Since it is difficult to accurately identify the fertilization and infertility status of multiple duck eggs on an incubation tray, and given the lack of easy-to-deploy detection models, a novel lightweight detection architecture (LDA) based on the YOLOX-Tiny framework is proposed in this paper to identify sterile duck eggs, with the aim of reducing model deployment requirements and improving detection accuracy. Specifically, the method acquires duck egg images with an acquisition device and augments the dataset using rotation, symmetry, and contrast-enhancement methods. Then, traditional convolution is replaced by depthwise separable convolution, which has fewer parameters, while a new CSP structure and backbone network structure further reduce the number of model parameters. Finally, to improve accuracy, the method adds an attention mechanism after the backbone network and uses the cosine annealing algorithm during training. An experiment was conducted on 2111 duck eggs, and 6488 duck egg images were obtained after data augmentation. On the test set of 326 duck egg images, the mean average precision (mAP) of the proposed method was 99.74%, better than the 94.92% of the unimproved YOLOX-Tiny network and better than the previously reported prediction accuracy of 92.06%. The model has only 1.93 M parameters, compared with 5.03 M for the YOLOX-Tiny network. Further, by analyzing concurrent detection on 3 × 5, 5 × 7, and 7 × 9 grids, the algorithm detected up to 7 × 9 = 63 eggs in a single pass. The method proposed in this paper significantly improves the efficiency and accuracy of single-step detection of breeder duck eggs, reduces the network size, and provides a suitable method for identifying sterile duck eggs on hatching egg trays. Therefore, the method has good application prospects.
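The parameter savings from replacing traditional convolution with depthwise separable convolution can be checked with simple arithmetic; the channel counts and kernel size below are illustrative, not those of the LDA network:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(c_in, c_out, k):
    """Depthwise separable = k x k depthwise conv + 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

standard = conv_params(64, 128, 3)           # 9 * 64 * 128 = 73728
separable = dw_separable_params(64, 128, 3)  # 576 + 8192   = 8768
ratio = separable / standard                 # roughly 12% of the parameters
```

A reduction of this order across the backbone is what makes the roughly 1.93 M vs. 5.03 M parameter gap plausible.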
ABSTRACT
In the field of livestock management, noncontact pig weight estimation has advanced considerably with the integration of computer vision and sensor technologies. However, real-world agricultural settings present substantial challenges for these estimation techniques, including variable lighting and the complexity of measuring pigs in constant motion. To address these issues, we developed the moving pig weight estimate algorithm based on deep vision (MPWEADV). This algorithm effectively uses RGB and depth images to accurately estimate the weight of pigs on the move. The MPWEADV employs the ConvNeXtV2 network for robust feature extraction and integrates a feature fusion module. Supported by a confidence map estimator, this module merges information from the RGB and depth modalities, improving the algorithm's accuracy in determining pig weight. On our test set, the MPWEADV achieved a root-mean-square error (RMSE) of 4.082 kg and a mean absolute percentage error (MAPE) of 2.383%. Comparative analyses with models replicating the latest research show the potential of the MPWEADV for unconstrained pig weight estimation. Our approach enables real-time assessment of pig condition, offering valuable data support for grading and adjusting breeding plans, and holds broad application prospects.
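The general idea of confidence-map-guided fusion of RGB and depth features can be sketched as a pixel-wise weighted blend; the MPWEADV's actual fusion module is more elaborate, and the toy feature maps below are assumptions:

```python
def fuse_features(rgb_feat, depth_feat, conf):
    """Confidence-guided pixel-wise fusion of two feature maps.

    Where the confidence map trusts the RGB modality (c close to 1) the
    RGB feature dominates; where it does not (c close to 0) the depth
    feature takes over: fused = c * rgb + (1 - c) * depth.
    """
    return [[c * r + (1.0 - c) * d
             for c, r, d in zip(crow, rrow, drow)]
            for crow, rrow, drow in zip(conf, rgb_feat, depth_feat)]

rgb   = [[1.0, 1.0], [1.0, 1.0]]
depth = [[0.0, 0.0], [0.0, 0.0]]
conf  = [[1.0, 0.5], [0.0, 0.25]]
fused = fuse_features(rgb, depth, conf)  # [[1.0, 0.5], [0.0, 0.25]]
```

Confidence-weighted blending of this kind lets the model fall back on depth cues where RGB is unreliable, for example under harsh or variable lighting.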
ABSTRACT
Broiler weighing is essential in the broiler farming industry. Camera-based systems can economically weigh various broiler types without expensive platforms. However, existing computer vision methods for weight estimation are less mature because they focus on young broilers; in practice, the estimation error increases with the age of the broiler. To tackle this, this paper presents a novel framework. First, it employs Mask R-CNN for instance segmentation of depth images captured by 3D cameras. Next, once the images of either a single broiler or multiple broilers are segmented, the extended artificial features and the learned features extracted by a Customized ResNet50 (C-ResNet50) are fused by a feature fusion module. Finally, the fused features are used to estimate the body weight of each broiler with a gradient boosting decision tree (GBDT). By integrating diverse features with GBDT, the proposed framework can effectively isolate each broiler instance among depth images of multiple broilers in the visual field despite the complex background. Experimental results show that this framework significantly boosts accuracy and robustness. With an MAE of 0.093 kg and an R² of 0.707 on a test set of 240 images of 63-day-old bantam chickens, it outperforms other methods.
ABSTRACT
BACKGROUND: Ochratoxin A (OTA) is a mycotoxin widely present in raw food and feed materials and is mainly produced by Aspergillus ochraceus and Penicillium verrucosum. Our previous study showed that OTA principally induces liver inflammation by causing intestinal flora disorder, especially Bacteroides plebeius (B. plebeius) overgrowth. However, whether OTA or B. plebeius alteration leads to abnormal tryptophan-related metabolism in the intestine and liver is largely unknown. This study aimed to elucidate the metabolic changes in the intestine and liver induced by OTA and the tryptophan-related metabolic pathway in the liver. MATERIALS AND METHODS: A total of 30 healthy 1-day-old male Cherry Valley ducks were randomly divided into 2 groups. The control group was given 0.1 mol/L NaHCO3 solution, and the OTA group was given 235 µg/kg body weight OTA for 14 consecutive days. Tryptophan metabolites were determined by intestinal chyme metabolomics and liver tryptophan-targeted metabolomics. AMPK-related signaling pathway factors were analyzed by Western blotting and mRNA expression. RESULTS: Metabolomic analysis of the intestinal chyme showed that OTA treatment resulted in a decrease in intestinal nicotinuric acid levels, the downstream product of tryptophan metabolism, which were significantly negatively correlated with B. plebeius abundance. In contrast, OTA induced a significant increase in indole-3-acetamide levels, which were positively correlated with B. plebeius abundance. Simultaneously, OTA decreased the levels of ATP, NAD+ and dipeptidase in the liver. Liver tryptophan metabolomics analysis showed that OTA inhibited the kynurenine metabolic pathway and reduced the levels of kynurenine, anthranilic acid and nicotinic acid. Moreover, OTA increased the phosphorylation of AMPK protein and decreased the phosphorylation of mTOR protein. CONCLUSION: OTA decreased the level of nicotinuric acid in the intestinal tract, which was negatively correlated with B. 
plebeius abundance. The abnormal metabolism of tryptophan led to a deficiency of NAD+ and ATP in the liver, which in turn activated the AMPK signaling pathway. Our results provide new insights into the toxic mechanism of OTA, and tryptophan metabolism might be a target for prevention and treatment.
ABSTRACT
Accurately counting insect pests in digital images captured on yellow sticky traps remains a challenge in insect pest monitoring. In this study, we develop a new approach to counting insect pests using a saliency map and improved non-maximum suppression. Specifically, because the background of a yellow sticky trap is simple and the insect pest objects are small, we exploit a saliency map to construct a region proposal generator involving saliency map building, activation region formation, a background-foreground classifier, and tune-up boxes. For each region proposal, a convolutional neural network (CNN) model classifies it as a specific insect pest class, producing detection bounding boxes. By considering the relationships between detection bounding boxes, we then develop an improved non-maximum suppression that handles redundant detection bounding boxes more carefully and obtains the insect pest count by counting the remaining boxes, each of which covers one insect pest. As this insect pest counter may miscount insect pests that are close to each other, we further integrate the widely used Faster R-CNN with the above counter to construct a dual-path network. Extensive experiments show that the two proposed insect pest counters achieve significant improvements in F1 score over state-of-the-art object detectors as well as existing insect pest detection methods.
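For reference, the standard greedy non-maximum suppression that the improved variant builds on can be sketched as follows; counting the surviving boxes yields the pest count. The boxes, scores, and IoU threshold are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]  # first two overlap
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)   # box 1 is suppressed by box 0
count = len(kept)           # pest count = number of surviving boxes
```

The weakness the paper targets is visible here: two genuinely distinct pests sitting close together would also exceed the IoU threshold and be merged into one count, which motivates an IoU-aware improved suppression.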
ABSTRACT
Pest early warning technology is a prerequisite for the timely and effective control of pest outbreaks. Traditional pest warning systems based on manual statistics, radar, and remote sensing have deficiencies in many respects, such as high cost, poor accuracy, and low efficiency. In this study, pest image data were collected, and information about four major vegetable pests in southern China (Bemisia tabaci (Gennadius), Phyllotreta striolata (Fabricius), Plutella xylostella (Linnaeus), and Frankliniella occidentalis (Pergande) (Thysanoptera, Thripidae)) was extracted. A multi-sensor network system was constructed to collect small-scale environmental data at vegetable production sites. The key factors affecting the distribution of pests were discovered from multi-dimensional information, such as the soil, environment, eco-climate, and meteorology of vegetable fields, and finally a vegetable pest warning system based on multidimensional big data (VPWS-MBD) was implemented. Pest and environmental data were collected from Guangzhou Dongsheng Bio-Park from June 2017 to February 2018. The number of pests was classified as level I (0-56), level II (57-131), level III (132-299), or level IV (300 and above) by the K-Means algorithm. The Pearson correlation coefficient and grey relational analysis were used to identify five key influencing factors: rainfall, soil temperature, air temperature, leaf surface humidity, and soil moisture. Finally, a Back Propagation (BP) Neural Network was used for classification prediction. The results show that level-I warning accuracy was 96.14% with a recall rate of 97.56%; level-II accuracy was 95.34% with a recall rate of 96.45%; level-III accuracy was 100% with a recall rate of 96.28%; and level-IV accuracy was 100% with a recall rate of 100%. This demonstrates that the early warning system can effectively predict vegetable pests, meets the requirements of early warning, and has high availability.
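The level thresholds and the per-level evaluation reported above can be sketched directly; the `precision_recall` helper is an illustrative reconstruction of the metrics, not the authors' code:

```python
def warning_level(count):
    """Map a pest count to the warning levels reported in the abstract:
    I (0-56), II (57-131), III (132-299), IV (300 and above)."""
    if count <= 56:
        return "I"
    if count <= 131:
        return "II"
    if count <= 299:
        return "III"
    return "IV"

def precision_recall(pred, true, level):
    """Per-level precision and recall over paired predictions and labels."""
    tp = sum(p == level and t == level for p, t in zip(pred, true))
    fp = sum(p == level and t != level for p, t in zip(pred, true))
    fn = sum(p != level and t == level for p, t in zip(pred, true))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

levels = [warning_level(c) for c in (40, 57, 131, 132, 300)]
# levels == ["I", "II", "II", "III", "IV"]
```

In the paper these level labels come from K-Means clustering of observed counts; the fixed thresholds here simply encode the cluster boundaries the abstract reports.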