Results 1 - 15 of 15
1.
Front Med (Lausanne) ; 11: 1445069, 2024.
Article in English | MEDLINE | ID: mdl-39440041

ABSTRACT

Background: Biliary atresia (BA) is a severe congenital biliary developmental abnormality that threatens neonatal health. Traditional diagnostic methods rely heavily on experienced radiologists, making the process time-consuming and prone to variability. The application of deep learning to the automated diagnosis of BA remains underexplored. Methods: This study introduces GallScopeNet, a deep learning model designed to improve diagnostic efficiency and accuracy through an innovative architecture and advanced feature extraction techniques. A carefully constructed dataset comprising thousands of gallbladder ultrasound images was employed, with the majority used for training and validation and a subset reserved for external testing. The model's performance was evaluated using five-fold cross-validation and external assessment, employing metrics such as accuracy and the area under the receiver operating characteristic curve (AUC), compared against clinical diagnostic standards. Results: GallScopeNet demonstrated exceptional performance in distinguishing BA from non-BA cases. On the external test dataset, it achieved an accuracy of 81.21% and an AUC of 0.85, indicating strong diagnostic capability. The results highlight the model's ability to maintain high classification performance while reducing both misdiagnoses and missed diagnoses. Conclusion: GallScopeNet effectively differentiates between BA and non-BA images, demonstrating significant potential and reliability for early diagnosis. Its efficiency and accuracy suggest it could serve as a valuable diagnostic tool in clinical settings, providing substantial technical support for improving diagnostic workflows.
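The five-fold evaluation protocol described above can be sketched as follows. This is a minimal illustrative stand-in, not the authors' GallScopeNet pipeline: `kfold_indices` and the `predict` callable are hypothetical names, and a real evaluation would also compute AUC over predicted probabilities.

```python
# Minimal sketch of five-fold cross-validation (illustrative only).

def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    fold_size, folds, start = n // k, [], 0
    for i in range(k):
        extra = 1 if i < n % k else 0
        folds.append(list(range(start, start + fold_size + extra)))
        start += fold_size + extra
    return folds

def cross_validate(labels, predict, k=5):
    """Return per-fold accuracy for a predictor scored on each held-out fold."""
    folds = kfold_indices(len(labels), k)
    accs = []
    for test_idx in folds:
        correct = sum(1 for i in test_idx if predict(i) == labels[i])
        accs.append(correct / len(test_idx))
    return accs
```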

2.
Front Plant Sci ; 15: 1407839, 2024.
Article in English | MEDLINE | ID: mdl-39119493

ABSTRACT

Grape fruit and stem detection play a crucial role in automated grape harvesting. However, the dense arrangement of fruits in vineyards and the color similarity between grape stems and branches pose challenges, often leading to missed or false detections in most existing models. Furthermore, these models' substantial parameter counts and computational demands result in slow detection speeds and make them difficult to deploy on mobile devices. We therefore propose a lightweight TiGra-YOLOv8 model based on YOLOv8n. Initially, we integrated the Attentional Scale Fusion (ASF) module into the Neck, enhancing the network's ability to extract grape features in dense orchards. Subsequently, we employed Adaptive Training Sample Selection (ATSS) as the label-matching strategy to improve the quality of positive samples and address the challenge of detecting grape stems of similar color. We then utilized the Weighted Interpolation of Sequential Evidence for Intersection over Union (Wise-IoU) loss function to overcome the limitations of CIoU, which does not consider the geometric attributes of targets, thereby enhancing detection efficiency. Finally, the model's size was reduced through channel pruning. The results indicate that the TiGra-YOLOv8 model's mAP(0.5) increased by 3.33% compared to YOLOv8n, with a 7.49% improvement in detection speed (FPS), a 52.19% reduction in parameter count, a 51.72% decrease in computational demand, and a 45.76% reduction in model size. TiGra-YOLOv8 not only improves detection accuracy for dense and challenging targets but also reduces model parameters and speeds up detection, offering significant benefits for grape detection.
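The Intersection over Union quantity that the Wise-IoU loss re-weights can be sketched as follows. This is only the plain IoU for axis-aligned `(x1, y1, x2, y2)` boxes, not the full Wise-IoU loss, which adds a distance-based focusing term on top.

```python
# Plain IoU of two axis-aligned boxes (the base quantity Wise-IoU re-weights).

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```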

3.
Front Plant Sci ; 15: 1411178, 2024.
Article in English | MEDLINE | ID: mdl-38903423

ABSTRACT

Introduction: Fingered citron slices possess significant nutritional value and economic advantages as herbal products that are experiencing increasing demand. Grading fingered citron slices plays a crucial role in marketing strategies that maximize profits. However, due to the limited adoption of standardization practices and the decentralized structure of producers and distributors, the grading process requires substantial manpower and leads to reduced profitability. To provide authoritative, rapid, and accurate grading standards for the fingered citron slice market, this paper proposes a grading detection model based on an improved YOLOv8n. Methods: First, we obtained raw fingered citron slices from a dealer in the Sichuan fingered citron production area of Shimian County, Ya'an City, Sichuan Province, China. High-resolution images of the slices were then captured on an experimental bench, and the grading detection dataset was formed after manual screening and labeling. Based on this dataset, we chose YOLOv8n as the base model, then replaced the YOLOv8n backbone with the FasterNet main module to improve computational efficiency during feature extraction. We next redesigned the PAN-FPN structure of the original model with a BiFPN structure to make full use of high-resolution features and extend the model's receptive field while balancing computation and model size, finally obtaining the improved target detection algorithm YOLOv8-FCS. Results: The experiments indicated that this approach surpassed the conventional RT-DETR, Faster R-CNN, SSD300, and YOLOv8n models on most evaluation indicators. The grading accuracy of the YOLOv8-FCS model reaches 98.1%, the model size is only 6.4 M, and the FPS is 130.3.
Discussion: The results suggest that our model offers both rapid and precise grading for fingered citron slices, holding significant practical value for promoting the advancement of automated grading systems tailored to fingered citron slices.
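The BiFPN structure mentioned above fuses feature maps with fast normalized weighting: each input gets a non-negative weight, normalized so the weights sum to roughly one. A minimal sketch, assuming equally-shaped feature maps represented as flat lists and treating the weights as given rather than learned:

```python
# BiFPN-style fast normalized fusion (illustrative, not the YOLOv8-FCS code).

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse equally-shaped feature maps (flat lists) by normalized weights."""
    w = [max(0.0, wi) for wi in weights]   # clamp weights to non-negative
    total = sum(w) + eps                   # eps avoids division by zero
    fused = [0.0] * len(features[0])
    for fmap, wi in zip(features, w):
        for j, v in enumerate(fmap):
            fused[j] += (wi / total) * v
    return fused
```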

4.
Heliyon ; 10(9): e30373, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38765108

ABSTRACT

This investigation integrates deep learning paradigms to refine the screening process for Anticancer Peptides (ACPs), a promising class of broad-spectrum oncolytic therapeutics known for their targeted antitumor efficacy and specificity. Conventional methodologies for ACP identification carry prohibitive time and financial costs, a formidable impediment to the progress of precision oncology. In response, we developed a screening approach that combines Natural Language Processing (NLP) with the Pseudo Amino Acid Composition (PseAAC) technique, building a comprehensive ACP compendium for the extraction of essential primary and secondary structural attributes. This methodological approach is augmented by an optimized BERT model, carefully calibrated for ACP detection, which surpasses existing BERT variants and traditional machine learning algorithms in both accuracy and selectivity. Subjected to rigorous validation via five-fold cross-validation and external assessment, the model exhibited strong performance, with an average Area Under the Curve (AUC) of 0.9726 and an F1 score of 0.9385; external validation further confirmed its capability (AUC of 0.9848 and F1 of 0.9371). These findings underscore the method's efficacy and prospective utility in the precise identification and characterization of ACPs, substantially reducing the financial and temporal burdens traditionally associated with ACP research and development. This screening paradigm therefore promises to accelerate the discovery and clinical application of ACPs, a significant stride toward more efficacious and economically viable precision oncology interventions.
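The PseAAC technique named above extends the basic amino acid composition (AAC) vector with sequence-order correlation factors. A sketch of just the composition part, which supplies the first 20 feature dimensions; the correlation terms and the paper's BERT features are beyond this illustration.

```python
# Amino acid composition (AAC): the composition half of PseAAC features.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def aac_vector(sequence):
    """Return the 20-dim amino acid frequency vector of a peptide sequence."""
    seq = sequence.upper()
    n = len(seq)
    return [seq.count(aa) / n for aa in AMINO_ACIDS]
```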

5.
Front Plant Sci ; 14: 1224884, 2023.
Article in English | MEDLINE | ID: mdl-37534292

ABSTRACT

Introduction: Tea shoot recognition is difficult because recognition is affected by lighting conditions, backgrounds similar in color to the shoots are challenging to segment, and leaves occlude and overlap one another. Methods: To solve the problem of low accuracy in dense small object detection of tea shoots, this paper proposes a real-time dense small object detection algorithm based on multimodal optimization. First, RGB, depth, and infrared images are collected to form a multimodal image set, and complete shoot object labeling is performed. Then, the YOLOv5 model is improved and applied to dense, tiny tea shoot detection. Second, based on the improved YOLOv5 model, this paper designs two data layer-based multimodal image fusion methods and a feature layer-based multimodal image fusion method; for the feature layer fusion method, a cross-modal fusion module (FFA) based on frequency domain and attention mechanisms is designed to adaptively align and focus on critical regions across intra- and inter-modal channel and frequency domain dimensions. Finally, an objective-based scale matching method is developed to further improve the detection of small dense objects in natural environments with the assistance of transfer learning techniques. Results and discussion: The experimental results indicate that the improved YOLOv5 model increases the mAP50 value by 1.7% compared to the benchmark model, with fewer parameters and less computation. Compared with single modalities, the multimodal image fusion methods increase the mAP50 value in all cases, with the method introducing the FFA module obtaining the highest mAP50 value of 0.827. When the pre-training strategy is used after scale matching, the mAP values improve by a further 1% and 1.4% on the two datasets. The multimodal optimization approach in this paper can provide a basis and technical support for dense small object detection.
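A data layer-based fusion method like those described above can be illustrated by stacking an RGB image with a depth map into a single multi-channel input. This is a hypothetical minimal version, not the paper's implementation (the FFA module operates at the feature layer instead):

```python
# Data-layer RGB + depth fusion sketch: concatenate channels per pixel.

def fuse_rgbd(rgb, depth):
    """Append a depth channel to every RGB pixel.
    rgb: H x W x 3 nested lists; depth: H x W nested lists. Returns H x W x 4."""
    return [[pixel + [depth[i][j]] for j, pixel in enumerate(row)]
            for i, row in enumerate(rgb)]
```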

6.
Plants (Basel) ; 12(15)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37571037

ABSTRACT

The plum is a common and delicious fruit with high edible and nutritional value. Accurate and effective detection of plum fruit is the key to fruit counting and early warning of pests and diseases. However, the actual plum orchard environment is complex, and plum fruit detection faces many problems, such as leaf shading and fruit overlap. The traditional practice in the plum growing industry of manually estimating fruit numbers and the presence of pests and diseases has disadvantages such as low efficiency, high cost, and low accuracy. To detect plum fruits quickly and accurately in a complex orchard environment, this paper proposes an efficient plum fruit detection model based on an improved You Only Look Once version 7 (YOLOv7). First, different devices were used to capture high-resolution images of plum fruits growing under natural conditions in a plum orchard in Gulin County, Sichuan Province, and a dataset for plum fruit detection was formed after manual screening, data enhancement, and annotation. Based on this dataset, this paper chose YOLOv7 as the base model, introduced the Convolutional Block Attention Module (CBAM) attention mechanism into YOLOv7, used Cross Stage Partial Spatial Pyramid Pooling-Fast (CSPSPPF) instead of Cross Stage Partial Spatial Pyramid Pooling (CSPSPP) in the network, and replaced the nearest neighbor interpolation in the original network's upsampling module with bilinear interpolation, forming the improved target detection algorithm YOLOv7-plum. The tested YOLOv7-plum model achieved an average precision (AP) value of 94.91%, a 2.03% improvement over the YOLOv7 model. To verify the effectiveness of the YOLOv7-plum algorithm, this paper evaluated its performance through ablation experiments, statistical analysis, and other methods.
The experimental results showed that the method proposed in this study could better achieve plum fruit detection in complex backgrounds, which helped to promote the development of intelligent cultivation in the plum industry.
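The bilinear interpolation that replaces nearest-neighbor upsampling in YOLOv7-plum can be sketched as a standalone sampling function. This is illustrative only; the real module upsamples whole feature tensors rather than sampling single points.

```python
# Bilinear sampling of a 2-D grid at fractional coordinates.

def bilinear_sample(grid, y, x):
    """Interpolate grid (nested lists) at fractional (y, x), clamping edges."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(grid) - 1)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    dy, dx = y - y0, x - x0
    top = grid[y0][x0] * (1 - dx) + grid[y0][x1] * dx   # blend along x, top row
    bot = grid[y1][x0] * (1 - dx) + grid[y1][x1] * dx   # blend along x, bottom row
    return top * (1 - dy) + bot * dy                    # blend along y
```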

7.
PLoS One ; 18(7): e0287778, 2023.
Article in English | MEDLINE | ID: mdl-37498811

ABSTRACT

Real-time, rapid, accurate, and non-destructive batch testing of fruit growth state is crucial for improving economic benefits. However, for plums, environmental variability, multiple scales, occlusion, and overlapping leaves or fruits pose significant challenges to accurate and complete labeling with mainstream algorithms such as YOLOv5. In this study, we established the first dedicated plum dataset and used deep learning to improve target detection. Our improved YOLOv5 algorithm achieved more accurate and rapid batch identification of immature plums, resulting in improved quality and economic benefits. After our algorithmic improvements, the YOLOv5-plum algorithm showed 91.65% recognition accuracy for immature plums. The YOLOv5-plum algorithm has demonstrated significant advantages in detecting unripe plums and can potentially be applied to other unripe fruits in the future.


Subject(s)
Prunus domestica, Fruit, Plant Leaves
8.
Plant Phenomics ; 5: 0024, 2023.
Article in English | MEDLINE | ID: mdl-36930773

ABSTRACT

Plant trichomes are epidermal structures with a wide variety of functions in plant development and stress responses. Although the functional importance of trichomes has been recognized, the tedious and time-consuming manual phenotyping process greatly limits progress in trichome gene cloning research. Currently, there are no fully automated methods for identifying maize trichomes. We introduce TrichomeYOLO, an automated trichome counting and measuring method based on a deep convolutional neural network that identifies the density and length of maize trichomes from scanning electron microscopy images. Our network achieved 92.1% identification accuracy on scanning electron microscopy micrographs of maize leaves, substantially outperforming five mainstream object detection models: Faster R-CNN, YOLOv3, YOLOv5, DETR, and Cascade R-CNN. We applied TrichomeYOLO to investigate trichome variation in a natural maize population and achieved robust trichome identification. Our method and the pretrained model are openly available on GitHub (https://github.com/yaober/trichomecounter). We believe TrichomeYOLO will enable efficient trichome identification and facilitate research on maize trichomes.

9.
Plants (Basel) ; 11(22)2022 Nov 20.
Article in English | MEDLINE | ID: mdl-36432903

ABSTRACT

Accurate segmentation of major rice diseases and assessment of the degree of disease damage are key to early diagnosis and intelligent monitoring and are the core of accurate pest control and information management. Deep learning applied to rice disease detection and segmentation can significantly improve the accuracy of disease detection and identification but requires a large number of training samples to determine the optimal parameters of the model. This study proposes a lightweight network based on copy-paste augmentation and semantic segmentation for accurate disease region segmentation and severity assessment. First, a dataset for major rice disease segmentation was selected and collated from 3 open-source datasets, containing 450 sample images in 3 categories: rice leaf bacterial blight, blast, and brown spot. Then, to increase sample diversity, a data augmentation method, rice leaf disease copy-paste (RLDCP), was proposed to expand the collected disease samples using the copy-and-paste concept. The new RSegformer model was then built on the lightweight semantic segmentation network Segformer by replacing the backbone network, incorporating an attention mechanism, and changing the upsampling operator, so that the model could better balance local and global information, speed up training, and reduce overfitting. The results show that, compared with traditional data augmentation methods, RLDCP effectively improves the accuracy and generalization performance of the semantic segmentation model, raising its MIoU by about 5% with a dataset only twice the size. RSegformer achieves an 85.38% MIoU at a model size of 14.36 M.
The method proposed in this paper can quickly, easily and accurately identify disease occurrence areas, their species and the degree of disease damage, providing a reference for timely and effective rice disease control.
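The MIoU metric reported above averages per-class intersection-over-union scores; a sketch computed from flat lists of predicted and ground-truth class ids (a real implementation would operate on full segmentation masks):

```python
# Mean IoU over classes, skipping classes absent from both prediction and truth.

def mean_iou(pred, truth, num_classes):
    """Per-class IoU averaged over classes present in pred or truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```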

10.
Animals (Basel) ; 12(12)2022 Jun 16.
Article in English | MEDLINE | ID: mdl-35739889

ABSTRACT

The slow loris (genus Nycticebus) is a group of small, nocturnal, venomous primates with a distinctive locomotion mode. Detecting slow lorises plays an important role in subsequent individual identification and behavioral recognition and thus contributes to formulating targeted conservation strategies, particularly in reintroduction and post-release monitoring. However, few studies have addressed efficient and accurate detection methods for this endangered taxon. Traditional methods of detecting slow lorises involve long-term observation or repeatedly watching surveillance video, which is labor-intensive and time-consuming; because humans cannot maintain a high degree of attention for long periods, they are also prone to missed or false detections. Given these observational challenges, using computer vision to detect slow loris presence and activity is desirable. This article establishes a novel target detection dataset based on monitoring videos of captive Bengal slow lorises (N. bengalensis) from wildlife rescue centers in Xishuangbanna and Pu'er, Yunnan, China. The dataset is used to test two improvement schemes based on the YOLOv5 network: (1) YOLOv5-CBAM + TC, which introduces an attention mechanism and deconvolution; (2) YOLOv5-SD, which adds a small object detection layer. The results demonstrate that YOLOv5-CBAM + TC effectively improves detection: at the cost of increasing the model size by 0.6 MB, the precision rate, recall rate, and mean average precision (mAP) increase by 2.9%, 3.7%, and 3.5%, respectively. The YOLOv5-CBAM + TC model can be used as an effective method to detect individual slow lorises in a captive environment, which helps to realize computer vision-based slow loris face and posture recognition.
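The CBAM module introduced in YOLOv5-CBAM + TC includes a channel-attention step: per-channel average- and max-pooled descriptors pass through a shared mapping and a sigmoid to produce channel weights. A minimal sketch with the shared MLP reduced to an identity mapping (an assumption for brevity, not the actual CBAM weights):

```python
# Channel-attention half of CBAM, with the shared MLP simplified to identity.
import math

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a flat list of activations.
    Returns one sigmoid weight per channel from avg- and max-pooled descriptors."""
    weights = []
    for ch in feature_maps:
        avg_pool = sum(ch) / len(ch)
        max_pool = max(ch)
        weights.append(1.0 / (1.0 + math.exp(-(avg_pool + max_pool))))
    return weights
```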

11.
Sci Rep ; 12(1): 7738, 2022 05 11.
Article in English | MEDLINE | ID: mdl-35545645

ABSTRACT

Precise identification of postural behavior plays a crucial role in evaluating animal welfare and captive management. Deep learning technology has been widely used for automatic behavior recognition in wild and domestic species. The Asian slow loris is a group of small, nocturnal primates with a distinctive locomotion mode, and a large number of individuals have been confiscated into captive settings due to illegal trade, making the species an ideal model for postural behavior monitoring. Captive animals may suffer from being housed in an inappropriate environment and may display abnormal behavior patterns. Traditional data collection methods are time-consuming and laborious, impeding efforts to improve lorises' captive welfare and to develop effective reintroduction strategies. This study established the first human-labeled postural behavior dataset of slow lorises and used deep learning to recognize postural behavior via object detection and semantic segmentation. The precision of the YOLOv5-based classification reached 95.1%. The Dilated Residual Networks (DRN) feature extraction network showed the best performance in semantic segmentation, with a classification accuracy of 95.2%. The results imply that automatic identification of postural behavior may offer advantages in assessing animal activity and can be applied to other nocturnal taxa.


Subject(s)
Deep Learning, Lorisidae, Animal Welfare, Animals, Locomotion, Primates
12.
Plants (Basel) ; 10(8)2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34451670

ABSTRACT

Real-time detection and counting of rice ears in fields is one of the most important methods for estimating rice yield. The traditional manual counting method has many disadvantages: it is time-consuming, inefficient, and subjective. Computer vision technology can therefore improve the accuracy and efficiency of rice ear counting in the field. The contributions of this article are as follows. (1) It establishes a dataset containing 3300 rice ear samples representing various complex situations, including variable light, complex backgrounds, overlapping rice, and overlapping leaves. The collected images were manually labeled, and a data enhancement method was used to increase the sample size. (2) It proposes a method that combines an LC-FCN (localization-based counting fully convolutional neural network) model based on transfer learning with the watershed algorithm for the recognition of dense rice images. The results show that the model is superior to traditional machine learning methods and the single-shot multibox detector (SSD) algorithm for target detection, making it an advanced and innovative rice ear counting model. The mean absolute error (MAE) of the model on the 300-image test set is 2.99. The model can be used to calculate the number of rice ears in the field, provide reliable basic data for rice yield estimation, and supply a rice dataset for research.
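The MAE metric reported above is simply the average absolute counting error over the test images; a sketch with made-up counts:

```python
# Mean absolute error between predicted and true per-image ear counts.

def mean_absolute_error(predicted_counts, true_counts):
    """Average absolute difference between predicted and true counts."""
    return sum(abs(p - t) for p, t in zip(predicted_counts, true_counts)) / len(true_counts)
```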

13.
Animals (Basel) ; 11(5)2021 Apr 30.
Article in English | MEDLINE | ID: mdl-33946472

ABSTRACT

Posture changes in pigs during growth are often precursors of disease. Monitoring pigs' behavioral activities allows us to detect pathological changes earlier and to identify factors threatening pig health in advance. Pigs tend to be farmed at large scale, and manual observation by keepers is time-consuming and laborious. Therefore, using computers to monitor the growth of pigs in real time, and to recognize the duration and frequency of postural changes over time, can help prevent outbreaks of porcine diseases. The contributions of this article are as follows: (1) The first human-annotated pig-posture-identification dataset in the world was established, including 800 pictures of each of four pig postures: standing, lying on the stomach, lying on the side, and exploring. (2) A depthwise separable convolutional network used to classify pig postures achieved an accuracy of 92.45%. The results show that the method proposed in this paper achieves adequate pig-posture recognition in a piggery environment and may be suitable for livestock farm applications.
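The efficiency of the separable convolutions mentioned above comes from factorizing a standard convolution into a depthwise step and a pointwise step; a parameter-count sketch (ignoring biases) makes the saving concrete:

```python
# Parameter counts: standard vs. depthwise separable convolution layer.

def standard_conv_params(c_in, c_out, k):
    """k x k kernel applied across all input channels for each output channel."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k per input channel, then 1x1 pointwise channel mixing."""
    return c_in * k * k + c_in * c_out
```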

14.
Sensors (Basel) ; 21(2)2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33477600

ABSTRACT

Multi-rotor unmanned aerial vehicles (UAVs) for plant protection are widely used in China's agricultural production. However, spray droplets often drift and distribute nonuniformly, reducing their utilization and harming the environment. A variable spray system is designed, discussed, and verified to solve this problem. The distribution characteristics of droplet deposition under different spray states (flight state, environment state, nozzle state) are obtained through computational fluid dynamics simulation. In the verification experiment, the wind velocity error of most sample points is less than 1 m/s, and the deposition ratio error is less than 10%, indicating that the simulation is reliable. A simulation dataset is used to train support vector regression and back-propagation neural network models with multiple parameters, and an optimal regression model with a root mean square error of 6.5% is selected. The UAV offset and nozzle flow of the variable spray system can then be obtained from the current spray state, acquired via multi-sensor fusion, and the predicted deposition distribution characteristics. The farmland experiment shows that the deposition volume error between prediction and experiment is within 30%, proving the effectiveness of the system. This article provides a reference for the improvement of UAV intelligent spray systems.
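The regression step described above, fitting a model from simulated spray states to deposition and predicting for the current state, can be sketched with a one-variable least-squares fit standing in for the SVR/BP network; the data values here are made up for illustration:

```python
# Least-squares fit of deposition vs. one spray-state variable (stand-in
# for the SVR / BP regression models compared in the paper).

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical simulated pairs: wind speed -> deposition ratio.
a, b = fit_linear([0, 1, 2, 3], [1, 3, 5, 7])
predict = lambda x: a * x + b   # deposition predicted for a new spray state
```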


Subject(s)
Agriculture, Regression Analysis, Wind
15.
Entropy (Basel) ; 22(7)2020 Jun 29.
Article in English | MEDLINE | ID: mdl-33286491

ABSTRACT

The gender ratio of free-range chickens is considered a major animal welfare problem in commercial broiler farming. Free-range chicken producers need to identify chicken gender to estimate the economic value of their flock. However, it is challenging for farmers to estimate the gender ratio of chickens efficiently and accurately, since the environmental background is complicated and the number of chickens is dynamic. Moreover, manual estimation is prone to double counting or missed counts and is thus inaccurate and time-consuming. Hence, automated methods that can efficiently and accurately replicate the identification abilities of a chicken gender expert working in a farm environment are beneficial to the industry. The contributions of this paper include: (1) building the world's first manually annotated chicken gender classification database, which comprises 800 chicken flock images captured on a farm and 1000 single-chicken images separated from the flock images by an object detection network, labeled with gender information; and (2) training a rooster and hen classifier using a deep neural network with the cross-entropy objective from information theory, achieving an average accuracy of 96.85%. The evaluation of the algorithm's performance indicates that the proposed automated method is practical for chicken gender classification in the farm environment and provides a feasible approach to estimating the gender ratio.
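The cross-entropy objective named above can be sketched for the two-class (rooster/hen) case as the negative log-likelihood of the true label under the predicted probability; this is the standard formula, not the paper's training code:

```python
# Binary cross-entropy for a single rooster/hen prediction.
import math

def cross_entropy(prob_rooster, is_rooster):
    """Negative log-likelihood of the true label given P(rooster)."""
    p = prob_rooster if is_rooster else 1.0 - prob_rooster
    return -math.log(p)
```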
