Results 1 - 20 of 31
1.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732816

ABSTRACT

Target detection technology based on unmanned aerial vehicle (UAV)-derived aerial imagery has been widely applied in the field of forest fire patrol and rescue. However, due to the specificity of UAV platforms, there are still significant issues to be resolved, such as a high miss rate, low detection accuracy, and poor early-warning effectiveness. In light of these issues, this paper proposes an improved YOLOX network for the rapid detection of forest fires in images captured by UAVs. Firstly, to enhance the network's feature-extraction capability in complex fire environments, a multi-level-feature-extraction structure, CSP-ML, is designed to improve the algorithm's detection accuracy for small-target fire areas. Additionally, a CBAM attention mechanism is embedded in the neck network to reduce interference caused by background noise and irrelevant information. Secondly, an adaptive-feature-extraction module is introduced in the YOLOX network's feature fusion part to prevent the loss of important feature information during the fusion process, thus enhancing the network's feature-learning capability. Lastly, the CIoU loss function is used to replace the original loss function, to address issues such as excessive optimization of negative samples and poor gradient-descent direction, thereby strengthening the network's effective recognition of positive samples. Experimental results show that the improved YOLOX network has better detection performance, with mAP@50 and mAP@50:95 increasing by 6.4% and 2.17%, respectively, compared to the traditional YOLOX network. In multi-target and small-target flame scenarios, the improved model achieved an mAP of 96.3%, outperforming deep learning algorithms such as Faster R-CNN, SSD, and YOLOv5 by 33.5%, 7.7%, and 7%, respectively. It has a lower miss rate and higher detection accuracy, and it is capable of handling small-target detection tasks in complex fire environments. This can provide support for UAV patrol and rescue applications from a high-altitude perspective.
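
The CIoU term referred to above has a standard closed form (IoU penalized by normalized center distance and an aspect-ratio consistency term); a minimal NumPy sketch, not taken from the paper's code, is:

```python
import numpy as np

def ciou_loss(pred, gt):
    """Complete IoU (CIoU) loss for two boxes in (x1, y1, x2, y2) format."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Plain IoU term.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + 1e-9)

    # Squared center distance, normalized by the diagonal of the enclosing box.
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2 + 1e-9

    # Aspect-ratio consistency term.
    v = (4 / np.pi ** 2) * (np.arctan((gx2 - gx1) / (gy2 - gy1 + 1e-9))
                            - np.arctan((px2 - px1) / (py2 - py1 + 1e-9))) ** 2
    alpha = v / (1.0 - iou + v + 1e-9)

    return 1.0 - iou + rho2 / c2 + alpha * v

# Example: a predicted box slightly offset from the ground truth.
print(ciou_loss((10, 10, 50, 60), (12, 14, 52, 66)))
```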

2.
Sensors (Basel) ; 24(2)2024 Jan 14.
Article in English | MEDLINE | ID: mdl-38257615

ABSTRACT

Recent advancements in computer vision technology, developments in sensors and sensor data collection approaches, and the use of deep and transfer learning approaches have accelerated the development of autonomous vehicles. On-road vehicle detection has become a task of significant importance, especially due to exponentially increasing research on autonomous vehicles during the past few years. With high-end computing resources, a large number of deep learning models have been trained and tested for on-road vehicle detection recently. Vehicle detection can become challenging, especially under varying light and weather conditions such as night, snow, sand, rain, and fog. In addition, vehicle detection should be fast enough to work in real time. This study investigates the use of the recent YOLO version, YOLOx, to detect vehicles in bad weather conditions including rain, fog, snow, and sandstorms. The model is tested on the publicly available benchmark DAWN dataset, whose images cover four bad weather conditions with varying illumination, backgrounds, and numbers of vehicles per frame. The efficacy of the model is evaluated in terms of precision, recall, and mAP. The results show that YOLOx-s outperforms the YOLOx-m and YOLOx-l variants. YOLOx-s has 0.8983 and 0.8656 mAP for snow and sandstorms, respectively, while its mAP for rain and fog is 0.9509 and 0.9524, respectively. Model performance is thus better in rainy and foggy weather than in snow and sandstorms. Further experiments indicate that enhancing image quality using multiscale retinex improves YOLOx performance.
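
Multiscale retinex (MSR), used above for image enhancement, averages log-domain differences between the image and Gaussian-blurred versions of it at several scales; a minimal OpenCV/NumPy sketch (the scale values are common defaults, not the paper's settings):

```python
import cv2
import numpy as np

def multiscale_retinex(img_bgr, sigmas=(15, 80, 250)):
    """Standard MSR: average of log(I) - log(Gaussian-blurred I) over several scales."""
    img = img_bgr.astype(np.float64) + 1.0  # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    # Stretch back to a displayable 8-bit range.
    msr = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
    return msr.astype(np.uint8)
```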

3.
J Sci Food Agric ; 104(6): 3570-3584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38150568

ABSTRACT

BACKGROUND: Tea pests pose a significant threat to tea leaf yield and quality, necessitating fast and accurate detection methods to improve pest control efficiency and reduce economic losses for tea farmers. However, in real tea gardens, some tea pests are small in size and easily camouflaged by complex backgrounds, making it challenging for farmers to promptly and accurately identify them. RESULTS: To address this issue, we propose a real-time detection method based on TP-YOLOX for monitoring tea pests in complex backgrounds. Our approach incorporates the CSBLayer module, which combines convolution and multi-head self-attention mechanisms, to capture global contextual information from images and expand the network's receptive field. Additionally, we integrate an efficient multi-scale attention module to enhance the model's ability to perceive fine details in small targets. To expedite model convergence and improve the precision of target localization, we employ the SIoU loss function as the bounding box regression function. Experimental results demonstrate that TP-YOLOX achieves a significant performance improvement with a relatively small additional computational cost (0.98 floating-point operations), resulting in a 4.50% increase in mean average precision (mAP) compared to the original YOLOX-s. When compared with existing object detection algorithms, TP-YOLOX outperforms them in terms of mAP performance. Moreover, the proposed method achieves a frame rate of 82.66 frames per second, meeting real-time requirements. CONCLUSION: TP-YOLOX emerges as a proficient solution, capable of accurately and swiftly identifying tea pests amidst the complex backgrounds of tea gardens. This contribution not only offers valuable insights for tea pest monitoring but also serves as a reference for achieving precise pest control. © 2023 Society of Chemical Industry.
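
The CSBLayer described above pairs convolution with multi-head self-attention; its exact layout is not given in the abstract, so the following PyTorch sketch only illustrates the general pattern (the class name, channel handling, and head count are assumptions):

```python
import torch
import torch.nn as nn

class ConvSelfAttentionBlock(nn.Module):
    """Illustrative conv + multi-head self-attention block (not the paper's CSBLayer)."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        x = self.conv(x)                       # local features
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)  # global context via self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```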


Subject(s)
Algorithms, Trees, Humans, Farmers, Gardening
4.
Chin J Traumatol ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38762418

ABSTRACT

PURPOSE: Intertrochanteric fracture (ITF) classification is crucial for surgical decision-making. However, orthopedic trauma surgeons have shown lower accuracy in ITF classification than expected. The objective of this study was to utilize an artificial intelligence (AI) method to improve the accuracy of ITF classification. METHODS: We trained a network called YOLOX-SwinT, which is based on the You Only Look Once X (YOLOX) object detection network with Swin Transformer (SwinT) as the backbone architecture, using 762 radiographic ITF examinations as the training set. Subsequently, we recruited 5 senior orthopedic trauma surgeons (SOTS) and 5 junior orthopedic trauma surgeons (JOTS) to classify the 85 original images in the test set and then the same images with the prediction results of the network model. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) 20.0 (IBM Corp., Armonk, NY, USA) to compare the differences among the SOTS, JOTS, SOTS + AI, JOTS + AI, SOTS + JOTS, and SOTS + JOTS + AI groups. All images were classified according to the AO/OTA 2018 classification system by 2 experienced trauma surgeons and verified by another expert in this field. Based on actual clinical needs, after discussion, we integrated 8 subgroups into 5 new subgroups, and the dataset was divided into training, validation, and test sets at a ratio of 8:1:1. RESULTS: The mean average precision at an intersection over union (IoU) of 0.5 (mAP50) for subgroup detection reached 90.29%. The classification accuracy values of the SOTS, JOTS, SOTS + AI, and JOTS + AI groups were 56.24% ± 4.02%, 35.29% ± 18.07%, 79.53% ± 7.14%, and 71.53% ± 5.22%, respectively. The paired t-test results showed that the difference between the SOTS and SOTS + AI groups was statistically significant, as were the differences between the JOTS and JOTS + AI groups and between the SOTS + JOTS and SOTS + JOTS + AI groups. Moreover, the difference between the SOTS + JOTS and SOTS + JOTS + AI groups in each subgroup was statistically significant, with all p < 0.05. The independent-samples t-test results showed that the difference between the SOTS and JOTS groups was statistically significant, while the difference between the SOTS + AI and JOTS + AI groups was not. With the assistance of AI, the subgroup classification accuracy of both SOTS and JOTS was significantly improved, and JOTS reached the same level as SOTS. CONCLUSION: The YOLOX-SwinT network algorithm enhances the accuracy of AO/OTA subgroup classification of ITF by orthopedic trauma surgeons.
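
The paired and independent-samples comparisons described above correspond to standard SciPy calls; a small sketch with hypothetical accuracy values (the real per-surgeon data are not given in the abstract):

```python
from scipy import stats

# Hypothetical per-surgeon subgroup-classification accuracies (fractions).
sots    = [0.55, 0.58, 0.52, 0.60, 0.56]   # senior surgeons, unaided
sots_ai = [0.78, 0.82, 0.75, 0.85, 0.77]   # the same surgeons with AI assistance
jots_ai = [0.70, 0.74, 0.68, 0.73, 0.72]   # junior surgeons with AI assistance

# Paired test: the same raters before vs. after AI assistance.
t_paired, p_paired = stats.ttest_rel(sots, sots_ai)

# Independent-samples test: senior+AI vs. junior+AI groups.
t_ind, p_ind = stats.ttest_ind(sots_ai, jots_ai)

print(f"paired: t={t_paired:.2f}, p={p_paired:.4f}")
print(f"independent: t={t_ind:.2f}, p={p_ind:.4f}")
```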

5.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687991

ABSTRACT

According to survey statistics, most traffic accidents are caused by irregularities in the driver's behavior and state. Because there is no multi-level dangerous-state grading system available domestically or internationally, this paper proposes a multi-level state-grading system for real-time detection and dynamic tracking of the driver's state. The system uses an OpenMV camera combined with a pan-tilt tracking system to dynamically collect images of the driver in real time, combines the YOLOX algorithm with the OpenPose algorithm to judge dangerous driving behavior by detecting unsafe objects in the cab and the driver's posture, and combines an improved Retinaface face-detection algorithm with the Dlib feature-point algorithm to discriminate the driver's fatigue state. The experimental results show that the accuracy of the proposed system for the three driver danger levels (R1, R2, and R3) reaches 95.8%, 94.5%, and 96.3%, respectively. These results have practical significance for distracted-driving warnings.
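
Fatigue estimation from Dlib's 68 facial landmarks is typically done with the eye aspect ratio (EAR); the abstract does not give the paper's exact criterion, so the sketch below uses the standard EAR formula with an assumed closure threshold:

```python
import numpy as np

# Indices of the left/right eye landmarks in Dlib's 68-point model.
LEFT_EYE = list(range(42, 48))
RIGHT_EYE = list(range(36, 42))

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward 0 when the eye closes."""
    p = np.asarray(pts, dtype=float)
    v1 = np.linalg.norm(p[1] - p[5])
    v2 = np.linalg.norm(p[2] - p[4])
    h = np.linalg.norm(p[0] - p[3])
    return (v1 + v2) / (2.0 * h + 1e-9)

def is_drowsy(landmarks, ear_threshold=0.21):
    """landmarks: 68x2 array from a Dlib shape predictor (threshold is an assumption)."""
    ear_left = eye_aspect_ratio(landmarks[LEFT_EYE])
    ear_right = eye_aspect_ratio(landmarks[RIGHT_EYE])
    return (ear_left + ear_right) / 2.0 < ear_threshold
```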

6.
Sensors (Basel) ; 23(5)2023 Mar 05.
Article in English | MEDLINE | ID: mdl-36905041

ABSTRACT

It is crucial to monitor the status of aquaculture objects in recirculating aquaculture systems (RASs). Because of the high stocking density and degree of intensification in such systems, aquaculture objects need to be monitored over long periods to prevent losses caused by various factors. Object detection algorithms are gradually being adopted in the aquaculture industry, but it is difficult to achieve good results in scenes with high density and complex environments. This paper proposes a monitoring method for Larimichthys crocea in a RAS, which includes the detection and tracking of abnormal behavior. An improved YOLOX-S is used to detect Larimichthys crocea exhibiting abnormal behavior in real time. To address stacking, deformation, occlusion, and very small objects in the fishpond, the object detection algorithm is improved by modifying the CSP module, adding coordinate attention, and modifying part of the neck structure. After these improvements, AP50 reaches 98.4% and AP50:95 is 16.2% higher than that of the original algorithm. For tracking, because the fish are similar in appearance, ByteTrack is used to track the detected objects, avoiding the ID switching that re-identification based on appearance features would cause. In the actual RAS environment, both MOTA and IDF1 exceed 95% while fully meeting real-time requirements, and the IDs of tracked Larimichthys crocea with abnormal behavior are maintained stably. Our work can identify and track the abnormal behavior of fish efficiently, providing data support for subsequent automatic treatment, thus preventing losses from escalating and improving the production efficiency of RASs.
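
ByteTrack, used above, associates existing tracks with detections by motion/IoU alone, matching high-confidence detections first and then low-confidence ones; a heavily simplified sketch of that two-stage association (greedy matching instead of the Hungarian algorithm):

```python
import numpy as np

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracks, dets, scores, high=0.6, iou_thr=0.3):
    """Two-stage greedy association: high-score detections first, then low-score ones."""
    scores = np.asarray(scores)
    matches, unmatched_tracks = [], list(range(len(tracks)))
    for stage in (np.where(scores >= high)[0], np.where(scores < high)[0]):
        for d in stage:
            best, best_iou = None, iou_thr
            for t in unmatched_tracks:
                o = iou(tracks[t], dets[d])
                if o > best_iou:
                    best, best_iou = t, o
            if best is not None:
                matches.append((best, d))
                unmatched_tracks.remove(best)
    return matches, unmatched_tracks
```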


Subject(s)
Perciformes, Animals, Fishes, Aquaculture/methods
7.
Sensors (Basel) ; 23(6)2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36991676

ABSTRACT

Computer vision, in combination with automated and robotic systems, has emerged as a steady and robust platform for sewer maintenance and cleaning tasks. The AI revolution has enhanced the capabilities of computer vision, which is now being used to detect problems in underground sewer pipes such as blockages and damage. A large amount of appropriate, validated, and labeled imagery data is always a key requirement for training AI-based detection models to generate the desired outcomes. In this paper, a new imagery dataset, S-BIRD (Sewer-Blockages Imagery Recognition Dataset), is presented to draw attention to the predominant issue of sewer blockages caused by grease, plastic, and tree roots. The need for the S-BIRD dataset and parameters such as its strength, performance, consistency, and feasibility are considered and analyzed for real-time detection tasks. The YOLOX object detection model has been trained to demonstrate the consistency and viability of the S-BIRD dataset. The paper also describes how the presented dataset will be used in an embedded vision-based robotic system to detect and remove sewer blockages in real time. The outcomes of a survey conducted in Pune, India, a typical mid-sized city in a developing country, underscore the necessity of the presented work.

8.
Sensors (Basel) ; 23(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37688054

ABSTRACT

Accurate and rapid response in complex driving scenarios is a challenging problem in autonomous driving. If a target is not detected in time, the vehicle cannot react promptly, which can result in fatal accidents. Therefore, driver assistance systems require a model that can accurately detect targets in complex scenes and respond quickly. In this paper, a lightweight feature extraction model, ShuffDet, is proposed to replace the CSPDarknet53 backbone used by YOLOX, thereby improving the YOLOX algorithm. At the same time, an attention mechanism is introduced into the path aggregation feature pyramid network (PAFPN) to make the network focus more on important information, thereby improving the accuracy of the model. The model that combines these two methods is called ShuffYOLOX, and it improves accuracy while remaining lightweight. The performance of the ShuffYOLOX model is tested on the KITTI dataset, and the experimental results show that, compared to the original network, the mean average precision (mAP) of the ShuffYOLOX model on KITTI reaches 92.20%. In addition, the number of parameters of the ShuffYOLOX model is reduced by 34.57%, the GFLOPs are reduced by 42.19%, and the FPS is increased by 65%. Therefore, the ShuffYOLOX model is very suitable for autonomous driving applications.
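
The name ShuffDet suggests a ShuffleNet-style backbone, whose key primitive is the channel-shuffle operation that lets grouped convolutions exchange information across groups; a minimal PyTorch sketch of that operation (the paper's actual block design is not given):

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so grouped convolutions can mix information."""
    b, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

# Example: shuffle a feature map with 8 channels split into 2 groups.
feat = torch.randn(1, 8, 32, 32)
shuffled = channel_shuffle(feat, groups=2)
```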

9.
Sensors (Basel) ; 23(8)2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37112134

ABSTRACT

In self-driving cars, object detection algorithms are becoming increasingly important, and accurate, fast recognition of objects is critical to realizing autonomous driving. Existing detection algorithms are not ideal for detecting small objects. This paper proposes a YOLOX-based network model for multi-scale object detection tasks in complex scenes. The method adds a CBAM-G module to the backbone of the original network, which applies grouping operations to CBAM and changes the height and width of the spatial attention module's convolution kernel to 7 × 1 to improve the model's ability to extract prominent features. We also propose an object-contextual feature fusion module, which provides more semantic information and improves the perception of multi-scale objects. Finally, to address the small number of samples and the low loss contribution of small objects, we introduce a scaling factor that increases the loss weight of small objects, improving small-object detection. We validated the effectiveness of the proposed method on the KITTI dataset, where the mAP value was 2.46% higher than that of the original model. Experimental comparisons showed that our model achieved superior detection performance compared to other models.
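
The spatial-attention change described above swaps CBAM's usual 7 × 7 convolution for a 7 × 1 kernel; a minimal PyTorch sketch of just that piece (the grouping part of CBAM-G is omitted and the class name is invented):

```python
import torch
import torch.nn as nn

class SpatialAttention7x1(nn.Module):
    """CBAM-style spatial attention using a 7x1 kernel instead of 7x7."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=(7, 1), padding=(3, 0), bias=False)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)    # channel-wise average
        max_map, _ = x.max(dim=1, keepdim=True)  # channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn
```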

10.
Sensors (Basel) ; 23(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37430595

ABSTRACT

Industrial inspection is crucial for maintaining quality and safety in industrial processes. Deep learning models have recently demonstrated promising results in such tasks. This paper proposes YOLOX-Ray, an efficient new deep learning architecture tailored for industrial inspection. YOLOX-Ray is based on the You Only Look Once (YOLO) object detection algorithms and integrates the SimAM attention mechanism for improved feature extraction in the Feature Pyramid Network (FPN) and Path Aggregation Network (PAN). Moreover, it also employs the Alpha-IoU cost function for enhanced small-scale object detection. YOLOX-Ray's performance was assessed in three case studies: hotspot detection, infrastructure crack detection and corrosion detection. The architecture outperforms all other configurations, achieving mAP50 values of 89%, 99.6% and 87.7%, respectively. For the most challenging metric, mAP50:95, the achieved values were 44.7%, 66.1% and 51.8%, respectively. A comparative analysis demonstrated the importance of combining the SimAM attention mechanism with Alpha-IoU loss function for optimal performance. In conclusion, YOLOX-Ray's ability to detect and to locate multi-scale objects in industrial environments presents new opportunities for effective, efficient and sustainable inspection processes across various industries, revolutionizing the field of industrial inspections.
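
The Alpha-IoU loss mentioned above generalizes IoU-based losses by raising the IoU term (and any penalty terms) to a power α, with α = 3 recommended in the original Alpha-IoU paper; in its simplest form:

```python
def alpha_iou_loss(iou: float, alpha: float = 3.0) -> float:
    """Basic Alpha-IoU loss: 1 - IoU^alpha (penalty terms of IoU variants can be powered likewise)."""
    return 1.0 - iou ** alpha

# Example: a moderately overlapping box is penalized more sharply than with plain IoU loss.
print(alpha_iou_loss(0.7))  # 0.657 vs. 0.3 for 1 - IoU
```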

11.
Sensors (Basel) ; 22(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35746145

ABSTRACT

Because power transmission lines (PTLs) traverse complex environments, collecting data on them is difficult and costly. To address this, we propose a novel approach that automatically synthesizes a dataset for fitting recognition using prior series data. The approach mainly includes three steps: (1) formulating synthesis rules from the prior series data; (2) rendering 2D images based on the synthesis rules using advanced virtual 3D techniques; (3) generating the synthetic dataset, with images and annotations obtained by processing the images with OpenCV. A model trained on the synthetic dataset was tested on a real dataset (including images and annotations) and achieved a mean average precision (mAP) of 0.98, verifying the feasibility and effectiveness of the proposed approach. The recognition accuracy is comparable to that obtained by training on real samples, while the cost of generating the synthetic dataset is greatly reduced. The proposed approach improves the efficiency of establishing a dataset, providing a training-data basis for deep learning (DL)-based fitting recognition.
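
Step (3), deriving images and annotations with OpenCV, can be illustrated by a minimal sketch that pastes a rendered fitting crop onto a background and writes a YOLO-format label (all paths, coordinates, and the label format are assumptions, not details from the paper):

```python
import cv2

def paste_and_label(bg_path, fg_path, x, y, label_path, class_id=0):
    """Paste a rendered fitting crop onto a background and write one YOLO-format label line."""
    bg = cv2.imread(bg_path)
    fg = cv2.imread(fg_path)
    fh, fw = fg.shape[:2]
    bg[y:y + fh, x:x + fw] = fg  # naive paste (no alpha blending)
    h, w = bg.shape[:2]
    # YOLO format: class cx cy bw bh, all normalized to image size.
    cx, cy = (x + fw / 2) / w, (y + fh / 2) / h
    with open(label_path, "w") as f:
        f.write(f"{class_id} {cx:.6f} {cy:.6f} {fw / w:.6f} {fh / h:.6f}\n")
    return bg
```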

12.
Sensors (Basel) ; 22(16)2022 Aug 18.
Article in English | MEDLINE | ID: mdl-36015946

ABSTRACT

Aerial images of insulator defects have characteristics, such as complex backgrounds and small defect targets, that make it difficult to detect insulator defects quickly and accurately. To address the low accuracy of insulator defect detection, this paper considers the shortcomings of IoU and the sensitivity of small targets to regression accuracy. An improved SIoU loss function is proposed based on the consistent influence of regression direction on accuracy; this loss function accelerates the convergence of the model and yields better regression results. For complex backgrounds, an ECA (Efficient Channel Attention) module is embedded between the backbone and the feature fusion layer of the model to reduce the influence of redundant features on detection accuracy. As a result, the experiments show that the improved model achieved 97.18% mAP, which is 2.74% higher than before, and the detection speed reached 71 fps. To some extent, it can detect insulators and their defects accurately and in real time.
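
ECA, as embedded above, gates channels with a 1-D convolution over globally average-pooled channel descriptors; a standard PyTorch sketch (the kernel size is fixed to 3 here rather than derived adaptively from the channel count):

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: GAP -> 1-D conv across channels -> sigmoid gate."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        y = x.mean(dim=(2, 3))                    # (B, C) global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1-D conv over the channel dimension
        return x * torch.sigmoid(y).view(b, c, 1, 1)
```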


Subject(s)
Algorithms
13.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366094

ABSTRACT

Rails play a vital role in bearing and guiding high-speed trains, and keeping rail components in normal condition is essential for safe operation and maintenance. Fasteners are critical components for fixing the rails, so it is particularly important to detect whether they are in a normal state. Current rail-fastener detection models have several drawbacks, including poor generalization ability, large model size, and low detection efficiency. In view of this, an improved YOLOX-Nano rail-fastener-defect-detection method is proposed in this paper. The CA attention mechanism is added to the three output feature maps of CSPDarknet and to the enhanced feature extraction part of the Path Aggregation Feature Pyramid Network (PAFPN); Adaptively Spatial Feature Fusion (ASFF) is added after the PAFPN output feature maps, which further enhances both the semantic information of the high-level features and the fine-grained features of the bottom layer. The improved YOLOX-Nano model improves the AP value by 27.42% on fractured fasteners, 15.88% on displaced fasteners, and 12.96% on normal fasteners. Moreover, the mAP value is improved by 18.75% and is 14.75% higher than that of the two-stage Faster R-CNN model. In addition, compared with YOLOv7-tiny, the improved YOLOX-Nano model achieves a 13.56% improvement in mAP. Although the improved model adds a certain amount of computation, its detection speed reaches 54.35 fps, which is 30.54 fps and 32.33 fps faster than the Single-Shot Multi-Box Detector (SSD) and You Only Look Once v3 (YOLOv3) models, respectively. The improved YOLOX-Nano model enables accurate and rapid identification of rail-fastener defects, which can meet the needs of real-time detection. Furthermore, it has advantages for lightweight deployment on terminals for rail-fastener detection, thus providing a reference for image recognition and detection in other fields.
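
Adaptively Spatial Feature Fusion, added above, learns per-pixel softmax weights over the pyramid levels being fused; a simplified PyTorch sketch for three levels that have already been resized to a common resolution (the real ASFF also includes per-level channel compression):

```python
import torch
import torch.nn as nn

class SimpleASFF(nn.Module):
    """Weighted fusion of three same-sized feature maps with learned spatial weights."""
    def __init__(self, channels):
        super().__init__()
        self.weight_conv = nn.Conv2d(channels * 3, 3, kernel_size=1)

    def forward(self, f0, f1, f2):
        # f0, f1, f2: (B, C, H, W) feature maps already resized to the same shape.
        w = torch.softmax(self.weight_conv(torch.cat([f0, f1, f2], dim=1)), dim=1)
        return f0 * w[:, 0:1] + f1 * w[:, 1:2] + f2 * w[:, 2:3]
```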

14.
Sensors (Basel) ; 22(24)2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36560304

ABSTRACT

Steel is one of the most basic materials and plays an important role in the machinery industry. However, steel surface defects heavily affect its quality. The demand for surface-defect detectors draws much attention from researchers all over the world. Nevertheless, drawbacks remain: available datasets are either of limited access or small-scale public ones, and related works focus on developing models without deeply considering real-time applications. In this paper, we investigate the feasibility of applying state-of-the-art deep learning methods based on YOLO models as real-time steel surface-defect detectors. In particular, we compare the performance of YOLOv5, YOLOX, and YOLOv7, training them on the small-scale open-source NEU-DET dataset on a GPU RTX 2080. From the experimental results, YOLOX-s achieves the best accuracy of 89.6% mAP on the NEU-DET dataset. Then, we deploy the weights of the trained YOLO models on Nvidia devices to evaluate their real-time performance. Our experimental devices are the Nvidia Jetson Nano and Jetson Xavier AGX. We also apply several real-time optimization techniques (i.e., exporting to TensorRT, lowering the precision to FP16 or INT8, and reducing the input image size to 320 × 320) to increase detection speed (fps), at the cost of some mAP accuracy.
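
A typical path for the TensorRT deployment described above is to export the trained model to ONNX and then build an FP16 (or INT8) engine on the Jetson device; a generic sketch in which the model variable, file names, and input size are placeholders rather than details from the paper:

```python
import torch

# Placeholder for the trained YOLOX model in eval mode (load your own weights here).
model = torch.nn.Conv2d(3, 16, 3, padding=1).eval()

# 320x320 matches the reduced input size mentioned above.
dummy = torch.randn(1, 3, 320, 320)
torch.onnx.export(model, dummy, "yolox_s.onnx",
                  input_names=["images"], output_names=["output"],
                  opset_version=11)

# On the Jetson device, build an FP16 (or INT8, with a calibration set) engine:
#   trtexec --onnx=yolox_s.onnx --saveEngine=yolox_s_fp16.engine --fp16
```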


Subject(s)
Industries, Researchers, Humans, Steel, Machine Learning
15.
Sensors (Basel) ; 22(24)2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36560361

ABSTRACT

The detection of road facilities or roadside structures is essential for high-definition (HD) maps and intelligent transportation systems (ITSs). With the rapid development of deep-learning algorithms in recent years, deep-learning-based object detection techniques have provided more accurate and efficient performance, and have become an essential tool for HD map reconstruction and advanced driver-assistance systems (ADASs). Therefore, the performance evaluation and comparison of the latest deep-learning algorithms in this field is indispensable. However, most existing works in this area limit their focus to the detection of individual targets, such as vehicles or pedestrians and traffic signs, from driving view images. In this study, we present a systematic comparison of three recent algorithms for large-scale multi-class road facility detection, namely Mask R-CNN, YOLOx, and YOLOv7, on the Mapillary dataset. The experimental results are evaluated according to the recall, precision, mean F1-score and computational consumption. YOLOv7 outperforms the other two networks in road facility detection, with a precision and recall of 87.57% and 72.60%, respectively. Furthermore, we test the model performance on our custom dataset obtained from the Japanese road environment. The results demonstrate that models trained on the Mapillary dataset exhibit sufficient generalization ability. The comparison presented in this study aids in understanding the strengths and limitations of the latest networks in multiclass object detection on large-scale street-level datasets.
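
The recall, precision, and mean F1-score used above follow the usual definitions from per-class true-positive, false-positive, and false-negative counts; a small sketch with made-up counts:

```python
def prf1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example with hypothetical counts for one facility class.
print(prf1(tp=876, fp=124, fn=330))  # ~ (0.876, 0.726, 0.794)
```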


Subject(s)
Automobile Driving, Pedestrians, Humans, Algorithms, Culture, Intelligence
16.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146322

ABSTRACT

Domestic trash detection is an essential technology for achieving a smart city. Due to the complexity and variability of urban trash scenarios, existing trash detection algorithms suffer from low detection rates and high false positives, as well as the general problem of slow speed in industrial applications. This paper proposes an i-YOLOX model for domestic trash detection based on deep learning algorithms. First, a large number of real-life trash images were collected into a new trash image dataset. Second, the lightweight involution operator is incorporated into the feature extraction structure of the algorithm, which allows the feature extraction layer to establish long-distance feature relationships and adaptively extract channel features. In addition, the ability of the model to distinguish similar trash features is strengthened by adding the convolutional block attention module (CBAM) to the enhanced feature extraction network. Finally, an involution residual head structure in the detection head reduces gradient vanishing and accelerates the convergence of the model's loss, allowing the model to perform better classification and regression on the acquired feature layers. In this study, YOLOX-S is chosen as the baseline for each enhancement experiment. The experimental results show that, compared with the baseline algorithm, the mean average precision (mAP) of i-YOLOX is improved by 1.47%, the number of parameters is reduced by 23.3%, and the FPS is improved by 40.4%. In practical applications, the improved model achieves accurate recognition of trash in natural scenes, which further validates the generalization performance of i-YOLOX and provides a reference for future domestic trash detection research.
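
CBAM, added above to the enhanced feature extraction network, applies a channel-attention gate followed by a spatial-attention gate; a compact PyTorch sketch of the standard module (the reduction ratio of 16 is an assumption):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel gate then spatial gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```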


Subject(s)
Algorithms, Neural Networks (Computer), Cities
17.
Sci Rep ; 14(1): 10355, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38710770

ABSTRACT

Tunnel cracks are thin and narrow linear targets, and their pixel proportions in images are usually very low, less than 6%; therefore, a method is needed to better detect small crack targets. In this study, a crack detection method based on crack characteristics and an anchor-free framework is investigated. First, the characteristics of cracks are analyzed to obtain the real crack texture, interference noise texture, and targets appearing near each crack as the context information for the model to filter and remove noise. We discuss the crack detection performance of anchor-based and anchor-free algorithms. Then, an optimized anchor-free algorithm is proposed in this paper for crack detection. Based on the advantages of YOLOX-x, we add a semantic enhancement module to better use contextual information. The experimental results show that the anchor-free algorithm performs slightly better than other algorithms in crack detection situations. In addition, the proposed method displays better detection performance for slender and inconspicuous cracks, with an average precision of 0.858.

18.
Comput Biol Med ; 169: 107847, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141452

ABSTRACT

PROBLEM: Organoids are 3D cultures that are commonly used for biological and medical research in vitro due to their functional and structural similarity to source organs. The development of organoids can be assessed by morphological tests. However, manual analysis of organoid morphology requires intensive labor from professionals and is prone to observer discrepancies. AIM: Computer-assisted methods alleviate the pressure of manual labor, especially with the development of deep learning, the performance of morphological detection has been further improved. The aim of this paper is to automate the assessment of organoid morphology using deep learning techniques to reduce the labor pressure of professionals. METHODS: Based on the lightweight model YOLOX, a lightweight intestinal organoid detection model named Deep-Orga is proposed. First, the performance of the Deep-Orga model is compared with other classical models on the intestinal organoids dataset. Then, ablation experiments are used to validate the improvement of the model detection performance by the improved module. Finally, Deep-Orga is compared with other methods. RESULTS: Deep-Orga achieves optimal organoid detection with a partial increase in computational effort. Using Deep-Orga to replace the manual analysis process provides a new automated method for organoid morphology evaluation. CONCLUSION: Deep-Orga proposed in this paper is able to accurately assess organoid development, effectively relieving the labor pressure of professionals and avoiding the subjectivity of assessment. This paper demonstrates the potential application of deep learning in the field of organoid morphology analysis.


Subject(s)
Biomedical Research, Deep Learning, Labor (Obstetric), Pregnancy, Female, Humans, Organoids
19.
Front Plant Sci ; 15: 1338228, 2024.
Article in English | MEDLINE | ID: mdl-38606066

ABSTRACT

The accurate identification of maize crop row navigation lines is crucial for the navigation of intelligent weeding machinery, yet it faces significant challenges due to lighting variations and complex environments. This study proposes an optimized version of the YOLOX-Tiny single-stage detection network model for accurately identifying maize crop row navigation lines. It incorporates adaptive illumination adjustment and multi-scale prediction to enhance dense target detection. Visual attention mechanisms, including Efficient Channel Attention and Cooperative Attention modules, are introduced to better extract maize features. A Fast Spatial Pyramid Pooling module is incorporated to improve target localization accuracy. The Coordinate Intersection over Union loss function is used to further enhance detection accuracy. Experimental results demonstrate that the improved YOLOX-Tiny model achieves an average precision of 92.2 %, with a detection time of 15.6 milliseconds. This represents a 16.4 % improvement over the original model while maintaining high accuracy. The proposed model has a reduced size of 18.6 MB, representing a 7.1 % reduction. It also incorporates the least squares method for accurately fitting crop rows. The model showcases efficiency in processing large amounts of data, achieving a comprehensive fitting time of 42 milliseconds and an average angular error of 0.59°. The improved YOLOX-Tiny model offers substantial support for the navigation of intelligent weeding machinery in practical applications, contributing to increased agricultural productivity and reduced usage of chemical herbicides.
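
The least-squares fitting of crop rows mentioned above can be applied directly to the centers of the detected maize-plant boxes; a minimal NumPy sketch (regressing x on y keeps near-vertical rows well-posed; the sample points are hypothetical):

```python
import numpy as np

def fit_crop_row(box_centers):
    """Fit a navigation line x = a*y + b through detected plant box centers by least squares."""
    pts = np.asarray(box_centers, dtype=float)      # (N, 2) array of (x, y) centers
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)  # regress x on y
    return a, b

# Example with hypothetical centers of one crop row (image coordinates).
print(fit_crop_row([(310, 100), (318, 220), (325, 340), (333, 460)]))
```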

20.
Waste Manag ; 174: 462-475, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38113671

ABSTRACT

Efficient sorting and recycling of decoration waste are crucial for the industry's transformation, upgrading, and high-quality development. However, decoration waste can contain toxic materials and has greatly varying compositions. The traditional method of manual sorting for decoration waste is inefficient and poses health risks to sorting workers. It is therefore imperative to develop an accurate and efficient intelligent classification method to address these issues. To meet the demand for intelligent identification and classification of decoration waste, this paper applied the deep learning method You Only Look Once X (YOLOX) to the task and proposed an identification and classification framework of decoration waste (YOLOX-DW framework). The proposed framework was validated and compared using a multi-label image dataset of decoration waste, and a robot automatic sorting system was constructed for practical sorting experiments. The research results show that the proposed framework achieved a mean average precision (mAP) of 99.16 % for different components of decoration waste, with a detection speed of 39.23 FPS. Its classification efficiency on the robot sorting experimental platform reached 95.06 %, indicating a high potential for application and promotion. This provides a strategy for the intelligent detection, identification, and classification of decoration waste.


Subject(s)
Deep Learning, Humans, Recycling/methods