Results 1 - 20 of 437
1.
Nano Lett ; 24(14): 4132-4140, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38534013

ABSTRACT

Inspired by the retina, artificial optoelectronic synapses have groundbreaking potential for machine vision. The field-effect transistor is a crucial platform for optoelectronic synapses because it is highly sensitive to external stimuli and can modulate conductivity. Owing to their favorable optical absorption, perovskite materials have been widely employed for constructing optoelectronic synaptic transistors. However, reported optoelectronic synaptic transistors focus on the static processing of independent stimuli at different moments, whereas natural visual information consists of temporal signals. Here, we report CsPbBrI2 nanowire-based optoelectronic synaptic transistors to study, for the first time, the dynamic responses of artificial synaptic transistors to time-varying visual information. Moreover, building on this dynamic synaptic behavior, a hardware system is built that tracks the trajectory of moving objects with an accuracy of 85%. This work offers a new way to develop artificial optoelectronic synapses for the construction of dynamic machine vision systems.
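The dynamic, history-dependent response described above can be illustrated with a toy model (not the paper's device physics): each light pulse adds an exponentially decaying contribution to the synaptic current, so closely spaced pulses reinforce each other.

```python
import math

def psc(t, t_spikes, amp=1.0, tau=0.15):
    """Synaptic current at time t (s): each light pulse contributes an
    exponentially decaying component (a toy model, not the paper's
    device physics; amp and tau are illustrative values)."""
    return sum(amp * math.exp(-(t - ts) / tau) for ts in t_spikes if t >= ts)

# Paired-pulse facilitation index: the response after two closely
# spaced pulses relative to a single pulse, read out at t = 50 ms.
single = psc(0.05, [0.0])
paired = psc(0.05, [0.0, 0.04])
ppf = paired / single
```

A PPF index above 1 indicates the second pulse arrived before the first response decayed, which is the temporal-summation behavior a dynamic vision system can exploit for motion trajectories.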

2.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000899

ABSTRACT

The industrial manufacturing model is shifting from product-centric to customer-centric. Driven by customized requirements, product complexity and quality requirements have increased, challenging the applicability of traditional machine vision technology. Extensive research demonstrates the effectiveness of AI-based learning and image processing on specific objects or tasks, but few publications address the composite task of inspecting an integrated product, the traceability and improvability of methods, or the extraction and transfer of knowledge between different scenarios or tasks. To address this problem, this paper proposes a common, knowledge-driven, generic vision-inspection framework that standardizes product inspection as a process of information decoupling and adaptive metrics. Task-related object perception is planned as a multi-granularity, multi-pattern progressive alignment based on industry knowledge and structured tasks. Inspection is abstracted as a reconfigurable process of multi-sub-pattern space combination mapping and difference metrics under appropriate high-level strategies and experience. Finally, strategies for knowledge improvement and accumulation based on historical data are presented. The experiment demonstrates the process of generating a detection pipeline for complex products and continuously improving it through failure tracing and knowledge improvement. Compared to the (1.767°, 69.802 mm) and 0.883 obtained by state-of-the-art deep learning methods, the generated pipeline achieves a pose estimation ranging from (2.771°, 153.584 mm) to (1.034°, 52.308 mm) and a detection rate ranging from 0.462 to 0.927. Through verification on other imaging methods and industrial tasks, we show that the key to adaptability lies in mining the inherent commonalities of knowledge, multi-dimensional accumulation, and reapplication.

3.
Sensors (Basel) ; 24(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39065894

ABSTRACT

Currently, intelligent defect detection on massive volumes of grid transmission line inspection pictures using AI image recognition is an efficient and popular method. There are usually two technical routes for constructing defect detection models: one uses a lightweight network, which improves efficiency but generally targets only a few defect types and may reduce detection accuracy; the other uses a complex network model, which improves accuracy and can identify multiple defect types at once, but carries a large computational load and low efficiency. To maintain high detection accuracy within a lightweight structure, this paper proposes a lightweight, efficient multi-type defect detection method for transmission lines based on DCP-YOLOv8. The method employs deformable convolution (C2f_DCNv3) to enhance defect feature extraction and designs a re-parameterized cross-phase feature fusion structure (RCSP) to optimize and fuse high-level semantic features with low-level spatial features, improving the model's ability to recognize defects at different scales while significantly reducing the model parameters; additionally, it combines a dynamic detection head with deformable convolution v3 (DCNv3-Dyhead) to enhance feature expression and the use of contextual information, further improving detection accuracy. Experimental results show that, on a dataset containing 20 real transmission line defect types, the method raises the mean average precision (mAP@0.5) to 72.2%, an increase of 4.3% over the lightest baseline YOLOv8n model; the number of model parameters is only 2.8 M, a reduction of 9.15%; and the frame rate reaches 103 FPS, meeting real-time detection demands. In multi-type defect detection scenarios, the method effectively balances detection accuracy and efficiency while generalizing across defect types.
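The mAP@0.5 metric quoted above counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5; a minimal IoU routine:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2).
    Under mAP@0.5, a detection matches a ground-truth box when this
    value is at least 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```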

4.
Sensors (Basel) ; 24(4)2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38400446

ABSTRACT

This study presents a machine vision-based variable weeding system for plant-protection unmanned ground vehicles (UGVs) to address the pesticide waste and environmental pollution readily caused by traditional spraying machinery. The system uses fuzzy rules to adaptively modify the Kp, Ki, and Kd parameters of the PID control algorithm and combines them with an interleaved-period PWM controller to reduce the impact of nonlinear variations in water pressure on system performance and to improve the system's stability and control accuracy. After testing various image threshold segmentation and image graying algorithms, the normalized excess-green algorithm (2G-R-B) and the fast iterative threshold segmentation method were adopted as the best combination. This combination effectively distinguished vegetation from the background and thus improved the accuracy of the pixel extraction algorithm for vegetation distribution. Orthogonal tests using four representative spraying duty cycles (25%, 50%, 75%, and 100%) showed that the pressure variation was less than 0.05 MPa, the average spraying error was less than 2%, and the highest error was less than 5% throughout the test. Finally, the performance of the system was comprehensively evaluated through field trials. The evaluation showed that the system could adjust the spraying volume in real time according to the vegetation distribution under machine vision-based decision making, demonstrating the low cost and effectiveness of the designed variable weed control system.
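The excess-green index (2G-R-B) and a fast iterative (isodata) threshold can be sketched as follows; the exact normalization and stopping rule here are assumptions, not the authors' implementation:

```python
import numpy as np

def excess_green(rgb):
    """Normalized excess-green index 2G - R - B on an RGB image scaled
    to [0, 1]; vegetation pixels score high (channel-sum normalization
    is an assumption)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-9  # avoid division by zero on black pixels
    return 2 * g / s - r / s - b / s

def isodata_threshold(x, tol=1e-4):
    """Fast iterative (isodata) threshold: repeatedly set the threshold
    to the midpoint of the means of the two classes it induces."""
    t = x.mean()
    while True:
        lo, hi = x[x <= t], x[x > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

Pixels with an index above the converged threshold are counted as vegetation when deciding the spraying volume.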

5.
Sensors (Basel) ; 24(1)2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38203165

ABSTRACT

When a workpiece surface is strongly reflective, it is challenging to obtain accurate key measurements with non-contact visual measurement techniques because of poor image quality. In this paper, we propose a high-precision measurement method for shaft diameter based on enhanced-quality stripe images. By capturing two stripe images with different exposure times, we leverage their different characteristics: the results extracted from the low-exposure image are used to perform grayscale correction on the high-exposure image, improving the stripe grayscale distribution and yielding more accurate extraction of the center points. Incorporating different measurement positions and angles further enhances measurement precision and robustness. Additionally, ellipse fitting is employed to derive the shaft diameter. The method was applied to the profiles of different cross-sections and angles within the same shaft segment; to reduce the shape error of the shaft measurement, the average of these measurements was taken as the estimate of the segment's average diameter. In the experiments, the average shaft diameters determined by averaging elliptical estimations were compared with shaft diameters obtained using a coordinate measuring machine (CMM): the maximum and minimum errors were 18 µm and 7 µm, respectively; the average error was 11 µm; and the root mean squared error of the multiple measurement results was 10.98 µm. The measurement accuracy achieved is six times higher than that obtained from the unprocessed stripe images.
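The paper fits ellipses to the extracted stripe center points; as a simplified stand-in, a least-squares (Kåsa) circle fit shows how a diameter can be recovered from edge points:

```python
import numpy as np

def fit_circle(pts):
    """Kåsa least-squares circle fit: solve x^2 + y^2 = 2ax + 2by + c
    for the center (a, b) and radius. A simplified stand-in for the
    ellipse fitting used on shaft cross-section profiles."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, c = sol
    r = np.sqrt(c + a**2 + b**2)
    return (a, b), 2 * r  # center, diameter
```

Averaging the diameters fitted at several cross-sections, as the abstract describes, then reduces the influence of local shape error.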

6.
Sensors (Basel) ; 24(7)2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38610319

ABSTRACT

Object detection and tracking are pivotal tasks in machine learning, particularly within the domain of computer vision technologies. Despite significant advancements in object detection frameworks, challenges persist in real-world tracking scenarios, including object interactions, occlusions, and background interference. Many algorithms have been proposed to carry out such tasks; however, most struggle to perform well in the face of disturbances and uncertain environments. This research proposes a novel approach by integrating the You Only Look Once (YOLO) architecture for object detection with a robust filter for target tracking, addressing issues of disturbances and uncertainties. The YOLO architecture, known for its real-time object detection capabilities, is employed for initial object detection and centroid location. In combination with the detection framework, the sliding innovation filter, a novel robust filter, is implemented and postulated to improve tracking reliability in the face of disturbances. Specifically, the sliding innovation filter is implemented to enhance tracking performance by estimating the optimal centroid location in each frame and updating the object's trajectory. Target tracking traditionally relies on estimation theory techniques like the Kalman filter, and the sliding innovation filter is introduced as a robust alternative particularly suitable for scenarios where a priori information about system dynamics and noise is limited. Experimental simulations in a surveillance scenario demonstrate that the sliding innovation filter-based tracking approach outperforms existing Kalman-based methods, especially in the presence of disturbances. In all, this research contributes a practical and effective approach to object detection and tracking, addressing challenges in real-world, dynamic environments. 
The comparative analysis with traditional filters provides practical insights, laying the groundwork for future work aimed at advancing multi-object detection and tracking capabilities in diverse applications.
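A minimal 1D sketch of a sliding innovation filter with a constant-velocity model is shown below; the saturation bound `delta` and the position-only measurement model are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def sif_track(zs, dt=1.0, delta=0.5):
    """Sliding innovation filter sketch: predict like a Kalman filter,
    but form the gain from the pseudoinverse of H and a saturated
    innovation term, which bounds the update against disturbances."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
    H = np.array([[1.0, 0.0]])             # position-only measurement
    H_pinv = np.linalg.pinv(H)
    x = np.zeros(2)                        # state: [position, velocity]
    est = []
    for z in zs:
        x = F @ x                          # predict
        innov = z - (H @ x)                # innovation
        K = H_pinv * np.clip(np.abs(innov) / delta, 0.0, 1.0)
        x = x + (K @ innov)                # saturated update
        est.append(x[0])
    return est
```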

7.
Sensors (Basel) ; 24(5)2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38475009

ABSTRACT

Detecting parcels accurately and efficiently has always been a challenging task when unloading from trucks onto conveyor belts because of the diverse and complex ways in which parcels are stacked. Conventional methods struggle to quickly and accurately classify the various shapes and surface patterns of unordered parcels. In this paper, we propose a parcel-picking surface detection method based on deep learning and image processing for the efficient unloading of diverse and unordered parcels. Our goal is to develop a systematic image processing algorithm that emphasises the boundaries of parcels regardless of their shape, pattern, or layout. The core of the algorithm is the utilisation of RGB-D technology for detecting the primary boundary lines regardless of obstacles such as adhesive labels, tapes, or parcel surface patterns. For cases where detecting the boundary lines is difficult owing to narrow gaps between parcels, we propose using deep learning-based boundary line detection through the You Only Look at Coefficients (YOLACT) model. Using image segmentation techniques, the algorithm efficiently predicts boundary lines, enabling the accurate detection of irregularly sized parcels with complex surface patterns. Furthermore, even for rotated parcels, we can extract their edges through complex mathematical operations using the depth values of the specified position, enabling the detection of the wider surfaces of the rotated parcels. Finally, we validate the accuracy and real-time performance of our proposed method through various case studies, achieving mAP (50) values of 93.8% and 90.8% for randomly sized and rotationally covered boxes with diverse colours and patterns, respectively.

8.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732866

ABSTRACT

Electromagnetic actuation can support many fields of technology, such as robotics and biomedical applications. In this context, fully understanding the system behavior and proposing a low-cost package for feedback control is challenging. Modeling the electromagnetic force is particularly tricky because it is a nonlinear function of the actuated object's position and the coil's current. Measuring the actuated object's position in real time with the precision required for accurate motion control is also nontrivial. In this study, we propose a novel, cost-effective electromagnetic set-up that achieves position control via visual feedback. Under different experimental conditions, we vertically actuated a 10 mm diameter steel ball hanging on a low-stiffness spring, demonstrating good tracking performance (the position error remained within ±0.5 mm, with negligible phase delay in the best scenarios). The experimental results confirm the feasibility of the proposed set-up, which has minimal complexity and is realized with off-the-shelf, cost-effective components. For these reasons, this contribution helps make electromagnetic actuation easier to understand and apply.
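Visual-feedback position control of this kind typically closes the loop with a PID law; below is a minimal sketch against a toy first-order plant (the gains and plant response are made up for illustration, not the paper's tuned controller):

```python
def pid_step(err, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One update of a discrete PID controller; `state` carries the
    integral term and previous error between calls."""
    integ, prev = state
    integ += err * dt
    deriv = (err - prev) / dt
    u = kp * err + ki * integ + kd * deriv
    return u, (integ, err)

# Toy plant: the ball position responds proportionally to the control
# effort over each 10 ms step (made-up dynamics for illustration).
pos, target, state = 0.0, 1.0, (0.0, 0.0)
for _ in range(2000):
    u, state = pid_step(target - pos, state)
    pos += 0.01 * u
```

In the real set-up the error term would come from the camera's estimate of the ball position rather than a simulated plant.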

9.
Sensors (Basel) ; 24(3)2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38339701

ABSTRACT

In industrial production, manual assembly of workpieces suffers from low efficiency and high labor intensity, and some assembly steps pose a degree of danger to workers. At the same time, traditional machine learning algorithms struggle to adapt to the complexity of current industrial environments, and changes in the environment can greatly affect the accuracy of the robot's work. Therefore, this paper proposes a method combining machine vision with the YOLOv5 deep learning model to obtain the localization of the holes in a disk workpiece; after coordinate mapping, a robotic arm is controlled through ROS communication, which improves resistance to environmental interference and work efficiency while reducing the danger to workers. The system uses a camera to collect real-time images of targets in complex environments and then trains and processes them for recognition so that coordinate localization information can be obtained. This information is converted into coordinates in the robot coordinate system through hand-eye calibration, and the robot is then controlled to complete multi-hole localization and tracking via communication between the upper and lower computers. The results show high accuracy in the training and testing of the target object, and the control accuracy of the robotic arm is also relatively high. The method is strongly resistant to interference from complex industrial environments and exhibits feasibility and effectiveness. It lays a foundation for the automated installation of docking disk workpieces in industrial production and offers a favorable option for screw-positioning tasks in production and installation processes.
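The hand-eye calibration step amounts to applying a fixed homogeneous transform to camera-frame points; a minimal sketch (the transform values below are made up for illustration):

```python
import numpy as np

def camera_to_robot(p_cam, T):
    """Map a 3D point from camera coordinates into the robot base frame
    with a 4x4 homogeneous hand-eye transform T."""
    p = np.append(p_cam, 1.0)  # homogeneous coordinates
    return (T @ p)[:3]

# Illustrative transform: camera rotated 180 degrees about Z and
# offset 100 mm along X relative to the robot base (made-up numbers).
T = np.array([[-1, 0, 0, 100],
              [ 0, -1, 0,  0],
              [ 0, 0, 1,   0],
              [ 0, 0, 0,   1]], dtype=float)
```

In practice, T is estimated once from calibration poses and then reused for every detected hole center.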

10.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000997

ABSTRACT

This paper explores a data augmentation approach for images of rigid bodies, focusing on electrical equipment and similar industrial objects. Leveraging manufacturer-provided datasheets containing precise equipment dimensions, we employed straightforward algorithms to generate synthetic images, permitting expansion of the training dataset with views from potentially unlimited viewpoints. For scenarios lacking genuine target images, we conducted a case study using two well-known detectors representing two machine-learning paradigms, Viola-Jones (VJ) and You Only Look Once (YOLO), trained exclusively on datasets featuring synthetic images as the positive examples of the target equipment, namely lightning rods and potential transformers. The performance of both detectors was assessed using real images in both the visible and infrared spectra. YOLO consistently demonstrates F1 scores below 26% in both spectra, while VJ's scores lie between 38% and 61%. This performance discrepancy is discussed in view of each paradigm's strengths and weaknesses, and the relatively high scores of at least one detector are taken as empirical evidence in favor of the proposed data augmentation approach.

11.
Sensors (Basel) ; 24(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39001173

ABSTRACT

Microplastics (MPs, size ≤ 5 mm) have emerged as a significant worldwide concern, threatening marine and freshwater ecosystems, and the lack of MP detection technologies is notable. The main goal of this research is the development of a camera sensor for the detection of MPs and measuring their size and velocity while in motion. This study introduces a novel methodology involving computer vision and artificial intelligence (AI) for the detection of MPs. Three different camera systems, including fixed-focus 2D and autofocus (2D and 3D), were implemented and compared. A YOLOv5-based object detection model was used to detect MPs in the captured image. DeepSORT was then implemented for tracking MPs through consecutive images. In real-time testing in a laboratory flume setting, the precision in MP counting was found to be 97%, and during field testing in a local river, the precision was 96%. This study provides foundational insights into utilizing AI for detecting MPs in different environmental settings, contributing to more effective efforts and strategies for managing and mitigating MP pollution.
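Once DeepSORT links detections into tracks, particle speed follows from centroid displacement, frame rate, and image scale; a minimal sketch under an assumed fixed mm-per-pixel scale:

```python
import math

def track_velocity(track, fps, mm_per_px):
    """Mean speed (mm/s) of one tracked microplastic particle from its
    consecutive centroid positions in pixels. Assumes a fixed image
    scale; a real 3D set-up would correct for depth."""
    dists = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(dists) / len(dists) * fps * mm_per_px
```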

12.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123869

ABSTRACT

Machine vision is a desirable non-contact measurement method for hot forgings, but image segmentation remains challenging in performance and robustness because of the diversity of working conditions for hot forgings. This paper therefore proposes an efficient and robust active contour model and a corresponding image segmentation approach for forging images, verified through experiments that measure geometric parameters of forged parts. Specifically, three types of continuity parameters are defined based on the geometric continuity of the equivalent grayscale surfaces of forging images; from these, a new image force and external energy functional are proposed to form a new active contour model, Geometric Continuity Snakes (GC Snakes), which is more sensitive to the grayscale distribution characteristics of forging images and robustly improves the convergence of the active contour; additionally, a strategy for generating initial control points for GC Snakes is proposed, composing an efficient and robust image segmentation approach. Experimental results show that the proposed GC Snakes yields better segmentation than existing active contour models on forging images of different temperatures and sizes, providing better performance and efficiency in geometric parameter measurement for hot forgings. The maximum positioning and dimension errors of GC Snakes are 0.5525 mm and 0.3868 mm, respectively, compared with errors of 0.7873 mm and 0.6868 mm for the Snakes model.

13.
J Environ Manage ; 363: 121383, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38843728

ABSTRACT

In the forest industry, interspecific hybridization, such as Eucalyptus urograndis (Eucalyptus grandis × Eucalyptus urophylla) and Corymbia maculata × Corymbia torelliana, has led to the development of high-performing F1 generations. The successful breeding of these hybrids relies on verifying progenitor origins and confirming post-crossing, but conventional genotype identification methods are resource-intensive and result in seed destruction. As an alternative, multispectral imaging analysis has emerged as an efficient and non-destructive tool for seed phenotyping. This approach has demonstrated success in various crop seeds. However, identifying seed species in the context of forest seeds presents unique challenges due to their natural phenotypic variability and the striking resemblance between different species. This study evaluates the efficacy of spectral imaging analysis in distinguishing hybrid seeds of E. urograndis and C. maculata × C. torelliana from their progenitors. Four experiments were conducted: one for Corymbia spp. seeds, one for each Eucalyptus spp. batch separately, and one for the pooled batches. Multispectral images were acquired at 19 wavelengths within the spectral range of 365-970 nm. Classification models based on Linear Discriminant Analysis (LDA), Random Forest (RF), and Support Vector Machine (SVM) were created using reflectance features, combined with color, shape, and texture features, as well as nCDA-transformed features. The LDA algorithm, combining all features, provided the highest accuracy, reaching 98.15% for Corymbia spp., and 92.75%, 85.38%, and 86.00% for Eucalyptus batches one and two and the pooled batches, respectively. The study demonstrated the effectiveness of multispectral imaging in distinguishing hybrid seeds of Eucalyptus and Corymbia species. The seeds' spectral signature played a key role in this differentiation.
This technology holds great potential for non-invasively classifying forest seeds in breeding programs.


Subject(s)
Eucalyptus , Forests , Seeds , Hybridization, Genetic , Myrtaceae , Discriminant Analysis
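A two-class linear discriminant of the kind used above can be sketched in a few lines; the toy two-band "reflectance" data in the test are illustrative, not from the study:

```python
import numpy as np

def lda_fit(X, y):
    """Two-class linear discriminant: project onto w = Sw^-1 (m1 - m0),
    with a threshold at the midpoint of the projected class means.
    The small ridge term keeps Sw invertible on tiny samples."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thr

def lda_predict(X, w, thr):
    """Label samples by which side of the threshold they project to."""
    return (X @ w > thr).astype(int)
```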
14.
Molecules ; 29(16)2024 Aug 11.
Article in English | MEDLINE | ID: mdl-39202891

ABSTRACT

D-K-type bauxite from Guizhou can be used as an unburned ceramic, an adsorbent, and a geopolymer after low-temperature calcination. This study addresses the problem that the color of D-K-type bauxite changes after calcination at different temperatures. Digital image processing was used to extract the color characteristics of bauxite images after 10 min of calcination at various temperatures. We then analyzed changes in the chemical composition and micromorphology of bauxite before and after calcination and investigated the correlation between the image color characteristics and the composition changes. The test results indicated that after calcining bauxite at 500 °C to 1000 °C for 10 min, pronounced dehydration and decarburization reactions occurred. The main component gradually changed from diaspore to Al2O3, the chromaticity value of the image decreased from 0.0980 to 0.0515, the saturation value increased from 0.0161 to 0.2433, and the brightness value increased from 0.5890 to 0.7177. The results show that changes in bauxite color characteristics are strongly correlated with changes in composition, which is important, from an engineering viewpoint, for guiding bauxite calcination with digital image processing.
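Hue-type chromaticity, saturation, and brightness features can be read off a standard RGB-to-HSV conversion; whether the authors used exactly this color model is an assumption, and the color values below are illustrative:

```python
import colorsys

# Hue (chromaticity), saturation, and value (brightness) of one RGB
# pixel scaled to [0, 1] -- the kinds of color features tracked for
# the calcined bauxite images (illustrative color, not measured data).
h, s, v = colorsys.rgb_to_hsv(0.72, 0.65, 0.60)
```

Averaging these per-pixel features over a calibrated image region gives a single (chromaticity, saturation, brightness) triple per calcination temperature.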

15.
Morphologie ; 108(360): 100723, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37897941

ABSTRACT

Forensic odontologists use biological patterns to estimate chronological age for the judicial system. The age of majority is a legally significant period with a limited set of reliable oral landmarks. Currently, experts rely on the questionable development of third molars to assess whether litigants can be prosecuted as legal adults. Identification of new and novel patterns may illuminate features more dependably indicative of chronological age, which have, until now, remained unseen. Unfortunately, biased perceptions and limited cognitive capacity compromise the ability of researchers to notice new patterns. The present study demonstrates how artificial intelligence can break through identification barriers and generate new estimation modalities. A convolutional neural network was trained with 4003 panoramic-radiographs to sort subjects into 'under-18' and 'over-18' age categories. The resultant architecture identified legal adults with a high predictive accuracy equally balanced between precision, specificity and recall. Moving forward, AI-based methods could improve courtroom efficiency, stand as automated assessment methods and contribute to our understanding of biological ageing.


Subject(s)
Artificial Intelligence , Adult , Humans , Cell Movement
16.
Prev Med ; 175: 107660, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37573953

ABSTRACT

Basketball players must frequently perform various physical movements during a game, which burdens their bodies and can easily lead to sports injuries; preventing such injuries is therefore crucial in basketball teaching. This paper also studies basketball motion trajectory capture, which preserves the motion posture of the target person in three-dimensional space. Because machine vision-based motion capture systems often encounter occlusion or self-occlusion in application scenes, human motion capture remains a challenging research problem. This article designs a multi-perspective human motion trajectory capture framework that uses a deep learning-based two-dimensional human pose estimation algorithm to estimate the position distribution of human joint points in the two-dimensional image from each perspective. By combining knowledge of the camera poses from the multiple perspectives, the joint points are lifted to their three-dimensional spatial distribution, yielding the final estimate of the target's 3D pose. The article applies research results from neural networks and IoT devices to basketball motion capture methods, further developing basketball motion capture systems.
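Lifting 2D joint detections from multiple calibrated views to 3D is commonly done with linear (DLT) triangulation; a minimal two-view sketch (the projection matrices in the test are illustrative, not a real camera rig):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one joint from two calibrated
    views: each 3x4 projection matrix P and pixel (u, v) contribute
    two rows of the homogeneous system A X = 0, solved via SVD."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]              # null-space vector of A
    return X[:3] / X[3]     # dehomogenize
```

Running this per joint across all camera pairs, and averaging or refining the results, produces the 3D pose described above.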

17.
Eur Spine J ; 32(11): 3987-3995, 2023 11.
Article in English | MEDLINE | ID: mdl-37428212

ABSTRACT

PURPOSE: To determine if the novel 3D Machine-Vision Image Guided Surgery (MvIGS) (FLASH™) system can reduce intraoperative radiation exposure while improving surgical outcomes when compared to 2D fluoroscopic navigation. METHODS: Clinical and radiographic records of 128 patients (≤ 18 years of age) who underwent posterior spinal fusion (PSF), utilising either MvIGS or 2D fluoroscopy, for severe idiopathic scoliosis were retrospectively reviewed. Operative time was analysed using the cumulative sum (CUSUM) method to evaluate the learning curve for MvIGS. RESULTS: Between 2017 and 2021, 64 patients underwent PSF using pedicle screws with 2D fluoroscopy and another 64 with the MvIGS. Age, gender, BMI, and scoliosis aetiology were comparable between the two groups. The CUSUM method estimated that the MvIGS learning curve with respect to operative time was 9 cases. This curve consisted of 2 phases: Phase 1 comprises the first 9 cases and Phase 2 the remaining 55 cases. Compared to 2D fluoroscopy, MvIGS reduced intraoperative fluoroscopy time, radiation exposure, estimated blood loss, and length of stay by 53%, 62%, 44%, and 21%, respectively. Scoliosis curve correction was 4% higher in the MvIGS group, without any increase in operative time. CONCLUSION: MvIGS for screw insertion in PSF contributed to a significant reduction in intraoperative radiation exposure and fluoroscopy time, as well as blood loss and length of stay. The real-time feedback and ability to visualize the pedicle in 3D with MvIGS enabled greater curve correction without increasing the operative time.


Subject(s)
Pedicle Screws , Scoliosis , Spinal Fusion , Surgery, Computer-Assisted , Humans , Scoliosis/diagnostic imaging , Scoliosis/surgery , Retrospective Studies , Blood Loss, Surgical/prevention & control , Spinal Fusion/methods , Fluoroscopy/methods , Surgery, Computer-Assisted/methods , Radiation, Ionizing
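The CUSUM learning-curve analysis above accumulates each case's deviation of operative time from the series mean; the phase break is read off the curve's peak. A minimal sketch with made-up times:

```python
def cusum(times):
    """Cumulative sum of deviations from the series mean. In
    learning-curve analysis the peak of this curve marks the
    transition from the learning phase to the proficiency phase."""
    mean = sum(times) / len(times)
    out, run = [], 0.0
    for t in times:
        run += t - mean
        out.append(run)
    return out
```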
18.
Sensors (Basel) ; 23(16)2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37631662

ABSTRACT

Although craft and home brewing have fueled the beer renaissance in the last decade, affordable, reliable, and simple sensing equipment for such breweries is limited. Thus, this manuscript is motivated by improving the bottle-filling process in such settings, with the objective of developing a liquid level sensor based on a novel application of the known optical phenomenon of light refraction. Based on the different refraction indices of liquid and air (and the critical angle given by Snell's law), along with a novel LED light source positioning, a reliable liquid level sensor system was built with the aid of an embedded microcontroller. The operating principle is general and can be used in applications beyond the proposed one. The proposed method was extensively tested in laboratory and limited production settings at a speed of 7 Hz using different liquids and container shapes. It was compared for accuracy against other sensing principles such as ultrasound, infrared, and time-of-flight, and demonstrated comparable or better performance, with a height error ranging between -0.1534 mm in static conditions and 1.608 mm in realistic dynamic conditions, and good repeatability on the production line with a 4.3 mm standard deviation of the mean.
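The critical angle at the liquid-air interface follows directly from Snell's law:

```python
import math

def critical_angle_deg(n_liquid, n_air=1.0):
    """Critical angle for total internal reflection at a liquid-air
    interface (Snell's law): sin(theta_c) = n_air / n_liquid."""
    return math.degrees(math.asin(n_air / n_liquid))

# Light striking the interface beyond theta_c is totally internally
# reflected, which flips the optical reading when liquid replaces air
# at the sensing height (n = 1.333 is the standard value for water).
theta_water = critical_angle_deg(1.333)
```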

19.
Sensors (Basel) ; 23(18)2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37765975

ABSTRACT

Sow body condition scoring has been confirmed as a vital procedure in sow management. A timely and accurate assessment of a sow's body condition helps determine nutritional supply and is critical to enhancing sow reproductive performance. Manual sow body condition scoring methods, which are time-consuming and labor-intensive, have been extensively employed on large-scale sow farms. To address this problem, a dual neural network-based automatic scoring method for sow body condition was developed in this study. The method aims to enhance the ability to capture local features and global information in sow images by combining CNN and transformer networks. Moreover, it introduces a CBAM module to help the network pay more attention to crucial feature channels while suppressing attention to irrelevant channels. To tackle the problems of imbalanced categories and mislabeled body condition data, the original loss function was replaced with an optimized focal loss function. Model testing showed that sow body condition classification achieved an average precision of 91.06%, an average recall of 91.58%, and an average F1 score of 91.31%. Comprehensive comparative experiments suggest that the proposed method yields the best performance on this dataset. The method developed in this study achieves automatic scoring of sow body condition and shows broad and promising applications.
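The focal loss used to counter class imbalance down-weights easy, well-classified examples; a minimal binary form (the gamma and alpha values below are the common defaults, not necessarily the authors' tuned settings):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction p (probability of the
    positive class) and label y in {0, 1}: the factor (1 - p_t)^gamma
    shrinks the loss of confident, correct predictions so training
    focuses on hard or minority-class examples."""
    p_t = p if y == 1 else 1 - p
    a_t = alpha if y == 1 else 1 - alpha
    return -a_t * (1 - p_t) ** gamma * math.log(p_t)
```

With gamma set to 0 and alpha to 1, this reduces to ordinary cross-entropy.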

20.
Sensors (Basel) ; 23(19)2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37836953

ABSTRACT

This paper discusses a semantic segmentation framework and shows its application in agricultural intelligence, such as providing environmental awareness for agricultural robots to work autonomously and efficiently. We propose an ensemble framework based on the bagging strategy and the UNet network, using RGB and HSV color spaces. We evaluated the framework on our self-built dataset (Maize) and a public dataset (Sugar Beets). Then, we compared it with UNet-based methods (single RGB and single HSV), DeepLab V3+, and SegNet. Experimental results show that our ensemble framework can synthesize the advantages of each color space and obtain the best IoUs (0.8276 and 0.6972) on the datasets (Maize and Sugar Beets), respectively. In addition, including our framework, the UNet-based methods have faster speed and a smaller parameter space than DeepLab V3+ and SegNet, which are more suitable for deployment in resource-constrained environments such as mobile robots.
