1.
Sensors (Basel) ; 24(3)2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38339611

ABSTRACT

Mechanical weed management is a laborious task that requires manpower and carries risks when conducted within orchard rows. Intrarow weeding must still be performed by manual labor because the confined structure of orchard rows, with their nets and poles, restricts the movement of riding mowers. Autonomous robotic weeders, in turn, face challenges identifying uncut weeds because poles and tree canopies obstruct Global Navigation Satellite System (GNSS) signals. A properly designed intelligent vision system could achieve the desired outcome by enabling an autonomous weeder to operate in uncut sections. Therefore, the objective of this study was to develop a vision module, using a custom-trained dataset on YOLO instance segmentation algorithms, to support autonomous robotic weeders in recognizing uncut weeds and obstacles (i.e., fruit tree trunks and fixed poles) within rows. The training dataset was acquired from a pear orchard at the Tsukuba Plant Innovation Research Center (T-PIRC), University of Tsukuba, Japan. In total, 5000 images were preprocessed and labeled for training and testing with YOLO models. Four edge-device-dedicated YOLO instance segmentation models (YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg) were evaluated for real-time application on an autonomous weeder. A comparison study evaluated all YOLO models in terms of detection accuracy, model complexity, and inference speed. The smaller YOLOv5- and YOLOv8-based models were found to be more efficient than the larger ones, and YOLOv8n-seg was selected as the vision module for the autonomous weeder. In the evaluation, YOLOv8n-seg had better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg was also acceptable when deployed on a resource-constrained device suitable for robotic weeders. The results indicated that the proposed deep learning approach provides detection accuracy and inference speed adequate for object recognition on edge devices during intrarow weeding operations in orchards.


Subject(s)
Algorithms, Culture, Fruit, Intelligence, Japan, Plant Weeds
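As a rough illustration of how such a vision module might be invoked on an edge device, the sketch below streams a YOLOv8n-seg model over camera frames with the ultralytics package; the weights filename and the class labels are assumptions for illustration, not artifacts from the study.

```python
# Hypothetical sketch: streaming inference with a custom YOLOv8n-seg model.
# "yolov8n-seg-weeds.pt" and the class names are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n-seg-weeds.pt")  # custom-trained segmentation weights

# stream=True yields results frame by frame, which suits onboard processing.
for result in model.predict(source=0, imgsz=640, conf=0.5, stream=True):
    for box, cls in zip(result.boxes.xyxy, result.boxes.cls):
        label = model.names[int(cls)]  # e.g., uncut weed, trunk, pole
        print(label, [round(v) for v in box.tolist()])  # pass to the planner
```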
2.
Environ Sci Pollut Res Int ; 31(5): 7902-7933, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38168854

ABSTRACT

This study aims to determine the eco-friendliness of microalgae-based renewable energy production in several scenarios based on life cycle assessment (LCA). The LCA provides critical data for sustainable decision-making and energy requirement analysis, including the net energy ratio (NER) and cumulative energy demand (CED). The Centrum voor Milieuwetenschappen Leiden (CML) IA-Baseline method in SimaPro v9.3.0.3® software was used for the environmental impact assessment and energy analysis of biofuel production from native polyculture microalgae biomass at the Bojongsoang municipal wastewater treatment plant (WWTP) in Bandung, Indonesia. Three scenarios were analyzed: (1) the current scenario; (2) the algae scenario without waste heat and carbon dioxide (CO2); and (3) the algae scenario with waste heat and CO2. Waste heat and CO2 were obtained from an industrial zone near the WWTP. The results disclosed that the microalgae scenario with waste heat and CO2 utilization is the most promising, with the lowest environmental impact (-0.139 kg CO2eq/MJ), a positive energy balance of 1.23 MJ/m3 of wastewater (NER > 1), and lower CED values across various impact categories. This indicates that utilizing waste heat and CO2 has a positive impact on energy efficiency. Based on the environmental impact, NER, and CED values, this study suggests that the microalgae scenario with waste heat and CO2 is the more feasible and sustainable option and could be implemented at the Bojongsoang WWTP.


Subject(s)
Microalgae, Water Purification, Animals, Carbon Dioxide, Indonesia, Biofuels, Biomass, Life Cycle Stages
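For reference, the net energy ratio cited above is conventionally defined as the ratio of usable energy output to total energy input over the life cycle, so NER > 1 marks a net energy gain:

```latex
% Conventional definition of the net energy ratio (NER).
\mathrm{NER} = \frac{E_{\text{output}}}{E_{\text{input}}},
\qquad \mathrm{NER} > 1 \;\Rightarrow\; \text{net energy gain}
```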
3.
Sensors (Basel) ; 23(20)2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37896610

ABSTRACT

Sooty mold is a common disease of citrus plants, characterized by black fungal growth on fruits, leaves, and branches that reduces the plant's ability to carry out photosynthesis. On small leaves, sooty mold is very difficult to detect at the early stages. Deep learning-based image recognition techniques have the potential to identify and diagnose pest damage and diseases such as sooty mold. Recent studies have used advanced, expensive hyperspectral or multispectral cameras attached to UAVs to examine plant canopies, and mid-range cameras to capture close-up images of infected leaves. To bridge the gap in capturing canopy-level images with affordable camera sensors, this study used a low-cost home surveillance camera combined with deep learning algorithms to monitor and detect sooty mold infection on the citrus canopy. To overcome the challenges posed by varying light conditions, the main reason specialized cameras are used, images were collected at night using the camera's built-in night vision feature. A total of 4200 sliced night-captured images were used for training, 200 for validation, and 100 for testing with the YOLOv5m, YOLOv7, and CenterNet models for comparison. The results showed that YOLOv7 was the most accurate in detecting sooty mold at night, with 74.4% mAP, compared with YOLOv5m (72%) and CenterNet (70.3%). The models were also tested on preprocessed (unsliced) night images and on day-captured sliced and unsliced images. Testing on the preprocessed (unsliced) night images showed the same trend as the training results, with YOLOv7 performing best compared with YOLOv5m and CenterNet. In contrast, testing on the day-captured images had underwhelming outcomes for both sliced and unsliced images. In general, YOLOv7 performed best in detecting sooty mold infections on the citrus canopy at night and showed promising potential for real-time orchard disease monitoring and detection. Moreover, this study demonstrated that a cost-effective surveillance camera and deep learning algorithms can accurately detect sooty mold at night, enabling growers to monitor and identify occurrences of the disease at the canopy level.


Subject(s)
Citrus, Deep Learning, Trees, Fungi, Algorithms
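A minimal sketch of the image-slicing step described above, assuming fixed-size square tiles; the tile size and paths are illustrative, not the study's actual preprocessing parameters.

```python
# Cut a large night capture into fixed-size tiles for training.
# Tile size and output layout are assumptions.
from pathlib import Path
from PIL import Image

def slice_image(path: str, tile: int = 640, out_dir: str = "tiles") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    img = Image.open(path)
    w, h = img.size
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            img.crop((x, y, x + tile, y + tile)).save(
                out / f"{Path(path).stem}_{x}_{y}.png")
```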
4.
Sensors (Basel) ; 23(10)2023 May 16.
Article in English | MEDLINE | ID: mdl-37430726

ABSTRACT

Traditional Japanese orchards control the growth height of fruit trees for the convenience of farmers, which is unfavorable for the operation of medium- and large-sized machinery. A compact, safe, and stable spraying system could offer a solution for orchard automation. In the complex orchard environment, the dense tree canopy not only obstructs the GNSS signal but also creates low-light conditions that can impair object recognition by ordinary RGB cameras. To overcome these disadvantages, this study used LiDAR as the sole sensor for a prototype robot navigation system. Density-based spatial clustering of applications with noise (DBSCAN), K-means, and random sample consensus (RANSAC) machine learning algorithms were used to plan the robot navigation path in a facilitated artificial-tree-based orchard system. Pure pursuit tracking and an incremental proportional-integral-derivative (PID) strategy were used to calculate the vehicle steering angle, as sketched below. In field tests on a concrete road, a grass field, and the facilitated artificial-tree-based orchard, with several formations of left and right turns evaluated separately, the position root mean square errors (RMSE) were as follows: on the concrete road, 12.0 cm for right turns and 11.6 cm for left turns; on grass, 12.6 cm for right turns and 15.5 cm for left turns; and in the orchard, 13.8 cm for right turns and 11.4 cm for left turns. The vehicle was able to calculate the path in real time based on the positions of the objects, operate safely, and complete the task of pesticide spraying.
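As a sketch of the steering computation named above, pure pursuit under a bicycle model derives the steering angle from a lookahead point expressed in the vehicle frame; the wheelbase value here is an assumption, not the platform's actual geometry.

```python
# Pure pursuit steering-angle sketch for a bicycle-model vehicle.
# The lookahead point (x, y) is in the vehicle frame (x forward, y left).
import math

def pure_pursuit_steer(x: float, y: float, wheelbase: float = 1.0) -> float:
    ld = math.hypot(x, y)              # distance to the lookahead point
    alpha = math.atan2(y, x)           # heading error toward the point
    # Classic pure pursuit law: delta = atan(2 L sin(alpha) / ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

print(math.degrees(pure_pursuit_steer(2.0, 0.4)))  # small left correction
```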

5.
Sensors (Basel) ; 23(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447645

ABSTRACT

Sorting seedlings is laborious and requires attention to identify damage. Separating healthy seedlings from damaged or defective ones is a critical task in indoor farming systems. However, sorting seedlings manually can be challenging and time-consuming, particularly under complex lighting conditions. Different indoor lighting conditions can alter the visual appearance of seedlings, making it difficult for human operators to identify and sort them consistently and accurately. Therefore, the objective of this study was to develop a defective-lettuce-seedling detection system for different indoor cultivation lighting systems, using deep learning algorithms to automate the seedling-sorting process. Seedling images were captured under different indoor lighting conditions, including white, blue, and red. The detection approach applied and compared several deep learning algorithms, specifically CenterNet, YOLOv5, YOLOv7, and Faster R-CNN, to detect defective seedlings in indoor farming environments. The results demonstrated that the mean average precision (mAP) of YOLOv7 (97.2%) was the highest, and YOLOv7 could accurately detect defective lettuce seedlings compared with CenterNet (82.8%), YOLOv5 (96.5%), and Faster R-CNN (88.6%). In terms of detection under different light variables, YOLOv7 also showed the highest detection rate under white and red/blue/white lighting. Overall, defective lettuce seedling detection by YOLOv7 shows great potential for introducing automated seedling-sorting and classification systems under actual indoor farming conditions. Defective-seedling detection can improve the efficiency of seedling-management operations in indoor farming.


Subject(s)
Deep Learning, Lighting, Humans, Lighting/methods, Seedlings, Lactuca, Algorithms
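A minimal sketch of the per-lighting evaluation described above: detections are grouped by their lighting label and accuracy is summarized per group. The data layout is hypothetical, not the study's evaluation code.

```python
# Summarize detection accuracy per lighting condition (white/red/blue).
# records: iterable of (lighting_label, correct: bool) pairs (hypothetical).
from collections import defaultdict

def accuracy_by_lighting(records):
    hits, totals = defaultdict(int), defaultdict(int)
    for lighting, correct in records:
        totals[lighting] += 1
        hits[lighting] += int(correct)
    return {k: hits[k] / totals[k] for k in totals}

print(accuracy_by_lighting([("white", True), ("red", False), ("white", True)]))
```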
6.
Sensors (Basel) ; 23(8)2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37112151

ABSTRACT

Recognition and 3D positional estimation of apples during harvesting from a robotic platform on a moving vehicle are still challenging. Fruit clusters, branches, foliage, low resolution, and varying illumination are unavoidable and cause errors under different environmental conditions. Therefore, this research aimed to develop a recognition system based on training datasets from an augmented, complex apple orchard. The recognition system was evaluated using deep learning algorithms established from convolutional neural networks (CNNs). The dynamic accuracy of modern artificial neural networks in providing 3D coordinates for deploying robotic arms at different forward-moving speeds of an experimental vehicle was investigated to compare recognition and tracking localization accuracy. In this study, a RealSense D455 RGB-D camera was selected to acquire the 3D coordinates of each detected and counted apple attached to artificial trees placed in the field, in a structure specially designed for ease of robotic harvesting. The state-of-the-art YOLO (You Only Look Once) v4, YOLOv5, and YOLOv7 models and EfficientDet were utilized for object detection with the 3D camera. The Deep SORT algorithm was employed for tracking and counting detected apples at perpendicular, 15°, and 30° orientations. The 3D coordinates were obtained for each tracked apple when the onboard camera passed the reference line set in the middle of the image frame. To optimize harvesting at three different speeds (0.052 m/s, 0.069 m/s, and 0.098 m/s), the accuracy of the 3D coordinates was compared for the three forward-moving speeds and three camera angles (15°, 30°, and 90°). The mean average precision (mAP@0.5) values of YOLOv4, YOLOv5, YOLOv7, and EfficientDet were 0.84, 0.86, 0.905, and 0.775, respectively. The lowest root mean square error (RMSE) was 1.54 cm, for apples detected by EfficientDet at a 15° orientation and a speed of 0.098 m/s. In terms of counting apples, YOLOv5 and YOLOv7 showed a higher number of detections in outdoor dynamic conditions, achieving a counting accuracy of 86.6%. We concluded that the EfficientDet deep learning algorithm at a 15° orientation in 3D coordinates can be employed for further robotic arm development for harvesting apples in a specially designed orchard.
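A minimal sketch of how a detection's pixel could be converted into the 3D coordinates described above with a RealSense depth camera (pyrealsense2); the pixel location is a placeholder that would come from the detector's bounding-box centre.

```python
# Deproject a detected apple's centre pixel into 3D camera coordinates
# using an aligned RealSense depth frame. (u, v) is a placeholder pixel.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    align = rs.align(rs.stream.color)          # align depth to the color frame
    frames = align.process(pipeline.wait_for_frames())
    depth = frames.get_depth_frame()
    intrin = depth.profile.as_video_stream_profile().get_intrinsics()

    u, v = 320, 240                            # assumed bounding-box centre
    z = depth.get_distance(u, v)               # range in metres
    x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [u, v], z)
    print(f"apple at x={x:.3f} m, y={y:.3f} m, z={z:.3f} m")
finally:
    pipeline.stop()
```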

7.
Sensors (Basel) ; 22(19)2022 Sep 24.
Article in English | MEDLINE | ID: mdl-36236351

ABSTRACT

Lettuce grown in indoor farms under fully artificial light is susceptible to a physiological disorder known as tip-burn. A vital factor in plant growth in indoor farms is the ability to adjust the growing environment to promote faster crop growth. However, this rapid growth process exacerbates the tip-burn problem, especially for lettuce. This paper presents automated detection of tip-burn in lettuce grown indoors using a deep-learning algorithm based on a one-stage object detector. Tip-burn lettuce images were captured under various light and indoor background conditions (white, red, and blue LEDs). After augmentation, a total of 2333 images were generated and used to train three different one-stage detectors: CenterNet, YOLOv4, and YOLOv5. On the training dataset, all models exhibited a mean average precision (mAP) greater than 80%, except YOLOv4. The most accurate model for detecting tip-burn was YOLOv5, with the highest mAP of 82.8%. The performance of the trained models was also evaluated on images taken under different indoor farm light settings, including white, red, and blue LEDs. Again, YOLOv5 performed significantly better than CenterNet and YOLOv4. Therefore, tip-burn on lettuce grown in indoor farms under different lighting conditions can be recognized by deep-learning algorithms with reliable overall accuracy. Early detection of tip-burn can help growers readjust the lighting and controlled-environment parameters to increase the freshness of lettuce grown in plant factories.


Subject(s)
Burns, Deep Learning, Algorithms, Lactuca, Light, Photosynthesis/physiology, Plant Leaves
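As an illustration of the augmentation step that expanded the dataset to 2333 images, a box-aware pipeline of the kind commonly used for detector training is sketched below; the specific transforms and parameters are assumptions, not the study's recipe.

```python
# Hypothetical box-aware augmentation pipeline (albumentations).
# Transforms and parameters are illustrative only.
import albumentations as A

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.5),  # mimic white/red/blue LED variation
        A.Rotate(limit=15, p=0.5),
    ],
    # Keep bounding boxes consistent with the transformed image.
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)
# augmented = augment(image=img, bboxes=boxes, labels=classes)
```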
8.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298055

ABSTRACT

Freshness is one of the most important parameters for assessing the quality of avian eggs. Available techniques to estimate the degradation of albumen and the enlargement of the air cell are either destructive or unsuitable for high-throughput applications. The aim of this research was to introduce a new approach to evaluating the air cell of quail eggs for freshness assessment as a fast, noninvasive, and nondestructive method. A new methodology was proposed using a thermal microcamera and deep learning object detection algorithms. To evaluate the new method, we stored 174 quail eggs and collected thermal images 30, 50, and 60 days after the labeled expiration date. These data, 522 images in total, were expanded to 3610 by image augmentation techniques and then split into training and validation samples to produce models of the deep learning algorithms "You Only Look Once" versions 4 and 5 (YOLOv4 and YOLOv5) and EfficientDet. We tested the models on a new dataset composed of 60 eggs that were kept for 15 days after the labeled expiration date. We validated our methodology by measuring the air cell area highlighted in the thermal images at the pixel level and comparing it with the difference in egg weight between the first day of storage and after 10 days under accelerated aging conditions. Statistical analysis showed that the two variables (air cell area and weight) were negatively correlated (R2 = 0.676). The deep learning models could predict freshness with F1 scores of 0.69, 0.89, and 0.86 for the YOLOv4, YOLOv5, and EfficientDet models, respectively. The new freshness assessment methodology showed that the best model reclassified 48.33% of our testing dataset; those expired eggs could therefore have their expiration dates extended for another 2 weeks from the original label date.


Subject(s)
Deep Learning, Quail, Animals, Eggs, Albumins
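A toy sketch of the pixel-level validation described above: the air cell area is approximated as the count of pixels above a threshold in the thermal image and then correlated against egg weight. The threshold and data arrays are entirely hypothetical.

```python
# Approximate the air cell area as thresholded pixel counts, then check
# its correlation with egg weight. All values here are made up.
import numpy as np
from scipy.stats import pearsonr

def air_cell_area(thermal: np.ndarray, threshold: float) -> int:
    return int((thermal > threshold).sum())            # area in pixels

areas = np.array([1200, 1450, 1700, 2100, 2400])       # example pixel areas
weights = np.array([9.8, 9.5, 9.1, 8.6, 8.2])          # example egg weights (g)
r, _p = pearsonr(areas, weights)
print(f"r = {r:.3f}, R^2 = {r**2:.3f}")                # expect a negative r
```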
9.
Sensors (Basel) ; 22(15)2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35957378

ABSTRACT

Poultry production utilizes many available technologies for farm-industry automation and sanitary control. However, there is a lack of robust techniques and affordable equipment for avian embryo detection and sexual segregation at the early stages. In this work, we aimed to evaluate the potential of thermal microcameras for detecting embryos in quail eggs via thermal images during the first 168 h (7 days) of incubation. We propose a methodology for collecting data during the incubation period. Additionally, to support the visual analysis, YOLO deep learning object detection algorithms were applied to detect unfertilized eggs; the results showed their potential to distinguish fertilized from unfertilized eggs during the incubation period after filtering the radiometric images. We compared trained YOLOv4, YOLOv5, and SSD-MobileNet V2 models. The mAP@0.50 values of YOLOv4, YOLOv5, and SSD-MobileNet V2 were 98.62%, 99.5%, and 91.8%, respectively. We also compared three testing datasets for different egg-rotation intervals, as our hypothesis was that fewer turning periods could improve the visualization of fertilized-egg features; three treatments were applied: 1.5 h, 6 h, and 12 h. The results showed that turning the eggs at different intervals did not exhibit a linear relation: the YOLOv4 F1 score was 0.569 for the 12 h period, 0.404 for the 6 h period, and 0.384 for the 1.5 h period. The YOLOv5 F1 scores for 12 h, 6 h, and 1.5 h were 1, 0.545, and 0.386, respectively. SSD-MobileNet V2 achieved F1 scores of 0.60 for 12 h, 0.22 for 6 h, and 0 for the 1.5 h turning period.


Subject(s)
Deep Learning, Quail, Animals, Eggs
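For reference, the F1 scores quoted here and in the neighboring abstracts follow the standard definition, the harmonic mean of precision (P) and recall (R):

```latex
% Standard F1 definition: harmonic mean of precision and recall.
F_1 = \frac{2PR}{P + R}
```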
10.
Sensors (Basel) ; 22(11)2022 May 31.
Article in English | MEDLINE | ID: mdl-35684807

ABSTRACT

In orchard fruit picking systems for pears, the challenge is to identify the full shape of the soft fruit to avoid injuries while using robotic or automatic picking systems. Advancements in computer vision have brought the potential to train for different shapes and sizes of fruit using deep learning algorithms. In this research, a fruit recognition method for robotic systems was developed to identify pears in a complex orchard environment, using a 3D stereo camera combined with Mask Region-Convolutional Neural Network (Mask R-CNN) deep learning technology to obtain targets. The experiment used 9054 RGBA images (3018 original and 6036 augmented), divided into training, validation, and testing sets. The dataset was collected under high-light (9-10 am JST) and low-light (6-7 pm JST) conditions in August 2021 (summertime) and split into training, validation, and test sets at a ratio of 6:3:1. All images were taken with a 3D stereo camera offering PERFORMANCE, QUALITY, and ULTRA depth modes; the PERFORMANCE mode was used to capture the datasets, with the left camera generating depth images and the right camera generating the original images. This research also compared two R-CNN variants (Mask R-CNN and Faster R-CNN) in terms of mean average precision (mAP) on the same datasets with the same split ratio. Mask R-CNN was trained for 80 epochs of 500 steps each, and Faster R-CNN was trained for 40,000 steps. For pear recognition, Mask R-CNN achieved mAPs of 95.22% on the validation set and 99.45% on the testing set, whereas Faster R-CNN achieved 87.9% on the validation set and 87.52% on the testing set. The two models differed in performance on clustered versus individual pears: Mask R-CNN outperformed Faster R-CNN when pears were densely clustered in the complex orchard. Therefore, the 3D stereo camera-based dataset combined with the Mask R-CNN vision algorithm had high accuracy in detecting individual pears within clusters in a complex orchard environment.


Subject(s)
Pyrus, Algorithms, Fruit, Neural Networks (Computer)
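A small sketch of the 6:3:1 split described above; the shuffle seed and the list-based layout are assumptions, not the study's code.

```python
# Shuffle image paths and split them 6:3:1 into train/validation/test.
import random

def split_dataset(paths, ratios=(6, 3, 1), seed=0):
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    total = sum(ratios)
    n_train = len(paths) * ratios[0] // total
    n_val = len(paths) * ratios[1] // total
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train, val, test = split_dataset([f"img_{i}.png" for i in range(9054)])
print(len(train), len(val), len(test))  # roughly 6:3:1
```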
11.
Sensors (Basel) ; 22(5)2022 Mar 07.
Article in English | MEDLINE | ID: mdl-35271214

ABSTRACT

In an orchard automation process, a current challenge is to recognize natural landmarks and tree trunks to localize intelligent robots. To overcome low-light conditions and global navigation satellite system (GNSS) signal interruptions under a dense canopy, a thermal camera may be used to recognize tree trunks with a deep learning system. Therefore, the objective of this study was to use a thermal camera to detect tree trunks at different times of the day under low-light conditions using deep learning, to allow robots to navigate. Thermal images were collected from the dense canopies of two types of orchards (conventional and joint training systems) under high-light (12-2 PM), low-light (5-6 PM), and no-light (7-8 PM) conditions in August and September 2021 (summertime) in Japan. Tree trunk detection with the thermal camera showed average position errors of 0.16 m at a 5 m distance, 0.24 m at 15 m, and 0.3 m at 20 m under the high-, low-, and no-light conditions, respectively, at different camera orientations. The thermal imagery datasets were augmented for training, validating, and testing the Faster R-CNN deep learning model for tree trunk detection. A total of 12,876 images were used to train the model, 2318 images to validate the training process, and 1288 images to test the model. The mAP of the model was 0.8529 for validation and 0.8378 for testing. The average object detection time was 83 ms for images and 90 ms for videos, with the thermal camera set at 11 FPS. The model was compared with YOLO v3 using the same datasets and training conditions. In the comparison, Faster R-CNN achieved higher accuracy than YOLO v3 in tree trunk detection with the thermal camera. Therefore, the results showed that Faster R-CNN can be used to recognize objects in thermal images to enable robot navigation in orchards under different lighting conditions.


Subject(s)
Neural Networks (Computer), Trees, Japan
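A rough sketch of how the per-image detection times quoted above could be measured; `model` and `frames` are placeholders for any detector and image list, not artifacts from the study.

```python
# Measure mean per-frame inference time and the implied FPS.
# `model` is any callable detector; `frames` is a list of images.
import time

def mean_inference_ms(model, frames):
    start = time.perf_counter()
    for frame in frames:
        model(frame)                     # run one detection pass
    elapsed = time.perf_counter() - start
    ms = 1000.0 * elapsed / len(frames)
    return ms, 1000.0 / ms               # (milliseconds, frames per second)
```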
12.
Sensors (Basel) ; 21(14)2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34300543

ABSTRACT

This study aimed to produce a robust real-time pear fruit counter for mobile applications using only RGB data, variants of the state-of-the-art object detection model YOLOv4, and the multiple object-tracking algorithm Deep SORT. This study also provides a systematic and pragmatic methodology for choosing the model best suited to a desired application in the agricultural sciences. In terms of accuracy, YOLOv4-CSP was the optimal model, with an AP@0.50 of 98%. In terms of speed and computational cost, YOLOv4-tiny was the ideal model, with a speed of more than 50 FPS and FLOPS of 6.8-14.5. Considering the balance of accuracy, speed, and computational cost, YOLOv4 was found to be the most suitable, with the highest accuracy metrics while satisfying a real-time speed of at least 24 FPS. Between the two Deep SORT counting methods, the unique-ID method was more reliable, with an F1count of 87.85%, because YOLOv4 had a very low false-negative rate in detecting pear fruits. The ROI-line method benefits from its more restrictive nature, but due to flickering in detection it failed to count some pears despite their being detected.


Subject(s)
Pyrus, Algorithms, Fruit
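A minimal sketch of the unique-ID counting method described above: Deep SORT assigns each pear a persistent track ID, and the running count is the number of distinct IDs seen. The per-frame `tracks` structure is a stand-in for the tracker's output.

```python
# Count pears as the number of distinct Deep SORT track IDs observed.
seen_ids: set[int] = set()

def update_count(tracks) -> int:
    """tracks: iterable of (track_id, bbox) tuples for the current frame."""
    for track_id, _bbox in tracks:
        seen_ids.add(track_id)
    return len(seen_ids)          # running unique-ID pear count

print(update_count([(1, (10, 10, 50, 50)), (2, (80, 20, 120, 60))]))    # 2
print(update_count([(2, (82, 22, 122, 62)), (3, (150, 30, 190, 70))]))  # 3
```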
13.
Food Chem ; 360: 129896, 2021 Oct 30.
Article in English | MEDLINE | ID: mdl-33989876

ABSTRACT

The significant worldwide expansion of the health food market, which includes functional fruits and vegetables, requires a simple and rapid analytical method for the on-site analysis of functional components, such as carotenoids, in fruits and vegetables; Raman spectroscopy is a powerful candidate. Herein, we clarified the effects of Raman exposure time on quantitative and discriminant analysis accuracies. Raman spectra of intact tomatoes with various carotenoid concentrations were acquired and used to develop partial least squares regression (PLSR) and partial least squares discriminant analysis (PLS-DA) models. The accuracy of the PLSR model was superior (R2 = 0.87) when Raman spectra were acquired for 10 s but decreased with decreasing exposure time (R2 = 0.69 at 0.7 s). The accuracy of the PLS-DA model was unaffected by exposure time (hit rate: 90%). We conclude that Raman spectroscopy combined with PLS-DA is useful for the on-site analysis of carotenoids in fruits and vegetables.


Subject(s)
Carotenoids/chemistry, Solanum lycopersicum/chemistry, Carotenoids/analysis, Discriminant Analysis, Least-Squares Analysis, Raman Spectroscopy/methods, Time Factors
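A hedged sketch of a PLSR calibration of the kind described above, using scikit-learn; the spectra, target values, and component count are synthetic stand-ins, not the study's data.

```python
# Fit a PLS regression from (stand-in) Raman spectra to a carotenoid-like
# target and report the held-out R^2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((120, 1024))                      # spectra (samples x wavenumbers)
y = X[:, 300] * 2.0 + rng.normal(0, 0.05, 120)   # synthetic concentration target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print("held-out R^2:", round(pls.score(X_te, y_te), 3))
```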
14.
Biotechnol Prog ; : e3156, 2021 Apr 18.
Article in English | MEDLINE | ID: mdl-33870660

ABSTRACT

Native polyculture microalgae cultivation is a promising scheme for producing microalgal biomass as a biofuel feedstock in an open raceway pond. However, predicting the biomass productivity of native polyculture microalgae is incredibly complicated. Therefore, developing a polyculture growth model to forecast biomass yield is indispensable for commercial-scale production. This research aims to develop a polyculture growth model for native microalgal communities at the Minamisoma algae plant and to estimate biomass and biocrude oil productivity in a semi-continuous open raceway pond. The model was built from the monoculture growth of the polyculture species and formulated using species growth, a polyculture factor (k value), initial concentration, light intensity, and temperature. To calculate species growth, a simplified Monod model was applied. In the simulation, 115 samples from the 2014-2015 field dataset were used for model training, and 70 samples from the 2017 field dataset were used for model validation. The simulation of biomass concentration showed that the polyculture growth model with the k value had a root mean square error of 0.12, whereas model validation yielded a better result, with a root mean square error of 0.08. The biomass productivity forecast showed a maximum productivity of 18.87 g/m2/d in June, with an annual average of 13.59 g/m2/d. The biocrude oil yield forecast indicated that the hydrothermal liquefaction process was more suitable, with a maximum productivity of 0.59 g/m2/d, compared with solvent extraction at only 0.19 g/m2/d. With satisfactory root mean square errors of less than 0.3, this polyculture growth model can be applied to forecast the productivity of native microalgae.
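The simplified Monod model referenced above conventionally takes the form below, where mu is the specific growth rate, S is the limiting-substrate concentration, and K_S is the half-saturation constant:

```latex
% Simplified Monod growth kinetics (conventional form).
\mu = \mu_{\max}\,\frac{S}{K_S + S}
```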

15.
Sensors (Basel) ; 19(2)2019 Jan 14.
Article in English | MEDLINE | ID: mdl-30646586

ABSTRACT

Unmanned aerial vehicle (UAV)-based spraying systems have recently become important for the precision application of pesticides using machine learning approaches. Therefore, the objective of this research was to develop a machine learning system with the advantages of high computational speed and good accuracy for recognizing spray and non-spray areas for UAV-based sprayers. A machine learning system was developed using the mutual subspace method (MSM) for images collected from a UAV. Two target land types, agricultural croplands and orchard areas, were considered in building two classifiers for distinguishing spray and non-spray areas. Field experiments were conducted in the target areas to train and test the system using a commercial UAV (DJI Phantom 3 Pro) with an onboard 4K camera. The images were collected from low (5 m) and high (15 m) altitudes for croplands and orchards, respectively. The recognition system was divided into offline and online systems. In the offline recognition system, 74.4% accuracy was obtained for the classifiers in recognizing spray and non-spray areas for croplands. For orchards, the average classifier recognition accuracy for spray and non-spray areas was 77%. The online recognition system had an average accuracy of 65.1% for croplands and 75.1% for orchards, and its computational time was minimal, averaging 0.0031 s per classifier recognition. The developed machine learning system had an average recognition accuracy of 70% and can be implemented in an autonomous UAV spray system for recognizing spray and non-spray areas in real-time applications.
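A compact sketch of the mutual subspace method named above, under its usual formulation: each image set is represented by a PCA subspace, and the similarity between two sets is the squared cosine of the smallest canonical angle between their subspaces. The matrix shapes and subspace dimension are assumptions.

```python
# Mutual subspace method (MSM) sketch: set-to-set similarity via the
# canonical angles between PCA subspaces. Columns of X are vectorized images.
import numpy as np

def pca_basis(X: np.ndarray, k: int) -> np.ndarray:
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                            full_matrices=False)
    return U[:, :k]                       # k orthonormal basis vectors

def msm_similarity(X1: np.ndarray, X2: np.ndarray, k: int = 5) -> float:
    U1, U2 = pca_basis(X1, k), pca_basis(X2, k)
    cosines = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return float(cosines[0] ** 2)         # in [0, 1]; 1 = identical subspaces

rng = np.random.default_rng(0)
spray, query = rng.random((256, 40)), rng.random((256, 12))
print(round(msm_similarity(spray, query), 3))
```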

16.
Sensors (Basel) ; 16(4)2016 Apr 22.
Article in English | MEDLINE | ID: mdl-27110793

ABSTRACT

The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle, so a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and follower vehicles, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle, and a proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower. Field experiments were conducted to evaluate the sensing and tracking performance of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. For linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm, and 16.4 cm for straight, turning, and zigzag paths, respectively; for parallel trajectory tracking, the RMS errors were 7.1 cm, 14.6 cm, and 14.0 cm for straight, turning, and zigzag paths, respectively. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader vehicle with satisfactory tracking accuracy. Therefore, the developed leader-follower system can be implemented for grain harvesting, using a combine as the leader and an unloader as the autonomous follower vehicle.
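A minimal PID sketch of the distance-keeping loop described above; the gains, desired gap, and sampling interval are illustrative, not the paper's tuned values.

```python
# PID controller keeping the follower at a desired gap behind the leader.
# Gains and setpoint are illustrative.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured: float, dt: float) -> float:
        error = measured - self.setpoint          # positive: gap too large
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)           # speed correction command

gap_controller = PID(kp=0.8, ki=0.05, kd=0.1, setpoint=3.0)  # hold a 3 m gap
print(gap_controller.update(measured=3.4, dt=0.1))  # speed up to close the gap
```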

17.
Food Chem ; 191: 7-11, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26258695

ABSTRACT

A simple and rapid method for determining the free fatty acid (FFA) content of brown rice using Fourier transform infrared (FTIR) spectroscopy in conjunction with second-derivative treatment was proposed. Ground brown rice (10 g) was soaked in toluene (20 mL) for 30 min, and the filtrate of the extract was placed in a 1 mm CaF2 liquid cell. The transmittance spectrum of the filtrate was recorded using toluene as the background spectrum. The absorption band due to the C=O stretching mode of FFAs was detected at 1710 cm(-1), and Savitzky-Golay second-derivative treatment was performed for band separation. A single linear regression model for FFA was developed using the 1710 cm(-1) band in the second-derivative spectra of oleic acid in toluene (0.25-2.50 g L(-1)), and the model displayed high prediction accuracy, with a determination coefficient of 0.998 and a root mean square error of 0.03 g L(-1).


Subject(s)
Nonesterified Fatty Acids/analysis, Oryza/chemistry, Fourier Transform Infrared Spectroscopy/methods
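A brief sketch of the second-derivative pretreatment named above, using scipy's Savitzky-Golay filter; the window length, polynomial order, and the synthetic spectrum are assumptions, not the study's parameters.

```python
# Savitzky-Golay second derivative of a (stand-in) FTIR spectrum,
# as used to separate the 1710 cm^-1 C=O band. Parameters are illustrative.
import numpy as np
from scipy.signal import savgol_filter

wavenumbers = np.linspace(1600, 1800, 500)              # cm^-1 axis
spectrum = np.exp(-((wavenumbers - 1710) / 12.0) ** 2)  # synthetic band
second_deriv = savgol_filter(spectrum, window_length=15, polyorder=3, deriv=2)
print(wavenumbers[np.argmin(second_deriv)])             # minimum near 1710
```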