Results 1 - 4 of 4
1.
Sensors (Basel) ; 22(22)2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36433404

ABSTRACT

Robust and automated image segmentation in high-throughput, image-based plant phenotyping has received considerable attention in the last decade. The feasibility of this approach has not been well studied, however, owing to time-consuming manual segmentation and a lack of appropriate datasets. Segmenting greenhouse- and open-field-grown crops from the background is a challenging task linked to factors such as complex backgrounds (humans, equipment, devices, and machinery for crop management practices), environmental conditions (humidity, cloudy/sunny skies, fog, rain), occlusion, low contrast, and variability in crop appearance and pose over time. This paper presents a new ubiquitous deep learning architecture, ThelR547v1 (Thermal RGB 547 layers version 1), that segments each pixel as crop or crop canopy versus background (non-crop) in real time by abstracting multi-scale contextual information at reduced memory cost. Evaluated on 37,328 augmented images (aug1: thermal RGB and RGB), our method achieves mean IoU scores of 0.94 and 0.87 and mean BF scores of 0.93 and 0.86 for leaves and background, respectively. ThelR547v1 has a training accuracy of 96.27%, a training loss of 0.09, a validation accuracy of 96.15%, and a validation loss of 0.10. Qualitative analysis further shows that, despite the low resolution of the training data, ThelR547v1 successfully distinguishes leaf/canopy pixels from complex and noisy background pixels, enabling real-time semantic segmentation of horticultural crops.
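The mean IoU figures quoted above average per-class intersection-over-union across the two classes (leaf/canopy and background). A minimal sketch of that metric for binary masks; the function name and toy masks are illustrative, not taken from the paper:

```python
def mean_iou_binary(pred, truth):
    """Mean intersection-over-union over the two classes
    (1 = leaf/canopy, 0 = background), averaged per class."""
    pairs = [(p, t) for prow, trow in zip(pred, truth)
             for p, t in zip(prow, trow)]
    ious = []
    for cls in (0, 1):
        inter = sum(1 for p, t in pairs if p == cls and t == cls)
        union = sum(1 for p, t in pairs if p == cls or t == cls)
        if union:                      # skip a class absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 4x4 masks (1 = leaf pixel, 0 = background)
truth = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0]]
pred  = [[0, 0, 1, 1],
         [0, 1, 1, 1],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
print(round(mean_iou_binary(pred, truth), 3))  # 0.766
```

Here the leaf class scores 5/7 and the background 9/11, so the mean is about 0.766; the paper's scores are computed per class in the same spirit, averaged over the test set.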


Subjects
Image Processing, Computer-Assisted , Semantics , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Crops, Agricultural , Horticulture
2.
Sensors (Basel) ; 22(7)2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35408071

ABSTRACT

Automated crop monitoring using image analysis is common in horticulture. Image-processing technologies have been used in several studies to monitor growth, determine harvest time, and estimate yield. However, accurately monitoring flowers and fruits, and tracking their movements, is difficult because of their location on an individual plant within a cluster of plants. In this study, an automated clip-type Internet of Things (IoT) camera-based growth-monitoring and harvest-date prediction system was proposed and designed for tomato cultivation. Multiple clip-type IoT cameras were installed on trusses inside a greenhouse, and the growth of tomato flowers and fruits was monitored using deep-learning-based detection of blooming flowers and immature fruits. The harvest date was then calculated from these data and the temperatures inside the greenhouse. The system was tested over three months, and the harvest dates it produced were comparable with manually recorded data. These results suggest that the system can accurately detect anthesis, count immature fruits, and predict the harvest date within an error range of ±2.03 days for tomato plants. The system can thus support crop growth management in greenhouses.
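The abstract does not give the formula linking the detected anthesis date and greenhouse temperatures to a harvest date; a common approach for tomato is thermal-time (growing degree day) accumulation. A hedged sketch under that assumption, where `base_temp` and `required_gdd` are illustrative values rather than the study's parameters:

```python
from datetime import date, timedelta

def predict_harvest_date(anthesis_date, daily_mean_temps,
                         base_temp=10.0, required_gdd=940.0):
    """Estimate harvest date by accumulating growing degree days (GDD)
    from anthesis until a ripeness threshold is reached.

    daily_mean_temps: mean daily greenhouse temperatures (deg C),
    starting on anthesis_date.
    """
    accumulated = 0.0
    for offset, temp in enumerate(daily_mean_temps):
        accumulated += max(temp - base_temp, 0.0)   # no growth below base temp
        if accumulated >= required_gdd:
            return anthesis_date + timedelta(days=offset)
    return None  # threshold not reached within the supplied data

# Example: constant 25 C greenhouse -> 15 GDD/day -> ripeness on day 63
harvest = predict_harvest_date(date(2022, 3, 1), [25.0] * 90)
print(harvest)  # 2022-05-02
```

In the deployed system the anthesis date would come from the flower detector and the temperatures from greenhouse sensors; the thresholds would be calibrated per cultivar.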


Subjects
Internet of Things , Solanum lycopersicum , Flowers , Fruit , Surgical Instruments
3.
Sensors (Basel) ; 21(23)2021 Nov 27.
Article in English | MEDLINE | ID: mdl-34883927

ABSTRACT

The biggest challenge in classifying plant water stress conditions is the similar appearance of different stress levels. We introduce HortNet417v1, a 417-layer network for rapid recognition, classification, and visualization of plant stress conditions (no stress, low, middle, high, and very high stress) in real time, with higher accuracy and lower computational cost. We evaluated classification performance by training on more than 50,632 augmented images and found that HortNet417v1 achieves 90.77% training, 90.52% cross-validation, and 93.00% test accuracy without any overfitting, whereas networks such as Xception, ShuffleNet, and MobileNetV2 overfit despite reaching 100% training accuracy. This research should motivate and encourage further use of deep learning techniques to automatically detect and classify plant stress conditions and to provide farmers with the information needed to manage irrigation practices in a timely manner.
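The overfitting contrast drawn above (near-equal training and cross-validation accuracy for HortNet417v1 versus 100% training accuracy elsewhere) amounts to inspecting the train/validation accuracy gap. A trivial illustration; the 2.5-point tolerance is an arbitrary assumption, not a threshold from the paper:

```python
def overfit_gap(train_acc, val_acc, tolerance=2.5):
    """Return the train-minus-validation accuracy gap (percentage points)
    and whether it exceeds `tolerance`, a crude overfitting symptom."""
    gap = train_acc - val_acc
    return gap, gap > tolerance

# Figures from the abstract:
print(overfit_gap(90.77, 90.52))   # small gap -> no flag (HortNet417v1)
print(overfit_gap(100.0, 90.52))   # large gap -> overfitting flag
```

In practice one would also compare the loss curves over epochs, but the gap between the two accuracy figures is the signal the abstract is pointing at.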


Assuntos
Aprendizado Profundo , Prunus persica , Desidratação , Humanos
4.
Front Plant Sci ; 12: 630425, 2021.
Article in English | MEDLINE | ID: mdl-34276715

ABSTRACT

The real challenge in separating leaf pixels from background pixels in thermal images lies in factors such as the amount of thermal radiation emitted and reflected by the target plant, the absorption of reflected radiation by greenhouse humidity, and the outside environment. We propose TheLNet270v1 (thermal leaf network with 270 layers, version 1) to recover the leaf canopy from its background in real time with higher accuracy than previous systems. The proposed network distinguishes canopy pixels from background pixels with an accuracy of 91% (mean boundary F1, or BF, score), segmenting each image into two classes: leaf and background. We evaluated the classification (segmentation) performance on more than 13,766 images and obtained 95.75% training and 95.23% validation accuracy without overfitting. This research aimed to develop a deep learning technique for the automatic segmentation of thermal images so that canopy surface temperature inside a greenhouse can be monitored continuously.
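The BF (boundary F1) score reported here measures how well predicted object boundaries align with ground-truth boundaries within a small distance tolerance, rather than counting every pixel equally as IoU does. A simplified sketch for binary masks; the boundary definition and tolerance handling are illustrative, and the paper's exact implementation may differ:

```python
def boundary_pixels(mask):
    """Class-1 pixels with at least one 4-neighbour that is class 0
    (or that lie on the image border)."""
    h, w = len(mask), len(mask[0])
    pts = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 1:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or mask[ny][nx] == 0:
                    pts.add((y, x))
                    break
    return pts

def bf_score(pred, truth, tol=1):
    """Boundary F1: harmonic mean of boundary precision and recall,
    matching boundary pixels within Euclidean distance `tol`."""
    bp, bt = boundary_pixels(pred), boundary_pixels(truth)
    if not bp or not bt:
        return 0.0
    def matched(a, b):
        return sum(1 for p in a
                   if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol * tol
                          for q in b))
    precision = matched(bp, bt) / len(bp)
    recall = matched(bt, bp) / len(bt)
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(bf_score(mask, mask))  # 1.0 for a perfect match
```

A mean BF score of 0.91 therefore says that, averaged over the test images, predicted leaf contours fall within the tolerance of the true contours about 91% of the time.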
