Results 1 - 2 of 2
1.
Plant Dis; 108(3): 711-724, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37755420

ABSTRACT

Rhizoctonia crown and root rot (RCRR), caused by Rhizoctonia solani, can cause severe yield and quality losses in sugar beet. The most common strategy to control the disease is the development of resistant varieties. In the breeding process, field experiments with artificial inoculation are carried out to evaluate the performance of genotypes and varieties. The phenotyping process in breeding trials requires constant monitoring and scoring by skilled experts. This work is time-consuming, and the scores vary with each rater's experience and skill, introducing bias and heterogeneity. Optical sensors and artificial intelligence have shown great potential to achieve higher accuracy than human raters and to standardize phenotyping applications. A workflow combining red-green-blue and multispectral imagery from an unmanned aerial vehicle (UAV) with machine learning techniques was applied to score diseased plants and plots affected by RCRR. Georeferenced annotation of UAV-orthorectified images was carried out. With the annotated images, five convolutional neural networks were trained to score individual plants. The training was carried out with different image analysis strategies and data augmentation. The custom convolutional neural network trained from scratch, together with pretrained MobileNet, showed the best precision in scoring RCRR (0.73 to 0.85). The average per plot of spectral information was used to score the plots, and the benefit of adding the information obtained from the score of individual plants was compared. For this purpose, machine learning models were trained together with data management strategies, and the best-performing model was chosen. A combined pipeline of random forest and k-nearest neighbors showed the best weighted precision (0.67). This research provides a reliable workflow for detecting and scoring RCRR based on aerial imagery. RCRR is often distributed heterogeneously in trial plots; therefore, considering the information from individual plants of the plots showed a significant improvement in UAV-based automated monitoring routines.


Subjects
Beta vulgaris, Unmanned Aerial Devices, Humans, Rhizoctonia, Artificial Intelligence, Plant Breeding, Machine Learning, Vegetables, Sugars
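The abstract does not detail how the "combined pipeline of random forest and k-nearest neighbors" is assembled. One plausible reading, sketched below with scikit-learn on synthetic data, is a soft-voting ensemble over per-plot features; the feature choices here (mean spectral bands plus the fraction of plants scored diseased) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: a soft-voting ensemble of random forest and k-NN
# for per-plot disease scoring. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy per-plot features: mean reflectance in 5 spectral bands plus
# the fraction of individual plants in the plot scored as diseased
# (both hypothetical; the paper does not specify its feature set).
X = rng.random((60, 6))
y = (X[:, 5] > 0.5).astype(int)  # synthetic per-plot disease label

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        # k-NN is distance-based, so standardize its inputs.
        ("knn", make_pipeline(StandardScaler(),
                              KNeighborsClassifier(n_neighbors=5))),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```

A soft-voting ensemble is only one way to combine the two learners; stacking (feeding k-NN predictions into the forest, or vice versa) would be an equally plausible interpretation of "combined pipeline".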
2.
Gigascience; 11, 2022 Jun 17.
Article in English | MEDLINE | ID: mdl-35715875

ABSTRACT

BACKGROUND: Unmanned aerial vehicle (UAV)-based image retrieval in modern agriculture enables gathering large amounts of spatially referenced crop image data. In large-scale experiments, however, UAV images contain a multitude of plants within a complex canopy architecture. Especially for the observation of temporal effects, this greatly complicates recognizing individual plants across several images and extracting the relevant information.
RESULTS: In this work, we present a hands-on workflow for the automated temporal and spatial identification and individualization of crop images from UAVs, abbreviated as "cataloging", based on comprehensible computer vision methods. We evaluate the workflow on 2 real-world datasets. One dataset is recorded for observation of Cercospora leaf spot, a fungal disease, in sugar beet over an entire growing cycle. The other one deals with harvest prediction of cauliflower plants. The plant catalog is utilized for the extraction of single plant images seen over multiple time points. This yields a large-scale spatiotemporal image dataset that can in turn be applied to train further machine learning models incorporating various data layers.
CONCLUSION: The presented approach significantly improves the analysis and interpretation of UAV data in agriculture. Validated against reference data, our method achieves accuracy similar to that of more complex deep learning-based recognition techniques. Our workflow automates plant cataloging and training-image extraction, especially for large datasets.


Subjects
Agriculture, Remote Sensing Technology, Agriculture/methods, Computers, Crops, Agricultural, Remote Sensing Technology/methods
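The core of the cataloging step, recognizing the same plant across flight dates, can be approximated by nearest-neighbor matching of georeferenced plant centers. The sketch below uses SciPy's KD-tree; the coordinates and the matching threshold are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: match plant positions detected at two flight dates
# by nearest-neighbor search in a shared georeferenced frame.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical plant centers (x, y) in metres, detected at time t0
# and t1 after orthorectification into the same coordinate frame.
plants_t0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
plants_t1 = np.array([[0.05, -0.02], [0.98, 0.03], [0.01, 1.04], [5.0, 5.0]])

tree = cKDTree(plants_t0)
dist, idx = tree.query(plants_t1, k=1)  # nearest t0 plant for each t1 plant

# Plants barely move between flights, so reject distant matches
# (threshold is an assumption; a real pipeline would tune it).
max_shift = 0.2  # metres
catalog = {j: int(idx[j]) for j in range(len(plants_t1)) if dist[j] <= max_shift}
print(catalog)  # maps t1 detection index -> matched t0 plant id
```

Here the fourth t1 detection (a new or spurious plant at (5.0, 5.0)) finds no t0 partner within the threshold and is left out of the catalog, which is the behavior a temporal catalog needs for emerging or misdetected plants.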