Results 1 - 6 of 6
1.
Front Plant Sci; 14: 1204791, 2023.
Article in English | MEDLINE | ID: mdl-38053768

ABSTRACT

Estimation of biophysical vegetation variables is of interest for diverse applications, such as monitoring crop growth and health or predicting yield. However, remote estimation of these variables remains challenging due to the inherent complexity of plant architecture, biology, and the surrounding environment, and the need for feature engineering. Recent advances in deep learning, particularly convolutional neural networks (CNNs), offer promising solutions to address this challenge. Unfortunately, the limited availability of labeled data has hindered the exploration of CNNs for regression tasks, especially in the context of crop phenotyping. In this study, the effectiveness of various CNN models in predicting wheat dry matter, nitrogen uptake, and nitrogen concentration from RGB and multispectral images taken from tillering to maturity was examined. To overcome the scarcity of labeled data, a training pipeline was devised, involving transfer learning, pseudo-labeling of unlabeled data, and temporal relationship correction. The results demonstrated that CNN models benefit significantly from the pseudo-labeling method, while a machine-learning approach employing partial least squares regression (PLSR) did not show comparable performance. Among the models evaluated, EfficientNetB4 achieved the highest accuracy for predicting above-ground biomass, with an R² value of 0.92. In contrast, ResNet50 performed best in predicting LAI, nitrogen uptake, and nitrogen concentration, with R² values of 0.82, 0.73, and 0.80, respectively. Moreover, the study explored multi-output models to predict the distribution of dry matter and nitrogen uptake between the stem, lower leaves, flag leaf, and ear. The findings indicate that CNNs are accessible and promising tools for phenotyping quantitative biophysical variables of crops, although further research is required to harness their full potential.
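As an illustration of the pseudo-labeling step described above, here is a minimal sketch in which a small scikit-learn MLP regressor stands in for the CNN backbones (EfficientNetB4, ResNet50) and synthetic feature vectors stand in for the RGB/multispectral images; the sample sizes, model choice, and threshold-free retraining are illustrative assumptions rather than the authors' pipeline, and the transfer-learning and temporal-correction steps are omitted.

```python
# Hedged sketch of pseudo-labeling for a regression task (synthetic data).
# A small MLP stands in for the CNN models of the study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic features (stand-ins for image-derived inputs) and a biophysical
# target such as dry matter; only a small fraction of samples is labeled.
X = rng.normal(size=(500, 32))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)
labeled = rng.choice(500, size=60, replace=False)
unlabeled = np.setdiff1d(np.arange(500), labeled)

# 1) Train an initial model on the scarce labeled data.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[labeled], y[labeled])

# 2) Pseudo-label the unlabeled samples with the model's own predictions.
pseudo_y = model.predict(X[unlabeled])

# 3) Retrain on the union of true labels and pseudo-labels.
X_aug = np.vstack([X[labeled], X[unlabeled]])
y_aug = np.concatenate([y[labeled], pseudo_y])
model_aug = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model_aug.fit(X_aug, y_aug)

print("R2 after pseudo-labeling:", round(r2_score(y, model_aug.predict(X)), 3))
```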

2.
Plant Phenomics; 5: 0083, 2023.
Article in English | MEDLINE | ID: mdl-37681000

ABSTRACT

The utilization of high-throughput in-field phenotyping systems presents new opportunities for evaluating crop stress. However, existing studies have primarily focused on individual stresses, overlooking the fact that crops in field conditions frequently encounter multiple stresses, which can display similar symptoms or interfere with the detection of other stress factors. Therefore, this study aimed to investigate the impact of wheat yellow rust on reflectance measurements and nitrogen status assessment. A multi-sensor mobile platform was used to capture RGB and multispectral images throughout a 2-year fertilization-fungicide trial. To identify disease-induced damage, the SegVeg approach, which combines a U-Net architecture and a pixel-wise classifier, was applied to the RGB images, generating a mask that distinguishes healthy from damaged leaf areas. The observed proportion of damage in the images explained grain yield about as well as visual scoring did. Furthermore, the study found that the disease not only affected reflectance through leaf damage but also altered the reflectance of healthy areas by disrupting the overall nitrogen status of the plants. This emphasizes the importance of incorporating disease impact into reflectance-based decision support tools to account for its effects on spectral data. This effect was successfully mitigated by employing the normalized difference red edge (NDRE) vegetation index calculated exclusively from the healthy portions of the leaves, or by incorporating the proportion of damage into the model. However, these findings also highlight the need for further research specifically addressing the challenges posed by multiple stresses in crop phenotyping.
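To make the mitigation strategy above concrete, the following is a small numpy sketch of the normalized difference red edge index, NDRE = (NIR − RedEdge) / (NIR + RedEdge), computed once over all pixels and once restricted to a healthy-pixel mask; the band arrays and the mask are synthetic placeholders, with the mask standing in for the output of a SegVeg-like segmentation.

```python
# Hedged sketch: NDRE computed over all pixels vs. healthy leaf pixels only.
# Bands and mask are synthetic; the mask would come from a segmentation model.
import numpy as np

rng = np.random.default_rng(0)
nir = rng.uniform(0.3, 0.6, size=(100, 100))        # near-infrared reflectance
red_edge = rng.uniform(0.1, 0.3, size=(100, 100))   # red-edge reflectance
healthy = rng.random((100, 100)) > 0.2               # True where the leaf is healthy

ndre = (nir - red_edge) / (nir + red_edge + 1e-9)

# Plot-level index over all pixels vs. healthy pixels only.
ndre_all = ndre.mean()
ndre_healthy = ndre[healthy].mean()
damage_fraction = 1.0 - healthy.mean()

print(f"NDRE (all pixels):   {ndre_all:.3f}")
print(f"NDRE (healthy only): {ndre_healthy:.3f}")
print(f"Damage fraction:     {damage_fraction:.3f}")
```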

3.
Sensors (Basel); 22(9), 2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35591041

ABSTRACT

The reflectance of wheat crops provides information on their architecture or physiology. However, the methods currently used for close-range reflectance computation do not allow the wheat canopy organs, the leaves and the ears, to be separated. This study details a method for high-throughput measurement of wheat reflectance at the organ scale. A nadir multispectral camera array and an incident-light spectrometer were used to compute bi-directional reflectance factor (BRF) maps. Image thresholding and deep-learning ear detection allowed the ears and the leaves to be segmented in the maps. The results showed that the BRF measured on reference targets was constant throughout the day but varied with the acquisition date. The wheat organ BRF was constant throughout the day in very cloudy conditions and at high sun altitudes, but showed gradual variations in the morning under sunny or partially cloudy skies. Consequently, measurements should be performed close to solar noon, and the reference panel should be captured at the beginning and end of each field trip to correct the BRF. With these precautions, the method was tested throughout the wheat growing season on two varieties and on the various canopy architectures generated by a fertilization gradient, and it yielded consistent reflectance dynamics in all scenarios.


Subjects
Plant Leaves; Triticum; Crops, Agricultural; Plant Leaves/physiology; Refractometry; Seasons
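A minimal sketch of the reference-panel correction suggested above, under the common assumption that the per-band bi-directional reflectance factor can be approximated as the ratio of canopy to panel digital numbers scaled by the panel's known reflectance, with the panel signal interpolated linearly between the captures at the start and end of the field trip; the values and the interpolation scheme are illustrative assumptions, not the published processing chain.

```python
# Hedged sketch: per-band BRF map from a calibrated reference panel, with
# linear interpolation between panel captures at the start and end of a
# field trip to follow illumination drift. All values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
panel_reflectance = 0.50                      # assumed known reflectance of the panel
dn_target = rng.uniform(500, 3000, (64, 64))  # raw digital numbers of the canopy
dn_panel_start = 2400.0                       # mean panel DN at trip start
dn_panel_end = 2600.0                         # mean panel DN at trip end
t = 0.3                                       # acquisition time, normalized to [0, 1]

# Interpolate the panel signal to the acquisition time, then take the ratio.
dn_panel_t = (1 - t) * dn_panel_start + t * dn_panel_end
brf = dn_target / dn_panel_t * panel_reflectance

print("BRF range:", brf.min().round(3), "-", brf.max().round(3))
```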
4.
Plant Phenomics; 2022: 9841985, 2022.
Article in English | MEDLINE | ID: mdl-35169713

ABSTRACT

The automatic segmentation of ears in wheat canopy images is an important step towards measuring ear density or extracting relevant plant traits separately for the different organs. Recent deep learning algorithms appear to be promising tools for accurately detecting ears in a wide diversity of conditions, but they remain complicated to implement and require a huge training database. This paper proposes a robust alternative, easy and quick to train, for segmenting wheat ears from the heading to the maturity growth stage. The tested method was based on superpixel classification exploiting features from RGB and multispectral cameras. Three classifiers were trained on wheat images acquired from heading to maturity on two cultivars at different fertilizer levels. The best classifier, a support vector machine (SVM), yielded satisfactory segmentation and reached 94% accuracy. However, segmentation at the pixel level could not be assessed by the superpixel classification accuracy alone. For this reason, a second assessment method was proposed to evaluate the entire process. A simple graphical tool was developed to annotate pixels; the strategy was to annotate a few pixels per image so that the entire image set, covering very diverse conditions, could be annotated quickly. Results showed lower segmentation scores (F1-score) for the heading and flowering stages and for the zero-nitrogen-input treatment. The methodology appears appropriate for further work on the growth dynamics of the different wheat organs and for other segmentation challenges.
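A compact sketch of the superpixel-classification idea described above, using scikit-image SLIC superpixels, per-superpixel mean colors as features, and a scikit-learn SVM; the synthetic image and random training labels are placeholders for the real RGB/multispectral data and the annotated ear/background superpixels.

```python
# Hedged sketch: SLIC superpixels + SVM classification of each superpixel
# (e.g. "ear" vs. "background"). Image and labels are synthetic placeholders.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

rng = np.random.default_rng(0)
image = rng.random((120, 160, 3))                    # stand-in for an RGB image

# Oversegment into superpixels and build one mean-color feature per superpixel.
segments = slic(image, n_segments=200, compactness=10, start_label=0)
n_sp = segments.max() + 1
features = np.array([image[segments == i].mean(axis=0) for i in range(n_sp)])

# A few annotated superpixels stand in for the manual training set.
train_idx = rng.choice(n_sp, size=40, replace=False)
train_labels = rng.integers(0, 2, size=40)           # 1 = ear, 0 = background

clf = SVC(kernel="rbf")
clf.fit(features[train_idx], train_labels)

# Classify every superpixel and paint the decision back to pixel level.
sp_pred = clf.predict(features)
ear_mask = sp_pred[segments].astype(bool)
print("Predicted ear pixel fraction:", ear_mask.mean().round(3))
```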

5.
Plant Phenomics; 2021: 9846158, 2021.
Article in English | MEDLINE | ID: mdl-34778804

ABSTRACT

The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4,700 RGB images acquired from various acquisition platforms and 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been re-examined, relabelled, and complemented with 1,722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than GWHD_2020.

6.
Front Plant Sci; 11: 96, 2020.
Article in English | MEDLINE | ID: mdl-32133023

ABSTRACT

Stereo vision is a 3D imaging method that allows quick measurement of plant architecture. Historically, the method has mainly been developed in controlled conditions. This study identified several challenges in adapting the method to natural field conditions and proposes solutions. The plant traits studied were leaf area, mean leaf angle, leaf angle distribution, and canopy height. The experiment took place in a winter wheat (Triticum aestivum L.) field dedicated to fertilization trials at Gembloux (Belgium). Images were acquired with two nadir cameras. A machine learning algorithm using RGB and HSV color spaces is proposed to perform soil-plant segmentation that is robust to light conditions. The matching between the images of the two cameras and the leaf area computation were improved when the images were binned from 2560 × 2048 to 1280 × 1024 pixels, for a distance of 1 m between the cameras and the canopy. Height descriptors such as the median or the 95th percentile of plant heights were useful for precisely comparing the development of different canopies. Mean spike top height was measured with an accuracy of 97.1%. The measurement of leaf area was affected by overlaps between leaves, so a calibration curve was necessary. The leaf area estimation presented a root mean square error (RMSE) of 0.37. The impact of wind on the variability of leaf area measurement was below 3%, except at the stem elongation stage. Mean leaf angles ranging from 53° to 62° were computed over the whole growing season. For each acquisition date during the vegetative stages, the variability of the mean angle measurement was below 1.5%, which indicates that the method is precise.
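Two of the steps mentioned above, building per-pixel RGB+HSV features for soil-plant segmentation and summarizing canopy height with the median and 95th percentile of plant-pixel heights, are sketched below on synthetic data; the logistic-regression classifier and the random height map are illustrative stand-ins for the study's actual classifier and stereo-derived heights.

```python
# Hedged sketch: (1) per-pixel RGB+HSV features for soil/plant classification,
# (2) height descriptors (median, 95th percentile) over plant pixels.
# All data are synthetic; a logistic regression stands in for the classifier.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
rgb = rng.random((80, 100, 3))                 # stand-in for a nadir RGB image
hsv = rgb2hsv(rgb)
features = np.concatenate([rgb, hsv], axis=2).reshape(-1, 6)

# A handful of annotated pixels stand in for the manual training set.
train_idx = rng.choice(features.shape[0], size=200, replace=False)
train_labels = rng.integers(0, 2, size=200)    # 1 = plant, 0 = soil

clf = LogisticRegression(max_iter=1000).fit(features[train_idx], train_labels)
plant_mask = clf.predict(features).reshape(80, 100).astype(bool)

# Height map (e.g. from stereo disparity); summarize over plant pixels only.
heights = rng.uniform(0.0, 0.9, size=(80, 100))
plant_heights = heights[plant_mask]
print("Median height:", np.median(plant_heights).round(3))
print("95th percentile height:", np.percentile(plant_heights, 95).round(3))
```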
