1.
Sci Data ; 10(1): 302, 2023 05 19.
Article in English | MEDLINE | ID: mdl-37208401

ABSTRACT

Applying deep learning to images of cropping systems provides new knowledge and insights in research and commercial applications. Semantic segmentation, or pixel-wise classification, of ground-level RGB images into vegetation and background is a critical step in the estimation of several canopy traits. Current state-of-the-art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned on new labelled datasets. This motivated the creation of the VegAnn (Vegetation Annotation) dataset, a collection of 3775 multi-crop RGB images acquired at different phenological stages using different systems and platforms under diverse illumination conditions. We anticipate that VegAnn will help improve the performance of segmentation algorithms, facilitate benchmarking, and promote large-scale research on crop vegetation segmentation.
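As a concrete illustration of the kind of fine-tuning VegAnn is meant to support, here is a minimal sketch of pixel-wise binary (vegetation vs. background) segmentation training in PyTorch. The toy model, dummy batch, and hyperparameters are illustrative stand-ins, not the dataset authors' code.

```python
# Minimal sketch: fine-tuning a binary vegetation/background segmenter
# on VegAnn-style image/mask pairs. Model and data are placeholders.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A toy encoder-decoder standing in for a full U-Net."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),  # per-pixel logits
        )

    def forward(self, x):
        return self.decode(self.encode(x))

def train_step(model, images, masks, optimizer):
    """One gradient step of pixel-wise binary cross-entropy."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# Dummy batch: 4 RGB images and binary masks (1 = vegetation).
x = torch.rand(4, 3, 256, 256)
y = (torch.rand(4, 1, 256, 256) > 0.5).float()
print(train_step(model, x, y, opt))
```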

2.
Plant Phenomics ; 5: 0017, 2023.
Article in English | MEDLINE | ID: mdl-37040294

ABSTRACT

Head (panicle) density is a major component in understanding crop yield, especially in crops that produce variable numbers of tillers, such as sorghum and wheat. The use of panicle density both in plant breeding and in the agronomic scouting of commercial crops typically relies on manual counting, which is inefficient and tedious. Because red-green-blue images are readily available, machine learning approaches have been applied to replace manual counting. However, much of this research focuses on detection per se under limited testing conditions and does not provide a general protocol for deep-learning-based counting. In this paper, we provide a comprehensive pipeline, from data collection and model training to model validation and deployment in commercial fields, for deep-learning-assisted panicle yield estimation in sorghum. Accurate model training is the foundation of the pipeline. However, in natural environments the deployment data frequently differ from the training data (domain shift), causing the model to fail, so a robust model is essential for a reliable solution. Although we demonstrate our pipeline in a sorghum field, it generalizes to other grain species. Our pipeline produces a high-resolution head density map that can be used to diagnose agronomic variability within a field, and it is built without commercial software.
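The high-resolution head density map described here can be sketched in a few lines: given detected panicle centres in field coordinates, bin them into quadrats and smooth. The coordinates, field dimensions, and smoothing parameters below are made-up illustrative values, not the authors' settings.

```python
# Sketch: turning detected panicle centres into a head-density map
# (heads per square metre) for diagnosing within-field variability.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(xy, field_w, field_h, cell=0.5, sigma=1.0):
    """Bin detections into cell x cell (m) quadrats, then smooth."""
    nx, ny = int(field_w / cell), int(field_h / cell)
    counts, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1],
        bins=[nx, ny], range=[[0, field_w], [0, field_h]],
    )
    heads_per_m2 = counts / cell**2
    return gaussian_filter(heads_per_m2, sigma=sigma)

rng = np.random.default_rng(0)
detections = rng.uniform([0, 0], [50, 20], size=(5000, 2))  # 50 m x 20 m field
dmap = density_map(detections, 50, 20)
print(dmap.shape, dmap.mean())  # ~5 heads per square metre
```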

3.
Plant Phenomics ; 2022: 9803570, 2022.
Article in English | MEDLINE | ID: mdl-36451876

ABSTRACT

Pixel segmentation of high-resolution RGB images into chlorophyll-active and non-active vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from background; the green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained on a selection of pixels extracted from the images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision on RGB images segmented with SegVeg as ground-truth masks. Results show that SegVeg segments the three classes accurately. However, some confusion is observed, mainly between background and senescent vegetation, particularly in the dark and bright regions of the images. The U-net model achieves similar performance, with slight degradation on green vegetation: the pixel-based SVM provides a more precise delineation of green and senescent patches than the convolutional U-net. Using components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fraction of the three classes over whole images or regularly spaced grid pixels. The green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with mean 95% confidence error intervals of 2.7% and 2.1% for senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor specialist knowledge, or, at least, by offering a pretrained model for more specific uses.
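The second SegVeg stage, a shallow SVM over pixel colour features, can be sketched as follows. The exact feature set is not specified in the abstract, so the RGB/HSV/Lab stack and the synthetic training pixels below are assumptions for illustration only.

```python
# Sketch: an SVM separating green from senescent vegetation pixels
# using components of several colour spaces (assumed feature set).
import numpy as np
from skimage.color import rgb2hsv, rgb2lab
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def pixel_features(rgb):
    """Stack RGB, HSV and Lab components per pixel: shape (n, 9)."""
    rgb = rgb.reshape(-1, 1, 3).astype(float) / 255.0
    hsv = rgb2hsv(rgb)
    lab = rgb2lab(rgb)
    return np.concatenate([rgb, hsv, lab], axis=2).reshape(-1, 9)

# Synthetic training pixels: greenish vs. yellow/brownish colours.
rng = np.random.default_rng(1)
green = rng.normal([60, 140, 60], 20, size=(500, 3)).clip(0, 255)
senescent = rng.normal([180, 160, 90], 20, size=(500, 3)).clip(0, 255)
X = pixel_features(np.vstack([green, senescent]))
y = np.r_[np.zeros(500), np.ones(500)]  # 0 = green, 1 = senescent

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```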

4.
Plant Phenomics ; 2021: 9892647, 2021.
Article in English | MEDLINE | ID: mdl-34957414

ABSTRACT

Multispectral observations from unmanned aerial vehicles (UAVs) are currently used in precision agriculture and crop phenotyping to monitor a series of traits characterizing the vegetation status. However, the limited autonomy of UAVs makes it difficult to complete flights over large areas. Increasing the throughput of data acquisition without degrading the ground sample distance (GSD) is therefore a critical issue. We propose a new image acquisition configuration based on the combination of optics with two focal lengths (f): an optic with f = 4.2 mm is added to the standard f = 8 mm optic of the multispectral camera, roughly doubling the swath (DS: double swath) relative to the standard single-swath (SS) configuration. Two flights were completed consecutively in 2018 over a maize field using the AIRPHEN multispectral camera at 52 m altitude. The DS flight plan was designed for 80% overlap with the 4.2 mm optic, while the SS flight plan was designed for 80% overlap with the 8 mm optic. As a result, the time required to cover the same area is halved for DS compared with SS. Georeferencing accuracy was improved in the DS configuration, particularly in the Z dimension, owing to the larger view angles available with the short-focal-length optic. Application to plant height estimation demonstrates that the DS configuration provides results similar to the SS one. However, for both configurations, degrading the quality level used to generate the 3D point cloud significantly decreases the plant height estimates.
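The swath-doubling argument follows directly from pinhole geometry: the across-track footprint scales as altitude × sensor width / f, so halving the focal length roughly doubles the swath. The sensor width and pixel pitch below are assumed values for illustration, not the AIRPHEN specification.

```python
# Back-of-the-envelope swath/GSD geometry for the two optics.
ALTITUDE = 52.0          # m, as flown in the experiment
SENSOR_WIDTH = 6.66e-3   # m (assumed)
PIXEL_PITCH = 5.2e-6     # m (assumed)

for f_mm in (8.0, 4.2):
    f = f_mm * 1e-3
    swath = ALTITUDE * SENSOR_WIDTH / f   # across-track footprint, m
    gsd = ALTITUDE * PIXEL_PITCH / f      # ground sample distance, m
    print(f"f = {f_mm} mm: swath = {swath:.1f} m, GSD = {gsd * 1000:.1f} mm")
# Halving f doubles the swath (hence 'double swath'), at the cost of a
# coarser GSD for that optic.
```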

5.
Plant Phenomics ; 2021: 9846158, 2021.
Article in English | MEDLINE | ID: mdl-34778804

ABSTRACT

The Global Wheat Head Detection (GWHD) dataset was created in 2020, assembling 193,634 labelled wheat heads from 4700 RGB images acquired with various platforms across 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset was re-examined, relabeled, and complemented with 1722 images from 5 additional countries, adding 81,553 wheat heads. We now release GWHD_2021, a new version of the Global Wheat Head Detection dataset that is bigger, more diverse, and less noisy than GWHD_2020.

6.
Plant Phenomics ; 2020: 3521852, 2020.
Article in English | MEDLINE | ID: mdl-33313551

ABSTRACT

The detection of wheat heads in plant images is an important task for estimating pertinent wheat traits, including head population density and head characteristics such as health, size, maturity stage, and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery based on machine learning algorithms. However, these methods have generally been calibrated and validated on limited datasets. High variability in observational conditions, genotypic differences, development stages, and head orientation makes wheat head detection a challenge for computer vision. Further, possible blurring due to motion or wind and the overlap between heads in dense populations make the task even more complex. Through a joint international collaborative effort, we have built a large, diverse, and well-labelled dataset of wheat images, the Global Wheat Head Detection (GWHD) dataset. It contains 4700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world, at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, minimum metadata to respect FAIR principles, and consistent head labelling methods are proposed for developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection.
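A standard step when benchmarking detectors on GWHD-style bounding-box labels is matching predictions to annotations by intersection-over-union (IoU). A minimal self-contained sketch, with boxes given as (x1, y1, x2, y2) corners:

```python
# Sketch: IoU between two axis-aligned bounding boxes, the usual
# matching criterion for head-detection benchmarks.
def iou(box_a, box_b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~ 0.143
```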

7.
Plant Methods ; 15: 150, 2019.
Article in English | MEDLINE | ID: mdl-31857821

ABSTRACT

BACKGROUND: Grain yield of wheat is strongly associated with the population of wheat spikes, i.e., the spike number m⁻². To obtain this index reliably and efficiently, wheat spikes must be counted accurately and automatically. Computer vision technologies have shown great potential to automate this task effectively and at low cost. In particular, counting wheat spikes is a typical visual counting problem, studied extensively in computer vision under the name of object counting. TasselNet, one of the state-of-the-art counting approaches, is a convolutional neural network-based local regression model and currently holds the best record for counting maize tassels. However, when applied to wheat spikes, TasselNet cannot predict accurate counts when spikes are only partially visible. RESULTS: In this paper, we make the important observation that the counting performance of local regression networks can be significantly improved by adding visual context to the local patches. Such context can be treated as part of the receptive field without increasing the model capacity. We thus propose a simple yet effective contextual extension of TasselNet: TasselNetv2. By implementing TasselNetv2 in a fully convolutional form, both training and inference can be greatly sped up through the elimination of redundant computations. We also collected and labeled a large-scale wheat spikes counting (WSC) dataset, with 1764 high-resolution images and 675,322 manually annotated instances. Extensive experiments show that TasselNetv2 not only achieves state-of-the-art performance on the WSC dataset (91.01% counting accuracy) but also runs more than an order of magnitude faster than TasselNet (13.82 fps on 912 × 1216 images). The generality of TasselNetv2 is further demonstrated by advancing the state of the art on both the Maize Tassels Counting and ShanghaiTech Crowd Counting datasets. CONCLUSIONS: This paper describes TasselNetv2 for counting wheat spikes, which simultaneously addresses two important use cases in plant counting: improving counting accuracy without increasing model capacity, and improving efficiency without sacrificing accuracy. It is a promising candidate for deployment in real-time, high-throughput systems. In particular, TasselNetv2 achieves sufficiently accurate results when trained from scratch with small networks, and adopting larger pre-trained networks can further boost accuracy. In practice, one can trade off performance and efficiency according to the application scenario. Code and models are available at: https://tinyurl.com/TasselNetv2.
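To make the local-regression idea concrete, here is a minimal fully convolutional counter in the spirit of TasselNet-style models: a small backbone produces a non-negative local count map whose spatial sum is the image-level count. This is an illustrative sketch, not the released TasselNetv2 code (see the URL above for that).

```python
# Sketch: fully convolutional local count regression. Each cell of the
# output map regresses the (fractional) object count in its receptive
# field; summing the map yields the total count.
import torch
import torch.nn as nn

class LocalCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 1x1 conv head: one count value per spatial cell of the map.
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        count_map = torch.relu(self.head(self.features(x)))
        return count_map, count_map.sum(dim=(1, 2, 3))  # map + totals

model = LocalCounter()
img = torch.rand(1, 3, 912, 1216)  # the image size quoted in the paper
count_map, total = model(img)
print(count_map.shape, total.item())
```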

8.
Plant Phenomics ; 2019: 4820305, 2019.
Article in English | MEDLINE | ID: mdl-33313528

ABSTRACT

Total above-ground biomass at harvest and ear density are two important traits that characterize wheat genotypes. Two experiments were carried out at two different sites where several genotypes were grown under contrasting irrigation and nitrogen treatments. A high-spatial-resolution RGB camera was used to capture the residual stems left standing after cutting by the combine harvester, providing a ground spatial resolution better than 0.2 mm. A Faster Regional Convolutional Neural Network (Faster-RCNN) deep-learning model was first trained to identify the stem cross sections. The identification achieved precision and recall close to 95%, and the balance between precision and recall yielded accurate estimates of stem density, with a relative RMSE close to 7% and robustness across the two experimental sites. The estimated stem density was also compared with the ear density measured in the field with traditional methods. A very high correlation was found with almost no bias, indicating that stem density could be a good proxy for ear density. The heritability/repeatability evaluated over 16 genotypes in one of the two experiments was slightly higher for stem density (80%) than for ear density (78%). The diameter of each stem was computed from the profile of gray values in extracts of the stem cross section. The stem diameters follow a gamma distribution within each microplot, with an average diameter close to 2.0 mm. Finally, the biovolume, computed as the product of average stem diameter, stem density, and plant height, is closely related to above-ground biomass at harvest, with a relative RMSE of 6%. Possible limitations of the findings and future applications are discussed.
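Two computational steps in this abstract are easy to sketch: fitting a gamma law to per-microplot stem diameters, and forming the biovolume proxy as the product of mean diameter, stem density, and plant height. The diameters, density, and height below are simulated illustrative values, not data from the experiments.

```python
# Sketch: gamma fit of stem diameters and the biovolume proxy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
diameters_mm = rng.gamma(shape=16.0, scale=0.125, size=400)  # mean ~2 mm

# Fit a gamma law with location fixed at 0, as for strictly positive data.
shape, loc, scale = stats.gamma.fit(diameters_mm, floc=0)
print(f"gamma fit: shape={shape:.1f}, mean={shape * scale:.2f} mm")

stem_density = 550.0   # stems per m2 (illustrative)
plant_height_m = 0.85  # m (illustrative)
mean_diameter_m = diameters_mm.mean() / 1000.0
# Biovolume proxy as defined in the abstract: diameter x density x height.
biovolume = mean_diameter_m * stem_density * plant_height_m
print(f"biovolume proxy: {biovolume:.4f}")
```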

9.
J Exp Bot ; 69(10): 2705-2716, 2018 04 27.
Article in English | MEDLINE | ID: mdl-29617837

ABSTRACT

Leaf rolling in maize crops is one of the main plant reactions to water stress that can be visually scored in the field. However, leaf-scoring techniques do not meet the high-throughput requirements needed by breeders for efficient phenotyping. Consequently, this study investigated the relationship between leaf-rolling scores and changes in canopy structure that can be determined by high-throughput remote-sensing techniques. Experiments were conducted in 2015 and 2016 on maize genotypes subjected to water stress. Leaf rolling was scored visually throughout the day around the flowering stage. Concurrent digital hemispherical photographs were taken to evaluate the impact of leaf rolling on canopy structure using the computed fraction of intercepted diffuse photosynthetically active radiation, FIPARdif. The results showed that, under water stress, leaves started to roll around 09:00 h and rolling reached its maximum around 15:00 h (the photoperiod was about 05:00-20:00 h). In contrast, plants kept under well-watered conditions did not show any significant rolling during the same day. A canopy-level index of rolling (CLIR) is proposed to quantify the diurnal changes in canopy structure induced by leaf rolling. It normalizes for the differences in FIPARdif between genotypes observed in the early morning, when leaves are unrolled, as well as for year effects linked to environmental conditions. Leaf-level rolling score was very strongly correlated with changes in canopy structure as described by the CLIR (r² = 0.86, n = 370). The daily time course of rolling was characterized by the amplitude of variation and by the rate and timing of development, computed at both the leaf and canopy levels. Results from the eight genotypes common to the two years of experiments showed that the amplitude of variation of the CLIR was the most repeatable trait (Spearman coefficient ρ = 0.62), compared with the rate (ρ = 0.29) and the timing of development (ρ = 0.33). The potential of these findings for developing a high-throughput method for determining leaf rolling from aerial drone observations is considered.
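The abstract does not give the CLIR formula, so the sketch below is only one plausible form consistent with its description: the diurnal drop in FIPARdif relative to the early-morning (unrolled) value. Treat both the formula and the example values as assumptions, not the published definition.

```python
# Sketch of an assumed CLIR-like index: relative decrease of FIPARdif
# versus the early-morning unrolled state. Illustrative values only.
import numpy as np

def clir(fipar_diurnal, fipar_morning):
    """Assumed form: relative drop of FIPARdif vs the unrolled state."""
    return (fipar_morning - np.asarray(fipar_diurnal)) / fipar_morning

hours = np.array([7, 9, 11, 13, 15, 17])
fipar = np.array([0.62, 0.61, 0.54, 0.47, 0.41, 0.45])  # made-up stressed plot
index = clir(fipar, fipar_morning=0.62)
print(index.round(2))  # rolling peaks mid-afternoon, as reported
```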


Subjects
Desiccation, High-Throughput Screening Assays/methods, Phenotype, Plant Leaves/physiology, Zea mays/physiology, Photosynthesis
10.
Front Plant Sci ; 8: 2002, 2017.
Article in English | MEDLINE | ID: mdl-29230229

ABSTRACT

The capacity of LiDAR and unmanned aerial vehicles (UAVs) to provide plant height estimates as a high-throughput plant phenotyping trait was explored. An experiment was conducted on wheat genotypes grown under well-watered and water-stress modalities. Frequent LiDAR measurements were made along the growth cycle using a Phénomobile unmanned ground vehicle, and a UAV equipped with a high-resolution RGB camera flew over the experiment several times to retrieve the digital surface model using structure-from-motion techniques. Both techniques provide a dense 3D point cloud from which plant height can be estimated. Plant height was first defined as the z-value below which 99.5% of the points of the dense cloud lie. This definition shows good consistency with manual measurements of plant height (RMSE = 3.5 cm) while minimizing the variability within each microplot. Results show that LiDAR and structure-from-motion plant height values are always consistent. However, a slight underestimation is observed for structure from motion, related to the coarser spatial resolution of UAV imagery and the limited penetration capacity of structure from motion compared with LiDAR. Very high heritability values (H² > 0.90) were found for both techniques when lodging was absent. The dynamics of plant height carries pertinent information regarding the period and magnitude of plant stress. Further, the date when maximum plant height is reached was found to be highly heritable (H² > 0.88) and a good proxy of the flowering stage. Finally, the capacity of plant height to serve as a proxy for total above-ground biomass and yield is discussed.
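The 99.5% rule quoted above is simply a high percentile of the point-cloud heights, which keeps the estimate robust to a few spurious high points. A minimal sketch on a simulated per-microplot cloud:

```python
# Sketch: plant height as the z-value below which 99.5% of dense-cloud
# points lie, i.e. the 99.5th height percentile. Simulated data.
import numpy as np

def plant_height(z_values, ground_z=0.0, q=99.5):
    """Canopy-top height, robust to isolated high outliers."""
    return np.percentile(z_values, q) - ground_z

rng = np.random.default_rng(3)
canopy = rng.normal(0.80, 0.05, size=20000)  # canopy returns around 0.8 m
noise = rng.uniform(1.2, 2.0, size=20)       # a few spurious high points
z = np.concatenate([canopy, noise])
print(f"{plant_height(z):.2f} m")  # near the canopy top (~0.93 m here)
```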
