Results 1 - 4 of 4
1.
Plant Phenomics; 5: 0046, 2023.
Article in English | MEDLINE | ID: mdl-37228515

ABSTRACT

The sowing pattern has an important impact on light interception efficiency in maize by determining the spatial distribution of leaves within the canopy. Leaf orientation is an important architectural trait determining light interception by maize canopies. Previous studies have indicated that maize genotypes may adapt leaf orientation to avoid mutual shading with neighboring plants as a plastic response to intraspecific competition. The goal of the present study is 2-fold: firstly, to propose and validate an automatic algorithm (Automatic Leaf Azimuth Estimation from Midrib detection [ALAEM]) based on leaf midrib detection in vertical red-green-blue (RGB) images to describe leaf orientation at the canopy level; and secondly, to describe genotypic and environmental differences in leaf orientation in a panel of 5 maize hybrids sown at 2 densities (6 and 12 plants·m⁻²) and 2 row spacings (0.4 and 0.8 m) over 2 different sites in southern France. The ALAEM algorithm was validated against in situ annotations of leaf orientation, showing satisfactory agreement (root mean square error [RMSE] = 0.1, R² = 0.35) in the proportion of leaves oriented perpendicular to the row direction across sowing patterns, genotypes, and sites. The results from ALAEM made it possible to identify significant differences in leaf orientation associated with intraspecific competition. In both experiments, a progressive increase in the proportion of leaves oriented perpendicular to the row is observed as the rectangularity of the sowing pattern increases from 1 (6 plants·m⁻², 0.4 m row spacing) toward 8 (12 plants·m⁻², 0.8 m row spacing). Significant differences among the 5 cultivars were found, with 2 hybrids systematically exhibiting a more plastic behavior, with a significantly higher proportion of leaves oriented perpendicularly to avoid overlapping with neighboring plants at high rectangularity.
Differences in leaf orientation were also found between experiments under a square sowing pattern (6 plants·m⁻², 0.4 m row spacing), indicating a possible contribution of illumination conditions inducing a preferential orientation toward the east-west direction when intraspecific competition is low.
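As a rough illustration of the kind of computation involved, the summary statistic reported above (the proportion of leaves oriented perpendicular to the row) could be derived from per-leaf azimuths like this. This is a generic sketch, not the ALAEM algorithm itself, which estimates azimuths from midrib detection in vertical RGB images; the 45° tolerance is an assumed split, not a value from the paper.

```python
def orientation_class(azimuth_deg, row_azimuth_deg, tol=45.0):
    """Classify a leaf as 'perpendicular' or 'parallel' to the row.

    Leaf azimuths are direction-less (a midrib pointing at 30 deg is the
    same orientation as 210 deg), so differences are folded into [0, 90].
    """
    diff = abs(azimuth_deg - row_azimuth_deg) % 180.0
    diff = min(diff, 180.0 - diff)  # fold into [0, 90]
    return "perpendicular" if diff > tol else "parallel"

def perpendicular_fraction(azimuths_deg, row_azimuth_deg=0.0):
    """Fraction of leaves oriented perpendicular to the row direction."""
    labels = [orientation_class(a, row_azimuth_deg) for a in azimuths_deg]
    return labels.count("perpendicular") / len(labels)
```

With azimuths expressed in degrees, `perpendicular_fraction([85, 90, 10, 5])` counts the first two leaves as perpendicular to a row running at 0°, giving 0.5.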

2.
Sci Data; 10(1): 302, 2023 May 19.
Article in English | MEDLINE | ID: mdl-37208401

ABSTRACT

Applying deep learning to images of cropping systems provides new knowledge and insights in research and commercial applications. Semantic segmentation, or pixel-wise classification, of RGB images acquired at ground level into vegetation and background is a critical step in the estimation of several canopy traits. Current state-of-the-art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned using new labelled datasets. This motivated the creation of VegAnn (Vegetation Annotation), a dataset of 3775 multi-crop RGB images acquired at different phenological stages using different systems and platforms under diverse illumination conditions. We anticipate that VegAnn will help improve segmentation algorithm performance, facilitate benchmarking, and promote large-scale crop vegetation segmentation research.
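For context, the vegetation/background separation that VegAnn is designed to benchmark has a classical non-CNN baseline: thresholding the Excess Green (ExG) colour index. The sketch below is this generic baseline, not the deep learning approach the dataset targets, and the 0.1 threshold is an assumed value.

```python
import numpy as np

def vegetation_mask(rgb, threshold=0.1):
    """Binary vegetation/background mask from the Excess Green index.

    rgb: (H, W, 3) float array with values in [0, 1].
    ExG = 2g - r - b computed on chromatic coordinates; pixels whose
    ExG exceeds `threshold` are labelled vegetation (True).
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2.0 * g - r - b
    return exg > threshold
```

Such index-based baselines work under uniform illumination but degrade in exactly the diverse real-world conditions VegAnn covers, which is the motivation for CNN-based segmentation.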

3.
Plant Phenomics; 2022: 9803570, 2022.
Article in English | MEDLINE | ID: mdl-36451876

ABSTRACT

Pixel segmentation of high-resolution RGB images into chlorophyll-active or nonactive vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from background. The green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained over a selection of pixels extracted from images. The performance of the SegVeg approach is then compared to a 3-class U-net model trained using weak supervision over RGB images segmented with SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with slight degradation over the green vegetation: the SVM pixel-based approach provides more precise delineation of the green and senescent patches compared to the convolutional nature of U-net. The use of components from several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fraction of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with a mean 95% confidence error interval of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset.
We thus hope to make segmentation accessible to a broad audience, requiring neither manual annotation nor specialist knowledge, or, at least, by offering a pretrained model for more specific uses.
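The shape of the two-step pipeline described above (vegetation mask first, then a colour-feature split of vegetation pixels into green vs. senescent) can be sketched as follows. Note that the green-dominance rule here is a hypothetical stand-in for the trained SVM stage, not the published SegVeg model, and the class codes (0/1/2) are arbitrary.

```python
import numpy as np

def split_vegetation(rgb, veg_mask):
    """Label pixels 0 = background, 1 = green, 2 = senescent vegetation.

    rgb: (H, W, 3) float array; veg_mask: (H, W) boolean vegetation mask
    (e.g. from a first-stage segmentation model). The colour rule below
    is an illustrative stand-in for SegVeg's SVM: vegetation pixels whose
    green channel dominates both red and blue are called green, the rest
    senescent.
    """
    r, g, b = np.moveaxis(rgb.astype(np.float64), -1, 0)
    greenness = g - np.maximum(r, b)
    labels = np.zeros(veg_mask.shape, dtype=np.uint8)
    labels[veg_mask & (greenness > 0)] = 1   # green vegetation
    labels[veg_mask & (greenness <= 0)] = 2  # senescent (e.g. yellowed)
    return labels
```

The per-class fractions reported in the abstract would then simply be the share of each label over the image or over a grid of sampled pixels.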

4.
Plant Phenomics; 2021: 9846158, 2021.
Article in English | MEDLINE | ID: mdl-34778804

ABSTRACT

The Global Wheat Head Detection (GWHD) dataset was created in 2020, assembling 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms across 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabeled, and complemented with 1722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than GWHD_2020.
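Object-detection datasets like GWHD are commonly evaluated by matching predicted wheat-head boxes to labelled ones via intersection-over-union (IoU). The helper below is a minimal, generic IoU implementation for axis-aligned boxes, illustrative only and not the Kaggle competition's scoring code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5; for example, `iou((0, 0, 2, 2), (1, 1, 3, 3))` is 1/7, well below that threshold.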
