Results 1 - 9 of 9
1.
Front Plant Sci ; 15: 1353110, 2024.
Article in English | MEDLINE | ID: mdl-38708393

ABSTRACT

Background: Autofluorescence-based imaging has the potential to non-destructively characterize the biochemical and physiological properties of plants, regulated by genotype, using the optical properties of the tissue. A comparative study of stress-tolerant and stress-susceptible genotypes of Brassica rapa with respect to newly introduced stress-based phenotypes using machine learning techniques will contribute significantly to autofluorescence-based plant phenotyping research. Methods: Autofluorescence spectral images were used to design a stress detection classifier with two classes, stressed and non-stressed, using machine learning algorithms. The benchmark dataset consisted of time-series image sequences from three Brassica rapa genotypes (CC, R500, and VT), extreme in their morphological and physiological traits, captured at the high-throughput plant phenotyping facility at the University of Nebraska-Lincoln, USA. We developed a set of machine learning-based classification models to detect the percentage of stressed tissue derived from plant images and identified the best classifier. From the analysis of the autofluorescence images, two novel stress-based image phenotypes were computed to determine the temporal variation in stressed tissue under progressive drought across genotypes: the average percentage stress and the moving average percentage stress. Results: Both computed phenotypes consistently discriminated stressed from non-stressed tissue, with the oilseed type (R500) being less prone to drought stress than the other two Brassica rapa genotypes (CC and VT). Conclusion: Autofluorescence signals from the 365/400 nm excitation/emission combination were able to segregate genotypic variation during a progressive drought treatment under a controlled greenhouse environment, allowing for the exploration of other meaningful phenotypes from autofluorescence image sequences, with significance in the context of plant science.
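The two phenotypes are simple aggregates over per-image stress percentages. A minimal sketch of how they might be computed from a time series of per-image stressed-pixel percentages (function names, window size, and values are illustrative, not from the paper):

```python
import numpy as np

def average_percentage_stress(stress_pct: np.ndarray) -> np.ndarray:
    """Cumulative mean of per-image stress percentages up to each imaging day."""
    days = np.arange(1, len(stress_pct) + 1)
    return np.cumsum(stress_pct) / days

def moving_average_percentage_stress(stress_pct: np.ndarray, window: int = 3) -> np.ndarray:
    """Sliding-window mean over the most recent `window` images."""
    kernel = np.ones(window) / window
    return np.convolve(stress_pct, kernel, mode="valid")

# Hypothetical percentages of stressed pixels per imaging day under progressive drought
stress_pct = np.array([2.0, 3.5, 5.0, 9.0, 14.0, 22.0])
print(average_percentage_stress(stress_pct))
print(moving_average_percentage_stress(stress_pct, window=3))
```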

2.
Genome Biol ; 25(1): 8, 2024 01 03.
Article in English | MEDLINE | ID: mdl-38172911

ABSTRACT

Dramatic improvements in measuring genetic variation across agriculturally relevant populations (genomics) must be matched by improvements in identifying and measuring relevant trait variation in such populations across many environments (phenomics). Identifying the most critical opportunities and challenges in genome-to-phenome (G2P) research is the focus of this paper. Previously (Genome Biol, 23(1):1-11, 2022), we laid out how the Agricultural Genome to Phenome Initiative (AG2PI) would coordinate activities with US federal government agencies, expand public-private partnerships, and engage with external stakeholders to achieve a shared vision of the future of AG2PI. Acting on this latter step, AG2PI organized the "Thinking Big: Visualizing the Future of AG2PI" two-day workshop held September 9-10, 2022, in Ames, Iowa, co-hosted with the United States Department of Agriculture's National Institute of Food and Agriculture (USDA NIFA). During the meeting, attendees were asked to use their experience and curiosity to review the current status of agricultural genome-to-phenome (AG2P) work and envision the future of the AG2P field. The topic summaries composing this paper are distilled from two 1.5-h small-group discussions. Challenges and solutions identified across multiple topics at the workshop were explored. We end our discussion with a vision for the future of agricultural progress, identifying two areas where innovation is needed: (1) in the development and evaluation of genetic improvement methods and (2) in agricultural research processes to solve societal problems. To address these needs, we provide six specific goals that we recommend be implemented immediately in support of advancing AG2P research.


Subjects
Agriculture, Phenomics, United States, Genomics
3.
Front Plant Sci ; 14: 1003150, 2023.
Article in English | MEDLINE | ID: mdl-36844082

ABSTRACT

The paper introduces two novel algorithms for predicting and propagating drought stress in plants using image sequences captured by cameras in two modalities, i.e., visible light and hyperspectral. The first algorithm, VisStressPredict, computes a time series of holistic phenotypes, e.g., height, biomass, and size, by analyzing image sequences captured by a visible light camera at discrete time intervals, and then adapts dynamic time warping (DTW), a technique for measuring similarity between temporal sequences, to predict the onset of drought stress. The second algorithm, HyperStressPropagateNet, leverages a deep neural network for temporal stress propagation using hyperspectral imagery. It uses a convolutional neural network to classify the reflectance spectra at individual pixels as either stressed or unstressed to determine the temporal propagation of stress in the plant. A very high correlation between the soil water content and the percentage of the plant under stress, as computed by HyperStressPropagateNet on a given day, demonstrates its efficacy. Although VisStressPredict and HyperStressPropagateNet fundamentally differ in their goals, and hence in their input image sequences and underlying approaches, the onset of stress predicted by the stress factor curves computed by VisStressPredict correlates extremely well with the day of appearance of stress pixels computed by HyperStressPropagateNet. The two algorithms are evaluated on a dataset of image sequences of cotton plants captured in a high-throughput plant phenotyping platform. The algorithms may be generalized to any plant species to study the effect of abiotic stresses on sustainable agriculture practices.
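DTW itself reduces to a small dynamic program over pairwise distances between two phenotype time series. A minimal sketch of the classic algorithm, not the paper's implementation (function names and example trajectories are illustrative):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed warping moves
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# Compare an observed height trajectory to an unstressed reference trajectory
reference = np.array([5.0, 7.0, 9.5, 12.0, 14.5])
observed = np.array([5.0, 6.5, 7.5, 8.0, 8.2])  # growth stalls under drought
print(dtw_distance(observed, reference))
```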

4.
Front Plant Sci ; 14: 1084778, 2023.
Article in English | MEDLINE | ID: mdl-36818836

ABSTRACT

The emergence timing of a plant, i.e., the time at which the plant first becomes visible above the soil surface, is an important phenotypic event and an indicator of the successful establishment and growth of a plant. The paper introduces a novel deep learning-based model called EmergeNet, with a customized loss function that adapts to plant growth, for detecting the emergence timing of the coleoptile (a rigid plant tissue that encloses the first leaves of a seedling). It can also track the coleoptile's growth from a time-lapse sequence of images with cluttered backgrounds and extreme variations in illumination. EmergeNet is a novel ensemble segmentation model that integrates three different but promising networks, namely SEResNet, InceptionV3, and VGG19, in the encoder part of its base model, the UNet. EmergeNet can correctly detect the coleoptile at its first emergence, when it is tiny and therefore barely visible on the soil surface. The performance of EmergeNet is evaluated using a benchmark dataset called the University of Nebraska-Lincoln Maize Emergence Dataset (UNL-MED), which contains top-view time-lapse images of maize coleoptiles starting before their emergence and continuing until they are about one inch tall. EmergeNet detects the emergence timing with 100% accuracy compared with human-annotated ground truth. Furthermore, it significantly outperforms UNet by generating very high-quality segmented masks of the coleoptiles in both natural-light and dark environmental conditions.
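Given per-frame segmentation masks from the ensemble, emergence timing reduces to finding the first frame whose mask contains a plausible coleoptile region. A minimal sketch under that assumption (the averaging scheme, threshold, and pixel floor are illustrative, not the paper's):

```python
import numpy as np

def ensemble_mask(prob_maps: list[np.ndarray], threshold: float = 0.5) -> np.ndarray:
    """Average per-model foreground probabilities, then binarize."""
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)

def emergence_frame(masks: list[np.ndarray], min_pixels: int = 5) -> int | None:
    """Index of the first frame with at least `min_pixels` coleoptile pixels."""
    for t, mask in enumerate(masks):
        if int(mask.sum()) >= min_pixels:
            return t
    return None  # no emergence detected in this sequence
```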

5.
Front Plant Sci ; 14: 1211409, 2023.
Article in English | MEDLINE | ID: mdl-38023863

ABSTRACT

Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining candidate segmentation results that may contain most of the target object's pixels, and then producing a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense conditional random fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views on a remote-sensing, high-throughput phenotyping platform, and is evaluated using the Jaccard index and precision measures. We also introduce CosegPP+, a structured dataset that provides quantitative information on the efficacy of our framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods, improving segmentation accuracy by 3% to 45%.
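The two evaluation measures are standard pixel-level statistics over binary masks: the Jaccard index is the intersection-over-union of the predicted and ground-truth masks, and precision is the fraction of predicted foreground pixels that are correct. A minimal sketch (illustrative, not the paper's evaluation code; the empty-mask conventions are a choice):

```python
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0

def precision(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of predicted foreground pixels that are truly foreground."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return float(np.logical_and(pred, truth).sum() / pred.sum()) if pred.sum() else 1.0
```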

6.
Front Plant Sci ; 13: 844522, 2022.
Article in English | MEDLINE | ID: mdl-35665165

ABSTRACT

Deep learning-based methods have recently provided a means to rapidly and effectively extract various plant traits due to their powerful ability to represent a plant image across a variety of species and growth conditions. In this study, we focus on two fundamental tasks in plant phenotyping, i.e., plant segmentation and leaf counting, and propose a two-stream deep learning framework for segmenting plants and counting leaves of various sizes and shapes from two-dimensional plant images. In the first stream, a multi-scale segmentation model using a spatial pyramid is developed to extract leaves of different sizes and shapes, where the fine-grained details of leaves are captured using a deep feature extractor. In the second stream, a regression counting model is proposed to estimate the number of leaves without any pre-detection, where an auxiliary binary mask from the segmentation stream is introduced to enhance the counting performance by effectively alleviating the influence of complex backgrounds. Extensive pot experiments are conducted on the CVPPP 2017 Leaf Counting Challenge dataset, which contains images of Arabidopsis and tobacco plants. The experimental results demonstrate that the proposed framework achieves promising performance in both plant segmentation and leaf counting, providing a reference for the automatic analysis of plant phenotypes.
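One simple way to realize the auxiliary-mask idea is to concatenate the segmentation stream's binary mask with the RGB image as a fourth input channel to the counting regressor. A minimal PyTorch sketch under that assumption (the architecture is illustrative, much smaller than anything practical, and not the paper's):

```python
import torch
import torch.nn as nn

class MaskGuidedLeafCounter(nn.Module):
    """Regresses a leaf count from an RGB image plus an auxiliary binary mask."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # scalar leaf-count estimate

    def forward(self, rgb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, mask], dim=1)  # (B, 4, H, W): mask as a 4th channel
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = MaskGuidedLeafCounter()
rgb = torch.rand(2, 3, 128, 128)
mask = torch.rand(2, 1, 128, 128).round()  # binary mask from the segmentation stream
print(model(rgb, mask).shape)  # torch.Size([2])
```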

7.
Front Plant Sci ; 11: 521431, 2020.
Article in English | MEDLINE | ID: mdl-33362806

ABSTRACT

High-throughput image-based plant phenotyping facilitates the non-invasive extraction of morphological and biophysical traits from a large number of plants in a relatively short time. It facilitates the computation of advanced phenotypes by considering the plant as a single object (holistic phenotypes) or its components, i.e., leaves and the stem (component phenotypes). The architectural complexity of plants increases over time due to variations in self-occlusions and phyllotaxy, i.e., the arrangement of leaves around the stem. One of the central challenges in computing phenotypes from 2-dimensional (2D) single-view images of plants, especially at the advanced vegetative stage in the presence of self-occluding leaves, is that the information captured in 2D images is incomplete, and hence the computed phenotypes are inaccurate. We introduce a novel algorithm to compute 3-dimensional (3D) plant phenotypes from multiview images using voxel-grid reconstruction of the plant (3DPhenoMV). The paper also presents a novel method to reliably detect and separate the individual leaves and the stem from the 3D voxel grid of the plant using a voxel overlapping consistency check and point cloud clustering techniques. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln 3D Plant Phenotyping Dataset (UNL-3DPPD). A generic taxonomy of 3D image-based plant phenotypes is also presented to promote 3D plant phenotyping research, and a subset of these phenotypes is computed using computer vision algorithms, with a discussion of their significance in the context of plant science. The central contributions of the paper are (a) an algorithm for 3D voxel-grid reconstruction of maize plants at advanced vegetative stages using images from multiple 2D views; (b) a generic taxonomy of 3D image-based plant phenotypes and a public benchmark dataset, i.e., UNL-3DPPD, to promote the development of 3D image-based plant phenotyping research; and (c) novel voxel overlapping consistency check and point cloud clustering techniques to detect and isolate the individual leaves and stem of maize plants to compute the component phenotypes. Detailed experimental analyses demonstrate the efficacy of the proposed method and show the potential of 3D phenotypes to explain the morphological characteristics of plants regulated by genetic and environmental interactions.
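Voxel-grid reconstruction from multiple views is commonly done by silhouette carving: a voxel survives only if it projects onto the plant silhouette in every view. A minimal sketch of that general idea, assuming precomputed binary silhouettes and a calibrated projection function per camera (all names are illustrative, and this is not necessarily the paper's exact procedure):

```python
import numpy as np

def carve_voxels(voxel_centers, silhouettes, project_fns):
    """Keep voxels whose projection lands on the plant silhouette in all views.

    voxel_centers: (N, 3) array of voxel-center coordinates.
    silhouettes:   list of binary masks, one per camera view.
    project_fns:   list of functions mapping (N, 3) points to (N, 2) pixel
                   coordinates in (x, y) order for the corresponding camera.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    for mask, project in zip(silhouettes, project_fns):
        px = np.round(project(voxel_centers)).astype(int)
        h, w = mask.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        on_plant = np.zeros(len(voxel_centers), dtype=bool)
        on_plant[inside] = mask[px[inside, 1], px[inside, 0]] > 0
        keep &= on_plant  # carve away voxels falling outside this silhouette
    return voxel_centers[keep]
```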

8.
Front Plant Sci ; 10: 508, 2019.
Article in English | MEDLINE | ID: mdl-31068958

ABSTRACT

The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to sensing and quantifying plant traits non-destructively by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering their individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to translating advances in phenotyping technology into genetic insights, due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for a better understanding of morphological structure and functional processes in plants; (3) a brief discussion of publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.

9.
Plant Methods ; 14: 35, 2018.
Article in English | MEDLINE | ID: mdl-29760766

ABSTRACT

BACKGROUND: Image-based plant phenotyping facilitates the non-invasive extraction of traits by analyzing a large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of plants. The emergence timing, the total number of leaves present at any point in time, and the growth of individual leaves during the vegetative stage of the maize life cycle are significant phenotypic expressions that best contribute to assessing plant vigor. However, an automated image-based solution to this novel problem has yet to be explored. RESULTS: A set of new holistic and component phenotypes is introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel graph-based method to reliably detect the leaves and the stem of maize plants by analyzing 2-dimensional visible-light image sequences captured from the side. The total number of leaves is counted, and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotype and environment (i.e., greenhouse) is experimentally demonstrated for maize plants on UNL-CPPD. Statistical models are applied to analyze the impact of the greenhouse environment and to demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1. CONCLUSION: The central contribution of the paper is a novel computer vision-based algorithm for automated detection of the individual leaves and the stem to compute new component phenotypes, along with the public release of a benchmark dataset, i.e., UNL-CPPD. Detailed experimental analyses demonstrate the temporal variation of the holistic and component phenotypes in maize regulated by environment and genetic variation, with a discussion of their significance in the context of plant science.
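One common way to realize a graph-style leaf analysis is to skeletonize the plant silhouette and treat skeleton pixels as graph nodes: endpoints then correspond to leaf tips, and path lengths from the stem approximate leaf lengths. A minimal sketch of the endpoint-counting step, assuming a binary side-view mask (this illustrates the general idea, not the paper's exact graph-based algorithm):

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_leaf_tips(plant_mask: np.ndarray) -> int:
    """Count skeleton endpoints as a proxy for the number of leaf tips."""
    skeleton = skeletonize(plant_mask.astype(bool))
    # count each skeleton pixel's 8-connected skeleton neighbors;
    # endpoints (leaf tips) have exactly one neighbor
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbors = convolve(skeleton.astype(int), kernel, mode="constant")
    return int(np.sum(skeleton & (neighbors == 1)))
```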
