Results 1 - 7 of 7
1.
Front Plant Sci ; 15: 1410249, 2024.
Article in English | MEDLINE | ID: mdl-38872880

ABSTRACT

Integrating high-throughput phenotyping (HTP)-based traits into phenomic and genomic selection (GS) can accelerate the breeding of high-yielding and climate-resilient wheat cultivars. In this study, we explored the applicability of Unmanned Aerial Vehicle (UAV)-assisted HTP combined with deep learning (DL) for the phenomic or multi-trait (MT) genomic prediction of grain yield (GY), test weight (TW), and grain protein content (GPC) in winter wheat. Significant correlations were observed between agronomic traits and HTP-based traits across different growth stages of winter wheat. Using a deep neural network (DNN) model, HTP-based phenomic predictions showed robust prediction accuracies for GY, TW, and GPC for a single location, with R2 of 0.71, 0.62, and 0.49, respectively. Prediction accuracies further increased (R2 of 0.76, 0.64, and 0.75 for GY, TW, and GPC, respectively) when advanced breeding lines from multiple locations were used in the DNN model. Prediction accuracies for GY varied across growth stages, with the highest accuracy at the Feekes 11 (milky ripe) stage. Furthermore, forward prediction of GY in preliminary breeding lines using a DNN trained on multi-location data from advanced breeding lines improved the prediction accuracy by 32% compared to single-location data. Next, we evaluated the potential of incorporating HTP-based traits into multi-trait genomic selection (MT-GS) models for the prediction of GY, TW, and GPC. MT-GS models including the UAV-derived anthocyanin reflectance index (ARI), green chlorophyll index (GCI), and ratio vegetation index 2 (RVI_2) as covariates demonstrated higher predictive ability for GY (0.40, 0.40, and 0.37, respectively) than the single-trait model (0.23). Overall, this study demonstrates the potential of integrating HTP traits into DL-based phenomic or MT-GS models for enhancing breeding efficiency.
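The abstract does not include code; the phenomic-prediction idea (regressing an agronomic trait on UAV-derived HTP traits with a neural network) can be sketched as below. This is a minimal illustration using scikit-learn's `MLPRegressor` as a stand-in for the paper's DNN, with entirely synthetic data; the feature columns are hypothetical placeholders for indices such as ARI, GCI, and RVI_2.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Synthetic UAV-derived HTP traits (stand-ins for ARI, GCI, RVI_2, etc.)
X = rng.normal(size=(n, 6))
# Grain yield as a noisy nonlinear function of the HTP traits (synthetic)
y = 2.0 * X[:, 0] + np.tanh(X[:, 1]) - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"held-out R2 = {r2:.2f}")
```

In practice the network architecture, trait preprocessing, and train/test splits (single- vs multi-location) would follow the study's design rather than this toy setup.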

2.
Plant Genome ; : e20470, 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38853339

ABSTRACT

Fusarium head blight (FHB) remains one of the most destructive diseases of wheat (Triticum aestivum L.), causing considerable losses in yield and end-use quality. Phenotyping of the FHB resistance traits Fusarium-damaged kernels (FDK) and deoxynivalenol (DON) is either prone to human bias or resource expensive, hindering progress in breeding for FHB-resistant cultivars. Though genomic selection (GS) can be an effective way to select these traits, inaccurate phenotyping remains a hurdle to exploiting this approach. Here, we used artificial intelligence (AI)-based precise FDK estimation, which exhibits high heritability and correlation with DON. Further, GS using AI-based FDK (FDK_QVIS/FDK_QNIR) showed a two-fold increase in predictive ability (PA) compared to GS for traditionally estimated FDK (FDK_V). Next, the AI-based FDK was evaluated along with other traits in multi-trait (MT) GS models to predict DON. The inclusion of FDK_QNIR and FDK_QVIS with days to heading as covariates improved the PA for DON by 58% over the baseline single-trait GS model. We next used hyperspectral imaging of FHB-infected wheat kernels as a novel avenue to improve MT GS for DON. The PA for DON using selected wavebands derived from hyperspectral imaging in MT GS models surpassed the single-trait GS model by around 40%. Finally, we evaluated phenomic prediction for DON by integrating hyperspectral imaging with deep learning to directly predict DON in FHB-infected wheat kernels and observed an accuracy (R2 = 0.45) comparable to the best-performing MT GS models. This study demonstrates the potential application of AI and vision-based platforms to improve PA for FHB-related traits using genomic and phenomic selection.
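The core idea of adding a correlated, easily measured trait as a covariate to a genomic prediction model can be sketched as below. This is a hedged toy example with synthetic markers, using ridge regression as a simple stand-in for the study's GS models; the "FDK" covariate here is a simulated proxy, not the paper's AI-based estimate.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 300, 200
markers = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotype calls
g = markers @ rng.normal(scale=0.1, size=p)              # additive genetic value
env = rng.normal(size=n)                                  # component not captured by markers
don = g + 0.8 * env + rng.normal(scale=0.3, size=n)       # target trait (synthetic "DON")
fdk = env + rng.normal(scale=0.3, size=n)                 # correlated covariate ("FDK" proxy)

# Single-trait model (markers only) vs model with the FDK covariate appended
base = cross_val_score(Ridge(alpha=10.0), markers, don, cv=5, scoring="r2").mean()
with_cov = cross_val_score(Ridge(alpha=10.0), np.column_stack([markers, fdk]),
                           don, cv=5, scoring="r2").mean()
print(f"markers only R2: {base:.2f}; markers + FDK covariate R2: {with_cov:.2f}")
```

The covariate helps because it carries signal the markers cannot, mirroring (in spirit only) the reported PA gains from adding FDK_QNIR/FDK_QVIS to the DON model.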

3.
Sensors (Basel) ; 23(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38139554

ABSTRACT

Accurate and timely monitoring of biomass in breeding nurseries is essential for evaluating plant performance and selecting superior genotypes. Traditional methods for phenotyping above-ground biomass in field conditions require significant time, cost, and labor. Unmanned Aerial Vehicles (UAVs) offer a rapid and non-destructive approach for phenotyping multiple field plots at a low cost. While Vegetation Indices (VIs) extracted from remote sensing imagery have been widely employed for biomass estimation, they mainly capture spectral information and disregard the 3D canopy structure and spatial pixel relationships. Addressing these limitations, this study, conducted in 2020 and 2021, aimed to explore the potential of integrating UAV multispectral imagery-derived canopy spectral, structural, and textural features with machine learning algorithms for accurate oat biomass estimation. Six oat genotypes planted at two seeding rates were evaluated in two South Dakota locations at multiple growth stages. Plot-level canopy spectral, structural, and textural features were extracted from the multispectral imagery and used as input variables for three machine learning models: Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Random Forest Regression (RFR). The results showed that (1) in addition to canopy spectral features, canopy structural and textural features are also important indicators for oat biomass estimation; (2) combining spectral, structural, and textural features significantly improved biomass estimation accuracy over using a single feature type; (3) the machine learning algorithms showed good predictive ability, with slightly better estimation accuracy for RFR (R2 = 0.926 and relative root mean square error (RMSE%) = 15.97%).
This study demonstrated the benefits of UAV imagery-based multi-feature fusion using machine learning for above-ground biomass estimation in oat breeding nurseries, holding promise for enhancing the efficiency of oat breeding through UAV-based phenotyping and crop management practices.
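The feature-fusion comparison at the heart of this study can be sketched as below: a random forest trained on spectral features alone versus on spectral + structural + textural features combined. The data here are entirely synthetic stand-ins (the feature groups are placeholders for VIs, canopy-height metrics, and texture statistics), so only the qualitative pattern is meaningful.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
spectral = rng.normal(size=(n, 5))    # stand-ins for vegetation indices
structural = rng.normal(size=(n, 3))  # stand-ins for canopy height / volume metrics
textural = rng.normal(size=(n, 4))    # stand-ins for texture (e.g. GLCM) statistics
# Synthetic biomass depending on all three feature groups
biomass = (spectral[:, 0] + 0.8 * structural[:, 0] + 0.6 * textural[:, 0]
           + rng.normal(scale=0.3, size=n))

rf = RandomForestRegressor(n_estimators=200, random_state=0)
r2_spec = cross_val_score(rf, spectral, biomass, cv=5, scoring="r2").mean()
r2_all = cross_val_score(rf, np.hstack([spectral, structural, textural]),
                         biomass, cv=5, scoring="r2").mean()
print(f"spectral only R2 = {r2_spec:.2f}; fused features R2 = {r2_all:.2f}")
```

When biomass genuinely depends on structure and texture as well as spectra, the fused model recovers variance the spectral-only model cannot, which is the study's second finding in miniature.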


Subjects
Avena, Remote Sensing Technology, Remote Sensing Technology/methods, Biomass, Plant Breeding, Machine Learning
4.
Sensors (Basel) ; 22(2)2022 Jan 13.
Article in English | MEDLINE | ID: mdl-35062559

ABSTRACT

Current strategies for phenotyping above-ground biomass in field breeding nurseries demand significant investment in both time and labor. Unmanned aerial vehicles (UAVs) can be used to derive vegetation indices (VIs) with high throughput and could provide an efficient way to predict forage yield with high accuracy. The main objective of this study was to investigate the potential of UAV-based multispectral data and machine learning approaches for estimating oat biomass. A UAV equipped with a multispectral sensor was flown over three experimental oat fields in Volga, South Shore, and Beresford, South Dakota, USA, throughout the pre- and post-heading growth phases of oats in 2019. A variety of VIs derived from the UAV-based multispectral imagery were employed to build oat biomass estimation models using four machine learning algorithms: partial least squares (PLS), support vector machine (SVM), artificial neural network (ANN), and random forest (RF). The results showed that several VIs derived from the UAV-collected images were significantly and positively correlated with dry biomass for Volga and Beresford (r = 0.2-0.65); in South Shore, however, VIs were either weakly or not significantly correlated with biomass. For Beresford, approximately 70% of the variance was explained by PLS, RF, and SVM validation models using data collected during the post-heading phase. For Volga, validation models had lower coefficients of determination (R2 = 0.20-0.25) and higher error (RMSE = 700-800 kg/ha) than training models (R2 = 0.50-0.60; RMSE = 500-690 kg/ha). In South Shore, validation models were only able to explain approximately 15-20% of the variation in biomass, possibly due to the weak correlations between VIs and biomass. Overall, this study indicates that airborne remote sensing with machine learning has potential for above-ground biomass estimation in oat breeding nurseries.
The main limitation was inconsistent model prediction accuracy across locations. Multi-year spectral data, along with structural features such as crop surface model (CSM)-derived height and volumetric indicators, should be considered in future studies when estimating biophysical parameters like biomass.
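The first analysis step in this study, correlating UAV-derived VIs with measured dry biomass, can be sketched as below. This uses NDVI as a representative VI with synthetic plot-level reflectances; the band ranges and the biomass relationship are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Synthetic plot-level band reflectances (red and near-infrared)
red = rng.uniform(0.03, 0.15, n)
nir = rng.uniform(0.30, 0.60, n)
ndvi = (nir - red) / (nir + red)   # a widely used vegetation index

# Synthetic dry biomass (kg/ha) loosely tied to NDVI plus measurement noise
biomass = 4000.0 * ndvi + rng.normal(scale=300.0, size=n)

# Pearson correlation between the VI and biomass, as reported per location
r = np.corrcoef(ndvi, biomass)[0, 1]
print(f"Pearson r between NDVI and biomass: {r:.2f}")
```

In the study this correlation varied strongly by location (r = 0.2-0.65 at Volga and Beresford, weak at South Shore), which is why the downstream regression models also varied in accuracy.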


Subjects
Avena, Remote Sensing Technology, Biomass, Machine Learning, Plant Breeding, Unmanned Aerial Devices
5.
Sensors (Basel) ; 21(3)2021 Jan 22.
Article in English | MEDLINE | ID: mdl-33499335

ABSTRACT

Early detection of grapevine viral diseases is critical for early interventions to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400-1000 nm hyperspectral sensor (Specim, Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate the reflectance spectra of healthy and GVCV-infected vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial-spectral features) classification within a framework involving deep learning architectures and traditional machine learning.
The results showed that: (1) the discriminative wavelength regions included the 900-940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visible (VIS) region of 400-700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
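The pixel-wise classification setup (finding (3)) can be sketched as below: an SVM separating healthy from infected reflectance spectra where the infection signal sits in a narrow NIR window. The band count, noise levels, and the 0.05 reflectance shift are invented for illustration; only the workflow mirrors the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
bands = np.linspace(400, 1000, 204)   # band centers resembling a SPECIM IQ cube
n = 300
healthy = 0.4 + 0.02 * rng.normal(size=(n, bands.size))
infected = 0.4 + 0.02 * rng.normal(size=(n, bands.size))
# Simulated early-infection signal: a small reflectance shift in the 900-940 nm window
infected[:, (bands >= 900) & (bands <= 940)] += 0.05

X = np.vstack([healthy, infected])
y = np.repeat([0, 1], n)              # 0 = healthy pixel, 1 = GVCV-infected pixel
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"pixel-wise test accuracy: {acc:.2f}")
```

Real hyperspectral pixels are far noisier and spatially correlated, which is why the study also explored image-wise (spatial-spectral) classifiers such as the 3D-CNN.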


Subjects
Badnavirus, Deep Learning, Plant Diseases/virology, Plant Viruses, Finland, Hyperspectral Imaging
6.
Sensors (Basel) ; 19(6)2019 Mar 14.
Article in English | MEDLINE | ID: mdl-30875732

ABSTRACT

Urban areas feature complex and heterogeneous land covers, which create challenging issues for tree species classification. The increased availability of high spatial resolution multispectral satellite imagery and LiDAR datasets, combined with the recent evolution of deep learning within remote sensing for object detection and scene classification, provides promising opportunities to map individual tree species with greater accuracy and resolution. However, there are knowledge gaps related to the contribution of WorldView-3 SWIR bands, the very high resolution PAN band, and LiDAR data in detailed tree species mapping. Additionally, contemporary deep learning methods are hampered by a lack of training samples and the difficulty of preparing training data. The objective of this study was to examine the potential of a novel deep learning method, the Dense Convolutional Network (DenseNet), to identify dominant individual tree species in a complex urban environment within a fused image of WorldView-2 VNIR, WorldView-3 SWIR, and LiDAR datasets. DenseNet results were compared against two popular machine learning classifiers in remote sensing image analysis, Random Forest (RF) and Support Vector Machine (SVM). Our results demonstrated that: (1) utilizing a data fusion approach beginning with VNIR and successively adding SWIR, LiDAR, and panchromatic (PAN) bands increased the overall accuracy of the DenseNet classifier from 75.9% to 76.8%, 81.1%, and 82.6%, respectively; (2) DenseNet significantly outperformed RF and SVM for the classification of eight dominant tree species, with an overall accuracy of 82.6%, compared to 51.8% and 52% for the SVM and RF classifiers, respectively; (3) DenseNet maintained superior performance over the RF and SVM classifiers under restricted training sample quantities, which is a major limiting factor for deep learning techniques.
Overall, the study reveals that DenseNet is more effective for urban tree species classification, as it outperforms the popular RF and SVM techniques when working with highly complex image scenes regardless of training sample size.
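The baseline comparison in this study (RF vs. SVM overall accuracy on a multi-class species problem) can be sketched as below; reproducing DenseNet itself is beyond a short example. The fused-feature dataset is simulated with scikit-learn, so the accuracies are not comparable to the paper's figures, only the evaluation pattern is.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic 8-class problem standing in for fused VNIR/SWIR/LiDAR/PAN features
X, y = make_classification(n_samples=800, n_features=40, n_informative=20,
                           n_classes=8, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

accs = {}
for name, clf in [("RF", RandomForestClassifier(n_estimators=300, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    accs[name] = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name} overall accuracy: {accs[name]:.2f}")
```

The study's finding is that a DenseNet trained on the fused imagery clearly beat both of these baselines, and kept doing so as the training set shrank.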


Subjects
Deep Learning, Support Vector Machine, Humans, Neural Networks, Computer
7.
Front Big Data ; 2: 37, 2019.
Article in English | MEDLINE | ID: mdl-33693360

ABSTRACT

The recently developed OPtical TRApezoid Model (OPTRAM) has been successfully applied for watershed-scale soil moisture (SM) estimation based on remotely sensed shortwave infrared (SWIR) transformed reflectance (TRSWIR) and the normalized difference vegetation index (NDVI). This study aimed to evaluate OPTRAM for field-scale precision agriculture applications using ultrahigh-spatial-resolution optical observations obtained with one of the world's largest field robotic phenotyping scanners, located in Maricopa, Arizona. We replaced NDVI with the soil adjusted vegetation index (SAVI), which has been shown to be more accurate for cropped agricultural fields that transition from bare soil to dense vegetation cover. OPTRAM was parameterized based on the trapezoidal geometry of the pixel distribution within the TRSWIR-SAVI space, from which wet- and dry-edge parameters were determined. The accuracy of the resultant SM estimates was evaluated against ground reference measurements obtained with Time Domain Reflectometry (TDR) sensors deployed to monitor surface, near-surface, and root zone SM. The obtained results indicate SM estimation errors of 0.045 and 0.057 cm3 cm-3 for the near-surface and root zone, respectively. The high-resolution SM maps clearly capture the spatial SM variability at the sensor locations. These findings and the presented framework can be applied in conjunction with Unmanned Aerial System (UAS) observations to assist with farm-scale precision irrigation management to improve water use efficiency of cropping systems and conserve water in water-limited regions of the world.
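The trapezoid normalization at the core of OPTRAM can be sketched as below: each pixel's SWIR transformed reflectance is located between linear dry and wet edges fitted in TRSWIR-SAVI space. The edge intercepts/slopes and reflectance values here are hypothetical placeholders for illustration; a real application fits them to the observed pixel cloud.

```python
import numpy as np

def transformed_reflectance(r_swir):
    """SWIR transformed reflectance: STR = (1 - R)^2 / (2 R).
    Wetter soil reflects less SWIR, so STR increases with moisture."""
    return (1.0 - r_swir) ** 2 / (2.0 * r_swir)

def optram_w(str_pixel, savi, i_d, s_d, i_w, s_w):
    """Normalized moisture: position of a pixel between the dry and wet
    edges of the STR-SAVI trapezoid (0 = dry edge, 1 = wet edge)."""
    str_dry = i_d + s_d * savi   # dry edge evaluated at this SAVI
    str_wet = i_w + s_w * savi   # wet edge evaluated at this SAVI
    return (str_pixel - str_dry) / (str_wet - str_dry)

# Hypothetical edge parameters (intercept, slope) fitted from the pixel cloud
i_d, s_d = 0.4, 1.0   # dry edge
i_w, s_w = 2.5, 4.0   # wet edge

savi = np.array([0.2, 0.5, 0.8])
r_swir = np.array([0.35, 0.20, 0.10])   # progressively wetter pixels
w = optram_w(transformed_reflectance(r_swir), savi, i_d, s_d, i_w, s_w)
print(np.round(w, 2))
```

Converting W to volumetric soil moisture then requires a calibration against ground reference data, which is the role the TDR measurements play in this study.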
