Results 1 - 20 of 24
1.
Sensors (Basel) ; 24(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39275384

ABSTRACT

Accurate 6DoF (degrees of freedom) pose and focal length estimation are important in extended reality (XR) applications, enabling precise object alignment and projection scaling, thereby enhancing user experiences. This study focuses on improving 6DoF pose estimation from single RGB images with unknown camera metadata. Estimating the 6DoF pose and focal length from an uncontrolled RGB image, such as one obtained from the internet, is challenging because it often lacks crucial metadata. Existing methods such as FocalPose and FocalPose++ have made progress in this domain but still face challenges due to the projection scale ambiguity between the translation of an object along the z-axis (tz) and the camera's focal length. To overcome this, we propose a two-stage strategy that decouples the projection scaling ambiguity in the estimation of z-axis translation and focal length. In the first stage, tz is set arbitrarily, and we predict all the other pose parameters and the focal length relative to the fixed tz. In the second stage, we predict the true value of tz while scaling the focal length based on the tz update. The proposed two-stage method reduces projection scale ambiguity in RGB images and improves pose estimation accuracy. Iterative update rules constrained to the first stage and tailored loss functions, including the Huber loss, in the second stage enhance the accuracy of both 6DoF pose and focal length estimation. Experimental results on benchmark datasets show significant improvements in median rotation and translation errors, as well as better projection accuracy, compared with existing state-of-the-art methods. In an evaluation across the Pix3D datasets (chair, sofa, table, and bed), the proposed two-stage method improves projection accuracy by approximately 7.19%. Additionally, the incorporation of the Huber loss reduced translation and focal length errors by 20.27% and 6.65%, respectively, compared with the FocalPose++ method.
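The decoupling described above rests on the pinhole relation that projected size scales with f/tz: if tz is fixed arbitrarily in the first stage, the focal length recovered there can be rescaled once the true tz is predicted. A minimal NumPy sketch of that second-stage rescaling and of a Huber loss (all names and values hypothetical, not taken from the paper's code) could look like this:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails (robust to outliers)."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

def stage2_rescale(f_stage1, tz_fixed, tz_predicted):
    """Rescale the stage-1 focal length once tz is replaced by its predicted value.

    Under a pinhole camera, the projected size of an object is proportional to
    f / tz, so keeping that ratio constant preserves the stage-1 projection
    while restoring a metrically meaningful tz.
    """
    return f_stage1 * (tz_predicted / tz_fixed)

# Toy example: stage 1 fixed tz at 1.0 m and estimated f = 600 px;
# stage 2 predicts the true tz = 2.5 m, so f is rescaled accordingly.
print(stage2_rescale(f_stage1=600.0, tz_fixed=1.0, tz_predicted=2.5))  # 1500.0 px
print(huber(np.array([0.2, 3.0])))  # small residual -> quadratic, large -> linear
```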

2.
Planta ; 257(2): 36, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36627492

ABSTRACT

MAIN CONCLUSION: A low-cost dynamic image capturing and analysis pipeline using color-based deep learning segmentation was developed for direct leaf area estimation of multiple crop types in a commercial environment. Crop yield is largely driven by the radiation intercepted by the leaf canopy, making the leaf area index (LAI) a critical parameter for estimating yields. The growth rate of leaves at different growth stages determines the overall LAI, which is used by crop growth models (CGMs) for simulating yield. Consequently, precise phenotyping of the leaves can help elucidate phenological processes related to resource capture. A stable system for acquiring images and a strong data processing backend play a vital role in reducing throughput time and increasing the accuracy of calculations compared to manual analysis. However, most available solutions are not dynamic, as they use color-based segmentation, which fails to capture leaves with varying shades and shapes. We have developed a system that uses a low-cost setup to acquire images and an automated pipeline to manage data storage on the device and in the cloud. The system is powered by virtual machines that run multiple custom-trained deep learning models to segment out leaves, calculate leaf area (LA) for the whole set and at the individual leaf level, overlay important information on the images, and append the results to a CGM-compatible file, all with very high accuracy. The pipeline is dynamic and can be used for multiple crops. The use of open-source hardware, platforms, and algorithms makes this system affordable and reproducible.
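As a simple illustration of the leaf area step described above, the sketch below converts a binary or instance-labeled segmentation mask into physical leaf area by counting pixels against a known image scale; the function names and the pixels-per-centimeter calibration are assumptions for illustration, not the pipeline's actual code:

```python
import numpy as np

def leaf_area_cm2(mask, px_per_cm):
    """Convert a binary leaf segmentation mask to area in cm^2.

    `mask` is an HxW array whose nonzero pixels belong to leaves; `px_per_cm`
    is the image scale, e.g. obtained from a calibration target in the scene.
    """
    leaf_pixels = np.count_nonzero(mask)
    return leaf_pixels / (px_per_cm ** 2)

def per_leaf_areas(labeled_mask, px_per_cm):
    """Area per individual leaf, given an integer-labeled instance mask."""
    labels, counts = np.unique(labeled_mask[labeled_mask > 0], return_counts=True)
    return {int(l): c / (px_per_cm ** 2) for l, c in zip(labels, counts)}

# Toy example: a 3x4 mask with two leaf instances and a scale of 10 px/cm.
instances = np.array([[0, 1, 1, 0],
                      [0, 1, 2, 2],
                      [0, 0, 2, 2]])
print(leaf_area_cm2(instances > 0, px_per_cm=10))   # total leaf area
print(per_leaf_areas(instances, px_per_cm=10))      # area per leaf
```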


Subjects
Deep Learning; Crops, Agricultural; Algorithms; Plant Leaves
3.
Sensors (Basel) ; 23(2)2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36679603

ABSTRACT

Previous research has demonstrated the potential to reconstruct human facial skin spectra from the responses of RGB cameras to achieve high-fidelity color reproduction of human facial skin in various industrial applications. Nonetheless, the level of precision still leaves room for improvement. Inspired by the asymmetry of human facial skin color in the CIELab* color space, we propose a practical framework, HPCAPR, for facial skin reflectance reconstruction based on calibrated datasets, which reconstructs the facial spectra in subsets derived from clustering techniques in several spectrometric and colorimetric spaces, i.e., the spectral reflectance space, the Principal Component (PC) space, CIELab*, and its three 2D subordinate color spaces, La*, Lb*, and ab*. The spectral reconstruction algorithm is optimized by combining state-of-the-art algorithms and thoroughly scanning the parameter space. The results show that the hybrid of PCA and RGB polynomial regression with 3 PCs plus a 1st-order polynomial extension gives the best results. The performance can be improved substantially by operating the spectral reconstruction framework within the subset classified in the La* color subspace. Compared with not applying the clustering technique, it yields improvements of 25.2% and 57.1% in the median and maximum errors for the best cluster, respectively; for the worst cluster, the maximum error was reduced by 42.2%.
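The best-performing configuration reported above, PCA on the reflectance spectra combined with a first-order RGB polynomial regression onto three principal components, can be roughly sketched with scikit-learn as follows; the synthetic data, crude camera model, and function names are illustrative assumptions rather than the HPCAPR implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_pca_poly(rgb_train, spectra_train, n_pcs=3):
    """Fit the hybrid model: PCA on reflectance spectra plus a 1st-order
    (linear + intercept) regression from RGB to the PC scores."""
    pca = PCA(n_components=n_pcs).fit(spectra_train)
    scores = pca.transform(spectra_train)
    reg = LinearRegression().fit(rgb_train, scores)
    return pca, reg

def reconstruct(rgb, pca, reg):
    """Predict PC scores from RGB and map them back to reflectance spectra."""
    return pca.inverse_transform(reg.predict(rgb))

# Toy example with synthetic data: 200 samples, 31-band spectra (e.g. 400-700 nm).
rng = np.random.default_rng(0)
spectra = np.clip(rng.normal(0.4, 0.1, (200, 31)), 0, 1)
rgb = spectra[:, [25, 15, 5]] + rng.normal(0, 0.01, (200, 3))  # crude camera model
pca, reg = fit_pca_poly(rgb[:150], spectra[:150])
est = reconstruct(rgb[150:], pca, reg)
print(np.sqrt(np.mean((est - spectra[150:]) ** 2)))  # RMS reconstruction error
```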


Subjects
Algorithms; Skin; Humans; Color; Colorimetry/methods; Face/physiology
4.
Sensors (Basel) ; 22(19)2022 Oct 07.
Article in English | MEDLINE | ID: mdl-36236700

ABSTRACT

Monitoring the status of cultured fish is an essential task in precision aquaculture; a smart underwater imaging device provides a non-intrusive way to monitor freely swimming fish, even in turbid or low-ambient-light waters. This paper presents a two-mode underwater surveillance camera system consisting of a sonar imaging device and a stereo camera. The sonar imaging device has two cloud-based Artificial Intelligence (AI) functions that estimate the quantity of fish and the distribution of fish length and weight in a crowded fish school. Because sonar images can be noisy and fish instances in an overcrowded school often overlap, machine learning technologies such as Mask R-CNN, Gaussian mixture models, convolutional neural networks, and semantic segmentation networks were employed to address the difficulty of analyzing fish in sonar images. Furthermore, the sonar and stereo RGB images were aligned in 3D space, offering an additional AI function for fish annotation based on RGB images. The proposed two-mode surveillance camera was tested to collect data from aquaculture tanks and offshore net cages using a cloud-based AIoT system. The accuracy of the proposed AI functions was tested against human-annotated fish metric datasets to verify the feasibility and suitability of the smart camera for remote underwater fish metric estimation.


Subjects
Artificial Intelligence; Neural Networks, Computer; Animals; Aquaculture; Humans; Sound; Technology
5.
Entropy (Basel) ; 24(11)2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36359667

ABSTRACT

In the domain of computer vision, entropy, defined as a measure of irregularity, has been proposed as an effective method for analyzing the texture of images. Several studies have shown that, with specific parameter tuning, entropy-based approaches achieve high classification accuracy for texture images when combined with machine learning classifiers. However, few entropy measures have been extended to the study of color images. Moreover, the literature lacks comparative analyses of entropy-based and modern deep learning-based classification methods for RGB color images. To address this, we first propose a new entropy-based measure for RGB images based on a multivariate approach. This multivariate approach is a bi-dimensional extension of methods that have been successfully applied to multivariate signals (unidimensional data). Then, we compare the classification results of this new approach with those obtained from several deep learning methods. The proposed entropy-based method for RGB image classification leads to promising results. In future studies, the measure could be extended to other color spaces as well.
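The paper's multivariate bidimensional entropy is not spelled out in the abstract, so the sketch below only illustrates the general idea of entropy as a texture/irregularity descriptor, using a classical per-channel Shannon entropy on an RGB image; it is a stand-in baseline, not the proposed measure:

```python
import numpy as np

def channel_shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of each color channel's intensity histogram.

    A simple, classical baseline illustrating entropy as a texture descriptor;
    it is not the multivariate bidimensional measure proposed in the paper.
    """
    entropies = []
    for c in range(img.shape[-1]):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(0, 256), density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        entropies.append(float(-(p * np.log2(p)).sum()))
    return entropies

# Toy example: a uniform-noise RGB patch has high entropy; a flat patch has zero.
rng = np.random.default_rng(1)
noisy = rng.integers(0, 256, (64, 64, 3))
flat = np.full((64, 64, 3), 128)
print(channel_shannon_entropy(noisy))  # close to 8 bits per channel
print(channel_shannon_entropy(flat))   # 0 bits per channel
```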

6.
Sensors (Basel) ; 21(2)2021 Jan 19.
Article in English | MEDLINE | ID: mdl-33477949

ABSTRACT

Timely and accurate crop growth monitoring and yield estimation are important for field management. The traditional sampling method used to estimate ramie yield is destructive. Thus, this study proposes a new method for estimating ramie yield based on field phenotypic data obtained from unmanned aerial vehicle (UAV) images. A UAV platform carrying RGB cameras was employed to collect ramie canopy images during the whole growth period. Vegetation indices (VIs), plant number, and plant height were extracted from the UAV-based images, and these data were then combined to establish a yield estimation model. Among all of the UAV-based image data, we found that the structural features (plant number and plant height) reflected ramie yield better than the spectral features; among the structural features, plant number was the most useful index for monitoring yield, with a correlation coefficient of 0.6. By fusing multiple characteristic parameters, the yield estimation model based on multiple linear regression was clearly more accurate than the stepwise linear regression model, with a determination coefficient of 0.66 and a relative root mean square error of 1.592 kg. Our study shows that it is feasible to monitor crop growth from UAV images and that the fusion of phenotypic data can improve the accuracy of yield estimation.
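A minimal sketch of the fusion step described above, fitting a multiple linear regression from structural and spectral features to yield, is shown below; the toy feature values and units are invented for illustration and are not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical per-plot features extracted from UAV imagery:
# [plant_number, plant_height_cm, vegetation_index]; target is yield (kg/plot).
X = np.array([[120, 95, 0.61],
              [150, 110, 0.68],
              [100, 80, 0.55],
              [170, 118, 0.72],
              [130, 100, 0.64],
              [160, 112, 0.70]])
y = np.array([10.2, 13.5, 8.1, 15.0, 11.0, 14.1])

model = LinearRegression().fit(X, y)           # fuse structural + spectral features
pred = model.predict(X)
rmse = np.sqrt(mean_squared_error(y, pred))    # in-sample RMSE of the toy fit
print(model.coef_, model.intercept_, rmse)
```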


Subjects
Boehmeria; Remote Sensing Technology
7.
Sensors (Basel) ; 21(23)2021 Nov 27.
Article in English | MEDLINE | ID: mdl-34883915

ABSTRACT

An improved spectral reflectance estimation method was developed to transform captured RGB images into spectral reflectance. The novelty of our method lies in an iteratively reweighted regularized model that combines polynomial expansion signals, developed for spectral reflectance estimation, with a cross-polarized imaging system used to eliminate glare and specular highlights. Two RGB images are captured under two illumination conditions. The method was tested using ColorChecker charts. The results demonstrate that the proposed method significantly improves both spectral and colorimetric accuracy: it achieves a 23.8% improvement in mean CIEDE2000 color difference and a 24.6% improvement in RMS error compared with the classic regularized least squares (RLS) method. The proposed method predicts spectral properties with accuracy within an acceptable range, i.e., the typical customer tolerance of less than 3 DE units in the graphic arts industry.
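The abstract's iteratively reweighted regularized model with polynomial expansion could be approximated along the following lines; the Huber-style reweighting, fixed ridge penalty, and synthetic training data are assumptions made for the sketch, not the authors' exact formulation:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def irls_ridge(rgb, spectra, degree=2, lam=1e-3, n_iter=10, delta=0.01):
    """Iteratively reweighted ridge regression from polynomial-expanded RGB
    values to reflectance spectra.

    The sample-weighting rule and the fixed ridge parameter are illustrative
    choices, not the exact scheme used in the paper.
    """
    X = PolynomialFeatures(degree).fit_transform(rgb)      # polynomial expansion
    w = np.ones(len(X))
    for _ in range(n_iter):
        W = np.diag(w)
        # weighted, regularized least squares solution for the mapping matrix
        M = np.linalg.solve(X.T @ W @ X + lam * np.eye(X.shape[1]), X.T @ W @ spectra)
        resid = np.linalg.norm(spectra - X @ M, axis=1)
        w = np.where(resid <= delta, 1.0, delta / resid)    # downweight large residuals
    return M

# Toy example with synthetic training data (200 samples, 31-band spectra).
rng = np.random.default_rng(2)
spectra = np.clip(rng.normal(0.5, 0.1, (200, 31)), 0, 1)
rgb = spectra[:, [25, 15, 5]]
M = irls_ridge(rgb, spectra)
est = PolynomialFeatures(2).fit_transform(rgb) @ M
print(np.sqrt(np.mean((est - spectra) ** 2)))   # RMS error on the training set
```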


Subjects
Colorimetry; Lighting; Algorithms
8.
Sensors (Basel) ; 21(21)2021 Oct 21.
Article in English | MEDLINE | ID: mdl-34770306

ABSTRACT

Monitoring fruit growth is useful for estimating final yields in advance and predicting optimum harvest times. However, observing fruit all day at the farm via RGB images is not an easy task because the light conditions are constantly changing. In this paper, we present CROP (Central Roundish Object Painter). The method involves image segmentation by deep learning, and the architecture of the neural network is a deeper version of U-Net. CROP identifies different types of central roundish fruit in an RGB image under varied light conditions and creates a corresponding mask. Counting the mask pixels gives the relative two-dimensional size of the fruit, so time-series images can provide a non-contact means of automatically monitoring fruit growth. Although our measurement unit differs from the traditional one (length), we believe that shape identification potentially provides more information. Interestingly, CROP has more general uses, working even for some other roundish objects. For this reason, we hope that CROP and our methodology will yield big data that promote scientific advancements in horticultural science and other fields.


Subjects
Deep Learning; Fruit; Neural Networks, Computer
9.
Mikrochim Acta ; 186(11): 690, 2019 10 09.
Article in English | MEDLINE | ID: mdl-31595372

ABSTRACT

This work describes an aptamer-based capillary assay for ethanolamine (EA). It makes use of a strand-displacement format and magnetic particles. The capillary tubes are coated with three layers: (a) first with short oligonucleotides complementary to the aptamer (EA-comp.); (b) then with magnetic particles (Dynabeads) coated with the EA-binding aptamer (EA-aptamer); and (c) with short oligonucleotide-coated magnetic particles (EA-comp.). On exposure to a sample containing ethanolamine, the DNA-coated magnetic particles are released and subsequently collected and spatially separated using a permanent magnet. This results in the formation of characteristic black/brown spots. The assay has a visual limit of detection of 5 nM and requires only 5 min of incubation. Quantification is possible through the capture and analysis of digital (RGB) photos in the 5 to 75 nM EA concentration range. Furthermore, results from tap water and serum samples spiked with EA showed that the platform performs well in complex samples and can be applied to real sample analysis. The combined use of plastic capillaries, visual detection, and passive flow makes the method suited for implementation in a point-of-care device. Graphical abstract: Schematic representation of the capillary assay steps.


Subjects
Aptamers, Nucleotide/chemistry; Biosensing Techniques/methods; DNA/chemistry; Ethanolamine/blood; Aptamers, Nucleotide/genetics; Base Sequence; Biosensing Techniques/instrumentation; DNA/genetics; Drinking Water/analysis; Ethanolamine/chemistry; Humans; Limit of Detection; Magnetic Phenomena; Nucleic Acid Hybridization; Point-of-Care Testing
10.
Mikrochim Acta ; 186(8): 496, 2019 07 03.
Article in English | MEDLINE | ID: mdl-31270596

ABSTRACT

Carboxylic acids (CAs) have been reported as potential biomarkers of specific diseases or human body odors. A visual sensor array based on indicator displacement assays (IDAs) is described here. The arrays were prepared by spotting solutions of the following metal complexes: murexide-Ni(II), murexide-Cu(II), zincon-Zn(II), and xylenol orange-Cu(II), and are capable of discriminating 15 CAs and quantifying pyruvic acid (PA). Clear differences can be observed in distinctive difference maps obtained within 5 min by subtracting the red, green, and blue (RGB) values of digital images taken before exposure to the analytes from those taken after exposure. After analysis of the multidimensional data with pattern recognition algorithms including HCA, PCA, and LDA, excellent classification specificity and accuracy of >96% were obtained for all samples. The IDA array exhibited a linear range from 10 to 1500 µM with a theoretical detection limit of 3.5 µM towards PA. Recoveries from real samples varied from 84.8% to 114.3%. The as-fabricated IDA sensor array showed excellent selectivity over other organic interfering substances and good batch-to-batch reproducibility, demonstrating its robustness. All these observations suggest that the IDA sensor array is a promising route for the discrimination of CAs. Graphical abstract: Schematic diagram of the indicator displacement assay (a), the procedure for acquiring difference maps (b), and pattern recognition for CAs (c). The method uses hierarchical cluster analysis (HCA), principal component analysis (PCA), and linear discriminant analysis (LDA).
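A rough sketch of the pattern recognition step, PCA followed by cross-validated LDA on ΔRGB difference-map features, is given below; the feature layout, class count, and synthetic data are purely illustrative and much smaller than the 15-acid problem in the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: each row holds the (dR, dG, dB) differences of the
# four indicator-metal spots for one sample, i.e. 12 features; labels are the acids.
rng = np.random.default_rng(3)
n_per_class, n_classes = 20, 5          # toy setup, far smaller than 15 CAs
centers = rng.normal(0, 30, (n_classes, 12))
X = np.vstack([c + rng.normal(0, 5, (n_per_class, 12)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

X_pca = PCA(n_components=4).fit_transform(X)          # unsupervised projection
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X_pca, y, cv=5).mean()     # supervised discrimination
print(f"cross-validated LDA accuracy: {acc:.2f}")
```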


Subjects
Carboxylic Acids/analysis; Carboxylic Acids/blood; Carboxylic Acids/chemistry; Cluster Analysis; Colorimetry; Copper/chemistry; Discriminant Analysis; Fluorescent Dyes/chemistry; Humans; Murexide/chemistry; Nickel/chemistry; Phenols/chemistry; Principal Component Analysis; Sulfoxides/chemistry; Zinc/chemistry
11.
Sensors (Basel) ; 19(13)2019 Jul 09.
Article in English | MEDLINE | ID: mdl-31323927

ABSTRACT

Power transmission lines are the link between power plants and points of consumption, via substations. The assessment of damaged aerial power lines and rusted conductors is of extreme importance for public safety; hence, power lines and their associated components must be periodically inspected to ensure a continuous supply and to identify any fault or defect. To achieve these objectives, Unmanned Aerial Vehicles (UAVs) have recently been widely used; they provide a safe way to bring sensors close to the power transmission lines and their associated components without halting the equipment during the inspection, while reducing operational cost and risk. In this work, a drone equipped with multi-modal sensors captures images in the visible and infrared domains and transmits them to the ground station. We used state-of-the-art computer vision methods to highlight expected faults (i.e., hot spots) and damaged components of the electrical infrastructure (i.e., damaged insulators). Infrared imaging, which is invariant to large scale and illumination changes in the real operating environment, supported the identification of faults in power transmission lines, while a neural network was adapted and trained to detect and classify insulators from an optical video stream. We demonstrate our approach on data captured by a drone in Parma, Italy.

12.
Mikrochim Acta ; 185(4): 235, 2018 03 22.
Article in English | MEDLINE | ID: mdl-29594673

ABSTRACT

It is shown that triangular silver nanoplates (TAgNPs) are viable colorimetric probes for the fast, sensitive, and selective detection of Hg(II). Detection is accomplished by reducing Hg(II) ions to elemental Hg so that an Ag/Hg amalgam forms on the surface of the TAgNPs. This inhibits the etching of the TAgNPs by chloride ions. Correspondingly, a distinct color transition can be observed from yellow to brown, purple, and blue. The color alterations extracted from the red, green, and blue channels of digital (RGB) images can be applied to the determination of Hg(II). The Euclidean distances (EDs), i.e., the square roots of the sums of the squares of the ΔRGB values, vary with Hg(II) concentration in the 5 nM to 100 nM range, and the limit of detection is as low as 0.35 nM. The color changes also allow a visual estimation of the Hg(II) concentration. The method is simple in that it only requires a digital camera for data acquisition and Photoshop software for extracting the RGB variations and processing the data. Graphical abstract: Hg2+ detection was achieved through the anti-etching of TAgNPs caused by the formation of silver amalgam, along with vivid multicolor variations from yellow to brown, purple, and eventually blue.
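The Euclidean distance readout described above reduces to a one-line formula over the ΔRGB values; a small sketch (with hypothetical RGB readings, not measured values) is shown below:

```python
import numpy as np

def delta_rgb_euclidean(rgb_sample, rgb_blank):
    """Euclidean distance ED = sqrt(dR^2 + dG^2 + dB^2) between a reacted
    sample spot and the blank, as used for colorimetric quantification."""
    d = np.asarray(rgb_sample, dtype=float) - np.asarray(rgb_blank, dtype=float)
    return float(np.sqrt(np.sum(d ** 2)))

# Toy example: mean RGB of the blank and of a spot after reaction with Hg(II).
blank = (212, 180, 60)      # yellow TAgNP suspension (hypothetical values)
sample = (120, 90, 130)     # purple-shifted color (hypothetical values)
print(delta_rgb_euclidean(sample, blank))
# A calibration curve would then be fit between ED and known Hg(II) standards,
# e.g. with numpy.polyfit, to quantify unknown samples in the 5-100 nM range.
```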

13.
Front Plant Sci ; 15: 1445490, 2024.
Article in English | MEDLINE | ID: mdl-39309178

ABSTRACT

Introduction: Monitoring the leaf area index (LAI), which is directly related to the growth status of rice, helps to optimize and meet the crop's fertilizer requirements, supporting high quality, high yield, and environmental sustainability. Unmanned aerial vehicle (UAV) remote sensing has great potential for precision monitoring applications in agriculture because it is efficient, nondestructive, and rapid. The spectral information currently in wide use is susceptible to factors such as soil background and canopy structure, leading to low accuracy in estimating rice LAI. Methods: In this paper, RGB and multispectral images of the critical growth periods were acquired in rice field experiments. From these remote sensing images, the spectral indices and texture information of the rice canopy were extracted. Furthermore, texture information from the various images at multiple scales was obtained through resampling and used to assess its capacity for LAI estimation. Results and discussion: The results showed that the spectral indices (SI) based on RGB and multispectral imagery saturate in the middle and late growth stages of rice, leading to low accuracy in estimating LAI. Moreover, multiscale texture analysis revealed that the texture of multispectral images derived from the 680 nm band is less affected by resolution, whereas the texture of RGB images is resolution dependent. Fusing spectral and texture features with random forest and multiple stepwise regression algorithms revealed that the highest accuracy in estimating LAI is achieved with SI and texture features (at 0.48 m) from multispectral imagery. This approach yielded excellent prediction results for both high and low LAI values. With the gradual improvement of satellite image resolution, the results of this study are expected to enable accurate monitoring of rice LAI on a large scale.

14.
Water Res ; 260: 121861, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38875854

ABSTRACT

The rapid and efficient quantification of Escherichia coli concentrations is crucial for monitoring water quality. Remote sensing techniques and machine learning algorithms have been used to detect E. coli in water and estimate its concentrations. The application of these approaches, however, is challenged by limited sample availability and unbalanced water quality datasets. In this study, we estimated the E. coli concentration in an irrigation pond in Maryland, USA, during the summer season using demosaiced natural color (red, green, and blue: RGB) imagery in the visible and infrared spectral ranges, and a set of 14 water quality parameters. We did this by deploying four machine learning models - Random Forest (RF), Gradient Boosting Machine (GBM), Extreme Gradient Boosting (XGB), and K-nearest Neighbor (KNN) - under three data utilization scenarios: water quality parameters only, combined water quality and small unmanned aircraft system (sUAS)-based RGB data, and RGB data only. To select the training and test datasets, we applied two data-splitting methods: ordinary and quantile data splitting. Quantile splitting provides a constant splitting ratio in each decile of the E. coli concentration distribution, and it resulted in better model performance metrics and smaller differences between the metrics for the training and testing datasets. When trained with quantile data splitting after hyperparameter optimization, the RF, GBM, and XGB models had R2 values above 0.847 for the training dataset and above 0.689 for the test dataset. The combination of water quality and RGB imagery data resulted in a higher R2 value (>0.896) for the test dataset. Shapley additive explanations (SHAP) of the relative importance of variables revealed that the visible blue spectrum intensity and water temperature were the most influential parameters in the RF model. Demosaiced RGB imagery served as a useful predictor of E. coli concentration in the studied irrigation pond.
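The quantile data splitting described above can be emulated by binning the continuous E. coli target into deciles and stratifying the train/test split on those bins; the sketch below, with synthetic skewed data and an RF model, is an illustrative reading of that idea rather than the authors' exact procedure:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

def quantile_split(X, y, test_size=0.3, n_bins=10, seed=0):
    """Quantile (decile-stratified) splitting for a continuous target: bin y
    into deciles and keep the same train/test ratio inside every bin."""
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)
    return train_test_split(X, y, test_size=test_size, stratify=bins, random_state=seed)

# Toy example with a skewed synthetic target mimicking unbalanced E. coli counts.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 14))                       # e.g. 14 water quality parameters
y = np.exp(X[:, 0] + 0.3 * rng.normal(size=300))     # right-skewed, log-normal target
X_tr, X_te, y_tr, y_te = quantile_split(X, y)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(model.score(X_tr, y_tr), model.score(X_te, y_te))   # train / test R^2
```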


Subjects
Agricultural Irrigation; Escherichia coli; Machine Learning; Ponds; Water Quality; Ponds/microbiology; Water Microbiology; Environmental Monitoring/methods; Maryland
15.
Foods ; 12(11)2023 May 30.
Article in English | MEDLINE | ID: mdl-37297436

ABSTRACT

Saffron (Crocus sativus L.) is the most expensive spice in the world, known for its unique aroma and its use as a colorant in the food industry. Because of its high price, it is frequently adulterated. In the current study, a variety of soft computing methods, including classifiers (RBF, MLP, KNN, SVM, SOM, and LVQ), were employed to classify four samples of fake saffron (dyed citrus blossom, safflower, dyed fibers, and stigmas mixed with stamens) and three samples of genuine saffron (dried by different methods). RGB and spectral images (near-infrared and red bands) were captured from the prepared samples for analysis. The amounts of crocin, safranal, and picrocrocin were measured chemically for comparison with the image analysis results. The comparison of the classifiers indicated that KNN could classify the RGB and NIR images of the samples in the training phase with 100% accuracy. However, KNN's accuracy for the different samples in the test phase was between 71.31% and 88.10%. The RBF neural network achieved the highest accuracy in the training, test, and overall phases: accuracies of 99.52% and 94.74% were obtained using the features extracted from the RGB and spectral images, respectively. Thus, soft computing models are helpful tools for detecting and classifying fake and genuine saffron based on RGB and spectral images.
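A compact sketch of the kind of KNN classification reported above, applied to per-sample RGB/NIR summary features, is shown below; the feature definitions, class layout, and synthetic values are assumptions for illustration only:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature table: mean R, G, B and mean NIR intensity per sample image;
# labels 0-6 stand for the three genuine and four fake saffron classes.
rng = np.random.default_rng(5)
centers = rng.uniform(40, 220, (7, 4))
X = np.vstack([c + rng.normal(0, 8, (30, 4)) for c in centers])
y = np.repeat(np.arange(7), 30)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
knn.fit(X_tr, y_tr)
print(f"test accuracy: {knn.score(X_te, y_te):.2f}")
```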

16.
Data Brief ; 48: 109230, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37383825

ABSTRACT

The grapevine is vulnerable to diseases, deficiencies, and pests, leading to significant yield losses. Current disease control involves monitoring and spraying phytosanitary products at the vineyard block scale. However, automatic detection of disease symptoms could reduce the use of these products and allow diseases to be treated before they spread. Flavescence dorée (FD), a highly infectious disease that causes significant yield losses, is diagnosed only by identifying symptoms on three grapevine organs: leaf, shoot, and bunch. Its diagnosis is carried out by scouting experts, because many other diseases and stresses, either biotic or abiotic, produce similar symptoms (but not all at the same time). These experts need a decision support tool to improve their scouting efficiency. To address this, a dataset of 1483 RGB images of grapevines affected by various diseases and stresses, including FD, was acquired by proximal sensing. The images were taken in the field at a distance of 1-2 meters to capture entire grapevines, and an industrial flash ensured constant luminance in the images regardless of the environmental conditions. Images of 5 grape varieties (Cabernet sauvignon, Cabernet franc, Merlot, Ugni blanc, and Sauvignon blanc) were acquired over 2 years (2020 and 2021). Two types of annotations were made: expert diagnosis at the grapevine scale in the field, and symptom annotations at the leaf, shoot, and bunch levels on a computer. In 744 images, the leaves were annotated and divided into three classes: 'FD symptomatic leaves', 'Esca symptomatic leaves', and 'Confounding leaves'. Symptomatic bunches and shoots were, in addition to leaves, annotated in 110 images using bounding boxes and broken lines, respectively. Additionally, 128 segmentation masks were created to allow the detection of symptomatic shoots and bunches by segmentation algorithms and to compare the results with those of the detection algorithms.

17.
Methods Mol Biol ; 2539: 37-48, 2022.
Article in English | MEDLINE | ID: mdl-35895194

ABSTRACT

High-throughput phenotyping platforms for growth chamber- and greenhouse-grown plants enable nondestructive, automated measurements of plant traits, including shape, aboveground architecture, length, and biomass over time. However, many of these methods require expensive equipment or phenotyping expertise to establish. Here we present a relatively inexpensive and simple phenotyping method for imaging hundreds of small- to medium-sized growth chamber- or greenhouse-grown plants with a digital camera. Using this method, we image hundreds of tomato plants in one day.


Subjects
Plants; Solanum lycopersicum; Biomass; Phenotype
18.
Data Brief ; 42: 108316, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35677455

ABSTRACT

In this data paper, we propose an open dataset (named SROADEX) containing more than 527,000 image tiles of 256 × 256 pixels stored in the lossless PNG format and tagged at the pixel level with road information. The dataset covers approximately 8650 km2 of the Spanish territory, is divided into train, validation, and test sets, and can be used by researchers and professionals for training other extraction solutions and benchmarking additional models. The SROADEX dataset is available under a CC-BY 4.0 licence and can be freely downloaded from the Zenodo repository.

19.
Front Plant Sci ; 13: 938216, 2022.
Article in English | MEDLINE | ID: mdl-36092445

ABSTRACT

Obtaining crop above-ground biomass (AGB) information quickly and accurately is beneficial to farmland production management and the optimization of planting patterns. Many studies have confirmed that, due to canopy spectral saturation, AGB is underestimated across multiple growth periods of crops when only optical vegetation indices are used. To solve this problem, this study obtains textures and crop height directly from ultrahigh-ground-resolution (GDS) red-green-blue (RGB) images to estimate potato AGB in three key growth periods. The textures include gray-level co-occurrence matrix (GLCM) textures and Gabor wavelet textures. GLCM-based textures were extracted from RGB images at seven GDS values (1, 5, 10, 30, 40, 50, and 60 cm). Gabor-based textures were obtained from magnitude images at five scales (scales 1-5, labeled S1-S5, respectively). Potato crop height was extracted from the generated crop height model. Finally, to estimate potato AGB, we used (i) GLCM-based textures from different GDS values and their combinations, (ii) Gabor-based textures from different scales and their combinations, (iii) all GLCM-based textures combined with crop height, (iv) all Gabor-based textures combined with crop height, and (v) the two types of textures combined with crop height, using least-squares support vector machine (LSSVM), extreme learning machine, and partial least squares regression techniques. The results show that (i) potato crop height and AGB first increase and then decrease over the growth period; (ii) GDS and scale mainly affect the correlation between GLCM- and Gabor-based textures and AGB; (iii) for estimating AGB, GLCM-based textures at GDS1 and GDS30 work best when the GDS is between 1 and 5 cm and between 10 and 60 cm, respectively (however, estimating potato AGB from Gabor-based textures gradually deteriorates as the Gabor convolution kernel scale increases); (iv) AGB estimation based on a single texture type is not as good as estimates based on multi-resolution GLCM-based and multiscale Gabor-based textures (with the latter being the best); and (v) different forms of textures combined with crop height using the LSSVM technique improved the normalized root mean square error by 22.97%, 14.63%, 9.74%, and 8.18% compared with using only all GLCM-based textures, only all Gabor-based textures, the former combined with crop height, and the latter combined with crop height, respectively. Therefore, different forms of texture features obtained from RGB images acquired by unmanned aerial vehicles, combined with crop height, improve the accuracy of potato AGB estimates under high coverage.
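The GLCM texture features discussed above can be extracted with scikit-image (version 0.19 or later, where the functions are spelled graycomatrix/graycoprops); the quantization level, distances, and angles below are illustrative choices, not the study's settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, distances=(1,), angles=(0, np.pi / 2), levels=32):
    """GLCM texture features of a grayscale crop image.

    The image is quantized to `levels` gray levels before building the
    co-occurrence matrix; the returned dict averages each property over the
    requested distances and angles.
    """
    q = (gray.astype(float) / 256 * levels).astype(np.uint8)   # quantize to 0..levels-1
    glcm = graycomatrix(q, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return {p: float(graycoprops(glcm, p).mean()) for p in props}

# Toy example: texture features of a synthetic 8-bit canopy-like patch.
rng = np.random.default_rng(6)
patch = rng.integers(0, 256, (128, 128), dtype=np.uint8)
print(glcm_features(patch))
```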

20.
Plants (Basel) ; 11(23)2022 Nov 27.
Article in English | MEDLINE | ID: mdl-36501299

ABSTRACT

Buckwheat is an important minor grain crop with medicinal and edible uses. Accurate judgment of buckwheat maturity helps reduce harvest losses and improve yield. With the rapid development of unmanned aerial vehicle (UAV) technology, UAVs have been widely used to predict the maturity of agricultural products. This paper proposes a method using recursive feature elimination with cross-validation (RFECV) combined with multiple regression models to predict the maturity of buckwheat from UAV RGB images. The images were captured in the buckwheat experimental field of Shanxi Agricultural University in Jinzhong, Northern China, from September to October 2021. The variety was the sweet buckwheat 'Jinqiao No. 1'. To mine the feature vectors most highly correlated with buckwheat maturity, 22 features were initially selected: 5 vegetation indices, 9 color features, and 8 texture features. The RFECV method was adopted to obtain the optimal feature dimensions and combinations for six regression models: decision tree regression, linear regression, random forest regression, AdaBoost regression, gradient boosting regression, and extremely randomized trees regression. The coefficient of determination (R2) and root mean square error (RMSE) were used to analyze the combinations of the six regression models with the different feature spaces. The experimental results show that a single vegetation index performed poorly in predicting buckwheat maturity, whereas feature space "5" combined with the gradient boosting regression model performed best, with R2 and RMSE of 0.981 and 1.70, respectively. These results can provide an important theoretical basis for predicting the regional maturity of crops.
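The RFECV step described above maps naturally onto scikit-learn's RFECV combined with a gradient boosting regressor; the synthetic 22-feature dataset and the scoring choices below are illustrative assumptions, not the study's data or configuration:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 22-dimensional feature set (vegetation indices,
# color features, texture features); the target mimics a maturity score.
X, y = make_regression(n_samples=150, n_features=22, n_informative=6,
                       noise=5.0, random_state=0)

gbr = GradientBoostingRegressor(random_state=0)
selector = RFECV(gbr, step=1, cv=5, scoring="r2", min_features_to_select=1)
selector.fit(X, y)

print("optimal number of features:", selector.n_features_)
print("selected feature indices:", np.where(selector.support_)[0])
# Cross-validated R^2 of the boosting model restricted to the selected features.
print(cross_val_score(gbr, X[:, selector.support_], y, cv=5).mean())
```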
