Results 1 - 2 of 2
1.
Sensors (Basel); 24(6), 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544205

ABSTRACT

Automated precision weed control requires visual methods to discriminate between crops and weeds. State-of-the-art plant detection methods fail to reliably detect weeds, especially in dense and occluded scenes. In the past, using hand-crafted detection models, both color (RGB) and depth (D) data were used for plant detection in dense scenes. Remarkably, the combination of color and depth data is not widely used in current deep learning-based vision systems in agriculture. Therefore, we collected an RGB-D dataset using a stereo vision camera. The dataset contains sugar beet crops in multiple growth stages with varying weed densities. This dataset was made publicly available and was used to evaluate two novel plant detection models: the D-model, using the depth data as the input, and the CD-model, using both the color and depth data as inputs. To allow the use of existing 2D deep learning architectures, the depth data were transformed into a 2D image using color encoding. As a reference model, the C-model, which uses only color data as the input, was included. The limited availability of suitable training data for depth images demands the use of data augmentation and transfer learning. Using our three detection models, we studied the effectiveness of data augmentation and transfer learning for depth data transformed to 2D images. It was found that geometric data augmentation and transfer learning were equally effective for both the reference model and the novel models using the depth data. This demonstrates that combining color-encoded depth data with geometric data augmentation and transfer learning can improve the RGB-D detection model. However, when testing our detection models on the use case of volunteer potato detection in sugar beet farming, it was found that the addition of depth data did not improve plant detection at high vegetation densities.


Subjects
Plant Weeds, Weed Control, Humans, Agriculture, Crops, Agricultural, Sugars
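The color-encoding step described in the abstract (mapping a single-channel depth map to a 3-channel 2D image so it can feed a standard RGB detection architecture) can be sketched as below. The paper does not specify its exact colormap, so the jet-like linear ramp here is an assumption for illustration only:

```python
import numpy as np

def encode_depth_as_rgb(depth, d_min=None, d_max=None):
    """Color-encode a single-channel depth map as a 3-channel uint8 image.

    This lets existing 2D (RGB) deep-learning architectures consume depth
    data. The jet-like ramp below is an illustrative choice, not the
    encoding used by the authors.
    """
    d_min = float(np.nanmin(depth)) if d_min is None else d_min
    d_max = float(np.nanmax(depth)) if d_max is None else d_max
    # Normalize depth to [0, 1]; the small epsilon guards flat depth maps.
    norm = np.clip((depth - d_min) / (d_max - d_min + 1e-9), 0.0, 1.0)
    # Piecewise-linear jet-like ramp: each channel peaks at a different depth.
    r = np.clip(1.5 - np.abs(4.0 * norm - 3.0), 0.0, 1.0)
    g = np.clip(1.5 - np.abs(4.0 * norm - 2.0), 0.0, 1.0)
    b = np.clip(1.5 - np.abs(4.0 * norm - 1.0), 0.0, 1.0)
    return (np.stack([r, g, b], axis=-1) * 255.0).astype(np.uint8)

# Synthetic depth map standing in for a stereo-camera output.
depth = np.random.rand(64, 64).astype(np.float32)
rgb = encode_depth_as_rgb(depth)
print(rgb.shape, rgb.dtype)
```

The encoded image can then be passed through the same geometric augmentations (flips, rotations) as a color image, which is what makes the augmentation and transfer-learning comparison in the study possible.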
2.
Heliyon; 10(7): e28487, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38596044

ABSTRACT

In this study, we assess the feasibility of using Fourier Transform Infrared Photoacoustic Spectroscopy (FTIR-PAS) to predict macro- and micro-nutrients in a diverse set of manures and digestates. Furthermore, the prediction capabilities of FTIR-PAS were assessed using a novel error tolerance-based interval method in view of the accuracy required for application in agricultural practice. Partial Least-Squares Regression (PLSR) was used to correlate the FTIR-PAS spectra with nutrient contents. The prediction results were then assessed with conventional assessment metrics (root mean square error (RMSE), coefficient of determination (R2), and ratio of prediction to deviation (RPD)). The results show the potential of FTIR-PAS to be used as a rapid analysis technique, with promising prediction results (R2 > 0.91 and RPD > 2.5) for all elements except for bicarbonate-extractable P, K, and NH4+-N (0.8 < R2 < 0.9 and 2 < RPD < 2.5). The results for nitrogen and phosphorus were further evaluated using the proposed error tolerance-based interval method. The probability of predicting nitrogen within the allowed limit was calculated to be 94.6%, and phosphorus 83.8%. The proposed error tolerance-based interval method provides a better measure to decide whether FTIR-PAS in its current state could meet the accuracy required in agriculture for the quantification of nutrient content in manure and digestate.
