Stereo Vision for Plant Detection in Dense Scenes.
Ruigrok, Thijs; van Henten, Eldert J; Kootstra, Gert.
Affiliation
  • Ruigrok T; Farm Technology, Department of Plant Sciences, Wageningen University and Research, 6700 AA Wageningen, The Netherlands.
  • van Henten EJ; Farm Technology, Department of Plant Sciences, Wageningen University and Research, 6700 AA Wageningen, The Netherlands.
  • Kootstra G; Farm Technology, Department of Plant Sciences, Wageningen University and Research, 6700 AA Wageningen, The Netherlands.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in En | MEDLINE | ID: mdl-38544205
ABSTRACT
Automated precision weed control requires visual methods to discriminate between crops and weeds. State-of-the-art plant detection methods fail to reliably detect weeds, especially in dense and occluded scenes. In the past, using hand-crafted detection models, both color (RGB) and depth (D) data were used for plant detection in dense scenes. Remarkably, the combination of color and depth data is not widely used in current deep learning-based vision systems in agriculture. Therefore, we collected an RGB-D dataset using a stereo vision camera. The dataset contains sugar beet crops in multiple growth stages with varying weed densities. This dataset was made publicly available and was used to evaluate two novel plant detection models: the D-model, using the depth data as the input, and the CD-model, using both the color and depth data as inputs. For compatibility with existing 2D deep learning architectures, the depth data were transformed into a 2D image using color encoding. As a reference model, the C-model, which uses only color data as the input, was included. The limited availability of suitable training data for depth images demands the use of data augmentation and transfer learning. Using our three detection models, we studied the effectiveness of data augmentation and transfer learning for depth data transformed to 2D images. It was found that geometric data augmentation and transfer learning were equally effective for both the reference model and the novel models using the depth data. This demonstrates that combining color-encoded depth data with geometric data augmentation and transfer learning can improve the RGB-D detection model. However, when testing our detection models on the use case of volunteer potato detection in sugar beet farming, it was found that the addition of depth data did not improve plant detection at high vegetation densities.
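The abstract describes transforming single-channel depth maps into 2D color images so that standard RGB deep learning architectures (and their ImageNet-pretrained weights) can consume them. The paper's exact encoding scheme is not specified here; the following is a minimal sketch of one common approach, assuming a normalized three-ramp color map applied to per-pixel depth values (the function name, depth range, and map are illustrative, not the authors' implementation):

```python
import numpy as np

def colorize_depth(depth, d_min=None, d_max=None):
    """Encode a single-channel depth map as a 3-channel uint8 RGB image.

    Hypothetical sketch: normalizes depth to [0, 1] and applies three
    overlapping linear ramps (a simple "hot"-style color map) so the
    result can be fed to an off-the-shelf RGB CNN detector.
    """
    d_min = np.nanmin(depth) if d_min is None else d_min
    d_max = np.nanmax(depth) if d_max is None else d_max
    norm = np.clip((depth - d_min) / (d_max - d_min + 1e-9), 0.0, 1.0)
    # Red rises first, then green, then blue, giving a smooth gradient.
    r = np.clip(3.0 * norm, 0.0, 1.0)
    g = np.clip(3.0 * norm - 1.0, 0.0, 1.0)
    b = np.clip(3.0 * norm - 2.0, 0.0, 1.0)
    return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)

# Synthetic 4x4 depth map in metres, standing in for a stereo disparity-derived map.
depth = np.linspace(0.3, 1.2, 16).reshape(4, 4)
rgb = colorize_depth(depth)
print(rgb.shape)  # (4, 4, 3)
```

Because the encoded image has the same shape and value range as an ordinary photo, geometric augmentations (flips, rotations, crops) and transfer learning from RGB-pretrained backbones apply unchanged, which is consistent with the abstract's finding that both techniques were equally effective for the depth-based models.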
Full text: 1 Database: MEDLINE Main subject: Weeds / Weed Control Limit: Humans Language: En Journal: Sensors (Basel) Year of publication: 2024 Document type: Article Country of affiliation: Netherlands