Results 1 - 2 of 2
1.
Data Brief ; 54: 110430, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38698801

ABSTRACT

The rationale for this data article is to provide resources that can facilitate studies focused on weed detection and segmentation in precision farming using computer vision. We have curated multispectral (MS) images of Triticum aestivum crop fields containing a heterogeneous mix of Raphanus raphanistrum in both uniform and random crop spacing. The dataset is designed to facilitate weed detection and segmentation based on manually and automatically annotated Raphanus raphanistrum, commonly known as wild radish. It is publicly available through the Zenodo data library and provides annotated pixel-level information that is crucial for registration and segmentation purposes. The dataset consists of 85 original MS images captured over 17 scenes covering the Blue, Green, Red, NIR (Near-Infrared), and RedEdge bands. Each image has a dimension of 1280 × 960 pixels and serves as the basis for weed detection and segmentation. Manual annotations were performed using the Visual Geometry Group Image Annotator (VIA), and the results were saved in the Common Objects in Context (COCO) segmentation format. To ease this resource-intensive annotation task, a Grounding DINO + Segment Anything Model (SAM) pipeline was trained with the manually annotated data to obtain automated PASCAL Visual Object Classes (PASCAL VOC) annotations for 80 MS images. The dataset emphasizes quality control, validating both the "manual" and "automated" repositories by extracting and evaluating binary masks. The codes used for these processes are accessible to ensure transparency and reproducibility. This dataset is a first-of-its-kind public resource providing manually and automatically annotated weed information over close-range MS images in a heterogeneous agricultural environment.
Researchers and practitioners in the fields of precision agriculture and computer vision can use this dataset to improve MS image registration and segmentation in close-range photogrammetry with a focus on wild radish. The dataset not only helps with intra-subject registration to improve segmentation accuracy, but also provides valuable spectral information for training and refining machine learning models.
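The quality-control step described above (extracting binary masks from the COCO segmentation annotations) can be sketched as follows. This is a minimal illustration of rasterizing a COCO-style polygon into a 0/1 mask; the example polygon and its coordinates are made up, not taken from the dataset.

```python
# Minimal sketch: convert a COCO-style polygon segmentation into a binary
# mask for quality-control checks. The polygon below is illustrative only.
import numpy as np
from PIL import Image, ImageDraw


def coco_polygon_to_mask(polygon, height, width):
    """Rasterize a flat COCO polygon [x0, y0, x1, y1, ...] into a 0/1 mask."""
    canvas = Image.new("L", (width, height), 0)
    xy = list(zip(polygon[0::2], polygon[1::2]))  # pair up (x, y) vertices
    ImageDraw.Draw(canvas).polygon(xy, outline=1, fill=1)
    return np.array(canvas, dtype=np.uint8)


# Example: a triangular weed region in a 1280 x 960 MS band image.
mask = coco_polygon_to_mask([100, 100, 300, 120, 180, 400], height=960, width=1280)
print(mask.shape, int(mask.sum()))
```

Masks produced this way can be compared between the manual and automated repositories, e.g. via intersection-over-union, to validate the automated annotations.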

2.
Data Brief ; 54: 110506, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38813239

ABSTRACT

This research introduces an extensive dataset of unprocessed aerial RGB images and orthomosaics of Brassica oleracea crops, captured via a DJI Phantom 4. The publicly accessible dataset comprises 244 raw RGB images, acquired over six distinct dates in October and November 2020, as well as 6 orthomosaics from an experimental farm located in Portici, Italy. The images, uniformly distributed across crop spaces, have undergone both manual and automatic annotation to facilitate the detection, segmentation, and growth modelling of crops. Manual annotations were performed using bounding boxes via the Visual Geometry Group Image Annotator (VIA) and exported in the Common Objects in Context (COCO) segmentation format. The automated annotations were generated using a Grounding DINO + Segment Anything Model (SAM) framework, aided by YOLOv8x-seg pretrained weights obtained after training on the manually annotated images dated 8 October, 21 October, and 29 October 2020. The automated annotations were archived in the Pascal Visual Object Classes (PASCAL VOC) format. Seven classes, designated Row 1 through Row 7, have been identified for crop labelling. Additional attributes, such as individual crop ID and the repetitiveness of individual crop specimens, are delineated in the Comma Separated Values (CSV) version of the manual annotations. This dataset not only furnishes annotation information but also assists in the refinement of various machine learning models, thereby contributing significantly to the field of smart agriculture. The transparency and reproducibility of the processes are ensured by making the utilized codes accessible. This research marks a significant stride in leveraging technology for vision-based crop growth monitoring.
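Reading the automated annotations back follows the standard PASCAL VOC XML layout. The sketch below parses labels and bounding boxes from a made-up VOC snippet that imitates the dataset's Row 1 to Row 7 class scheme; the filename and coordinates are illustrative assumptions, not values from the dataset.

```python
# Minimal sketch: extract (label, bounding box) pairs from a PASCAL VOC XML
# annotation. The XML below is a fabricated example in the standard VOC layout.
import xml.etree.ElementTree as ET

VOC_XML = """<annotation>
  <filename>brassica_example.jpg</filename>
  <object>
    <name>Row 1</name>
    <bndbox><xmin>34</xmin><ymin>50</ymin><xmax>120</xmax><ymax>140</ymax></bndbox>
  </object>
  <object>
    <name>Row 2</name>
    <bndbox><xmin>200</xmin><ymin>60</ymin><xmax>290</xmax><ymax>155</ymax></bndbox>
  </object>
</annotation>"""


def parse_voc(xml_text):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, box))
    return boxes


print(parse_voc(VOC_XML))
```

The same loop applied over a directory of VOC files would yield the per-row box sets needed for detection or growth-modelling experiments.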
