Results 1 - 8 of 8
1.
Sensors (Basel) ; 21(20)2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34695919

ABSTRACT

In agriculture, explainable deep neural networks (DNNs) can be used to pinpoint the discriminative parts of weeds in an image classification task, albeit at low resolution, to help control the weed population. This paper proposes a multi-layer attention procedure, based on a transformer combined with a fusion rule, to present an interpretation of the DNN decision through a high-resolution attention map. The fusion rule is a weighted average that combines attention maps from different layers according to their saliency. Attention maps that explain why a weed is or is not assigned to a certain class help agronomists shape the high-resolution weed identification keys (WIK) that the model perceives. The model is trained and evaluated on two agricultural datasets containing plants grown under different conditions: the Plant Seedlings Dataset (PSD) and the Open Plant Phenotyping Dataset (OPPD). The resulting attention maps highlight the evidence behind each decision and provide information about misclassifications, enabling cross-dataset evaluation. Comparisons with the state of the art show the classification improvements obtained after applying the attention maps. Average accuracies of 95.42% and 96% are obtained for the negative and positive explanations of the PSD test sets, respectively. In the OPPD evaluations, accuracies of 97.78% and 97.83% are obtained for the negative and positive explanations, respectively. Visual comparison of the attention maps also shows high-resolution information.
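The abstract describes the fusion rule only at a high level. As a rough illustration of a saliency-weighted average of per-layer attention maps upsampled to a common high resolution, a minimal NumPy/SciPy sketch is given below; the normalization, the use of the mean attention value as the saliency weight, and the function names are assumptions for illustration, not the authors' implementation.

import numpy as np
from scipy.ndimage import zoom

def fuse_attention_maps(layer_maps, out_shape):
    """Saliency-weighted average of multi-layer attention maps.

    layer_maps: list of 2-D arrays, one attention map per transformer layer,
                possibly at different resolutions.
    out_shape:  (H, W) of the desired high-resolution fused map.
    The mean attention value is used as a saliency proxy; this is an
    assumption, not the weighting defined in the paper.
    """
    upsampled, weights = [], []
    for m in layer_maps:
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)      # normalise to [0, 1]
        factors = (out_shape[0] / m.shape[0], out_shape[1] / m.shape[1])
        upsampled.append(zoom(m, factors, order=1))          # bilinear-style upsampling
        weights.append(m.mean())                             # saliency proxy
    weights = np.asarray(weights) / (np.sum(weights) + 1e-8)
    fused = sum(w * u for w, u in zip(weights, upsampled))
    return fused / (fused.max() + 1e-8)

# example: fuse attention from three layers into a 224 x 224 explanation map
maps = [np.random.rand(14, 14), np.random.rand(28, 28), np.random.rand(56, 56)]
high_res_map = fuse_attention_maps(maps, (224, 224))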


Subject(s)
Attention; Neural Networks, Computer; Agriculture; Plant Weeds; Seedlings
2.
Sensors (Basel) ; 21(18)2021 Sep 13.
Article in English | MEDLINE | ID: mdl-34577335

ABSTRACT

Invasive alien plant species (IAPS) pose a threat to biodiversity as they propagate and outcompete natural vegetation. In this study, a system for monitoring IAPS along the roadside is presented. The system consists of a high-speed camera mounted on a vehicle that follows the traffic. Images of seven IAPS (Cytisus scoparius, Heracleum, Lupinus polyphyllus, Pastinaca sativa, Reynoutria, Rosa rugosa, and Solidago) were collected on Danish motorways. Three deep convolutional neural networks, two for classification (ResNet50V2 and MobileNetV2) and one for object detection (YOLOv3), were trained and evaluated at different image sizes. The results showed that the performance of the networks varied with both the input image size and the size of the IAPS in the images. Binary classification of IAPS vs. non-IAPS showed increased performance compared with classification of the individual IAPS. This study shows that automatic detection and mapping of invasive plants along the roadside is possible at high driving speeds.
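The abstract does not specify implementation details. Below is a minimal PyTorch/torchvision sketch of a binary IAPS vs. non-IAPS classification setup, using an ImageNet-pretrained MobileNetV2 with its classifier head replaced by a single logit; the input size, loss, and example labels are placeholder assumptions, not the paper's configuration.

import torch
import torch.nn as nn
from torchvision import models

def build_binary_iaps_model():
    # ImageNet-pretrained MobileNetV2 backbone with a one-logit head
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, 1)   # single logit: IAPS vs. non-IAPS
    return model

model = build_binary_iaps_model()
criterion = nn.BCEWithLogitsLoss()
dummy = torch.randn(4, 3, 224, 224)                   # batch of roadside image crops (assumed size)
logits = model(dummy).squeeze(1)
loss = criterion(logits, torch.tensor([1.0, 0.0, 1.0, 0.0]))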


Subject(s)
Deep Learning; Introduced Species; Biodiversity; Neural Networks, Computer; Plants
3.
Sensors (Basel) ; 21(1)2020 Dec 29.
Article in English | MEDLINE | ID: mdl-33383904

ABSTRACT

Crop mixtures are often beneficial in crop rotations to enhance resource utilization and yield stability. While targeted management, dependent on the local species composition, has the potential to increase crop value, it comes at a higher expense in terms of field surveys. As fine-grained species distribution mapping of within-field variation is typically infeasible, the potential of targeted management remains an open research area. In this work, we propose a new method for determining the biomass species composition from high-resolution color images using a DeepLabv3+-based convolutional neural network. Data collection was performed at four separate experimental plot trial sites over three growing seasons. The method is thoroughly evaluated by predicting the biomass composition of different grass-clover mixtures using only an image of the canopy. With a relative biomass clover content prediction of R² = 0.91, we present new state-of-the-art results across the widely varying sites. Combining the algorithm with an all-terrain vehicle (ATV)-mounted image acquisition system, we demonstrate a feasible method for robust coverage and species distribution mapping of 225 ha of mixed crops at a median capacity of 17 ha per hour and 173 images per hectare.
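As a small illustration of how per-species canopy fractions can be read out of a pixel-wise segmentation such as the one produced by a DeepLabv3+-style network, a NumPy sketch follows. The class ids and the omission of the image-fraction-to-dry-matter calibration are simplifying assumptions, not taken from the paper.

import numpy as np

# assumed class ids for illustration: 0 = soil, 1 = clover, 2 = grass, 3 = weeds
CLOVER, GRASS, WEEDS = 1, 2, 3

def canopy_fractions(pred_mask):
    """Relative clover/grass/weed pixel fractions within the vegetated canopy.

    pred_mask: 2-D integer array of per-pixel class predictions, e.g. the
    argmax over the output channels of a segmentation network.
    Mapping these image fractions to dry-matter composition requires a
    calibration step that is not shown here.
    """
    vegetated = np.isin(pred_mask, [CLOVER, GRASS, WEEDS])
    total = vegetated.sum()
    if total == 0:
        return {"clover": 0.0, "grass": 0.0, "weeds": 0.0}
    return {
        "clover": float((pred_mask == CLOVER).sum() / total),
        "grass": float((pred_mask == GRASS).sum() / total),
        "weeds": float((pred_mask == WEEDS).sum() / total),
    }

fractions = canopy_fractions(np.random.randint(0, 4, size=(512, 512)))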

4.
Sensors (Basel) ; 18(5)2018 May 18.
Article in English | MEDLINE | ID: mdl-29783642

ABSTRACT

Determining the location of individual plants, besides enabling evaluation of sowing performance, would make subsequent per-plant treatment across a field possible. In this study, a system for locating cereal plant stem emerging points (PSEPs) has been developed. In total, 5719 images were gathered from several cereal fields. In 212 of these images, the PSEPs of the cereal plants were marked manually and used to train a fully convolutional neural network. In the training process, a cost function was designed that incorporates predefined penalty regions in addition to the PSEPs. The penalty regions were defined based on the faulty predictions of a model trained without penalty regions. Adding penalty regions to the training significantly enhanced the network's ability to precisely locate the emergence points of the cereal plants. A coefficient of determination of about 87 percent between the predicted and the manually marked PSEP count of each image indicates the system's ability to count PSEPs. Based on these results, it was concluded that the developed model can give a reliable indication of the quality of the PSEP distribution and the performance of seed drills in the field.
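The abstract describes a cost function that incorporates predefined penalty regions but does not give its form. The PyTorch sketch below shows one plausible way to up-weight errors inside penalty regions in a pixel-wise heatmap loss; the binary cross-entropy formulation, the heatmap encoding of PSEPs, and the penalty weight are illustrative assumptions, not the paper's cost function.

import torch
import torch.nn.functional as F

def psep_loss(pred_logits, target_heatmap, penalty_mask, penalty_weight=5.0):
    """Pixel-wise loss for PSEP localisation with extra weight on penalty regions.

    pred_logits:    (B, 1, H, W) raw network output.
    target_heatmap: (B, 1, H, W) with 1 at marked stem emerging points, 0 elsewhere.
    penalty_mask:   (B, 1, H, W) with 1 inside predefined penalty regions.
    The formulation and the penalty_weight value are assumptions for illustration.
    """
    per_pixel = F.binary_cross_entropy_with_logits(
        pred_logits, target_heatmap, reduction="none"
    )
    weights = 1.0 + penalty_weight * penalty_mask    # up-weight errors in penalty regions
    return (weights * per_pixel).mean()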

5.
Sensors (Basel) ; 18(5)2018 May 16.
Article in English | MEDLINE | ID: mdl-29772666

ABSTRACT

This study outlines a new method for automatically estimating the weed species and growth stage (from cotyledon until eight leaves are visible) in in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution and light settings. Then, 9649 of these images were used to train the model, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average accuracy of 70% in estimating the number of leaves and 96% when accepting a deviation of up to two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
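The leaf-count accuracies quoted above (70% exact, 96% within two leaves) correspond to a simple tolerance-based metric; a short NumPy sketch of such a metric is shown below, with the integer leaf-count encoding assumed for illustration.

import numpy as np

def leaf_count_accuracy(pred_leaves, true_leaves, tolerance=0):
    """Fraction of images whose predicted leaf count is within `tolerance`
    leaves of the annotated count. tolerance=0 gives exact accuracy,
    tolerance=2 corresponds to accepting a deviation of two leaves."""
    pred_leaves = np.asarray(pred_leaves)
    true_leaves = np.asarray(true_leaves)
    return float(np.mean(np.abs(pred_leaves - true_leaves) <= tolerance))

# illustrative values only
exact = leaf_count_accuracy([2, 4, 6, 8], [2, 3, 6, 5])                 # 0.5
relaxed = leaf_count_accuracy([2, 4, 6, 8], [2, 3, 6, 5], tolerance=2)  # 0.75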


Subject(s)
Neural Networks, Computer; Poaceae/growth & development; Polygonum/growth & development; Image Processing, Computer-Assisted; Plant Leaves/anatomy & histology; Plant Leaves/physiology; Poaceae/anatomy & histology; Poaceae/physiology; Polygonum/anatomy & histology; Polygonum/physiology
6.
Sensors (Basel) ; 17(12)2017 Dec 17.
Article in English | MEDLINE | ID: mdl-29258215

ABSTRACT

Optimal fertilization of clover-grass fields relies on knowledge of the clover and grass fractions. This study shows how this knowledge can be obtained by automatically analyzing images collected in the field. A fully convolutional neural network was trained to create a pixel-wise classification of clover, grass, and weeds in red, green, and blue (RGB) images of clover-grass mixtures. The clover fractions of the dry matter estimated from the images were found to be highly correlated with the real clover fractions of the dry matter, making this a cheap and non-destructive way of monitoring clover-grass fields. The network was trained solely on simulated top-down images of clover-grass fields, which enables it to distinguish clover, grass, and weed pixels in real images. The use of simulated images for training reduces the manual labor to a few hours, compared with more than 3000 h if all the real images had to be annotated for training. The network was tested on images with varied clover/grass ratios and achieved an overall pixel classification accuracy of 83.4%, while estimating the dry matter clover fraction with a standard deviation of 7.8%.
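The key idea behind training solely on simulated top-down images is that pixel labels come for free when labelled plant cutouts are composited onto background images. The sketch below illustrates that idea in NumPy; the random placement, RGBA cutout format, and class ids are assumptions for illustration and are simpler than the simulation described in the paper.

import numpy as np

def composite_training_sample(background, cutouts, classes, rng=None):
    """Paste RGBA plant cutouts onto a soil background and build the matching
    pixel-wise label image, so no manual annotation of the result is needed.

    background: (H, W, 3) uint8 soil image.
    cutouts:    list of (h, w, 4) uint8 RGBA plant crops, each smaller than the background.
    classes:    per-cutout class id (e.g. 1 = clover, 2 = grass, 3 = weed).
    Placement is uniformly random here; the paper's simulation is more involved.
    """
    rng = rng or np.random.default_rng()
    image = background.astype(np.float32).copy()
    label_map = np.zeros(background.shape[:2], dtype=np.uint8)
    H, W = label_map.shape
    for cutout, cls in zip(cutouts, classes):
        h, w = cutout.shape[:2]
        y = int(rng.integers(0, max(1, H - h)))
        x = int(rng.integers(0, max(1, W - w)))
        alpha = cutout[..., 3:4].astype(np.float32) / 255.0
        patch = image[y:y+h, x:x+w]
        image[y:y+h, x:x+w] = (1 - alpha) * patch + alpha * cutout[..., :3]
        label_map[y:y+h, x:x+w][cutout[..., 3] > 0] = cls
    return image.astype(np.uint8), label_map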

7.
F1000Res ; 13: 360, 2024.
Article in English | MEDLINE | ID: mdl-39045173

ABSTRACT

Invasive plant species pose ecological threats to native ecosystems, particularly in areas adjacent to roadways, since roadways form lengthy corridors along which invasive species can propagate. Traditional manual survey methods for monitoring invasive plants are labor-intensive and limited in coverage. This paper introduces a high-speed camera system, named CamAlien, designed to be mounted on vehicles for efficient monitoring of invasive plant species along roadways. The camera system captures high-quality images at rapid intervals, so that the full roadside is covered while following traffic speed. The system utilizes a global shutter sensor to reduce distortion and geotagging for precise localisation. The camera system makes it possible to collect extensive datasets, which can be used both to build a digital library of the invasive species and their locations and to subsequently train machine learning algorithms for automated species recognition.
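A back-of-the-envelope calculation shows why the capture interval matters for full roadside coverage at traffic speed. The sketch below is purely illustrative; the speed, image footprint, and overlap values are assumptions, not the CamAlien specifications.

def required_frame_rate(speed_kmh, footprint_m, overlap=0.2):
    """Frames per second needed so consecutive images overlap along the road.

    speed_kmh:   vehicle speed (e.g. 110 km/h motorway traffic).
    footprint_m: length of roadside covered by one image along the driving direction.
    overlap:     desired fractional overlap between consecutive images.
    All values below are illustrative assumptions.
    """
    speed_ms = speed_kmh / 3.6
    advance_per_frame = footprint_m * (1.0 - overlap)
    return speed_ms / advance_per_frame

fps = required_frame_rate(speed_kmh=110, footprint_m=1.5, overlap=0.2)  # ~25.5 fps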


Subject(s)
Introduced Species; Plants; Environmental Monitoring/methods; Environmental Monitoring/instrumentation; Photography/instrumentation; Photography/methods; Ecosystem
8.
PeerJ ; 10: e13837, 2022.
Article in English | MEDLINE | ID: mdl-36032940

ABSTRACT

Image-based methods for species identification offer cost-efficient solutions for biomonitoring. This is particularly relevant for invertebrate studies, where bulk samples often represent insurmountable workloads for sorting, identifying, and counting individual specimens. On the other hand, image-based classification using deep learning tools has strict requirements for the amount of training data, which is often a limiting factor. Here, we examine how classification accuracy increases with the amount of training data, using the BIODISCOVER imaging system constructed for image-based classification and biomass estimation of invertebrate specimens. We use a balanced dataset of 60 specimens of each of 16 taxa of freshwater macroinvertebrates to systematically quantify how the classification performance of a convolutional neural network (CNN) increases, for individual taxa and for the overall community, as the number of specimens used for training is increased. We show a striking 99.2% classification accuracy when the CNN (EfficientNet-B6) is trained on 50 specimens of each taxon, and also that the lower classification accuracy of models trained on less data is particularly evident for morphologically similar species within the same taxonomic order. Even with as few as 15 specimens used for training, classification accuracy reached 97%. Our results add to a recent body of literature showing the huge potential of image-based methods and deep learning for specimen-based research, and furthermore offer a perspective on future automated approaches for deriving ecological data from bulk arthropod samples.
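The core experiment is a learning curve over balanced training subsets. The Python sketch below outlines that loop; balanced_subsample is a hypothetical helper, and train_fn / eval_fn stand in for fitting and testing an image classifier such as the EfficientNet-B6 used in the paper.

import numpy as np

def balanced_subsample(labels, n_per_class, rng=None):
    """Return indices of a balanced subset with n_per_class specimens per taxon."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    idx = []
    for cls in np.unique(labels):
        cls_idx = np.flatnonzero(labels == cls)
        idx.extend(rng.choice(cls_idx, size=n_per_class, replace=False))
    return np.array(idx)

def learning_curve(train_labels, sizes, train_fn, eval_fn):
    """Skeleton of the training-set-size experiment; train_fn and eval_fn are
    placeholders for fitting and evaluating the CNN."""
    results = {}
    for n in sizes:                      # e.g. sizes = [5, 10, 15, 25, 50]
        subset = balanced_subsample(train_labels, n)
        model = train_fn(subset)         # fit the classifier on the balanced subset
        results[n] = eval_fn(model)      # held-out classification accuracy
    return results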


Subject(s)
Arthropods; Deep Learning; Animals; Neural Networks, Computer; Biological Monitoring; Fresh Water