Results 1 - 8 of 8
1.
Sensors (Basel) ; 21(12)2021 Jun 09.
Article in English | MEDLINE | ID: mdl-34207543

ABSTRACT

Forage dry matter is the main source of nutrients in the diet of ruminant animals, so this trait is evaluated in most forage breeding programs with the objective of increasing yield. Novel solutions combining unmanned aerial vehicles (UAVs) and computer vision are crucial to increasing the efficiency of forage breeding programs and to supporting high-throughput phenotyping (HTP), which aims to estimate parameters correlated with important traits. The main goal of this study was to propose a convolutional neural network (CNN) approach using UAV-RGB imagery to estimate dry matter yield traits in a guinea grass breeding program. For this, an experiment composed of 330 plots of full-sib families and checks, conducted at Embrapa Beef Cattle, Brazil, was used. The image dataset was composed of images obtained with an RGB sensor embedded in a Phantom 4 PRO. The traits leaf dry matter yield (LDMY) and total dry matter yield (TDMY) were obtained by conventional agronomic methodology and taken as the ground-truth data. Different CNN architectures were analyzed, such as AlexNet, ResNeXt50, DarkNet53, and two networks recently proposed for related tasks, named MaCNN and LF-CNN. Pretrained AlexNet and ResNeXt50 architectures were also studied. Ten-fold cross-validation was used for training and testing the model. Estimates of DMY traits by each CNN architecture were treated as new HTP traits and compared with the real traits. The Pearson correlation coefficient r between real and HTP traits ranged from 0.62 to 0.79 for LDMY and from 0.60 to 0.76 for TDMY; the root mean square error (RMSE) ranged from 286.24 to 366.93 kg·ha-1 for LDMY and from 413.07 to 506.56 kg·ha-1 for TDMY. All the CNNs generated heritable HTP traits, except LF-CNN for LDMY and AlexNet for TDMY. Genetic correlations between real and HTP traits were high but varied according to the CNN architecture. The HTP trait from the pretrained ResNeXt50 achieved the best results for indirect selection regardless of the dry matter trait. This demonstrates that CNNs combined with remote sensing data are highly promising for HTP of dry matter yield traits in forage breeding programs.


Subjects
Neural Networks, Computer; Remote Sensing Technology; Animals; Brazil; Cattle; Phenotype
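
As a rough illustration of the approach described above, the sketch below adapts a pretrained ResNeXt50 (the best-performing architecture in the study) into a single-output regression model for a yield trait. The hyperparameters, input size, and training loop details are assumptions for illustration; the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: pretrained ResNeXt50 with its classifier replaced by a
# single-output regression head for a trait such as LDMY (kg/ha).
model = models.resnext50_32x4d(weights=models.ResNeXt50_32X4D_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr

x = torch.randn(8, 3, 224, 224)  # batch of UAV plot images (placeholder)
y = torch.randn(8, 1)            # ground-truth yields from field sampling
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```
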
2.
Sensors (Basel) ; 20(16)2020 Aug 10.
Article in English | MEDLINE | ID: mdl-32784983

ABSTRACT

As key components of the urban drainage system, storm-drains and manholes are essential to the hydrological modeling of urban basins. Accurate mapping of these objects can help to improve storm-drain systems for the prevention and mitigation of urban floods. Novel Deep Learning (DL) methods have been proposed to aid the mapping of these urban features. The main aim of this paper is to evaluate the state-of-the-art object detection method RetinaNet for identifying storm-drains and manholes in street-level RGB images of urban areas. The experimental assessment was performed using 297 mobile mapping images captured in 2019 on streets in six regions of Campo Grande city, located in Mato Grosso do Sul state, Brazil. Two configurations of training, validation, and test images were considered. ResNet-50 and ResNet-101 were adopted in the experimental assessment as two distinct feature extractor networks (i.e., backbones) for the RetinaNet method. The results were compared with the Faster R-CNN method and showed a higher detection accuracy when using RetinaNet with ResNet-50. In conclusion, the assessed DL method is adequate for detecting storm-drains and manholes in mobile mapping RGB images, outperforming the Faster R-CNN method. The labeled dataset used in this study is available for future research.
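
For reference, a minimal sketch of the kind of setup the paper evaluates, using torchvision's off-the-shelf RetinaNet with a ResNet-50 FPN backbone. The class indices, image size, and box coordinates are assumptions, not details taken from the paper.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# Sketch: RetinaNet with a ResNet-50 FPN backbone for two object classes
# (storm-drain, manhole) plus background. Label indices are assumed.
model = retinanet_resnet50_fpn(weights=None, num_classes=3)
model.train()

images = [torch.rand(3, 512, 512)]  # one street-level RGB image (placeholder)
targets = [{
    "boxes": torch.tensor([[120.0, 300.0, 180.0, 360.0]]),  # [x1, y1, x2, y2]
    "labels": torch.tensor([1]),  # 1 = storm-drain (assumed)
}]
loss_dict = model(images, targets)  # classification + box regression losses
sum(loss_dict.values()).backward()
```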

3.
Sensors (Basel) ; 20(21)2020 Oct 26.
Article in English | MEDLINE | ID: mdl-33114475

ABSTRACT

Mapping utility poles using side-view images acquired with car-mounted cameras is a time-consuming task, mainly in larger areas, due to the need for street-by-street surveying. Aerial images cover larger areas and can be a feasible alternative, although detecting and mapping utility poles in urban environments from top-view images is challenging. Thus, we propose the use of Adaptive Training Sample Selection (ATSS) for detecting utility poles in urban areas, since it is a novel method that has not yet been investigated in remote sensing applications. Here, we compared ATSS with Faster Region-based Convolutional Neural Networks (Faster R-CNN) and Focal Loss for Dense Object Detection (RetinaNet), currently used in remote sensing applications, to assess the performance of the proposed methodology. We used 99,473 patches of 256 × 256 pixels with a ground sample distance (GSD) of 10 cm. The patches were divided into training, validation, and test datasets in approximate proportions of 60%, 20%, and 20%, respectively. As the utility pole labels are point coordinates and the object detection methods require a bounding box, we assessed the influence of the bounding box size on the ATSS method by varying the dimensions from 30×30 to 70×70 pixels. For the proposed task, our findings show that ATSS is, on average, 5% more accurate than Faster R-CNN and RetinaNet. For a bounding box size of 40×40, we achieved an Average Precision at an intersection-over-union threshold of 50% (AP50) of 0.913 for ATSS, 0.875 for Faster R-CNN, and 0.874 for RetinaNet. Regarding the influence of the bounding box size on ATSS, our results indicate that AP50 is about 6.5% higher for 60×60 than for 30×30. For AP75, this margin reaches 23.1% in favor of the 60×60 bounding box size. In terms of computational cost, all the methods tested remain at the same level, with an average processing time of around 0.048 s per patch. Our findings show that ATSS outperforms the other methodologies and is suitable for developing operational tools that can automatically detect and map utility poles.
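
The point-to-box conversion step described above is simple enough to sketch. Clipping boxes to the patch bounds is an assumption about how out-of-frame boxes were handled, not a detail from the paper.

```python
def point_to_box(x, y, size, patch=256):
    """Turn a utility pole point label (pixel coordinates) into a square
    bounding box of side `size` (the study varied 30 to 70 pixels),
    clipped to the 256x256 patch. Clipping behavior is an assumption."""
    half = size / 2.0
    x1, y1 = max(0.0, x - half), max(0.0, y - half)
    x2, y2 = min(float(patch), x + half), min(float(patch), y + half)
    return [x1, y1, x2, y2]

print(point_to_box(128, 40, 40))  # [108.0, 20.0, 148.0, 60.0]
```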

4.
Sensors (Basel) ; 20(2)2020 Jan 20.
Article in English | MEDLINE | ID: mdl-31968589

ABSTRACT

This study proposes and evaluates five deep fully convolutional networks (FCNs) for the semantic segmentation of a single tree species: SegNet, U-Net, FC-DenseNet, and two DeepLabv3+ variants. The performance of the FCN designs is evaluated experimentally in terms of classification accuracy and computational load. We also verify the benefits of fully connected conditional random fields (CRFs) as a post-processing step to improve the segmentation maps. The analysis is conducted on a set of images captured by an RGB camera aboard a UAV flying over an urban area. The dataset also contains a mask that indicates the occurrence of an endangered species called Dipteryx alata Vogel, also known as cumbaru, taken as the species to be identified. The experimental analysis shows the effectiveness of each design, reporting average overall accuracy ranging from 88.9% to 96.7%, an F1-score between 87.0% and 96.1%, and IoU from 77.1% to 92.5%. We also find that the CRF consistently improves performance, but at a high computational cost.
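
The reported metrics are standard pixel-wise scores; a minimal sketch of how IoU and F1 can be computed for a binary species mask follows. The mask encoding (1 = target species) is an assumption.

```python
import numpy as np

def segmentation_scores(pred, gt):
    """Pixel-wise IoU and F1 for binary masks, assuming 1 marks
    Dipteryx alata pixels and 0 marks background."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + gt.sum()
    f1 = 2 * inter / denom if denom else 1.0
    return iou, f1
```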

5.
Sensors (Basel) ; 20(17)2020 Aug 26.
Article in English | MEDLINE | ID: mdl-32858803

ABSTRACT

Monitoring the biomass of forages in experimental plots and livestock farms is a time-consuming, expensive, and biased task. Thus, non-destructive, accurate, precise, and quick phenotyping strategies for biomass yield are needed. To promote high-throughput phenotyping in forages, we propose and evaluate the use of deep learning-based methods and UAV (Unmanned Aerial Vehicle)-based RGB images to estimate the biomass yield of different genotypes of the forage grass species Panicum maximum Jacq. Experiments were conducted in the Brazilian Cerrado with 110 genotypes and three replications, totaling 330 plots. Two regression models based on Convolutional Neural Networks (CNNs), AlexNet and ResNet18, were evaluated and compared to VGGNet, adopted in previous work on the same topic for other grass species. The predictions returned by the models reached a correlation of 0.88 and a mean absolute error of 12.98% using AlexNet with pre-training and data augmentation. This proposal may contribute to forage biomass estimation in breeding populations and livestock areas, as well as reduce labor in the field.


Subjects
Animal Feed; Biomass; Deep Learning; Plants/classification; Remote Sensing Technology; Animals; Brazil; Livestock; Phenotype
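
As with the guinea grass study above, the winning configuration (pretrained AlexNet with data augmentation) can be sketched with torchvision. The augmentation transforms shown are assumptions, since the abstract does not list them.

```python
import torch.nn as nn
from torchvision import models, transforms

# Sketch: pretrained AlexNet converted to a one-output biomass regressor.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 1)

# Assumed augmentation pipeline (flips are a common choice for nadir
# UAV imagery; the paper's exact transforms are not given).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
])
```
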
6.
Sensors (Basel) ; 19(16)2019 Aug 18.
Article in English | MEDLINE | ID: mdl-31426597

ABSTRACT

Detection and classification of tree species from remote sensing data have mainly been performed using multispectral and hyperspectral images and Light Detection And Ranging (LiDAR) data. Despite their comparatively lower cost and higher spatial resolution, few studies have focused on images captured by Red-Green-Blue (RGB) sensors. In addition, recent years have witnessed impressive progress in deep learning methods for object detection. Motivated by this scenario, we proposed and evaluated the use of Convolutional Neural Network (CNN)-based methods combined with Unmanned Aerial Vehicle (UAV) high-spatial-resolution RGB imagery for the detection of legally protected tree species. Three state-of-the-art object detection methods were evaluated: Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv3, and RetinaNet. A dataset was built to assess the selected methods, comprising 392 RGB images captured from August 2018 to February 2019 over a forested urban area in midwest Brazil. The target object is an important tree species threatened by extinction, known as Dipteryx alata Vogel (Fabaceae). The experimental analysis delivered an average precision of around 92%, with processing times below 30 milliseconds.


Subjects
Fabaceae/physiology; Neural Networks, Computer; Deep Learning; Discriminant Analysis; Fabaceae/chemistry; Likelihood Functions; Photography; Remote Sensing Technology
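
A minimal sketch of one of the three evaluated detectors, torchvision's Faster R-CNN, reconfigured for a single tree class. The two-class head (species plus background) follows the standard torchvision convention and is not a detail from the paper.

```python
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Sketch: COCO-pretrained Faster R-CNN with its box predictor swapped
# for a 2-class head (Dipteryx alata + background).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
```
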
7.
An Acad Bras Cienc ; 90(2): 1293-1308, 2018.
Article in English | MEDLINE | ID: mdl-29898097

ABSTRACT

Pantanal da Nhecolândia is one of the most well-preserved areas in the State of Mato Grosso do Sul. Located in the southern part of the Taquari River megafan, it is in tectonic contact with the fault escarpments of the Maracaju-Campo Grande plateaus to the east and with the fault escarpments of the Bodoquena plateau to the west, which continue to the north. To the south and to the north, the limits are marked by the lineaments of the Negro and Taquari Rivers, respectively. Nhecolândia is characterized by the existence of at least 17,631 lagoons, 17,050 (96.7%) of which are freshwater (baías) and 577 (3.3%) saltwater (salinas). Studies based on Landsat satellite images and the use of free GIS (Geographic Information Systems) software (QGIS, version 2.8.3) revealed that the major axes of the lagoons are aligned along two directions, NE (62.49%) and NW (37.51%), with modes concentrated between N30-40E and N30-40W, suggesting in both cases a tectonic (neotectonic) control on their formation. Evidence of a fluvial origin is presented for these groups of lagoons, as well as for their tectonic alignment.
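
The axis-alignment statistics rest on computing a major-axis azimuth per lagoon. A sketch under the assumption of projected (metric) outline coordinates:

```python
import numpy as np

def major_axis_azimuth(xs, ys):
    """Azimuth (degrees east of north, in [0, 180)) of the major axis of
    a lagoon outline, from the principal component of its vertices.
    Assumes xs/ys are easting/northing in a projected CRS."""
    pts = np.column_stack([xs - np.mean(xs), ys - np.mean(ys)])
    cov = pts.T @ pts
    vals, vecs = np.linalg.eigh(cov)
    dx, dy = vecs[:, np.argmax(vals)]  # east and north components
    return np.degrees(np.arctan2(dx, dy)) % 180.0

# Azimuths in (0, 90) fall in the NE quadrant (e.g., N35E -> 35.0);
# azimuths in (90, 180) fall in the NW quadrant (e.g., N35W -> 145.0).
```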

8.
Sci Rep ; 11(1): 19619, 2021 Oct 04.
Article in English | MEDLINE | ID: mdl-34608181

ABSTRACT

Accurately mapping individual tree species in densely forested environments is crucial to forest inventory. When considering only RGB images, this is a challenging task for many automatic photogrammetry processes, mainly because of the spectral similarity between species in RGB scenes, which hinders most automatic methods. This paper presents a deep learning-based approach to detect an important multi-use palm tree species (Mauritia flexuosa, i.e., Buriti) in aerial RGB imagery. In South America, this palm tree is essential for many indigenous and local communities because of its characteristics. The species is also a valuable indicator of water resources, which is an added benefit of mapping its location. The method is based on a Convolutional Neural Network (CNN) that identifies and geolocates single tree species in a high-complexity forest environment. The results returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%. These results are better than those of the Faster R-CNN and RetinaNet methods under equal experimental conditions. In conclusion, the presented method is effective in a high-density forest scenario, can accurately map the location of a single species like the M. flexuosa palm tree, and may be useful for future frameworks.
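
The two reported scores are easy to state precisely; in the sketch below, the detection-to-reference matching rule is an assumption, as the abstract does not specify it.

```python
import numpy as np

def counting_mae(pred_counts, true_counts):
    """MAE between predicted and reference palm counts per image
    (the paper reports 0.75 trees)."""
    pred, true = np.asarray(pred_counts), np.asarray(true_counts)
    return np.mean(np.abs(pred - true))

def detection_f1(tp, fp, fn):
    """F1 over matched detections; how a detection is matched to a
    reference tree (IoU or distance threshold) is assumed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```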
