Results 1 - 7 of 7
1.
Sensors (Basel); 22(11), 2022 May 28.
Article in English | MEDLINE | ID: mdl-35684736

ABSTRACT

We assessed the performance of Convolutional Neural Network (CNN)-based approaches using mobile phone images to estimate regrowth density in tropical forages. We generated a dataset of 1124 labeled images, captured with two mobile phones seven days after harvest of the forage plants. Six architectures were evaluated: AlexNet; ResNet with 18, 34, and 50 layers; ResNeXt101; and DarkNet. The best regression model showed a mean absolute error of 7.70 and a correlation of 0.89. Our findings suggest that deep learning applied to mobile phone images can successfully estimate regrowth density in forages.
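
To illustrate the regression setup described above, here is a minimal PyTorch sketch of how a CNN backbone can be repurposed for single-value density regression. The choice of ResNet-18, the input size, and all hyperparameters are assumptions for illustration, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Repurpose an image classifier as a regressor: swap the final
# fully connected layer for a single regression output.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()  # the paper reports MAE; MSE is a common training loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch standing in for phone images.
images = torch.randn(8, 3, 224, 224)
targets = torch.randn(8, 1)  # placeholder regrowth-density labels
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```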


Subject(s)
Cell Phone; Deep Learning; Neural Networks, Computer
2.
Sensors (Basel); 21(12), 2021 Jun 09.
Article in English | MEDLINE | ID: mdl-34207543

ABSTRACT

Forage dry matter is the main source of nutrients in the diet of ruminant animals, so this trait is evaluated in most forage breeding programs with the objective of increasing yield. Novel solutions combining unmanned aerial vehicles (UAVs) and computer vision are crucial to increasing the efficiency of forage breeding programs by supporting high-throughput phenotyping (HTP), which aims to estimate parameters correlated with important traits. The main goal of this study was to propose a convolutional neural network (CNN) approach using UAV RGB imagery to estimate dry matter yield traits in a guineagrass breeding program. An experiment composed of 330 plots of full-sib families and checks, conducted at Embrapa Beef Cattle, Brazil, was used. The image dataset was obtained with the RGB sensor embedded in a Phantom 4 PRO. The traits leaf dry matter yield (LDMY) and total dry matter yield (TDMY) were obtained by conventional agronomic methodology and considered the ground-truth data. Different CNN architectures were analyzed, such as AlexNet, ResNeXt50, DarkNet53, and two networks recently proposed for related tasks, MaCNN and LF-CNN; pretrained AlexNet and ResNeXt50 architectures were also studied. Ten-fold cross-validation was used for training and testing the models. Estimates of dry matter yield (DMY) traits from each CNN architecture were treated as new HTP traits and compared with the real traits. The Pearson correlation coefficient r between real and HTP traits ranged from 0.62 to 0.79 for LDMY and from 0.60 to 0.76 for TDMY; the root mean square error (RMSE) ranged from 286.24 to 366.93 kg·ha-1 for LDMY and from 413.07 to 506.56 kg·ha-1 for TDMY. All the CNNs generated heritable HTP traits, except LF-CNN for LDMY and AlexNet for TDMY. Genetic correlations between real and HTP traits were high but varied according to the CNN architecture. The HTP trait from the pretrained ResNeXt50 achieved the best results for indirect selection, regardless of the dry matter trait. This demonstrates that CNNs with remote sensing data are highly promising for HTP of dry matter yield traits in forage breeding programs.
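
As a sketch of the evaluation protocol above — ten-fold cross-validation scored with Pearson's r and RMSE — the following Python snippet shows the fold bookkeeping and metric computation. The plot count matches the paper's 330 plots, but the yields and predictions are placeholders; the actual CNN training step is elided.

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
y = rng.uniform(1000, 3000, size=330)  # placeholder yields for 330 plots, kg/ha

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(y)):
    # ... train a CNN on the images of train_idx, predict on test_idx ...
    y_pred = y[test_idx] + rng.normal(0, 300, size=len(test_idx))  # stand-in predictions

    r = np.corrcoef(y[test_idx], y_pred)[0, 1]            # Pearson correlation
    rmse = np.sqrt(np.mean((y[test_idx] - y_pred) ** 2))  # root mean square error
    print(f"fold {fold}: r = {r:.2f}, RMSE = {rmse:.1f} kg/ha")
```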


Subject(s)
Neural Networks, Computer; Remote Sensing Technology; Animals; Brazil; Cattle; Phenotype
3.
Sensors (Basel); 20(21), 2020 Oct 26.
Article in English | MEDLINE | ID: mdl-33114475

ABSTRACT

Mapping utility poles using side-view images acquired with car-mounted cameras is a time-consuming task, especially in large areas, due to the need for street-by-street surveying. Aerial images cover larger areas and are a feasible alternative, although detecting and mapping utility poles in urban environments from top-view images is challenging. Thus, we propose the use of Adaptive Training Sample Selection (ATSS) for detecting utility poles in urban areas, since it is a novel method that has not yet been investigated in remote sensing applications. We compared ATSS with Faster Region-based Convolutional Neural Networks (Faster R-CNN) and Focal Loss for Dense Object Detection (RetinaNet), both currently used in remote sensing applications, to assess the performance of the proposed methodology. We used 99,473 patches of 256 × 256 pixels with a ground sample distance (GSD) of 10 cm. The patches were divided into training, validation, and test datasets in approximate proportions of 60%, 20%, and 20%, respectively. Since the utility pole labels are point coordinates and the object detection methods require bounding boxes, we assessed the influence of the bounding box size on the ATSS method by varying the dimensions from 30×30 to 70×70 pixels. For the proposed task, our findings show that ATSS is, on average, 5% more accurate than Faster R-CNN and RetinaNet. For a bounding box size of 40×40 pixels, we achieved an average precision at an intersection over union of 50% (AP50) of 0.913 for ATSS, 0.875 for Faster R-CNN, and 0.874 for RetinaNet. Regarding the influence of the bounding box size on ATSS, our results indicate that AP50 is about 6.5% higher for 60×60 than for 30×30; for AP75, this margin reaches 23.1% in favor of the 60×60 bounding box size. In terms of computational cost, all the methods tested remain at the same level, with an average processing time of around 0.048 s per patch. Our findings show that ATSS outperforms the other methodologies and is suitable for developing operational tools that can automatically detect and map utility poles.
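
One concrete step in the pipeline above is converting point annotations into the square bounding boxes the detectors require. The helper below is a hypothetical illustration of that conversion, clipping boxes to the 256 × 256 patch borders; the function name and defaults are not from the paper.

```python
def point_to_box(x, y, box_size, patch_size=256):
    """Turn a utility-pole point label (x, y) into a square bounding box
    of side box_size, clipped to the patch borders."""
    half = box_size / 2
    x_min = max(0.0, x - half)
    y_min = max(0.0, y - half)
    x_max = min(float(patch_size), x + half)
    y_max = min(float(patch_size), y + half)
    return x_min, y_min, x_max, y_max

# Example: a pole annotated at (120, 200) with the 40x40 setting.
print(point_to_box(120, 200, box_size=40))  # (100.0, 180.0, 140.0, 220.0)
```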

4.
Sensors (Basel); 20(17), 2020 Aug 26.
Article in English | MEDLINE | ID: mdl-32858803

ABSTRACT

Monitoring forage biomass in experimental plots and on livestock farms is a time-consuming, expensive, and biased task, so non-destructive, accurate, precise, and quick phenotyping strategies for biomass yield are needed. To promote high-throughput phenotyping in forages, we propose and evaluate the use of deep learning-based methods and UAV (Unmanned Aerial Vehicle) RGB images to estimate biomass yield for different genotypes of the forage grass species Panicum maximum Jacq. Experiments were conducted in the Brazilian Cerrado with 110 genotypes in three replications, totaling 330 plots. Two regression models based on Convolutional Neural Networks (CNNs), AlexNet and ResNet18, were evaluated and compared to VGGNet, adopted in previous work on the same topic for other grass species. The best predictions reached a correlation of 0.88 and a mean absolute error of 12.98%, obtained with AlexNet using pre-training and data augmentation. This proposal may contribute to forage biomass estimation in breeding populations and livestock areas, as well as reduce labor in the field.
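
The two ingredients the abstract credits for the best result — pre-training and data augmentation — could look like the following torchvision sketch. The specific transforms and the ImageNet weights are assumptions for illustration; the paper's exact augmentation set is not stated here.

```python
import torch.nn as nn
from torchvision import models, transforms

# Plausible augmentations for top-view plot images (assumed, not the paper's list).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

# ImageNet-pretrained AlexNet with its last classifier layer swapped
# for a single biomass-yield regression output.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 1)
```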


Subject(s)
Animal Feed; Biomass; Deep Learning; Plants/classification; Remote Sensing Technology; Animals; Brazil; Livestock; Phenotype
5.
Sensors (Basel); 19(16), 2019 Aug 18.
Article in English | MEDLINE | ID: mdl-31426597

ABSTRACT

Detection and classification of tree species from remote sensing data have mainly been performed using multispectral and hyperspectral images and Light Detection and Ranging (LiDAR) data. Despite their comparatively lower cost and higher spatial resolution, few studies have focused on images captured by Red-Green-Blue (RGB) sensors. Moreover, recent years have witnessed impressive progress in deep learning methods for object detection. Motivated by this scenario, we proposed and evaluated the use of Convolutional Neural Network (CNN)-based methods combined with high-spatial-resolution RGB imagery from Unmanned Aerial Vehicles (UAVs) for the detection of law-protected tree species. Three state-of-the-art object detection methods were evaluated: Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv3, and RetinaNet. A dataset was built to assess the selected methods, comprising 392 RGB images captured from August 2018 to February 2019 over a forested urban area in midwest Brazil. The target is an important tree species threatened with extinction, Dipteryx alata Vogel (Fabaceae). The experimental analysis delivered an average precision of around 92%, with associated processing times below 30 milliseconds.
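
For reference, the snippet below shows how one of the evaluated detector families, Faster R-CNN, is typically run for inference in torchvision. It uses off-the-shelf COCO weights as a stand-in; the study fine-tuned its detectors on the 392-image Dipteryx alata dataset, which is not reproduced here.

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Off-the-shelf Faster R-CNN; a real experiment would fine-tune the head
# for the single law-protected tree class plus background.
model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()

image = torch.rand(3, 512, 512)  # placeholder for a UAV RGB tile
with torch.no_grad():
    pred = model([image])[0]  # dict with 'boxes', 'labels', and 'scores'
print(pred["boxes"].shape, pred["scores"][:5])
```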


Subject(s)
Fabaceae/physiology; Neural Networks, Computer; Deep Learning; Discriminant Analysis; Fabaceae/chemistry; Likelihood Functions; Photography; Remote Sensing Technology
6.
PLoS One; 19(9): e0307569, 2024.
Article in English | MEDLINE | ID: mdl-39250439

ABSTRACT

Smart indoor tourist attractions, such as smart museums and aquariums, require a significant investment in indoor localization devices. Global Positioning Systems on smartphones are unsuitable where dense materials such as concrete and metal weaken GPS signals, which is most often the case in indoor tourist attractions. With the help of deep learning, indoor localization can instead be done region by region using smartphone images. This approach requires no investment in infrastructure and reduces the cost and time needed to turn museums and aquariums into smart museums or smart aquariums. In this paper, we propose using deep learning algorithms to classify locations from smartphone camera images for indoor tourist attractions. We evaluated our proposal in a real-world scenario in Brazil, collecting images from ten different smartphones to classify biome-themed fish tanks in the Pantanal Biopark and creating a new dataset of 3654 images. We tested seven state-of-the-art neural networks, three of them based on transformers. On average, we achieved a precision of about 90% and a recall and F-score of about 89%. The results show that the proposal is suitable for most indoor tourist attractions.
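
Since three of the seven networks were transformer-based classifiers, here is a minimal torchvision sketch of adapting a Vision Transformer to region classification. The region count and the ViT-B/16 backbone are assumptions for illustration; the paper's seven architectures are not replicated.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_REGIONS = 10  # hypothetical number of biome-themed tank regions

# Pretrained Vision Transformer with its head resized to the region count.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_REGIONS)

# Inference on a placeholder smartphone photo resized to 224x224.
image = torch.randn(1, 3, 224, 224)
region = model(image).argmax(dim=1)  # predicted region index
```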


Subject(s)
Deep Learning; Smartphone; Tourism; Humans; Algorithms; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Geographic Information Systems; Brazil
7.
PLoS One; 16(3): e0248574, 2021.
Article in English | MEDLINE | ID: mdl-33735277

ABSTRACT

Dendrocephalus brasiliensis, a species native to South America, is a freshwater crustacean widely used in conservation and production activities. Its main characteristics are its hardiness and its production of resistant cysts, whose hatching requires a period of dehydration. Regardless of how the species is used, it is essential to handle its cysts, for example by counting them under a microscope. Manual counting is a difficult, error-prone, and very time-consuming task. In this paper, we propose an automated approach for the detection and counting of Dendrocephalus brasiliensis cysts in images captured by a digital microscope. For this purpose, we built the DBrasiliensis dataset, a repository of 246 images containing 5141 cysts of Dendrocephalus brasiliensis. We then trained two state-of-the-art object detection methods, YOLOv3 (You Only Look Once) and Faster R-CNN (Region-based Convolutional Neural Networks), on the DBrasiliensis dataset to compare them on both cyst detection and counting tasks. Experiments showed evidence that YOLOv3 is superior to Faster R-CNN, achieving an accuracy rate of 83.74%, an R² of 0.88, an RMSE (Root Mean Square Error) of 3.49, and an MAE (Mean Absolute Error) of 2.24 on cyst detection and counting. Moreover, we showed that it is possible to infer the number of cysts in a substrate of known weight by automatically counting some of its samples. In conclusion, the proposed approach using YOLOv3 is adequate to detect and count Dendrocephalus brasiliensis cysts. The DBrasiliensis dataset can be accessed at: https://doi.org/10.6084/m9.figshare.13073240.
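
The extrapolation step mentioned above — inferring the cyst count of a whole substrate of known weight from a few counted samples — amounts to a density calculation. The helper below is a hypothetical sketch of that arithmetic; the function name, sample weights, and counts are invented for illustration.

```python
def estimate_total_cysts(sample_counts, sample_weights_g, substrate_weight_g):
    """Extrapolate the cyst count of a whole substrate of known weight
    from the automated counts of a few weighed samples."""
    density = sum(sample_counts) / sum(sample_weights_g)  # cysts per gram
    return round(density * substrate_weight_g)

# Example: three 0.5 g samples with detector counts 41, 38, and 45,
# extrapolated to a 20 g substrate.
print(estimate_total_cysts([41, 38, 45], [0.5, 0.5, 0.5], 20.0))  # -> 1653
```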


Subject(s)
Anostraca; Deep Learning; Ecological Parameter Monitoring/methods; Image Processing, Computer-Assisted/methods; Animals; Fresh Water; South America