Results 1 - 3 of 3
1.
Front Robot AI; 10: 1052509, 2023.
Article in English | MEDLINE | ID: mdl-37008985

ABSTRACT

Introduction: Camera-based wearable assistive devices for the visually impaired represent a rapidly evolving challenge, where one of the main problems is finding computer vision algorithms that can be implemented on low-cost embedded devices. Objectives and Methods: This work presents a Tiny You Only Look Once (Tiny YOLO) architecture for pedestrian detection that can be implemented on low-cost wearable devices as an alternative for developing assistive technologies for the visually impaired. Results: Compared to the original model, the recall of the proposed refined model improved by 71% when working with four anchor boxes and by 66% with six anchor boxes. The accuracy achieved on the same data set increased by 14% and 25%, respectively, and the F1 score improved by 57% and 55%. The average accuracy of the models improved by 87% and 99%. The number of correctly detected objects was 3098 and 2892 for four and six anchor boxes, respectively, 77% and 65% better than the original model, which correctly detected 1743 objects. Discussion: Finally, the model was optimized for the Jetson Nano embedded system, a case study for low-power embedded devices, and for a desktop computer. In both cases, the graphics processing unit (GPU) and central processing unit (CPU) were tested, and a documented comparison with solutions aimed at serving visually impaired people was performed. Conclusion: We performed the desktop tests with an RTX 2070S graphics card, where image processing took about 2.8 ms. The Jetson Nano board could process an image in about 110 ms, offering the opportunity to generate alert notification procedures in support of visually impaired mobility.
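
For reference, the recall, accuracy, and F1 figures above follow the standard detection-metric definitions. A minimal Python sketch of how such metrics are computed from true positive, false positive, and false negative counts; the false positive and false negative counts used in the example are hypothetical placeholders, not figures from the paper.

    # Minimal sketch: standard detection metrics from TP/FP/FN counts.
    def detection_metrics(tp, fp, fn):
        """Precision, recall, and F1 from detection counts."""
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        return {"precision": precision, "recall": recall, "f1": f1}

    # Example: 3098 correct detections (as reported for four anchor boxes)
    # paired with hypothetical false positive and false negative counts.
    print(detection_metrics(tp=3098, fp=500, fn=400))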

2.
Animals (Basel); 12(17), 2022 Aug 28.
Article in English | MEDLINE | ID: mdl-36077932

ABSTRACT

The compost barn system has become popular in recent years for providing greater animal well-being and quality of life, favoring productivity and longevity. With the increasing use of compost barns on dairy farms, studies of the thermal environment and animal behavior are of paramount importance for assessing animal well-being and improving management where necessary. This work aimed to characterize the thermal environment inside a compost barn during the four seasons of a year and to evaluate the standing and lying behavior of the cows through images. The experiment was carried out during March (summer), June (autumn), August (winter), and November (spring). Dry bulb temperature (tdb, °C), dew point temperature (tdp, °C), and relative humidity (RH, %) data were collected every 10 minutes on all analyzed days, and the temperature and humidity index (THI) was subsequently calculated. To analyze the behavior of the cows, the barn interior was filmed on the evaluated days. These recordings were then analyzed both visually and in an automated way. For the automated analysis, an algorithm was developed using the YOLOv3 artificial intelligence tool so that the evaluation process could be automated and fast. During the experimental period, the highest mean THI values were observed in the afternoon and in autumn. The animals' preference for lying on the bedding for most of the day was verified. The algorithm was able to detect cow behavior (lying down or standing). It can be concluded that the behavior of the cows was characterized, and the artificial intelligence approach was successfully applied and can be recommended for such use.
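
The abstract does not state which THI formulation was used. A minimal Python sketch, assuming the widely cited Thom-style formula THI = (1.8·tdb + 32) − (0.55 − 0.0055·RH)·(1.8·tdb − 26), with tdb in °C and RH in %; the example reading is hypothetical.

    # Minimal sketch: temperature and humidity index (THI) from dry bulb
    # temperature (deg C) and relative humidity (%). The paper does not
    # specify its THI formulation; this assumes the common Thom-style formula.
    def thi(tdb_c, rh_percent):
        return (1.8 * tdb_c + 32) - (0.55 - 0.0055 * rh_percent) * (1.8 * tdb_c - 26)

    # Hypothetical afternoon reading: 28 deg C at 70% RH -> THI of about 78.4.
    print(round(thi(28.0, 70.0), 1))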

3.
Nanomaterials (Basel); 12(11), 2022 May 26.
Article in English | MEDLINE | ID: mdl-35683674

ABSTRACT

Image processing is a necessary step in analyzing the information gathered about nanoparticles after characteristic material samples have been scanned with electron microscopy; it often requires image processing techniques or general-purpose image manipulation software to carry out tasks such as nanoparticle detection and measurement. In recent years, neural networks have been successfully applied to detect and classify electron microscopy images as well as the objects within them. In this work, we present four detection models based on two versions of the YOLO neural network architecture, trained to detect cubical and quasi-spherical particles in SEM images; the training datasets are a mixture of real images and synthetic ones generated by a semi-arbitrary method. The resulting models were capable of detecting nanoparticles in images other than those used for training and of identifying them in some cases, although the close proximity between nanoparticles proved a challenge for the neural networks in most situations.
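
The difficulty with closely spaced nanoparticles is typical of the non-maximum suppression (NMS) step used by YOLO-style detectors, where a strongly overlapping, lower-scoring box is discarded. A minimal Python sketch of standard greedy NMS (illustrative only, not the authors' code), using hypothetical boxes:

    # Minimal sketch of greedy non-maximum suppression (NMS), the standard
    # YOLO post-processing step; illustrative only, not the authors' code.
    # Boxes are (x1, y1, x2, y2, score) tuples.
    def iou(a, b):
        """Intersection over union of two axis-aligned boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def nms(boxes, iou_threshold=0.5):
        """Keep highest-scoring boxes; drop neighbors overlapping above threshold."""
        kept = []
        for box in sorted(boxes, key=lambda b: b[4], reverse=True):
            if all(iou(box, k) < iou_threshold for k in kept):
                kept.append(box)
        return kept

    # Two closely spaced (hypothetical) particle detections: their overlap
    # exceeds the threshold, so the weaker box is suppressed.
    print(nms([(10, 10, 50, 50, 0.9), (20, 12, 60, 52, 0.6)]))

This is why two nearly touching particles can end up reported as a single detection.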
