Results 1 - 20 of 86
1.
Sensors (Basel) ; 24(17)2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39275503

ABSTRACT

This work proposes an affordable, non-wearable system to detect falls of people in need of care. The proposal uses artificial vision based on deep learning techniques implemented on a Raspberry Pi 4 with 4 GB RAM and a high-definition IR-CUT camera. The CNN architecture classifies detected people into five classes: fallen, crouching, sitting, standing, and lying down. When a fall is detected, the system sends an alert notification to mobile devices through the Telegram instant messaging platform. The system was evaluated on real daily indoor activities under different conditions of outfit, lighting, and distance from the camera. Results show a good trade-off between performance and cost: precision of 96.4%, specificity of 96.6%, accuracy of 94.8%, and sensitivity of 93.1%. Regarding privacy concerns, even though the system uses a camera, the video is neither recorded nor monitored by anyone, and pictures are only sent when a fall is detected. This work can contribute to reducing the fatal consequences of falls in people in need of care by providing them with prompt attention. Such a low-cost solution would be particularly desirable in developing countries with limited or no medical alert systems and few resources.


Subject(s)
Accidental Falls; Humans; Accidental Falls/prevention & control; Deep Learning; Computers; Algorithms
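The decision stage described in the abstract above can be sketched as follows. The five class labels come from the abstract; the helper names, bot token, and chat-id placeholders are illustrative assumptions, not the authors' code, and the request is only built, not sent.

```python
# Sketch of the fall-detection decision stage: a CNN yields scores for five
# posture classes, and a Telegram alert request is prepared only when
# "fallen" is the most likely class.

CLASSES = ["fallen", "crouching", "sitting", "standing", "lying down"]

def classify(scores):
    """Return the posture label with the highest CNN score."""
    return CLASSES[max(range(len(CLASSES)), key=lambda i: scores[i])]

def build_alert(scores, photo_path, bot_token="<BOT_TOKEN>", chat_id="<CHAT_ID>"):
    """If a fall is detected, return the Telegram Bot API request to send;
    otherwise return None (no video is stored, no picture leaves the device)."""
    if classify(scores) != "fallen":
        return None
    return {
        "url": f"https://api.telegram.org/bot{bot_token}/sendPhoto",
        "data": {"chat_id": chat_id, "caption": "Fall detected!"},
        "photo": photo_path,
    }
```

In a deployment, the returned dict would be posted with an HTTP client; keeping the network call outside the decision logic makes the alert rule easy to test.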
2.
Sensors (Basel) ; 24(17)2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275593

ABSTRACT

It is estimated that 10% to 20% of road accidents are related to fatigue, and accidents caused by drowsiness are up to twice as deadly as those caused by other factors. To reduce these numbers, strategies such as advertising campaigns, the implementation of driving recorders in vehicles used for road transport of goods and passengers, and the use of drowsiness detection systems in cars have been implemented. Within the latter area, the technologies used are diverse: they can be based on measuring signals such as steering wheel movement or vehicle position on the road, or on driver monitoring. Driver monitoring is a technology that has so far been little exploited and can be implemented through many different approaches. This work evaluates a multidimensional drowsiness index based on the recording of facial expressions, gaze direction, and head position, and studies the feasibility of its implementation in a low-cost electronic package. Specifically, the aim is to determine the driver's state by monitoring facial cues such as blinking frequency, yawning, eye opening, gaze direction, and head position. For this purpose, an algorithm capable of detecting drowsiness has been developed. Two approaches are compared: facial recognition based on Haar features and facial recognition based on Histograms of Oriented Gradients (HOG). The implementation was carried out on a Raspberry Pi, a low-cost device that allows the creation of a prototype that can detect drowsiness and interact with peripherals such as cameras or speakers. The results show that the proposed multi-index methodology performs better in detecting drowsiness than algorithms based on single-index detection.


Subject(s)
Algorithms; Automobile Driving; Humans; Facial Expression; Facial Recognition/physiology; Sleep Stages/physiology; Accidents, Traffic/prevention & control; Male; Adult; Automated Facial Recognition/methods; Female
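Blink-frequency monitoring of the kind described in the abstract above is commonly implemented with the eye aspect ratio (EAR) over facial landmarks. The abstract does not name this exact metric, so the following is a hedged sketch of one plausible approach; the threshold and frame count are assumptions.

```python
import math

def ear(eye):
    """Eye aspect ratio from six landmarks (p1..p6), as in Soukupova & Cech:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). A low EAR means a closed eye."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks as runs of >= min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for value in ear_series:
        if value < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

An unusually high blink rate, or long runs below the threshold, would then feed one dimension of a multidimensional drowsiness index.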
3.
HardwareX ; 19: e00575, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39291287

ABSTRACT

Environmental protection has gained importance over time due to the negative impact and irreversible consequences of pollution worldwide. One of the great challenges faced in different parts of the world is the inadequate management and classification of solid waste. To help tackle this issue, this paper proposes an automated sorting system based on artificial vision that recognizes and separates recyclable materials (plastic, glass, cardboard, and metal) through a webcam connected in real time to an Nvidia® Jetson Nano™ 2 GB development board running a convolutional neural network (CNN) trained for waste classification. The system achieved 95% accuracy in separating plastic, 96% for glass and metal, and 94% for cardboard. We conclude that it contributes to the recycling effort and thus to reducing environmental pollution worldwide.
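Per-material accuracies like those reported above are typically computed as per-class recall from a confusion matrix. The matrix in the test below is invented for illustration and is not the authors' data.

```python
def per_class_accuracy(confusion, labels):
    """Per-class accuracy (recall) from a confusion matrix: the count of
    correct predictions on the diagonal divided by the row total, i.e. the
    fraction of each true class that was sorted into the right bin."""
    result = {}
    for i, label in enumerate(labels):
        row_total = sum(confusion[i])
        result[label] = confusion[i][i] / row_total if row_total else 0.0
    return result
```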

4.
ACS Appl Mater Interfaces ; 16(28): 36678-36687, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38966894

ABSTRACT

Stretchable organic phototransistor arrays have potential applications in artificial visual systems due to their capacity to perceive ultraweak light across a broad spectrum. Ensuring uniform mechanical and electrical performance of individual devices within these arrays requires semiconductor films with large-area scale, well-defined orientation, and stretchability. However, the progress of stretchable phototransistors is primarily impeded by their limited electrical properties and photodetection capabilities. Herein, wafer-scale and well-oriented semiconductor films were successfully prepared using a solution shearing process. The electrical properties and photodetection capabilities were optimized by improving the polymer chain alignment. Furthermore, a stretchable 10 × 10 transistor array with high device uniformity was fabricated, demonstrating excellent mechanical robustness and photosensitive imaging ability. These arrays based on highly stretchable and well-oriented wafer-scale semiconductor films have great application potential in the field of electronic eyes and artificial visual systems.

5.
Nanomicro Lett ; 16(1): 238, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976105

ABSTRACT

The emergence of the Internet of Things is anticipated to create a vast market for smart edge devices, opening numerous opportunities across countless domains, including personalized healthcare and advanced robotics. Leveraging 3D integration, edge devices can achieve unprecedented miniaturization while simultaneously boosting processing power and minimizing energy consumption. Here, we demonstrate a back-end-of-line compatible optoelectronic synapse with a transfer learning method for healthcare applications, including electroencephalogram (EEG)-based seizure prediction, electromyography (EMG)-based gesture recognition, and electrocardiogram (ECG)-based arrhythmia detection. In experiments on three biomedical datasets, we observe classification accuracy improvements over the pretrained model of 2.93% on EEG, 4.90% on ECG, and 7.92% on EMG. The optical programming property of the device enables an ultra-low-power (2.8 × 10⁻¹³ J) fine-tuning process and offers solutions for patient-specific issues in edge computing scenarios. Moreover, the device exhibits impressive light-sensitive characteristics that enable a range of light-triggered synaptic functions, making it promising for neuromorphic vision applications. To display the benefits of these intricate synaptic properties, a 5 × 5 optoelectronic synapse array is developed, effectively simulating human visual perception and memory functions. The proposed flexible optoelectronic synapse holds immense potential for advancing the fields of neuromorphic physiological signal processing and artificial visual systems in wearable applications.

6.
Front Neurosci ; 18: 1408087, 2024.
Article in English | MEDLINE | ID: mdl-38962178

ABSTRACT

Vision plays a major role in perceiving external stimuli and information in our daily lives. The neural mechanism of color vision is complicated, involving the co-ordinated functions of a variety of cells, such as retinal cells and lateral geniculate nucleus cells, as well as multiple levels of the visual cortex. In this work, we reviewed the history of experimental and theoretical studies on this issue, from the fundamental functions of the individual cells of the visual system to the coding in the transmission of neural signals and sophisticated brain processes at different levels. We discuss various hypotheses, models, and theories related to the color vision mechanism and present some suggestions for developing novel implanted devices that may help restore color vision in visually impaired people or introduce artificial color vision to those who need it.

7.
Fundam Res ; 4(1): 158-166, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38933832

ABSTRACT

Artificial vision is crucial for most artificial intelligence applications. Conventional artificial visual systems face challenges in real-time information processing due to the physical separation of sensors, memories, and processors, which produces a large amount of redundant data, while the data conversion and transfer between these three components consume most of the time and energy. Emergent optoelectronic memristors able to realize integrated sensing-computing-memory (ISCM) are key candidates for solving such challenges and therefore attract increasing attention. At present, memristive ISCM devices can only perform primary-level computing with external light signals because most of them achieve only a monotonic increase of memconductance upon light irradiation. Here, we propose an all-optically controlled memristive ISCM device based on a simple Au/ZnO/Pt structure with the ZnO thin film sputtered in a pure Ar atmosphere. This device can perform advanced computing tasks such as nonvolatile neuromorphic computing and complete Boolean logic functions by light irradiation alone, owing to its ability to reversibly tune the memconductance with light. Moreover, the device shows excellent operation stability ascribed to a purely electronic memconductance tuning mechanism. Hence, this study is an important step towards the next generation of artificial visual systems.

8.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931537

ABSTRACT

When performing near-vision tasks in front of a digital screen, the posture or position of the head is often inadequate, especially in young people; a correct head posture is essential to avoid visual, muscular, or joint problems. Most current systems for monitoring head inclination require an external part attached to the subject's head. The aim of this study is the validation of a procedure that, through a detection algorithm and eye tracking, can monitor the correct position of the head in real time while subjects are in front of a digital device. The system only needs a digital device with a CCD receiver and downloadable software through which the inclination of the head can be detected, indicating whether a bad posture is adopted due to a visual problem or simply to inadequate visual-postural habits, and alerting the user to the postural anomaly so it can be corrected. The system was evaluated in subjects with disparate interpupillary distances, at different working distances in front of the digital device, and, at each distance, at different tilt angles. The system performed favorably in different lighting environments, correctly detecting the subjects' pupils. The results showed particularly good absolute and relative reliability values for most of the variables when measuring head tilt, albeit with lower accuracy than most existing systems. Overall, the evaluation was positive, making this a considerably inexpensive and easily affordable system for all users. It is the first application capable of measuring the head tilt of a subject at their working or reading distance in real time by tracking their eyes.


Subject(s)
Algorithms; Head; Posture; Humans; Posture/physiology; Head/physiology; Artificial Intelligence; Software; Male; Female; Adult
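One common way to estimate head tilt from tracked pupils, in the spirit of the system above, is the angle of the line joining the two pupil centres. The coordinate convention and tolerance below are assumptions for illustration, not the paper's parameters.

```python
import math

def head_tilt_deg(left_pupil, right_pupil):
    """Head roll angle in degrees from the line joining the two pupil
    centres in image coordinates (x right, y down); 0 means a level head."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

def posture_alert(angle_deg, tolerance_deg=5.0):
    """Flag a postural anomaly when the tilt exceeds the tolerance."""
    return abs(angle_deg) > tolerance_deg
```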
9.
Sensors (Basel) ; 24(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38931576

ABSTRACT

This research focuses on developing an artificial vision system for a flexible delta robot manipulator and integrating it with machine-to-machine (M2M) communication to optimize real-time device interaction. This integration aims to increase the speed of the robotic system and improve its overall performance. The proposed combination of an artificial vision system with M2M communication can detect and recognize targets with high accuracy in real time within the limited space considered for positioning, localization, and manufacturing processes such as assembly or part sorting. In this study, RGB images are used as input data for the Mask R-CNN algorithm, and the results are processed according to the features of the delta robot arm prototype. The data obtained from Mask R-CNN are adapted for use in the delta robot control system, considering its unique characteristics and positioning requirements. M2M technology enables the robot arm to react quickly to changes, such as moving objects or changes in their position, which is crucial for sorting and packing tasks. The system was tested under near real-world conditions to evaluate its performance and reliability.
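Adapting Mask R-CNN detections for a robot controller, as described above, reduces in essence to mapping a detection's pixel position into workspace coordinates. The fronto-parallel scaling below is a simplified stand-in for the paper's (unspecified) calibration; a real cell would use a calibrated homography.

```python
def bbox_center(bbox):
    """Centre of a Mask R-CNN bounding box given as (x1, y1, x2, y2) pixels."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pixel_to_workspace(px, py, image_size, workspace_mm):
    """Map pixel coordinates to workspace millimetres, assuming the camera
    views the whole rectangular workspace fronto-parallel and undistorted."""
    w_px, h_px = image_size
    w_mm, h_mm = workspace_mm
    return (px / w_px * w_mm, py / h_px * h_mm)
```

The resulting (x, y) target in millimetres would then be handed to the delta robot's inverse kinematics for the pick move.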

11.
J Neural Eng ; 21(2)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38502957

ABSTRACT

Objective. The enabling technology of visual prosthetics for the blind is making rapid progress. However, there are still uncertainties regarding the functional outcomes, which can depend on many design choices in the development. In visual prostheses with a head-mounted camera, a particularly challenging question is how to deal with the gaze-locked visual percept associated with spatial updating conflicts in the brain. The current study investigates a recently proposed compensation strategy based on gaze-contingent image processing with eye tracking. Gaze-contingent processing is expected to reinforce natural-like visual scanning and reestablish spatial updating based on eye movements. The beneficial effects remain to be investigated for daily-life activities in complex visual environments. Approach. The current study evaluates the benefits of gaze-contingent processing versus gaze-locked and gaze-ignored simulations in the context of mobility, scene recognition, and visual search, using a virtual-reality simulated prosthetic vision paradigm with sighted subjects. Main results. Compared to gaze-locked vision, gaze-contingent processing was consistently found to improve the speed in all experimental tasks, as well as the subjective quality of vision. Similar or further improvements were found in a control condition that ignores gaze-dependent effects, a simulation that is unattainable in clinical reality. Significance. Our results suggest that gaze-locked vision and spatial updating conflicts can be debilitating for complex visually guided activities of daily living such as mobility and orientation. Therefore, for prospective users of head-steered prostheses with an unimpaired oculomotor system, the inclusion of a compensatory eye-tracking system is strongly endorsed.


Subject(s)
Activities of Daily Living; Vision, Ocular; Humans; Prospective Studies; Eye Movements; Computer Simulation
12.
J Neural Eng ; 21(2)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38547529

ABSTRACT

Objective. Neuromodulation, particularly electrical stimulation, necessitates high spatial resolution to achieve artificial vision with high acuity. In epiretinal implants, this is hindered by the undesired activation of distal axons. Here, we investigate focal and axonal activation of retinal ganglion cells (RGCs) in an epiretinal configuration for different sinusoidal stimulation frequencies. Approach. RGC responses to epiretinal sinusoidal stimulation at frequencies between 40 and 100 Hz were tested in ex-vivo photoreceptor-degenerated (rd10) isolated retinae. Experiments were conducted using a high-density CMOS-based microelectrode array, which allows RGC cell bodies and axons to be localized at high spatial resolution. Main results. We report current and charge density thresholds for focal and distal axon activation at stimulation frequencies of 40, 60, 80, and 100 Hz for an electrode with an effective area of 0.01 mm². Activation of distal axons is avoided up to a stimulation amplitude of 0.23 µA (corresponding to 17.3 µC cm⁻²) at 40 Hz and up to a stimulation amplitude of 0.28 µA (14.8 µC cm⁻²) at 60 Hz. The threshold ratio between focal and axonal activation increases from 1.1 at 100 Hz up to 1.6 at 60 Hz, while at 40 Hz almost no axonal responses were detected in the tested intensity range. Using synaptic blockers, we demonstrate the underlying direct activation mechanism of the ganglion cells. Finally, using high-resolution electrical imaging and label-free electrophysiological axon tracking, we demonstrate the extent of activation in axon bundles. Significance. Our results can be exploited to define a spatially selective stimulation strategy that avoids axonal activation in future retinal implants, thereby addressing one of the major limitations of artificial vision. The results may extend to other fields of neuroprosthetics to achieve selective focal electrical stimulation.


Subject(s)
Retina; Visual Prostheses; Retina/physiology; Retinal Ganglion Cells/physiology; Microelectrodes; Axons/physiology; Electric Stimulation/methods
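The charge densities quoted above follow from the charge delivered in one half-cycle of a sinusoidal current, Q = I_peak/(π·f), divided by the electrode area. Assuming "amplitude" means peak current, this reproduces the reported 60 Hz figure to within rounding; the 40 Hz figure comes out slightly higher (about 18.3 vs. 17.3 µC cm⁻²), presumably due to rounding of the amplitude, so this sketch is an approximation, not the authors' exact computation.

```python
import math

def charge_density_uC_per_cm2(i_peak_uA, freq_hz, area_mm2):
    """Charge density per sinusoidal half-cycle: Q = I_peak / (pi * f),
    divided by the electrode area (converted from mm^2 to cm^2)."""
    q_uC = i_peak_uA / (math.pi * freq_hz)  # charge per half-cycle, in uC
    area_cm2 = area_mm2 * 1e-2              # 1 mm^2 = 1e-2 cm^2
    return q_uC / area_cm2
```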
13.
Adv Mater ; 36(19): e2312094, 2024 May.
Article in English | MEDLINE | ID: mdl-38320173

ABSTRACT

Intelligent vision necessitates detectors that are always on and low-power, mirroring the continuous, uninterrupted responsiveness characteristic of human vision. Nonetheless, contemporary artificial vision systems attain this goal by continuously processing massive numbers of image frames and executing intricate algorithms, thereby expending substantial computational power and energy. In contrast, biological data processing, based on event-triggered spiking, has higher efficiency and lower energy consumption. Here, this work proposes an artificial vision architecture consisting of spiking photodetectors and artificial synapses, closely mirroring the intricacies of the human visual system. Distinct from previously reported techniques, the photodetector is self-powered and event-triggered, directly outputting light-modulated spiking signals, thereby fulfilling the imperative of always-on operation with low power consumption. By processing the spiking signals through the integrated synapse units, recognition of graphics, gestures, and human actions has been implemented, illustrating the potent image processing capabilities inherent in this architecture. The results demonstrate a 90% accuracy rate in human action recognition within a mere five epochs using a rudimentary artificial neural network. This novel architecture, grounded in spiking photodetectors, offers a viable alternative to extant models of always-on, low-power artificial vision systems.


Subject(s)
Neural Networks, Computer; Vision, Ocular; Humans; Artificial Intelligence; Algorithms; Synapses/physiology; Image Processing, Computer-Assisted
14.
Int J Mol Sci ; 25(3)2024 Jan 28.
Article in English | MEDLINE | ID: mdl-38338908

ABSTRACT

Neurons build vast gap junction-coupled networks (GJ-nets) that are permeable to ions or small molecules, enabling lateral signaling. Herein, we investigate (1) the effect of blinding diseases on GJ-nets in mouse retinas and (2) the impact of electrical stimulation on GJ permeability. GJ permeability was traced in acute retinal explants of blind retinal degeneration 1 (rd1) mice using the GJ tracer neurobiotin. The tracer was introduced via the edge-cut method into the GJ-net, and its spread was visualized in histological preparations (fluorescently tagged) using microscopy. Sustained stimulation was applied with a single large electrode to modulate GJ permeability. Our findings are: (1) the blind rd1 retinas displayed extensive intercellular coupling via open GJs, and three GJ-nets were identified: horizontal, amacrine, and ganglion cell networks; (2) sustained stimulation significantly diminished the tracer spread through the GJs in all cell layers, as with pharmacological inhibition by carbenoxolone. We conclude that the GJ-nets of rd1 retinas remain coupled and functional after blinding disease and that their permeability is regulatable by sustained stimulation. These findings are essential for understanding molecular signaling over coupled networks in disease and for therapeutic approaches using electrical implants, such as eliciting visual sensations or suppressing cortical seizures.


Subject(s)
Retinal Degeneration; Animals; Mice; Retinal Degeneration/therapy; Retinal Degeneration/pathology; Retina/pathology; Gap Junctions; Electric Stimulation; Permeability
15.
Data Brief ; 53: 110088, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38357450

ABSTRACT

The proposed dataset is a collection of pedestrian navigation data sequences combining visual and spatial information. The sequences capture situations encountered by a pedestrian walking in an urban outdoor environment, such as moving on the sidewalk, navigating through a crowd, or crossing a street when the pedestrian traffic light is green. The acquired data are timestamped RGB-D images associated with GPS and inertial data (acceleration, rotation). The recordings were acquired by separate processes, avoiding delays during capture to guarantee synchronization between the moment of acquisition by the sensor and the moment of recording on the system. The acquisition was made in the city of Dijon, France, including narrow streets, wide avenues, and parks. Annotations of the RGB-D images are also provided as bounding boxes indicating the position of relevant static or dynamic objects present in a pedestrian area, such as a tree, bench, or person. This pedestrian navigation dataset aims to support the development of smart mobile systems that assist visually impaired people in their daily movements in an outdoor environment. In this context, the visual data and localization sequences provided can be used to elaborate appropriate visual processing methods to extract relevant information about obstacles and their current positions on the path. Alongside the dataset, a visual-to-auditory substitution method has been employed to convert each image sequence into corresponding stereophonic sound files, allowing for comparison and evaluation. Synthetic sequences associated with the same information set are also provided, based on recordings of a displacement within the 3D model of a real place in Dijon.

16.
ACS Appl Mater Interfaces ; 16(4): 5028-5035, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38235664

ABSTRACT

Artificial vision systems (AVS) have potential applications in visual prosthetics and artificially intelligent robotics, and they require a preprocessor and a processor to mimic human vision. Halide perovskite (HP) is a promising preprocessor and processor due to its excellent photoresponse, ubiquitous charge migration pathways, and innate hysteresis. However, the material instability associated with HP thin films hinders their utilization in physical AVSs. Herein, we have developed ultrahigh-density arrays of robust HP nanowires (NWs) rooted in a porous alumina membrane (PAM) as the active layer for an AVS. The NW devices exhibit gradual photocurrent change, responding to changes in light pulse duration, intensity, and number, and allow contrast enhancement of visual inputs with a device lifetime of over 5 months. The NW-based processor possesses temporally stable conductance states with retention >10⁵ s and jitter <10%. The physical AVS demonstrated 100% accuracy in recognizing different shapes, establishing HP as a reliable material for neuromorphic vision systems.

17.
Adv Mater ; 36(6): e2301986, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37435995

ABSTRACT

The development of artificial intelligence has posed a challenge to machine vision based on conventional complementary metal-oxide-semiconductor (CMOS) circuits, owing to the high latency and inefficient power consumption originating from data shuffling between memory and computation units. Gaining more insight into the function of every part of the visual pathway can improve the robustness and generality of machine vision. Hardware acceleration of more energy-efficient and biorealistic artificial vision requires neuromorphic devices and circuits that mimic the function of each part of the visual pathway. In this paper, the structure and function of visual neurons from the retina to the primate visual cortex are reviewed (Chapter 2). Based on the extraction of biological principles, recent hardware-implemented visual neurons located in different parts of the visual pathway are discussed in detail in Chapters 3 and 4. Furthermore, valuable applications of inspired artificial vision in different scenarios are provided (Chapter 5). The functional description of the visual pathway and its inspired neuromorphic devices/circuits is expected to provide valuable insights for the design of next-generation artificial visual perception systems.


Subject(s)
Artificial Intelligence; Visual Pathways; Animals; Vision, Ocular; Computers; Visual Perception; Primates
19.
Int J Retina Vitreous ; 9(1): 73, 2023 Nov 23.
Article in English | MEDLINE | ID: mdl-37996905

ABSTRACT

PURPOSE: To review the available evidence on the different retinal and visual prostheses for patients with retinitis pigmentosa and new implants for other indications, including dry age-related macular degeneration. METHODS: The PubMed, Google Scholar, ScienceDirect, and ClinicalTrials databases were the main resources used to conduct the medical literature search. An extensive search was performed to identify relevant articles concerning worldwide advances in retinal prostheses, clinical trials, device status, and potential future directions up to December 2022. RESULTS: Thirteen devices were found to be current and were ordered by stimulation location. Six have active clinical trials. Four have been discontinued, including the Alpha IMS, Alpha AMS, IRIS II, and ARGUS II, which had FDA and CE mark approval. Future directions are presented in the review. CONCLUSION: This review provides an update on retinal prosthetic devices, both current and discontinued. While some devices have achieved visual perception in animals and/or humans, the main issues impeding their commercialization include the long time required to observe outcomes, difficulties in finding validated measures for use in studies, unknown long-term effects, lack of funding, and the small number of patients diagnosed with RP who lack other comorbid conditions. The ARGUS II received FDA and CE mark approval and was thus deemed safe and effective; however, the company has shifted its focus to a visual cortical implant. Future efforts are headed towards more biocompatible, safe, and efficacious devices.

20.
Data Brief ; 50: 109610, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37808538

ABSTRACT

This paper presents a semi-automated, scalable, and homologous IoT-oriented methodology, implemented in Python, for extracting and integrating images of pedestrian and motorcyclist areas on the road in order to construct a multiclass object classifier. It consists of two stages. The first stage creates a non-debugged data set by acquiring images related to the semantic context previously mentioned, using an embedded device connected 24/7 via Wi-Fi to a free, public CCTV service in Medellin, Colombia. Using artificial vision techniques, it automatically performs a comparative chronological analysis to download the images observed by 80 cameras that report data asynchronously. The second stage proposes two algorithms focused on debugging the previously obtained data set. The first helps the user label the non-debugged data set through Regions of Interest (ROI) and hotkeys; it stores the information of each image of the data set in a dictionary and saves it in a binary Pickle file. The second is an observer of the classification performed with the first algorithm, allowing the user to verify that the information contained in the Pickle file is correct.
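The labeling and verification steps described above can be sketched with Python's standard pickle module. The dictionary keys, ROI format, and function names are illustrative assumptions, not the authors' actual schema.

```python
import pickle

def label_image(dataset, image_id, roi, label):
    """Record the ROI (x, y, w, h) and the class chosen via hotkey for one image."""
    dataset[image_id] = {"roi": roi, "label": label}
    return dataset

def save_dataset(dataset, path):
    """Serialize the labeled data set to a binary Pickle file."""
    with open(path, "wb") as f:
        pickle.dump(dataset, f)

def verify_dataset(path):
    """Observer step: reload the Pickle file so the user can review the labels."""
    with open(path, "rb") as f:
        return pickle.load(f)
```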
