Results 1 - 4 of 4

1.
Biol Cybern; 117(1-2): 95-111, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37004546

ABSTRACT

Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require a large amount of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels before they were fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity (STDP). We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as few as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
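The two coding mechanisms named in the abstract can be illustrated with a minimal sketch (not the paper's implementation; the linear latency mapping and hard winner-take-all here are simplifying assumptions): spike-latency coding maps stronger inputs to earlier spikes, and WTA inhibition silences every neuron except the earliest one.

```python
import numpy as np

def latency_code(stimulus, t_max=100.0):
    """Spike-latency coding: map normalized intensities in [0, 1] to
    spike times, with stronger inputs firing earlier (linear mapping)."""
    stimulus = np.clip(stimulus, 0.0, 1.0)
    return t_max * (1.0 - stimulus)

def wta_inhibition(latencies):
    """Winner-take-all inhibition: only the earliest-spiking neuron
    fires; all others are suppressed (latency set to infinity)."""
    winner = int(np.argmin(latencies))
    out = np.full_like(latencies, np.inf)
    out[winner] = latencies[winner]
    return winner, out

stimulus = np.array([0.2, 0.9, 0.5])  # neuron 1 receives the strongest drive
latencies = latency_code(stimulus)    # ≈ [80., 10., 50.] ms
winner, spikes = wta_inhibition(latencies)
print(winner)  # 1
```

In the paper the WTA scheme and the STDP weight updates interact over repeated stimulus presentations; this sketch only shows a single feedforward pass.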


Subject(s)
Models, Neurological; Neuronal Plasticity; Humans; Neuronal Plasticity/physiology; Neural Networks, Computer; Learning/physiology; Visual Perception/physiology
2.
J Neural Eng; 19(6), 2022 Dec 7.
Article in English | MEDLINE | ID: mdl-36541463

ABSTRACT

Objective. How can we return a functional form of sight to people who are living with incurable blindness? Despite recent advances in the development of visual neuroprostheses, the quality of current prosthetic vision is still rudimentary and does not differ much across different device technologies. Approach. Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations by means of artificial intelligence-based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind, such as face recognition, outdoor navigation, and self-care. Main results. Complementary to existing research aiming to restore natural vision, we propose a patient-centered approach to incorporate deep learning-based visual augmentations into the next generation of devices. Significance. The ability of a visual prosthesis to support everyday tasks might make the difference between an abandoned technology and a widely adopted next-generation neuroprosthetic device.
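The task-tailored augmentation idea can be sketched as a simple dispatcher: instead of rendering the full natural scene, the device shows only the scene-understanding output relevant to the user's current task. The task names and scene fields below are purely illustrative, not from the paper.

```python
# Hypothetical task modes for a Smart Bionic Eye; all names are illustrative.
AUGMENTATIONS = {
    "face_recognition": lambda scene: scene["faces"],        # show only detected faces
    "outdoor_navigation": lambda scene: scene["obstacles"],  # highlight path obstacles
    "self_care": lambda scene: scene["objects"],             # label household objects
}

def augment(scene, task):
    """Select a task-specific simplification of the scene-understanding
    output rather than rendering the whole visual scene."""
    return AUGMENTATIONS[task](scene)

scene = {"faces": ["alice"], "obstacles": [], "objects": ["cup"]}
print(augment(scene, "face_recognition"))  # ['alice']
```

The design point is that the mapping from scene understanding to display is chosen per task, so the limited phosphene bandwidth is spent only on task-relevant information.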


Subject(s)
Facial Recognition; Visual Prostheses; Humans; Artificial Intelligence; Quality of Life; Blindness/therapy
3.
PLoS One; 15(1): e0227677, 2020.
Article in English | MEDLINE | ID: mdl-31995568

ABSTRACT

Prosthetic vision, delivered through retinal stimulation, is being used to partially restore sight in visually impaired people. However, the phosphene images produced by current implants carry very limited information due to their poor resolution and lack of color and contrast, so prosthetic users' ability to recognize objects and understand scenes in real environments is severely restricted. Computer vision can play a key role in overcoming these limitations by optimizing the visual information presented through the prosthesis. We present a new approach for building a schematic representation of indoor environments in simulated phosphene images. The proposed method combines several convolutional neural networks to extract and convey relevant information about the scene, such as structurally informative edges of the environment and silhouettes of segmented objects. Experiments were conducted with normally sighted subjects using a simulated prosthetic vision system. The results show good accuracy on object recognition and room identification tasks for indoor scenes with the proposed approach, compared to other image processing methods.
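A common way to produce simulated phosphene images like those described above is to downsample the (schematic) input to the electrode grid and render each active electrode as a Gaussian blob. This is a generic minimal sketch of that rendering step, not the paper's pipeline (which adds CNN-based edge and silhouette extraction before it); the grid size and blob width are assumptions.

```python
import numpy as np

def simulate_phosphenes(image, grid=(4, 4), sigma=1.0):
    """Render a grayscale image in [0, 1] as a phosphene map: block-average
    it down to the electrode grid, then draw each phosphene as a Gaussian
    blob centered on its electrode. Assumes image dims divisible by grid."""
    h, w = image.shape
    gh, gw = grid
    # Per-electrode brightness by block averaging.
    blocks = image.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    out = np.zeros((h, w))
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    for i in range(gh):
        for j in range(gw):
            cy = (i + 0.5) * h / gh  # electrode center (rows)
            cx = (j + 0.5) * w / gw  # electrode center (cols)
            out += blocks[i, j] * np.exp(
                -((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)

phos = simulate_phosphenes(np.ones((16, 16)))
print(phos.shape)  # (16, 16)
```

In a full simulator the input here would be the schematic image (edges plus object silhouettes) rather than the raw camera frame, which is exactly the substitution the paper evaluates.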


Subject(s)
Artificial Intelligence; Visual Prostheses; Adult; Artificial Intelligence/statistics & numerical data; Computer Simulation; Female; Healthy Volunteers; Humans; Image Processing, Computer-Assisted; Male; Middle Aged; Phosphenes/physiology; Photic Stimulation/methods; Psychophysics; Semantics; Vision Disorders/physiopathology; Vision Disorders/psychology; Vision Disorders/therapy; Visual Perception; Visual Prostheses/statistics & numerical data; Young Adult
4.
J Neural Eng; 17(5): 056002, 2020 Oct 8.
Article in English | MEDLINE | ID: mdl-32947270

ABSTRACT

OBJECTIVE: Visual prostheses are designed to restore partial functional vision in patients with total vision loss. Retinal visual prostheses provide limited capabilities as a result of low resolution, a limited field of view, and poor dynamic range. Understanding how these parameters influence perception can guide prosthesis research and design. APPROACH: In this work, we evaluate the influence of field of view relative to spatial resolution in visual prostheses, measuring accuracy and response time in a search-and-recognition task. Twenty-four normally sighted participants were asked to find and recognize common objects, such as furniture and home appliances, in indoor room scenes. For the experiment, we use a new simulated prosthetic vision system that allows simple and effective experimentation. Our system uses a virtual-reality environment based on panoramic scenes. The simulator employs a head-mounted display which lets users feel immersed in the scene by perceiving it all around them. Our experiments use public image datasets and a commercial head-mounted display. We have also released the virtual-reality software for replicating and extending the experiments. MAIN RESULTS: Results show that accuracy and response time decrease as the field of view increases. Furthermore, performance appears to be correlated with angular resolution, but shows diminishing returns even at resolutions below 2.3 phosphenes per degree. SIGNIFICANCE: Our results suggest that, when designing retinal prostheses, it is better to concentrate the phosphenes in a small area to maximize angular resolution, even if that means sacrificing field of view.
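The tradeoff the study measures follows from simple arithmetic: for a fixed number of phosphenes, angular resolution is phosphenes divided by field of view, so spreading the same grid over a wider view lowers resolution proportionally. A small sketch (the 32-phosphene-wide grid is an assumed example, not a parameter from the paper):

```python
def angular_resolution(n_phosphenes_across, fov_deg):
    """Angular resolution along one axis, in phosphenes per degree of
    visual angle, for a fixed-size phosphene grid spread over fov_deg."""
    return n_phosphenes_across / fov_deg

# The same hypothetical 32-phosphene-wide grid simulated at three fields of view:
for fov in (10, 20, 40):
    print(f"{fov:>2} deg FOV -> {angular_resolution(32, fov):.1f} phosphenes/deg")
```

Doubling the field of view halves the phosphenes per degree, which is why the abstract's conclusion (concentrate phosphenes in a small area) amounts to trading field of view for angular resolution.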


Subject(s)
Virtual Reality; Visual Prostheses; Humans; Phosphenes; Recognition, Psychology; Vision, Ocular