1.
Front Neurorobot ; 16: 932652, 2022.
Article in English | MEDLINE | ID: mdl-36262461

ABSTRACT

Generalizing prior experiences to complete new tasks is a challenging and unsolved problem in robotics. In this work, we explore a novel framework for control of complex systems called Primitive Imitation for Control (PICO). The approach combines ideas from imitation learning, task decomposition, and novel task sequencing to generalize from demonstrations to new behaviors. Demonstrations are automatically decomposed into existing or missing sub-behaviors, which allows the framework to identify novel behaviors while not duplicating existing ones. Generalization to new tasks is achieved through dynamic blending of behavior primitives. We evaluated the approach using demonstrations from two different robotic platforms. The experimental results show that PICO is able to detect the presence of a novel behavior primitive and build the missing control policy.
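
As a rough illustration (not the authors' implementation), the "dynamic blending of behavior primitives" mentioned in this abstract can be pictured as a weighted mixture of primitive policies, where the weights reflect how well each primitive explains the current observation. All names and the softmax weighting below are assumptions for the sketch.

```python
import numpy as np

def blend_primitives(observation, primitives, similarity):
    """Blend the actions proposed by several behavior primitives.

    primitives : list of callables, each mapping an observation to an action vector.
    similarity : callable scoring how well a primitive matches the observation
                 (hypothetical stand-in for the recognition model in the paper).
    """
    actions = np.stack([p(observation) for p in primitives])         # one action per primitive
    scores = np.array([similarity(observation, p) for p in primitives])
    weights = np.exp(scores - scores.max())                          # softmax over match scores
    weights /= weights.sum()
    return weights @ actions                                         # weighted blend of actions
```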

2.
Invest Ophthalmol Vis Sci ; 59(2): 792-802, 2018 02 01.
Article in English | MEDLINE | ID: mdl-29392324

ABSTRACT

Purpose: Visual scanning by sighted individuals is done using eye and head movements. In contrast, scanning using the Argus II is done solely by head movement, since eye movements can introduce localization errors. Here, we tested whether a scanning mode utilizing eye movements increases visual stability and reduces head movements in Argus II users. Methods: Eye positions were measured in real time and were used to shift the region of interest (ROI) that is sent to the implant within the wide field of view (FOV) of the scene camera. Participants were able to use combined eye-head scanning: shifting the camera by moving their head and shifting the ROI within the FOV by eye movement. Eight blind individuals implanted with the Argus II retinal prosthesis participated in the study. A white target appeared on a touchscreen monitor and the participants were instructed to report the location of the target by touching the monitor. We compared the spread of the responses, the time to complete the task, and the amount of head movement between combined eye-head and head-only scanning. Results: All participants benefited from the combined eye-head scanning mode. Better precision (i.e., a narrower spread of the perceived location) was observed in six out of eight participants. Seven of eight participants were able to adopt a scanning strategy that enabled them to perform the task with significantly less head movement. Conclusions: Integrating an eye tracker into the Argus II is feasible, reduces head movements in a seated localization task, and improves pointing precision.
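
A minimal sketch of the combined eye-head scanning idea: the wide camera frame stays fixed to the head, while the gaze signal selects which sub-window (ROI) of that frame is forwarded to the implant. The field-of-view numbers and function names are illustrative assumptions, not taken from the device.

```python
import numpy as np

def gaze_to_roi(frame, gaze_deg, camera_fov_deg=(54.0, 38.0), roi_deg=(18.0, 11.0)):
    """Return the sub-window of the head-mounted camera frame selected by eye position.

    gaze_deg : (azimuth, elevation) of the eye relative to straight ahead, in degrees.
    The ROI is clamped so it never leaves the camera's field of view.
    """
    h, w = frame.shape[:2]
    px_per_deg = np.array([w / camera_fov_deg[0], h / camera_fov_deg[1]])
    roi_px = (np.array(roi_deg) * px_per_deg).astype(int)            # ROI size in pixels
    center = np.array([w / 2.0, h / 2.0]) + np.array(gaze_deg) * px_per_deg
    x0 = int(np.clip(center[0] - roi_px[0] / 2, 0, w - roi_px[0]))
    y0 = int(np.clip(center[1] - roi_px[1] / 2, 0, h - roi_px[1]))
    return frame[y0:y0 + roi_px[1], x0:x0 + roi_px[0]]
```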


Subject(s)
Blindness/physiopathology, Eye Movements/physiology, Head Movements/physiology, Visual Acuity/physiology, Visual Prosthesis, Aged, Blindness/etiology, Female, Humans, Male, Middle Aged, Psychomotor Performance/physiology, Retinitis Pigmentosa/complications, Visually Impaired Persons/rehabilitation
3.
J Neural Eng ; 13(2): 026017, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26863276

ABSTRACT

OBJECTIVE: We used native sensorimotor representations of fingers in a brain-machine interface (BMI) to achieve immediate online control of individual prosthetic fingers. APPROACH: Using high gamma responses recorded with a high-density electrocorticography (ECoG) array, we rapidly mapped the functional anatomy of cued finger movements. We used these cortical maps to select ECoG electrodes for a hierarchical linear discriminant analysis classification scheme to predict: (1) if any finger was moving, and, if so, (2) which digit was moving. To account for sensory feedback, we also mapped the spatiotemporal activation elicited by vibrotactile stimulation. Finally, we used this prediction framework to provide immediate online control over individual fingers of the Johns Hopkins University Applied Physics Laboratory modular prosthetic limb. MAIN RESULTS: The balanced classification accuracy for detection of movements during the online control session was 92% (chance: 50%). At the onset of movement, finger classification accuracy was 76% (chance: 20%), and 88% (chance: 25%) when pinky and ring finger movements were coupled. Balanced accuracy for fully flexing the cued finger was 64%, and 77% when pinky and ring commands were combined. Offline decoding yielded a peak finger decoding accuracy of 96.5% (chance: 20%) when using an optimized selection of electrodes. Offline analysis demonstrated significant finger-specific activations throughout sensorimotor cortex. Activations either prior to movement onset or during sensory feedback led to discriminable finger control. SIGNIFICANCE: Our results demonstrate the ability of ECoG-based BMIs to leverage the native functional anatomy of sensorimotor cortical populations to immediately control individual finger movements in real time.
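
The two-stage ("hierarchical") classification scheme described here, first detecting whether any finger is moving and then deciding which one, can be sketched with off-the-shelf linear discriminant analysis. The feature extraction, electrode selection, and label coding used in the actual study are not reproduced; everything below is an illustrative assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class HierarchicalFingerDecoder:
    """Illustrative two-stage decoder: move/no-move, then which finger."""

    def __init__(self):
        self.detector = LinearDiscriminantAnalysis()    # stage 1: movement vs. rest
        self.classifier = LinearDiscriminantAnalysis()  # stage 2: finger identity

    def fit(self, high_gamma, finger_label):
        # finger_label: 0 = rest, 1..5 = thumb..pinky (hypothetical coding)
        high_gamma = np.asarray(high_gamma)
        finger_label = np.asarray(finger_label)
        moving = finger_label > 0
        self.detector.fit(high_gamma, moving)
        self.classifier.fit(high_gamma[moving], finger_label[moving])
        return self

    def predict(self, high_gamma):
        high_gamma = np.asarray(high_gamma)
        out = np.zeros(len(high_gamma), dtype=int)
        moving = self.detector.predict(high_gamma).astype(bool)
        if moving.any():
            out[moving] = self.classifier.predict(high_gamma[moving])
        return out
```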


Subject(s)
Artificial Limbs, Electrocorticography/methods, Implanted Electrodes, Fingers/physiology, Movement/physiology, Sensorimotor Cortex/physiology, Brain-Computer Interfaces, Humans, Male, User-Computer Interface, Vibration, Young Adult
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 411-414, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268360

ABSTRACT

Retinal prosthetic devices can significantly and positively impact the ability of visually challenged individuals to live a more independent life. We describe a visual processing system that leverages image analysis techniques to produce visual patterns and allows the user to more effectively perceive their environment. These patterns are used to stimulate a retinal prosthesis, allowing self-guidance and a higher degree of autonomy for the affected individual. Specifically, we describe an image processing pipeline that allows for object and face localization in cluttered environments as well as various contrast enhancement strategies in the "implanted image." Finally, we describe a real-time implementation and deployment of this system on the Argus II platform. We believe that these advances can significantly improve the effectiveness of the next generation of retinal prostheses.
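
The sketch below captures the two stages mentioned in this abstract, face localization in a cluttered scene followed by contrast enhancement of the region sent to the implant, using generic OpenCV building blocks. The abstract does not disclose the actual Argus II processing chain, so every step here is an assumption, not the deployed algorithm.

```python
import cv2

# Standard OpenCV Haar cascade for frontal faces, a generic stand-in for the
# paper's face-localization step (not the detector actually deployed).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def implant_image(frame, out_size=(60, 60)):
    """Localize the most prominent face and return a contrast-enhanced patch."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
        gray = gray[y:y + h, x:x + w]
    enhanced = cv2.createCLAHE(clipLimit=2.0).apply(gray)    # local contrast enhancement
    return cv2.resize(enhanced, out_size)                    # downsample toward the electrode grid
```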


Subject(s)
Algorithms, Face, Visual Prosthesis, Humans, Computer-Assisted Image Processing, Visual Pattern Recognition/physiology, Visually Impaired Persons
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 5443-5446, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28269489

ABSTRACT

Spatial mapping, i.e., determining the location in space of the percept elicited by an implanted electrode's electrical stimulation, is important in the design of visual prostheses. Generally, a visual prosthesis system consists of an implanted electrode array, an external camera that acquires the image, and a transmitter that sends the information to the implanted electrodes. In cortical visual implants, the layout of the implanted array in most cases does not match the retinotopic map, so it is necessary to find the location of each electrode's percept in world coordinates. Herein, we show the feasibility of using eye movements as markers to construct the spatial map of the implanted electrodes. A blind patient implanted with the Argus II retinal prosthesis was instructed to make an eye movement to the location of the percept generated by electrical stimulation at different retinal locations. By analyzing the eye movements triggered by the electrical stimulation, we were able to reconstruct the spatial map of the electrodes. Our experiment demonstrates that a blind person still maintains control of eye movements, which can be used to map the percept location of the implanted electrodes.
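
One simple way to picture the mapping procedure: each electrode is stimulated repeatedly, the endpoint of the saccade it evokes is recorded each time, and the electrode's percept location is estimated from those endpoints. The per-electrode median used below is an assumption for illustration, not the paper's analysis.

```python
import numpy as np
from collections import defaultdict

def map_electrodes(trials):
    """Estimate a percept location per electrode from stimulation-evoked saccades.

    trials : iterable of (electrode_id, saccade_endpoint_xy) pairs, one per stimulation.
    Returns {electrode_id: (x, y)} using the median endpoint as the percept estimate.
    """
    endpoints = defaultdict(list)
    for electrode, xy in trials:
        endpoints[electrode].append(xy)
    return {e: tuple(np.median(np.asarray(pts), axis=0)) for e, pts in endpoints.items()}
```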


Subject(s)
Blindness/therapy, Eye Movements/physiology, Visual Prosthesis/standards, Electric Stimulation, Implanted Electrodes, Feasibility Studies, Humans, Prosthesis Implantation
6.
IEEE Trans Neural Syst Rehabil Eng ; 22(4): 784-96, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24760914

ABSTRACT

To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
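
The shared-control loop described in this abstract (eye tracking and computer vision nominate a target, a brain-derived "go" signal triggers a semi-autonomous reach-grasp-and-drop) can be pictured as a small supervisory state machine. The component interfaces below are hypothetical placeholders, not HARMONIE's APIs.

```python
def harmonie_step(gaze_target, movement_detected, robot):
    """One cycle of an illustrative hybrid supervisory controller.

    gaze_target       : object pose selected via eye tracking + computer vision, or None.
    movement_detected : boolean output of the neural movement classifier (the BMI 'go' cue).
    robot             : hypothetical interface exposing a semi-autonomous routine.
    """
    if gaze_target is None:
        return "waiting for target"
    if not movement_detected:
        return "target selected, awaiting neural go signal"
    robot.reach_grasp_and_drop(gaze_target)   # autonomy handles the low-level trajectory
    return "task dispatched to robot"
```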


Subject(s)
Artificial Intelligence, Artificial Limbs, Brain-Computer Interfaces, Electroencephalography/methods, Eye Movements, Robotics/instrumentation, Adult, Electroencephalography/instrumentation, Equipment Failure Analysis, Female, Humans, Male, Man-Machine Systems, Pilot Projects, Prosthesis Design, Robotics/methods, Computer-Assisted Therapy/instrumentation, Computer-Assisted Therapy/methods
7.
Clin Transl Sci ; 7(1): 52-9, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24528900

ABSTRACT

Our research group recently demonstrated that a person with tetraplegia could use a brain-computer interface (BCI) to control a sophisticated anthropomorphic robotic arm with skill and speed approaching that of an able-bodied person. This multiyear study exemplifies important principles in translating research from foundational theory and animal experiments into a clinical study. We present a roadmap that may serve as an example for other areas of clinical device research as well as an update on study results. Prior to conducting a multiyear clinical trial, years of animal research preceded BCI testing in an epilepsy monitoring unit, and then in a short-term (28 days) clinical investigation. Scientists and engineers developed the necessary robotic and surgical hardware, software environment, data analysis techniques, and training paradigms. Coordination among researchers, funding institutes, and regulatory bodies ensured that the study would provide valuable scientific information in a safe environment for the study participant. Finally, clinicians from neurosurgery, anesthesiology, physiatry, psychology, and occupational therapy all worked in a multidisciplinary team along with the other researchers to conduct a multiyear BCI clinical study. This teamwork and coordination can be used as a model for others attempting to translate basic science into real-world clinical situations.


Subject(s)
Artificial Limbs, Brain-Computer Interfaces, Adult, Animals, Artificial Limbs/statistics & numerical data, Brain-Computer Interfaces/statistics & numerical data, Cooperative Behavior, Electroencephalography, Humans, Male, Animal Models, Primates, Prosthesis Design, Quadriplegia/rehabilitation, Robotics/instrumentation, Robotics/statistics & numerical data, Software, Spinal Cord Injuries/rehabilitation, Translational Biomedical Research, User-Computer Interface
8.
IEEE Trans Neural Syst Rehabil Eng ; 22(3): 695-705, 2014 May.
Article in English | MEDLINE | ID: mdl-24235276

ABSTRACT

Intracranial electroencephalographic (iEEG) signals from two human subjects were used to achieve simultaneous neural control of reaching and grasping movements with the Johns Hopkins University Applied Physics Lab (JHU/APL) Modular Prosthetic Limb (MPL), a dexterous robotic prosthetic arm. We performed functional mapping of high gamma activity while the subjects made reaching and grasping movements to identify task-selective electrodes. Independent, online control of reaching and grasping was then achieved using high gamma activity from a small subset of electrodes, with a model trained on short blocks of reaching and grasping and no further adaptation. Classification accuracy did not decline (p < 0.05, one-way ANOVA) over three blocks of testing in either subject. Mean classification accuracy during independently executed overt reach and grasp movements for (Subject 1, Subject 2) was (0.85, 0.81) and (0.80, 0.96), respectively, and during simultaneous execution it was (0.83, 0.88) and (0.58, 0.88), respectively. Our models leveraged knowledge of each subject's individual functional neuroanatomy for reaching and grasping movements, allowing rapid acquisition of control in a time-sensitive clinical setting. We demonstrate the potential feasibility of verifying functionally meaningful iEEG-based control of the MPL prior to chronic implantation, during which additional capabilities of the MPL might be exploited with further training.
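
"Independent, online control of reaching and grasping" amounts to running two detectors in parallel on the same high gamma feature vector, one for reach and one for grasp, so both commands can be active at once. A minimal sketch using two linear classifiers follows; the study's actual model, features, and training protocol are not specified here, so treat the details as assumptions.

```python
from sklearn.linear_model import LogisticRegression

class ReachGraspDecoder:
    """Illustrative parallel decoders for simultaneous reach and grasp commands."""

    def __init__(self):
        self.reach = LogisticRegression(max_iter=1000)
        self.grasp = LogisticRegression(max_iter=1000)

    def fit(self, high_gamma, reach_label, grasp_label):
        # Trained on short blocks of isolated reach and grasp movements.
        self.reach.fit(high_gamma, reach_label)
        self.grasp.fit(high_gamma, grasp_label)
        return self

    def predict(self, high_gamma):
        # Independent outputs: both commands may be issued on the same sample.
        return self.reach.predict(high_gamma), self.grasp.predict(high_gamma)
```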


Subject(s)
Artificial Limbs, Electroencephalography/methods, Hand Strength/physiology, Psychomotor Performance/physiology, Adult, Anthropometry, Implanted Electrodes, Female, Humans, Male, Middle Aged, Online Systems, Reproducibility of Results