Nature ; 620(7976): 1037-1046, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612505

ABSTRACT

Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive1. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
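The text-decoding results above are reported as a median word error rate (WER) of 25%. WER is the standard metric for this task: the word-level edit (Levenshtein) distance between the decoded sentence and the reference sentence, divided by the reference length. The following is a minimal illustrative sketch of that computation, not code from the study itself:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

A 25% WER thus means that, in the median, one in four reference words required an insertion, deletion, or substitution to match the decoded output.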


Subject(s)
Face , Neural Prostheses , Paralysis , Speech , Humans , Cerebral Cortex/physiology , Cerebral Cortex/physiopathology , Clinical Trials as Topic , Communication , Deep Learning , Gestures , Movement , Neural Prostheses/standards , Paralysis/physiopathology , Paralysis/rehabilitation , Vocabulary , Voice