Nature; 620(7976): 1037-1046, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612505

ABSTRACT

Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive¹. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
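
As a point of reference for the reported median word error rate (WER) of 25%, the sketch below shows how WER is conventionally computed: the word-level edit distance (substitutions, insertions and deletions) between the decoded sentence and the reference, divided by the number of reference words. This is an illustrative Python example only; the function name and the sample sentences are hypothetical and are not taken from the study or its code.

```python
# Illustrative WER sketch (not the authors' code). A median WER of 25% means
# that, for a typical sentence, roughly one in four reference words must be
# substituted, inserted or deleted to match the decoded output.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

if __name__ == "__main__":
    # Hypothetical sentences, not drawn from the study's sentence sets.
    print(word_error_rate("i would like a glass of water",
                          "i would like glass of water please"))  # ~0.286
```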


Subjects
Face , Neural Prostheses , Paralysis , Speech , Humans , Cerebral Cortex/physiology , Cerebral Cortex/physiopathology , Clinical Trials as Topic , Communication , Deep Learning , Gestures , Movement , Neural Prostheses/standards , Paralysis/physiopathology , Paralysis/rehabilitation , Vocabulary , Voice