Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity.
Angrick, Miguel; Ottenhoff, Maarten C; Diener, Lorenz; Ivucic, Darius; Ivucic, Gabriel; Goulis, Sophocles; Saal, Jeremy; Colon, Albert J; Wagner, Louis; Krusienski, Dean J; Kubben, Pieter L; Schultz, Tanja; Herff, Christian.
Affiliation
  • Angrick M; Cognitive Systems Lab, University of Bremen, Bremen, Germany. miguel.angrick@uni-bremen.de.
  • Ottenhoff MC; Department of Neurosurgery, School of Mental Health and Neurosciences, Maastricht University, Maastricht, The Netherlands.
  • Diener L; Cognitive Systems Lab, University of Bremen, Bremen, Germany.
  • Ivucic D; Cognitive Systems Lab, University of Bremen, Bremen, Germany.
  • Ivucic G; Cognitive Systems Lab, University of Bremen, Bremen, Germany.
  • Goulis S; Department of Neurosurgery, School of Mental Health and Neurosciences, Maastricht University, Maastricht, The Netherlands.
  • Saal J; Department of Neurosurgery, School of Mental Health and Neurosciences, Maastricht University, Maastricht, The Netherlands.
  • Colon AJ; Academic Center for Epileptology, Kempenhaeghe/Maastricht University Medical Center, Kempenhaeghe, The Netherlands.
  • Wagner L; Academic Center for Epileptology, Kempenhaeghe/Maastricht University Medical Center, Kempenhaeghe, The Netherlands.
  • Krusienski DJ; ASPEN Lab, Biomedical Engineering Department, Virginia Commonwealth University, Richmond, VA, USA.
  • Kubben PL; Department of Neurosurgery, School of Mental Health and Neurosciences, Maastricht University, Maastricht, The Netherlands.
  • Schultz T; Cognitive Systems Lab, University of Bremen, Bremen, Germany.
  • Herff C; Department of Neurosurgery, School of Mental Health and Neurosciences, Maastricht University, Maastricht, The Netherlands.
Commun Biol; 4(1): 1055, 2021 Sep 23.
Article in En | MEDLINE | ID: mdl-34556793
ABSTRACT
Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals who have severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research utilizes data collected during open-loop experiments of articulated speech, which might not directly translate to imagined speech processes. Here, we present an approach that synthesizes audible speech in real time for both imagined and whispered speech conditions. Using a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step towards investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.
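The abstract describes a closed-loop pipeline that maps ongoing stereotactic EEG (sEEG) activity to audible speech in real time. As a rough illustration of how such a real-time decoding loop can be structured, the Python sketch below slides a short window over multichannel neural data, extracts high-gamma band power per channel, and applies a linear mapping to spectral speech features. The sampling rate, channel count, window and hop sizes, feature band, number of spectral coefficients, and the placeholder decoder weights are all assumptions chosen for illustration; they are not taken from the paper, and the final vocoder stage is omitted.

    """
    Minimal sketch of a real-time speech-decoding loop, assuming high-gamma
    feature extraction followed by a linear mapping to spectral speech
    features. Illustrative reconstruction only, not the authors' code; all
    signals, model weights, and parameter values are placeholders.
    """
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS_NEURAL = 1024          # sEEG sampling rate in Hz (assumed)
    N_CHANNELS = 64           # number of depth-electrode contacts (assumed)
    WIN_S, HOP_S = 0.05, 0.01 # 50 ms analysis windows, 10 ms hop (assumed)
    N_SPEC = 23               # number of decoded spectral coefficients (assumed)

    # Band-pass filter for the high-gamma band (70-170 Hz), a common feature
    # range for speech decoding from intracranial recordings.
    sos_hg = butter(4, [70, 170], btype="bandpass", fs=FS_NEURAL, output="sos")

    # Placeholder linear decoder: spectral frame = W @ features + b.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(N_SPEC, N_CHANNELS))
    b = np.zeros(N_SPEC)

    def high_gamma_power(window: np.ndarray) -> np.ndarray:
        """Log high-gamma band power per channel for one window (channels x samples)."""
        filtered = sosfiltfilt(sos_hg, window, axis=1)
        return np.log(np.mean(filtered ** 2, axis=1) + 1e-10)

    def decode_frame(window: np.ndarray) -> np.ndarray:
        """Map one window of neural data to a frame of spectral speech features."""
        return W @ high_gamma_power(window) + b

    def stream_decode(neural: np.ndarray):
        """Slide over a (channels x samples) buffer and yield decoded spectral frames."""
        win, hop = int(WIN_S * FS_NEURAL), int(HOP_S * FS_NEURAL)
        for start in range(0, neural.shape[1] - win + 1, hop):
            yield decode_frame(neural[:, start:start + win])

    if __name__ == "__main__":
        # Simulated 2 s of neural data stands in for the live sEEG stream.
        neural = rng.normal(size=(N_CHANNELS, 2 * FS_NEURAL))
        frames = np.stack(list(stream_decode(neural)))  # shape: (n_frames, N_SPEC)
        print("decoded spectral frames:", frames.shape)
        # A vocoder (e.g. Griffin-Lim or LPC synthesis) would convert these
        # frames into an audible waveform; that stage is omitted here.

The key design point this sketch tries to convey is the frame-by-frame structure: each short window is decoded independently and immediately, which is what allows audio to be produced with low latency rather than after the whole utterance has been recorded.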
Subjects

Full text: 1 Database: MEDLINE Main subject: Quality of Life / Speech / Implanted Electrodes / Neural Prostheses / Brain-Computer Interfaces Language: En Publication year: 2021 Document type: Article
