Continuous synthesis of artificial speech sounds from human cortical surface recordings during silent speech production.
Meng, Kevin; Goodarzy, Farhad; Kim, EuiYoung; Park, Ye Jin; Kim, June Sic; Cook, Mark J; Chung, Chun Kee; Grayden, David B.
Affiliation
  • Meng K; Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia.
  • Goodarzy F; Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia.
  • Kim E; Department of Medicine, St Vincent's Hospital, The University of Melbourne, Melbourne, Australia.
  • Park YJ; Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea.
  • Kim JS; Department of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea.
  • Cook MJ; Research Institute of Basic Sciences, Seoul National University, Seoul, Republic of Korea.
  • Chung CK; Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia.
  • Grayden DB; Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia.
J Neural Eng. 2023 Jul 27;20(4).
Article in En | MEDLINE | ID: mdl-37459853
ABSTRACT
Objective. Brain-computer interfaces can restore various forms of communication in paralyzed patients who have lost their ability to articulate intelligible speech. This study aimed to demonstrate the feasibility of closed-loop synthesis of artificial speech sounds from human cortical surface recordings during silent speech production.

Approach. Ten participants with intractable epilepsy were temporarily implanted with intracranial electrode arrays over cortical surfaces. A decoding model that predicted audible outputs directly from patient-specific neural feature inputs was trained during overt word reading and immediately tested with overt, mimed, and imagined word reading. Predicted outputs were later assessed objectively against corresponding voice recordings and subjectively through human perceptual judgments.

Main results. Artificial speech sounds were successfully synthesized during overt and mimed utterances by two participants with some coverage of the precentral gyrus. About a third of these sounds were correctly identified by naïve listeners in two-alternative forced-choice tasks. A similar outcome could not be achieved during imagined utterances by any of the participants. However, neural feature contribution analyses suggested the presence of exploitable activation patterns during imagined speech in the postcentral gyrus and the superior temporal gyrus. In future work, more comprehensive coverage of cortical surfaces, including posterior parts of the middle frontal gyrus and the inferior frontal gyrus, could improve synthesis performance during imagined speech.

Significance. As the field of speech neuroprostheses is rapidly moving toward clinical trials, this study addressed important considerations about task instructions and brain coverage when conducting research on silent speech with non-target participants.
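The record gives no implementation details, but the decoding approach described in the abstract (patient-specific neural features mapped directly to audible outputs, trained on overt reading and reapplied to mimed or imagined reading) can be illustrated schematically. The sketch below is a hypothetical, minimal stand-in rather than the authors' model: it assumes high-gamma band power features per electrode and time frame, a mel-spectrogram of the simultaneous voice recording as the target, and a multi-output ridge regression as the decoder; all names, shapes, and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical sketch of the pipeline described in the abstract:
# neural feature inputs -> audible (spectral) outputs.
# All shapes, feature choices, and data here are illustrative.
rng = np.random.default_rng(0)
n_frames, n_electrodes, n_mel = 5000, 64, 40

# Stand-in for high-gamma band power features, one row per time frame,
# extracted from cortical electrodes during overt word reading.
X_train = rng.standard_normal((n_frames, n_electrodes))

# Stand-in for the target: a mel-spectrogram of the simultaneous voice
# recording, aligned frame-by-frame with the neural features.
Y_train = rng.standard_normal((n_frames, n_mel))

# Train a linear decoder predicting every spectral bin at once
# (Ridge supports multi-output regression directly).
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# At test time (overt, mimed, or imagined reading), the same trained
# model is applied to new neural features; a vocoder would then turn
# the predicted spectrogram back into an audible waveform.
X_test = rng.standard_normal((200, n_electrodes))
Y_pred = decoder.predict(X_test)  # shape (200, n_mel)
print(Y_pred.shape)
```

Under these assumptions, the objective assessment mentioned in the abstract would reduce to frame-wise comparison of the predicted and reference spectrograms, for example via correlation per mel bin; the subjective assessment corresponds to the two-alternative forced-choice listening tasks described above.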
Full text: 1 | Collections: 01-international | Database: MEDLINE | Main subject: Speech / Phonetics | Study type: Prognostic_studies | Limits: Humans | Language: En | Publication year: 2023 | Document type: Article
