Responses to Visual Speech in Human Posterior Superior Temporal Gyrus Examined with iEEG Deconvolution.
Metzger, Brian A; Magnotti, John F; Wang, Zhengjia; Nesbitt, Elizabeth; Karas, Patrick J; Yoshor, Daniel; Beauchamp, Michael S.
Affiliation
  • Metzger BA; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
  • Magnotti JF; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
  • Wang Z; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
  • Nesbitt E; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
  • Karas PJ; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
  • Yoshor D; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
  • Beauchamp MS; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030 michael.beauchamp@bcm.edu.
J Neurosci; 40(36): 6938-6948, 2020 Sep 02.
Article in En | MEDLINE | ID: mdl-32727820
ABSTRACT
Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception, since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial electroencephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between modalities allowed the time course of the unisensory responses, and the interaction between them, to be estimated independently. We applied this procedure to electrodes implanted in human epilepsy patients (both male and female) over the posterior superior temporal gyrus (pSTG), a brain area known to be important for speech perception. iEEG deconvolution revealed sustained positive responses to visual-only speech and larger, phasic responses to auditory-only speech. Confirming results from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural computation. Leveraging the spatial resolution of iEEG, we extended these results to show that subadditivity is most pronounced in more posterior aspects of the pSTG. Across electrodes, subadditivity correlated with visual responsiveness, supporting a model in which visual speech enhances the efficiency of auditory speech processing in pSTG. The ability to separate neural processes may make iEEG deconvolution useful for studying a variety of complex cognitive and perceptual tasks.

SIGNIFICANCE STATEMENT
Understanding speech is one of the most important human abilities. Speech perception uses information from both the auditory and visual modalities. It has been difficult to study neural responses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike auditory-only and audiovisual speech. We used intracranial electroencephalography deconvolution to overcome this obstacle. We found that visual speech evokes a positive response in the human posterior superior temporal gyrus, enhancing the efficiency of auditory speech processing.
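The core idea of the deconvolution approach can be illustrated with a toy simulation. This is a minimal sketch of generic finite-impulse-response (FIR) deconvolution, not the authors' analysis code: the kernel shapes, jitter range, and noise level are all illustrative assumptions. Because the auditory-visual onset asynchrony varies from trial to trial, ordinary least squares on an FIR design matrix can separate the two overlapping unisensory response kernels from a single mixed recording.

```python
# Toy sketch of event-related (FIR) deconvolution with jittered onsets.
# All parameters and kernel shapes are illustrative assumptions, not the
# authors' actual stimuli or code.
import numpy as np

rng = np.random.default_rng(0)
n_samples, kernel_len, n_trials = 2000, 40, 30

# Ground-truth kernels: a phasic "auditory" response and a smaller,
# sustained "visual" response (shapes chosen only for illustration).
t = np.arange(kernel_len)
k_aud = np.exp(-t / 5.0) * np.sin(t / 3.0)
k_vis = 0.3 * (1.0 - np.exp(-t / 10.0))

# Every trial contains both modalities; the asynchrony is jittered.
aud_onsets = rng.permutation(np.arange(100, n_samples - 100, 60))[:n_trials]
vis_onsets = aud_onsets + rng.integers(-20, 20, n_trials)

def fir_design(onsets, n_samples, kernel_len):
    """Design matrix whose columns are onset trains shifted by 0..kernel_len-1."""
    X = np.zeros((n_samples, kernel_len))
    for lag in range(kernel_len):
        idx = onsets + lag
        X[idx[idx < n_samples], lag] = 1.0
    return X

X = np.hstack([fir_design(aud_onsets, n_samples, kernel_len),
               fir_design(vis_onsets, n_samples, kernel_len)])

# Simulated recording: the two responses overlap in time, plus noise.
y = X @ np.concatenate([k_aud, k_vis]) + 0.1 * rng.standard_normal(n_samples)

# Deconvolution = least-squares estimate of the stacked kernels.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
k_aud_hat, k_vis_hat = beta[:kernel_len], beta[kernel_len:]
```

The jitter is what makes the design matrix well conditioned: if the two onsets always occurred at a fixed asynchrony, the auditory and visual columns would be collinear and the kernels could not be separated.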
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Speech Perception / Temporal Lobe / Visual Perception / Evoked Potentials Limits: Adult / Female / Humans / Male Language: En Journal: J Neurosci Year of publication: 2020 Document type: Article