A representation of abstract linguistic categories in the visual system underlies successful lipreading.
Nidiffer, Aaron R; Cao, Cody Zhewei; O'Sullivan, Aisling; Lalor, Edmund C.
Affiliation
  • Nidiffer AR; Department of Biomedical Engineering, Department of Neuroscience, Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA.
  • Cao CZ; Department of Psychology, University of Michigan, Ann Arbor, MI, USA.
  • O'Sullivan A; School of Engineering, Trinity College Institute of Neuroscience, Trinity Centre for Biomedical Engineering, Trinity College, Dublin, Ireland.
  • Lalor EC; Department of Biomedical Engineering, Department of Neuroscience, Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA; School of Engineering, Trinity College Institute of Neuroscience, Trinity Centre for Biomedical Engineering, Trinity College, Dublin, Ireland.
Neuroimage; 282: 120391, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37757989
ABSTRACT
There is considerable debate over how visual speech is processed in the absence of sound and whether neural activity supporting lipreading occurs in visual brain areas. Much of the ambiguity stems from a lack of behavioral grounding and neurophysiological analyses that cannot disentangle high-level linguistic and phonetic/energetic contributions from visual speech. To address this, we recorded EEG from human observers as they watched silent videos, half of which were novel and half of which were previously rehearsed with the accompanying audio. We modeled how the EEG responses to novel and rehearsed silent speech reflected the processing of low-level visual features (motion, lip movements) and a higher-level categorical representation of linguistic units, known as visemes. The ability of these visemes to account for the EEG - beyond the motion and lip movements - was significantly enhanced for rehearsed videos in a way that correlated with participants' trial-by-trial ability to lipread that speech. Source localization of viseme processing showed clear contributions from visual cortex, with no strong evidence for the involvement of auditory areas. We interpret this as support for the idea that the visual system produces its own specialized representation of speech that is (1) well-described by categorical linguistic features, (2) dissociable from lip movements, and (3) predictive of lipreading ability. We also suggest a reinterpretation of previous findings of auditory cortical activation during silent speech that is consistent with hierarchical accounts of visual and audiovisual speech perception.
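The core analysis the abstract describes is a forward encoding model: time-lagged stimulus features are regressed onto the EEG, and the quantity of interest is how much predictive power viseme features add beyond low-level motion and lip-movement features. Below is a minimal sketch of that contrast using lagged ridge regression (in the spirit of the mTRF framework associated with this group). The sampling rate, lag window, viseme count, regularization strength, and placeholder random data are all assumptions for illustration, not details taken from this record.

```
# Sketch: compare EEG prediction accuracy of low-level visual features alone
# vs. low-level features plus a categorical viseme time series.
# All parameter values and data here are assumed placeholders.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
fs = 64                               # sampling rate in Hz (assumed)
n_samples, n_channels = fs * 60, 32   # 1 min of 32-channel data (assumed)
lags = np.arange(0, int(0.4 * fs))    # 0-400 ms stimulus-to-EEG lags (assumed)

# Placeholder stimulus features: frame-wise motion energy, lip aperture,
# and a one-hot viseme time series (the categorical linguistic units).
motion  = rng.standard_normal(n_samples)
lip     = rng.standard_normal(n_samples)
visemes = np.eye(12)[rng.integers(0, 12, n_samples)]    # 12 classes (assumed)
eeg     = rng.standard_normal((n_samples, n_channels))  # stand-in for EEG

def lag_matrix(features, lags):
    """Stack time-lagged copies of each feature column (zero-padded)."""
    features = np.atleast_2d(features.T).T
    cols = []
    for lag in lags:
        shifted = np.zeros_like(features)
        shifted[lag:] = features[: features.shape[0] - lag]
        cols.append(shifted)
    return np.hstack(cols)

def fit_predict(X, y, lam=1e2):
    """Fit ridge regression on the first half; return mean prediction
    correlation (Pearson r across channels) on the second half."""
    half = X.shape[0] // 2
    Xtr, Xte, ytr, yte = X[:half], X[half:], y[:half], y[half:]
    w = solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    pred = Xte @ w
    r = [np.corrcoef(pred[:, c], yte[:, c])[0, 1] for c in range(y.shape[1])]
    return np.mean(r)

X_low  = lag_matrix(np.column_stack([motion, lip]), lags)
X_full = np.hstack([X_low, lag_matrix(visemes, lags)])

r_low, r_full = fit_predict(X_low, eeg), fit_predict(X_full, eeg)
# The paper's key contrast is the viseme gain (r_full - r_low), compared
# between rehearsed and novel videos and related to lipreading accuracy.
print(f"low-level only r = {r_low:.3f}, + visemes r = {r_full:.3f}")
```

With real data, the viseme gain would be computed per trial and per condition; the abstract reports that this gain was larger for rehearsed videos and correlated with trial-by-trial lipreading performance.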
Full text: 1 | Collections: 01-international | Database: MEDLINE | Main subject: Auditory Cortex / Speech Perception | Study type: Prognostic studies | Limit: Humans | Language: English | Journal: Neuroimage | Journal subject: Diagnostic Imaging | Publication year: 2023 | Document type: Article | Country of affiliation: United States