EEG-based auditory attention decoding with audiovisual speech for hearing-impaired listeners.
Wang, Bo; Xu, Xiran; Niu, Yadong; Wu, Chao; Wu, Xihong; Chen, Jing.
Affiliation
  • Wang B; Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China.
  • Xu X; Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China.
  • Niu Y; Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China.
  • Wu C; School of Nursing, Peking University, Beijing 100191, China.
  • Wu X; Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China.
  • Chen J; National Biomedical Imaging Center, College of Future Technology, Beijing 100871, China.
Cereb Cortex; 33(22): 10972-10983, 2023 Nov 4.
Article in English | MEDLINE | ID: mdl-37750333
Auditory attention decoding (AAD) is used to determine the attended speaker during an auditory selective attention task. However, the auditory factors modulating AAD remain unclear for hearing-impaired (HI) listeners. In this study, scalp electroencephalography (EEG) was recorded during an auditory selective attention paradigm in which HI listeners were instructed to attend to one of two simultaneous speech streams, with or without congruent visual input (articulation movements), at a high or low target-to-masker ratio (TMR). Meanwhile, behavioral hearing tests (i.e., audiogram, speech reception threshold, and temporal modulation transfer function) were used to assess listeners' individual auditory abilities. The results showed that both visual input and an increased TMR significantly enhanced cortical tracking of the attended speech and AAD accuracy. Further analysis revealed that the audiovisual (AV) gain in attended-speech cortical tracking was significantly correlated with listeners' auditory amplitude modulation (AM) sensitivity, and that the TMR gain in attended-speech cortical tracking was significantly correlated with listeners' hearing thresholds. Temporal response function analysis revealed that subjects with higher AM sensitivity showed more AV gain over the right occipitotemporal and bilateral frontocentral scalp electrodes.
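AAD of the kind described in the abstract is commonly implemented as linear stimulus reconstruction: a decoder maps multichannel EEG to an estimate of the speech envelope, and the candidate stream whose envelope correlates best with the reconstruction is judged attended. The following is a minimal sketch of that general technique, not the authors' actual pipeline; the function names, the least-squares decoder, and the toy data are all illustrative assumptions.

```python
import numpy as np

def decode_attention(eeg, env_a, env_b, decoder):
    """Pick the attended stream by envelope reconstruction.

    eeg     : (n_samples, n_channels) EEG segment
    env_a   : (n_samples,) speech envelope of stream A
    env_b   : (n_samples,) speech envelope of stream B
    decoder : (n_channels,) linear reconstruction weights
    Returns 0 if stream A is judged attended, else 1.
    """
    recon = eeg @ decoder                  # reconstructed envelope from EEG
    r_a = np.corrcoef(recon, env_a)[0, 1]  # Pearson r with stream A
    r_b = np.corrcoef(recon, env_b)[0, 1]  # Pearson r with stream B
    return 0 if r_a > r_b else 1

# Toy demo: synthetic EEG that is (noisily) driven by envelope A.
rng = np.random.default_rng(0)
n_samples, n_channels = 1000, 8
env_a = rng.standard_normal(n_samples)
env_b = rng.standard_normal(n_samples)
mix = rng.standard_normal(n_channels)            # per-channel mixing weights
eeg = np.outer(env_a, mix) + 0.5 * rng.standard_normal((n_samples, n_channels))

# Train a least-squares decoder on the attended envelope (stream A).
decoder = np.linalg.lstsq(eeg, env_a, rcond=None)[0]
print(decode_attention(eeg, env_a, env_b, decoder))  # prints 0 (stream A)
```

In practice such decoders are trained with time-lagged EEG features and regularization, and evaluated on held-out trials; this sketch omits those steps to show only the correlation-based decision.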
Full text: 1 Database: MEDLINE Main subject: Speech Perception / Hearing Loss Limits: Humans Language: English Journal: Cereb Cortex Journal subject: Brain Year: 2023 Document type: Article Country of affiliation: China
