Decoding selective auditory attention with EEG using a transformer model.
Xu, Zihao; Bai, Yanru; Zhao, Ran; Hu, Hongmei; Ni, Guangjian; Ming, Dong.
Affiliation
  • Xu Z; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China.
  • Bai Y; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China.
  • Zhao R; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China.
  • Hu H; Medizinische Physik, Carl von Ossietzky Universität Oldenburg and Cluster of Excellence "Hearing4all", Küpkersweg 74, 26129, Oldenburg, Germany.
  • Ni G; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072 China.
  • Ming D; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072 China.
Methods; 204: 410-417, 2022 Aug.
Article in En | MEDLINE | ID: mdl-35447360
ABSTRACT
The human auditory system extracts relevant information in noisy environments while ignoring other distractions, relying primarily on auditory attention. Studies have shown that the cerebral cortex responds differently to different sound source locations and that auditory attention is time-varying. In this work, we proposed a data-driven encoder-decoder architecture for auditory attention detection (AAD), denoted the AAD-transformer. The model contains temporal self-attention and channel attention modules and reconstructs the speech envelope by dynamically assigning weights to the electroencephalogram (EEG) according to these temporal self-attention and channel attention mechanisms. In addition, the model is fully data-driven and requires no additional preprocessing steps. The proposed model was validated on a binaural listening dataset in which the speech stimuli were Mandarin, and was compared with other models. The results showed that the decoding accuracy of the AAD-transformer with a 0.15-second decoding window was 76.35%, much higher than that of a linear model using the temporal response function with a 3-second decoding window (an increase of 16.27%). This work provides a novel auditory attention detection method whose data-driven nature makes it well suited to neural-steered hearing devices, especially for users of tonal languages.
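For illustration only, the following is a minimal, hypothetical Python/PyTorch sketch (not the authors' published code) of how a transformer-style AAD decoder of this kind could be structured: a channel attention module reweights EEG channels, a temporal self-attention encoder models dynamics over the decoding window, and a linear readout reconstructs the speech envelope. All layer sizes, module names, and the 0.15-second / 128 Hz windowing are assumptions; in a typical AAD pipeline the reconstructed envelope would then be correlated (e.g., Pearson correlation) with each candidate speaker's envelope to decide which speaker is attended.

    # Hypothetical sketch of a transformer-style AAD envelope decoder.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Learned reweighting of EEG channels (squeeze-and-excitation style)."""
        def __init__(self, n_channels: int, reduction: int = 4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(n_channels, n_channels // reduction),
                nn.ReLU(),
                nn.Linear(n_channels // reduction, n_channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                      # x: (batch, time, channels)
            weights = self.fc(x.mean(dim=1))       # pool over time -> (batch, channels)
            return x * weights.unsqueeze(1)        # scale each channel

    class AADTransformerSketch(nn.Module):
        """Encoder-decoder sketch: multichannel EEG -> speech-envelope estimate."""
        def __init__(self, n_channels: int = 64, d_model: int = 64,
                     n_heads: int = 4, n_layers: int = 2):
            super().__init__()
            self.channel_attn = ChannelAttention(n_channels)
            self.proj = nn.Linear(n_channels, d_model)
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.temporal_attn = nn.TransformerEncoder(encoder_layer, n_layers)
            self.readout = nn.Linear(d_model, 1)

        def forward(self, eeg):                    # eeg: (batch, time, channels)
            x = self.channel_attn(eeg)             # channel attention
            x = self.proj(x)                       # project to model dimension
            x = self.temporal_attn(x)              # temporal self-attention
            return self.readout(x).squeeze(-1)     # (batch, time) envelope

    if __name__ == "__main__":
        # A 0.15 s window at an assumed 128 Hz sampling rate is ~19 samples.
        eeg = torch.randn(8, 19, 64)               # (batch, time, channels)
        envelope = AADTransformerSketch()(eeg)
        print(envelope.shape)                      # torch.Size([8, 19])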
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Speech Perception Study type: Prognostic_studies Limits: Humans Language: En Journal: Methods Journal subject: BIOCHEMISTRY Year of publication: 2022 Document type: Article