Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding.
Ma, Xinzhi; Chen, Weihai; Pei, Zhongcai; Zhang, Yue; Chen, Jianer.
Affiliation
  • Ma X; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China.
  • Chen W; School of Electrical Engineering and Automation, Anhui University, Hefei, China. Electronic address: whchen@buaa.edu.cn.
  • Pei Z; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China.
  • Zhang Y; Hangzhou Innovation Institute, Beihang University, Hangzhou, China.
  • Chen J; Department of Geriatric Rehabilitation, Third Affiliated Hospital, Zhejiang Chinese Medical University, Hangzhou, China.
Comput Biol Med; 175: 108504, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701593
ABSTRACT
Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain-computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of the convolutional kernel, a CNN extracts features only from local regions and ignores long-term dependencies in the EEG. Beyond long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it offers a more comprehensive view of the temporal dynamics of the underlying neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between the average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method, signal segmentation and recombination, is proposed to improve the generalization capability of the network. Experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that the proposed method outperforms state-of-the-art methods, achieving a four-class average accuracy of 85.03% on the BCIC-IV-2a dataset. These results demonstrate the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provide a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
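To make the fusion idea concrete, below is a minimal PyTorch sketch of the two-stream design the abstract describes: average-pooled and variance-pooled temporal features, a self-attention module with shared weights applied to both streams, and a small convolutional encoder that fuses them. This is an illustrative reconstruction, not the authors' released code; the layer names, sizes, and the 1x1-convolution fusion are assumptions (the official implementation is at https://github.com/Ma-Xinzhi/EEG-TransNet).

```python
# Illustrative sketch of average/variance pooling with a shared
# self-attention module and convolutional fusion. All hyperparameters
# are assumptions, not values from the paper.
import torch
import torch.nn as nn

class AttnTemporalFusion(nn.Module):
    """Pool CNN features into average and variance streams, attend over
    both with shared weights, then fuse with a 1x1 conv encoder."""

    def __init__(self, channels: int, pool: int = 8, heads: int = 4):
        super().__init__()
        self.avg_pool = nn.AvgPool1d(pool, stride=pool)
        # One attention module reused for both streams (shared weights).
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Convolutional encoder fusing the two attended streams.
        self.fuse = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def _var_pool(self, x: torch.Tensor) -> torch.Tensor:
        # Variance over each pooling window: E[x^2] - (E[x])^2.
        mean = self.avg_pool(x)
        return self.avg_pool(x ** 2) - mean ** 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) feature map from a CNN backbone.
        streams = []
        for pooled in (self.avg_pool(x), self._var_pool(x)):
            seq = pooled.transpose(1, 2)        # (batch, time', channels)
            out, _ = self.attn(seq, seq, seq)   # same weights per stream
            streams.append(out.transpose(1, 2))
        return self.fuse(torch.cat(streams, dim=1))

# Example: batch of 8 trials, 64 feature channels, 1000 time samples.
x = torch.randn(8, 64, 1000)
fused = AttnTemporalFusion(channels=64)(x)  # -> (8, 64, 125)
```

The abstract also proposes signal segmentation and recombination as data augmentation. A common form of this technique, sketched below under the assumption that same-class trials are cut into equal-length temporal segments and segments from randomly chosen trials are spliced into new artificial trials, is:

```python
# Hedged sketch of segmentation-and-recombination augmentation;
# segment count and output size are illustrative choices.
import numpy as np

def segment_recombine(trials, labels, n_segments=4, n_new=100, rng=None):
    """trials: (n_trials, n_channels, n_samples); labels: (n_trials,).
    Returns n_new artificial trials built from same-class segments."""
    rng = rng or np.random.default_rng()
    seg_len = trials.shape[-1] // n_segments  # trailing samples dropped
    new_x, new_y = [], []
    for _ in range(n_new):
        cls = rng.choice(np.unique(labels))
        pool = np.flatnonzero(labels == cls)
        # Take segment i from a randomly chosen trial of the same class.
        parts = [trials[rng.choice(pool), :, i * seg_len:(i + 1) * seg_len]
                 for i in range(n_segments)]
        new_x.append(np.concatenate(parts, axis=-1))
        new_y.append(cls)
    return np.stack(new_x), np.asarray(new_y)
```

Because every segment keeps its original temporal position and class label, the recombined trials preserve the coarse temporal structure of the MI task while adding variability, which is the usual rationale for this family of augmentations.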

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Neural Networks, Computer / Electroencephalography / Brain-Computer Interfaces Limits: Humans Language: En Journal: Comput Biol Med Year: 2024 Document type: Article Affiliation country: China