Emotion Classification Based on Transformer and CNN for EEG Spatial-Temporal Feature Learning.
Yao, Xiuzhen; Li, Tianwen; Ding, Peng; Wang, Fan; Zhao, Lei; Gong, Anmin; Nan, Wenya; Fu, Yunfa.
Affiliations
  • Yao X; Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
  • Li T; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China.
  • Ding P; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China.
  • Wang F; Faculty of Science, Kunming University of Science and Technology, Kunming 650500, China.
  • Zhao L; Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
  • Gong A; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China.
  • Nan W; Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
  • Fu Y; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China.
Brain Sci ; 14(3)2024 Mar 11.
Article en En | MEDLINE | ID: mdl-38539656
ABSTRACT

OBJECTIVES:

The temporal and spatial information in electroencephalogram (EEG) signals is crucial for recognizing features in emotion classification models, but existing approaches rely heavily on manual feature extraction. The transformer model can perform automatic feature extraction; however, its potential has not been fully explored in the classification of emotion-related EEG signals. To address these challenges, the present study proposes a novel model based on a transformer and convolutional neural networks (TCNN) for EEG spatial-temporal (EEG ST) feature learning and automatic emotion classification.

METHODS:

The proposed EEG ST-TCNN model utilizes position encoding (PE) and multi-head attention to perceive channel positions and timing information in EEG signals. Two parallel transformer encoders in the model extract spatial and temporal features from emotion-related EEG signals, and a CNN aggregates these spatial and temporal features, which are subsequently classified using softmax.
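The architecture described above can be sketched in PyTorch as follows. This is an illustrative reconstruction from the abstract alone, not the authors' implementation: the layer sizes, number of encoder layers, pooling strategy, and fusion scheme (stacking mean-pooled spatial and temporal feature vectors before a small 1-D CNN) are all assumptions, and the class and parameter names (`STTCNN`, `d_model`, etc.) are hypothetical.

```python
# Hedged sketch of the EEG ST-TCNN idea: two parallel transformer encoders,
# one attending over channels (spatial) and one over time points (temporal),
# fused by a small CNN and classified with softmax.
import torch
import torch.nn as nn


def make_encoder(d_model: int, nhead: int = 4, num_layers: int = 2) -> nn.TransformerEncoder:
    """A small transformer encoder stack (sizes are illustrative)."""
    layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)


class STTCNN(nn.Module):
    def __init__(self, n_channels: int = 62, n_samples: int = 200,
                 n_classes: int = 3, d_model: int = 64):
        super().__init__()
        # Project each EEG axis into the model dimension:
        # one token per channel (spatial), one token per sample (temporal).
        self.spatial_proj = nn.Linear(n_samples, d_model)
        self.temporal_proj = nn.Linear(n_channels, d_model)
        # Learnable position encodings mark channel location and time order.
        self.spatial_pe = nn.Parameter(torch.zeros(1, n_channels, d_model))
        self.temporal_pe = nn.Parameter(torch.zeros(1, n_samples, d_model))
        # Two parallel transformer encoders, as in the abstract.
        self.spatial_enc = make_encoder(d_model)
        self.temporal_enc = make_encoder(d_model)
        # A small CNN aggregates the stacked spatial/temporal feature vectors.
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples) raw EEG segment
        s = self.spatial_enc(self.spatial_proj(x) + self.spatial_pe)
        t = self.temporal_enc(self.temporal_proj(x.transpose(1, 2)) + self.temporal_pe)
        # Mean-pool each token sequence, stack as two "feature maps".
        fused = torch.stack([s.mean(dim=1), t.mean(dim=1)], dim=1)  # (batch, 2, d_model)
        logits = self.head(self.cnn(fused).squeeze(-1))
        return logits.softmax(dim=-1)


model = STTCNN()
probs = model(torch.randn(4, 62, 200))  # probs has shape (4, 3), rows sum to 1
```

The default sizes (62 channels for SEED, 3 classes) follow common configurations of the datasets named in the abstract; for DEAP-style binary arousal/valence labels, `n_classes` and `n_channels` would change accordingly.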

RESULTS:

The proposed EEG ST-TCNN model achieved an accuracy of 96.67% on the SEED dataset and accuracies of 95.73%, 96.95%, and 96.34% for the arousal-valence, arousal, and valence dimensions, respectively, on the DEAP dataset.

CONCLUSIONS:

The results demonstrate the effectiveness of the proposed ST-TCNN model, which outperforms models from recent related studies in emotion classification.

SIGNIFICANCE:

The proposed EEG ST-TCNN model has the potential to be used for EEG-based automatic emotion recognition.

Full text: 1 Databases: MEDLINE Language: En Journal: Brain Sci Year: 2024 Document type: Article Country of affiliation: China