Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification.
Emsawas, Taweesak; Morita, Takashi; Kimura, Tsukasa; Fukui, Ken-Ichi; Numao, Masayuki.
Affiliation
  • Emsawas T; Graduate School of Information Science and Technology, Osaka University, Osaka 565-0871, Japan.
  • Morita T; The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan.
  • Kimura T; The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan.
  • Fukui KI; The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan.
  • Numao M; The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan.
Sensors (Basel) ; 22(21)2022 Oct 27.
Article in En | MEDLINE | ID: mdl-36365948
ABSTRACT
Deep learning using end-to-end convolutional neural networks (ConvNets) has been applied to several electroencephalography (EEG)-based brain-computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging because it requires consideration of various architectural design components that influence the representational ability of the extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). The model uses multi-scale kernels to learn various time resolutions, and separable convolutions to find related spatial patterns. In addition, we enhanced both the temporal and spatial filters with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, we conducted subject-dependent and subject-independent experiments on the EEG-based emotion datasets DEAP and SEED. Compared with existing methods, MultiT-S ConvNet achieves higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale temporal filtering module extracts a wide range of EEG representations, covering short- to long-wavelength components. This module could be incorporated into any EEG-based convolutional network, potentially improving the model's learning capacity.
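The core idea of the multi-scale temporal module — filtering the same EEG signal with kernels of several lengths so that short kernels capture fast, high-frequency components and long kernels capture slow waves — can be sketched in plain NumPy. This is an illustrative sketch only: the kernel sizes, random filter weights, and function name below are assumptions for demonstration, not the paper's actual MultiT-S ConvNet configuration (which uses learned convolution weights in a trained network).

```python
import numpy as np

def multi_scale_temporal_filter(signal, kernel_sizes=(4, 16, 64)):
    """Convolve one EEG channel with filters of several lengths and
    stack the resulting feature maps (one map per temporal scale).
    Kernel sizes and random weights are illustrative assumptions."""
    rng = np.random.default_rng(0)
    feature_maps = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k) / np.sqrt(k)  # stand-in for learned weights
        # 'same'-length convolution keeps the temporal axis aligned across scales
        feature_maps.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(feature_maps)  # shape: (n_scales, n_samples)

# Toy single-channel signal: a 10 Hz oscillation sampled at 256 Hz for 1 s
t = np.linspace(0, 1, 256, endpoint=False)
eeg = np.sin(2 * np.pi * 10 * t)
features = multi_scale_temporal_filter(eeg)
print(features.shape)  # (3, 256)
```

In the actual model these multi-scale feature maps would feed into separable spatial convolutions across electrodes; here the point is only that each scale produces a temporally aligned feature map, so the outputs can be concatenated along a channel dimension.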
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Electroencephalography / Brain-Computer Interfaces Limits: Humans Language: En Journal: Sensors (Basel) Year of publication: 2022 Document type: Article Country of affiliation: Japan