Joint low-rank tensor fusion and cross-modal attention for multimodal physiological signals based emotion recognition.
Wan, Xin; Wang, Yongxiong; Wang, Zhe; Tang, Yiheng; Liu, Benke.
Affiliation
  • Wan X; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China.
  • Wang Y; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China.
  • Wang Z; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China.
  • Tang Y; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China.
  • Liu B; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China.
Physiol Meas; 45(7), 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38917842
ABSTRACT
Objective. Emotion recognition from physiological signals is a prominent research domain in human-computer interaction. Previous studies predominantly focused on unimodal data, giving limited attention to the interplay among multiple modalities. Within multimodal emotion recognition, integrating information from diverse modalities and leveraging their complementary information are the two essential issues in obtaining robust representations.
Approach. We therefore propose an intermediate fusion strategy that combines low-rank tensor fusion with cross-modal attention to fuse electroencephalogram (EEG), electrooculogram (EOG), electromyography (EMG), and galvanic skin response (GSR) signals. First, handcrafted features from the distinct modalities are individually fed to corresponding feature extractors to obtain latent features. Next, low-rank tensor fusion integrates the information into a modality interaction representation. Finally, a cross-modal attention module explores the potential relationships between the distinct latent features and the modality interaction representation and recalibrates the weights of the different modalities; the resulting representation is used for emotion recognition.
Main results. To validate the effectiveness of the proposed method, we conduct subject-independent experiments on the DEAP dataset. The proposed method achieves accuracies of 73.82% and 74.55% for valence and arousal classification, respectively.
Significance. The results of extensive experiments verify the strong performance of the proposed method.
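The abstract does not give implementation details, so the following is only a minimal NumPy sketch of the general pipeline it describes: low-rank tensor fusion across the four named modalities, followed by an attention step that recalibrates modality contributions against the fused interaction representation. All dimensions, the CP rank, and the scaled dot-product attention form are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent-feature dimensions for the four modalities named in
# the abstract (EEG, EOG, EMG, GSR); the real extractor outputs are not given.
dims = {"eeg": 32, "eog": 8, "emg": 8, "gsr": 4}
d_out, rank = 16, 4  # fused dimension and CP rank (illustrative choices)

# One latent feature vector per modality, with a constant 1 appended so the
# implicit fused tensor retains unimodal terms (standard tensor-fusion trick).
feats = {m: np.append(rng.standard_normal(d), 1.0) for m, d in dims.items()}

# Low-rank factors: per modality, `rank` projection matrices into d_out.
factors = {m: rng.standard_normal((rank, d_out, d + 1)) * 0.1
           for m, d in dims.items()}

# Low-rank tensor fusion: instead of materialising the 33x9x9x5 outer-product
# tensor, sum over rank of the elementwise product of per-modality projections
# (a CP decomposition of the fusion weight tensor).
fused = np.zeros(d_out)
for i in range(rank):
    term = np.ones(d_out)
    for m in feats:
        term *= factors[m][i] @ feats[m]
    fused += term

# Cross-modal attention stand-in: project each modality's latent feature to
# d_out, score it against the fused interaction representation (scaled dot
# product), softmax over modalities, and add the reweighted mixture back.
proj = {m: rng.standard_normal((d_out, d + 1)) * 0.1 for m, d in dims.items()}
lat = np.stack([proj[m] @ feats[m] for m in feats])   # (4, d_out)
scores = lat @ fused / np.sqrt(d_out)                 # one score per modality
weights = np.exp(scores - scores.max())
weights /= weights.sum()                              # softmax over modalities
recalibrated = weights @ lat + fused                  # final representation
```

The sum-over-rank form keeps the cost linear in the number of modalities rather than exponential, which is the usual motivation for low-rank fusion over full tensor fusion.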

Full text: yes | Collections: 01-international | Database: MEDLINE | Main subjects: Signal Processing, Computer-Assisted / Electroencephalography / Electromyography / Emotions | Limit: Humans | Language: English | Journal: Physiol Meas | Journal subjects: Biophysics / Biomedical Engineering / Physiology | Publication year: 2024 | Document type: Article | Country of publication: United Kingdom