A Multimodal Dataset for Mixed Emotion Recognition.
Yang, Pei; Liu, Niqi; Liu, Xinge; Shu, Yezhi; Ji, Wenqi; Ren, Ziqi; Sheng, Jenny; Yu, Minjing; Yi, Ran; Zhang, Dan; Liu, Yong-Jin.
Affiliations
  • Yang P; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Liu N; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Liu X; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Shu Y; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Ji W; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Ren Z; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Sheng J; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
  • Yu M; Tianjin University, College of Intelligence and Computing, Tianjin, 300350, China.
  • Yi R; Shanghai Jiao Tong University, Department of Computer Science and Engineering, Shanghai, 200240, China.
  • Zhang D; Tsinghua University, Department of Psychology, Beijing, 100084, China.
  • Liu YJ; Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China. liuyongjin@tsinghua.edu.cn.
Sci Data; 11(1): 847, 2024 Aug 05.
Article in En | MEDLINE | ID: mdl-39103399
ABSTRACT
Mixed emotions have attracted increasing research interest, but existing datasets rarely target mixed emotion recognition from multimodal signals, which hinders affective computing of mixed emotions. We therefore present a multimodal dataset with four kinds of signals recorded while participants watched mixed- and non-mixed-emotion videos. To ensure effective emotion induction, we first applied a rule-based video filtering step to select clips that could elicit stronger positive, negative, and mixed emotions. We then conducted an experiment with 80 participants, recording EEG, GSR, PPG, and frontal face videos while they watched the selected clips. We also collected subjective emotional ratings on the PANAS, VAD, and amusement-disgust dimensions. In total, the dataset comprises multimodal signal data and self-assessment data from 73 participants. We further present technical validations of emotion induction and of mixed emotion classification from physiological signals and face videos. With an SVM and features from all modalities, the average accuracy of 3-class classification (i.e., positive, negative, and mixed) reaches 80.96%, indicating that mixed emotional states can be identified.
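For readers reproducing the reported 3-class SVM baseline, a minimal sketch of such a classifier is shown below. This is an illustration only, not the authors' code: the paper's actual feature extraction, kernel choice, hyperparameters, and evaluation protocol are not specified in this record, so the array shapes, labels, and parameters here are placeholder assumptions.

    # Minimal sketch (assumptions labeled): 3-class SVM classification of
    # positive / negative / mixed emotion from pre-extracted multimodal features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Hypothetical inputs: X would hold concatenated EEG/GSR/PPG/face features
    # (one row per trial); y holds labels 0=positive, 1=negative, 2=mixed.
    # Random placeholders stand in for the real dataset here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))      # placeholder feature matrix
    y = rng.integers(0, 3, size=200)    # placeholder class labels

    # Standardize features, then fit an RBF-kernel SVM (a common default;
    # the record does not state the kernel or regularization used).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold CV accuracy: {scores.mean():.2%}")

With real per-trial features in X and labels in y, the cross-validated accuracy from this pipeline would be the quantity comparable to the 80.96% reported in the abstract.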
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Emotions Limits: Humans Language: En Publication year: 2024 Document type: Article