Action Unit Models of Facial Expression of Emotion in the Presence of Speech.
Int Conf Affect Comput Intell Interact Workshops; 2013: 49-54, 2013 Sep.
Article
En
| MEDLINE
| ID: mdl-25525561
Automatic recognition of emotion from facial expressions in the presence of speech poses a unique challenge: talking reveals clues to the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression in which speech is either present (talking) or absent (silent), making the corpus uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision-level fusion classifier to combine models of emotion from talking and silent faces, as well as from audio, to recognize five basic emotions: anger, disgust, fear, happiness, and sadness. Our results strongly indicate that emotion prediction from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves prediction accuracy in the talking setting. The advantages are most pronounced when the silent and talking face models are fused with predictions from audio features. In this multimodal prediction, both the combination of modalities and the separate models of talking and silent facial expression contribute to the improvement.
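As a rough illustration of the decision-level fusion described in the abstract (not the authors' implementation), the sketch below trains separate classifiers for talking-face, silent-face, and audio inputs and combines their posterior probabilities with a weighted average. The feature dimensions, classifier choices, fusion weights, and synthetic data are all assumptions made for illustration.

```python
# Sketch of decision-level fusion for five-emotion recognition.
# Hypothetical: feature sizes, classifiers, weights, and the synthetic
# data are placeholders, not the paper's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]
rng = np.random.default_rng(0)

def make_synthetic(n, dim):
    """Placeholder features and labels standing in for AU or audio descriptors."""
    X = rng.normal(size=(n, dim))
    y = rng.integers(0, len(EMOTIONS), size=n)
    return X, y

# One model per condition/modality, trained separately.
X_talk, y = make_synthetic(200, 17)    # e.g., AU intensities from talking faces
X_silent, _ = make_synthetic(200, 17)  # AU intensities from silent faces
X_audio, _ = make_synthetic(200, 13)   # e.g., prosodic / spectral audio features

clf_talk = LogisticRegression(max_iter=1000).fit(X_talk, y)
clf_silent = LogisticRegression(max_iter=1000).fit(X_silent, y)
clf_audio = LogisticRegression(max_iter=1000).fit(X_audio, y)

def fuse(p_talk, p_silent, p_audio, w=(0.4, 0.3, 0.3)):
    """Decision-level fusion: weighted average of per-model posteriors."""
    p = w[0] * p_talk + w[1] * p_silent + w[2] * p_audio
    return p / p.sum(axis=1, keepdims=True)

# Fuse posteriors for a few (here, synthetic) samples and report the argmax emotion.
p = fuse(clf_talk.predict_proba(X_talk[:5]),
         clf_silent.predict_proba(X_silent[:5]),
         clf_audio.predict_proba(X_audio[:5]))
print([EMOTIONS[i] for i in p.argmax(axis=1)])
```

The weights here are arbitrary; in practice they would be chosen on validation data, and the fusion could equally use a learned combiner rather than a fixed weighted average.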
Full text:
1
Collection:
01-internacional
Database:
MEDLINE
Study type:
Prognostic_studies
Language:
En
Journal:
Int Conf Affect Comput Intell Interact Workshops
Year:
2013
Document type:
Article
Affiliation country:
United States
Publication country:
United States