Results 1 - 3 of 3
1.
PLoS One; 15(4): e0231968, 2020.
Article in English | MEDLINE | ID: mdl-32330178

ABSTRACT

In the wake of rapid advances in automatic affect analysis, commercial automatic classifiers for facial affect recognition have attracted considerable attention in recent years. While several options now exist to analyze dynamic video data, less is known about the relative performance of these classifiers, in particular when facial expressions are spontaneous rather than posed. In the present work, we tested eight out-of-the-box automatic classifiers and compared their emotion recognition performance to that of human observers. A total of 937 videos were sampled from two large databases that conveyed the six basic emotions (happiness, sadness, anger, fear, surprise, and disgust) either in posed (BU-4DFE) or spontaneous (UT-Dallas) form. Results revealed a recognition advantage for human observers over automatic classification. Among the eight classifiers, recognition accuracy varied considerably, ranging from 48% to 62%. Subsequent analyses per type of expression revealed that the two best-performing classifiers approximated the performance of human observers, suggesting high agreement for posed expressions. However, classification accuracy was consistently lower (although above chance level) for spontaneous affective behavior. The findings indicate potential shortcomings of existing out-of-the-box classifiers for measuring emotions, and highlight the need for more spontaneous facial databases that can act as a benchmark in the training and testing of automatic emotion recognition systems. We further discuss some limitations of analyzing facial expressions that have been recorded in controlled environments.
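
A minimal sketch (not from the paper) of the kind of accuracy comparison described above: given each video's ground-truth emotion and a classifier's predicted label, overall recognition accuracy and per-emotion recognition rates can be computed with scikit-learn. The label lists are hypothetical placeholders, not the study's data.

# Hedged illustration; labels below are hypothetical, not the study's data.
from sklearn.metrics import accuracy_score, confusion_matrix

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

# One entry per video: ground-truth emotion and the classifier's output.
true_labels = ["happiness", "sadness", "anger", "fear", "surprise", "disgust",
               "happiness", "fear"]
classifier_labels = ["happiness", "sadness", "anger", "surprise", "surprise",
                     "disgust", "happiness", "anger"]

# Overall recognition accuracy: proportion of videos labelled correctly.
print("overall accuracy:", accuracy_score(true_labels, classifier_labels))

# Per-emotion recognition rate: diagonal of the row-normalised confusion matrix.
cm = confusion_matrix(true_labels, classifier_labels, labels=EMOTIONS, normalize="true")
for i, emotion in enumerate(EMOTIONS):
    print(emotion, round(cm[i, i], 2))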


Subjects
Affect, Facial Expression, Recognition (Psychology), Adult, Automation, Female, Humans, Male
2.
Front Psychol; 8: 2342, 2017.
Article in English | MEDLINE | ID: mdl-29375448

ABSTRACT

Despite being a pan-cultural phenomenon, laughter is arguably the least understood behaviour deployed in social interaction. As well as being a response to humour, it has other important functions, including promoting social affiliation, developing cooperation and regulating competitive behaviours. This multi-functional feature of laughter marks it as an adaptive behaviour central to facilitating social cohesion. However, it is not clear how laughter achieves this social cohesion. We consider two approaches to understanding how laughter facilitates social cohesion: the 'representational' approach and the 'affect-induction' approach. The representational approach suggests that laughter conveys information about the expresser's emotional state, and the listener decodes this information to gain knowledge about the laugher's felt state. The affect-induction approach views laughter as a tool to influence the affective state of listeners. We describe a modified version of the affect-induction approach, in which laughter is combined with additional factors (including social context, verbal information, other social signals and knowledge of the listener's emotional state) to influence an interaction partner. This view asserts that laughter by itself is ambiguous: the same laughter may induce positive or negative affect in a listener, with the outcome determined by the combination of these additional factors. Here we describe two experiments exploring which of these approaches accurately describes laughter. Participants judged the genuineness of audio-video recordings of social interactions containing laughter. Unknown to the participants, the recordings contained either the original laughter or replacement laughter taken from a different part of the interaction. When the replacement laughter was matched for intensity, genuineness judgements were similar to judgements of the original unmodified recordings. When the replacement laughter was not matched for intensity, genuineness judgements were generally significantly lower. These results support the affect-induction view of laughter by suggesting that laughter is inherently underdetermined and ambiguous, and that its interpretation is determined by the context in which it occurs.

3.
Psychol Methods; 19(1): 155-74, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24219542

ABSTRACT

Emotion research has long been dominated by the "standard method" of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it is unable to investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, a consensus has not been reached on the correct statistical techniques that permit inferences to be made with such measures. We propose generalized additive models and generalized additive mixed models as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The generalized additive mixed model approach is preferred, as it can account for autocorrelation in time-series data and allows emotion-decoding participants to be modeled as random effects. To increase confidence in linear differences, we assess methods that address interactions between categorical variables and dynamic changes over time. In addition, we comment on the use of generalized additive models to assess the effect size of shared perceived emotion, and we discuss sample sizes. Finally, we address additional uses: the inference of feature detection, interactions between continuous variables, and the measurement of ambiguity.
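
A minimal sketch (not the authors' implementation) of the core modeling idea described above, on simulated data: a generalized additive model with a shared smooth time course and a parametric group term whose coefficient captures the linear difference between groups. The paper's preferred generalized additive mixed model (decoders as random effects, autocorrelated residuals) is typically fit with R's mgcv; statsmodels offers only a plain GAM, so random effects and autocorrelation are omitted here, and all data and variable names are hypothetical.

# Hedged sketch on simulated rating traces; not the paper's analysis code.
import numpy as np
import pandas as pd
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(0)

# Hypothetical continuous rating traces: 40 decoders x 100 time points,
# two stimulus groups separated by a constant offset.
time = np.tile(np.linspace(0.0, 10.0, 100), 40)
group = np.repeat(rng.integers(0, 2, size=40), 100)   # 0/1 group label per decoder
rating = np.sin(time) + 0.5 * group + rng.normal(0.0, 0.3, time.size)
data = pd.DataFrame({"rating": rating, "time": time, "group": group})

# The smooth term captures the shared time course; the group effect stays
# parametric, so its coefficient is the linear group difference of interest.
smoother = BSplines(data[["time"]], df=[10], degree=[3])
model = GLMGam.from_formula("rating ~ group", data=data, smoother=smoother, alpha=[1.0])
result = model.fit()
print(result.summary())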


Subjects
Emotions, Facial Expression, Statistical Models, Self Report, Social Perception, Humans