ABSTRACT
Introduction: Psychotherapists' emotional and empathic competencies positively influence psychotherapy outcome and alliance. However, it is doubtful whether psychotherapy education in itself improves trainee psychotherapists' emotion recognition accuracy (ERA), an essential component of these competencies. Methods: In a randomized, controlled, double-blind study (N = 68), we trained trainee psychotherapists (57% psychodynamic therapy, 43% cognitive behavioral therapy) to detect non-verbal emotional expressions in others using two standardized computerized training programs, one targeting multimodal ERA and one targeting micro expression recognition accuracy, and compared their results to those of an active control group one week after the training (n = 60) and at the one-year follow-up (n = 55). Participants trained once weekly over a three-week period. As outcome measures, we used a multimodal ERA task, a micro expression recognition accuracy task, and an ERA task for combined verbal and non-verbal emotional expressions in medical settings. Results: Mixed multilevel analyses suggest that the multimodal ERA training led to significantly steeper increases from pretest to the posttest one week after the last training session than the other two conditions. When comparing pretest-to-follow-up differences in slopes, the superiority of the multimodal training group was still detectable in the unimodal audio and unimodal video modalities (in comparison to the control training group), but not in the multimodal audio-video modality or the total score of the multimodal ERA measure. The micro expression training group showed a significantly steeper change trajectory from pretest to posttest than the control training group, but not than the multimodal training group; this effect had disappeared by the one-year follow-up. There were no differences in change trajectories for the measure of ERA in medical settings. Discussion: We conclude that trainee psychotherapists' ERA can be effectively trained, especially multimodal ERA, and suggest that the changes in unimodal ERA (audio-only and video-only) are long-lasting. Implications of these findings for psychotherapy education are discussed.
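For illustration, the slope comparisons described above can be specified as a linear mixed model with a condition-by-time interaction and by-participant random slopes. The sketch below is a hypothetical specification in Python (statsmodels) with invented column names (era, time, condition, subject); it is not the authors' analysis code.

```python
# Hypothetical mixed multilevel model for training effects on ERA.
# Assumes a long-format DataFrame `df` with one row per participant per
# measurement occasion: columns "era" (accuracy score), "time"
# (0 = pretest, 1 = posttest, 2 = follow-up), "condition"
# (multimodal / micro / control), and "subject" (participant ID).
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "era ~ C(condition, Treatment('control')) * time",  # condition-by-time interaction
    data=df,
    groups="subject",      # random intercept per trainee
    re_formula="~time",    # random slope for time per trainee
)
result = model.fit()
# The interaction terms test whether each training group's change
# trajectory (slope over time) is steeper than the control group's.
print(result.summary())
```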
ABSTRACT
Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs: one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task assessing recognition of patients' emotional cues. The post measurement took place approximately one week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that the multimodal training was significantly more effective in improving multimodal ERA than the micro expression training or the control training, and the micro expression training was significantly more effective in improving micro expression ERA than the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the measure of recognizing patients' emotional cues. There were no transfer effects of the training programs: participants improved significantly only on the specific facet of ERA that they had trained. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
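The Aligned Rank Transform (ART; Wobbrock et al., 2011) aligns the data for one effect at a time, ranks the aligned responses, and then runs a standard ANOVA on the ranks. Below is a minimal Python sketch for a fully between-subjects two-factor design; it uses hypothetical column names, and unlike the analysis reported above it omits the repeated-measures (subject) structure that a full mixed ART analysis (e.g., the R package ARTool) would model.

```python
# Minimal Aligned Rank Transform (ART) sketch for two between-subjects
# factors; hypothetical column names ("group", "time", "score").
import statsmodels.api as sm
import statsmodels.formula.api as smf

def art_anova(df, a="group", b="time", y="score"):
    grand = df[y].mean()
    cell = df.groupby([a, b])[y].transform("mean")   # cell means
    ma = df.groupby(a)[y].transform("mean")          # marginal means of A
    mb = df.groupby(b)[y].transform("mean")          # marginal means of B
    residual = df[y] - cell                          # strip all effects
    effects = {                                      # estimated effect to add back
        f"C({a})": ma - grand,
        f"C({b})": mb - grand,
        f"C({a}):C({b})": cell - ma - mb + grand,
    }
    results = {}
    for name, eff in effects.items():
        aligned = residual + eff                     # align for this effect only
        tmp = df.assign(art=aligned.rank())          # rank the aligned responses
        fit = smf.ols(f"art ~ C({a}) * C({b})", data=tmp).fit()
        # Only the ANOVA row matching `name` is interpretable in each table.
        results[name] = sm.stats.anova_lm(fit, typ=2)
    return results
```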
ABSTRACT
Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider-than-usual range of positive emotions. In Task 1, younger and older adults were tested on their ability to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). In both tasks, younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were observed only for the auditory block (across all emotions) and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 of the 9 positive and 8 of the 9 negative emotions. Overall, the results indicate that recognition of both positive and negative emotions shows age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.
Subjects
Aging/physiology, Anger, Facial Expression, Happiness, Recognition (Psychology)/physiology, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged
ABSTRACT
Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia; women also had higher ERA than men. Study 2 (N = 106) was conducted online and replicated the recognition rates from the lab-based Study 1 in a different sample. Study 2 also showed that participants with higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. The results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.
Subjects
Facial Expression, Individuality, Affective Symptoms, Emotions, Female, Humans, Male, Recognition (Psychology)
ABSTRACT
It has been the subject of much debate in the study of vocal expression of emotions whether posed expressions (e.g., actor portrayals) are different from spontaneous expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose-response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.
ABSTRACT
The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor involved in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio-visual stimuli in women (n = 309) that survived correction for multiple testing. This study provides evidence for an association that extends previous findings of oxytocin and vasopressin involvement in emotion recognition.
Subjects
Aryl Hydrocarbon Receptor Nuclear Translocator/genetics, Basic Helix-Loop-Helix Transcription Factors/genetics, Emotions, Neural Pathways/physiology, Oxytocin/physiology, Recognition (Psychology)/physiology, Acoustic Stimulation, Adolescent, Adult, Facial Expression, Female, Genotype, Humans, Male, Oxytocin/genetics, Photic Stimulation, Polymorphism, Single Nucleotide, Psychomotor Performance/physiology, Vasopressins/genetics, Vasopressins/physiology, Voice, Young Adult
ABSTRACT
Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have reported inconsistent results, potentially stemming from the type of static picture task used. We therefore examined, in two separate studies using a dynamic multimodal task, whether insufficient sleep was associated with decreased emotion recognition ability. Methods: Study 1 used a cross-sectional design with 291 participants and questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design with 181 participants, who were quasi-randomized into either a sleep-deprivation (n = 90) or a sleep-control (n = 91) condition. All participants in both studies completed the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Results: Sleep duration, self-reported sleep quality (Study 1), and sleep deprivation (Study 2) did not predict overall emotion recognition accuracy or speed. Similarly, responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality predicted more accurate recognition of disgust (Study 1). Conclusions: The studies presented here involved considerably larger samples than previous studies, and the results support the null hypotheses. We therefore suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.
Subjects
Emotions, Facial Expression, Recognition (Psychology), Sleep Deprivation/psychology, Adolescent, Adult, Cross-Sectional Studies, Female, Humans, Male, Self Report, Surveys and Questionnaires, Sweden, Time Factors, Young Adult
ABSTRACT
The vocal expression of human emotions is embedded within language, and the study of intonation must take into account two interacting levels of information: emotional and semantic meaning. In addition to the discussion of this dual coding system, an extension of Brunswik's lens model is proposed. This model includes the influences of conventions, norms, and display rules (pull effects) and psychobiological mechanisms (push effects) on emotional vocalizations produced by the speaker (encoding) and the reciprocal influences of these two aspects on attributions made by the listener (decoding), allowing the dissociation and systematic study of the production and perception of intonation. Three empirical studies are described as examples of possibilities of dissociating these different phenomena at the behavioral and neurological levels in the study of intonation.
Subjects
Expressed Emotion/physiology, Imagination/physiology, Semantics, Speech Perception/physiology, Speech/physiology, Brain/physiology, Humans, Models, Psychological
ABSTRACT
The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness, and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.
Subjects
Emotions/physiology, Language, Speech Perception/physiology, Female, Humans, Linear Models, Male, Young Adult
ABSTRACT
We propose to use a comprehensive path model of vocal emotion communication, encompassing encoding, transmission, and decoding processes, to empirically model data sets on emotion expression and recognition. The utility of the approach is demonstrated for two data sets from two different cultures and languages, based on corpora of vocal emotion enactment by professional actors and emotion inference by naïve listeners. Lens model equations, hierarchical regression, and multivariate path analysis are used to compare the relative contributions of objectively measured acoustic cues in the enacted expressions and subjective voice cues as perceived by listeners to the variance in emotion inference from vocal expressions for four emotion families (fear, anger, happiness, and sadness). While the results confirm the central role of arousal in vocal emotion communication, the utility of applying an extended path modeling framework is demonstrated by the identification of unique combinations of distal cues and proximal percepts carrying information about specific emotion families, independent of arousal. The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of the variance in emotion inference, in particular voice instability and roughness. The general approach advocated here, as well as the specific results, open up new research strategies for work in psychology (specifically emotion and social perception research) and engineering and computer science (specifically research and development in the domain of affective computing, particularly on automatic emotion detection and synthetic emotion expression in avatars).
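For context: path models of this kind extend the classic lens model equation (Hursch, Hammond, & Hursch, 1964; Tucker, 1964), which decomposes the achievement correlation between the expressed and the inferred emotion. A standard formulation (the basic equation, not the paper's extended model) is sketched below in LaTeX.

```latex
% Classic lens model equation (standard form, not the paper's extension):
% r_a : achievement (agreement between expressed and inferred emotion)
% R_e : multiple correlation of the criterion with the distal cues
% R_s : multiple correlation of the listener's judgments with the cues
% G   : correlation between the two linear models' predictions (matching)
% C   : correlation between the two models' residuals
\[
  r_a = G \, R_e \, R_s + C \sqrt{\left(1 - R_e^{2}\right)\left(1 - R_s^{2}\right)}
\]
```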
Subjects
Expressed Emotion, Speech, Communication, Cues, Humans, Models, Statistical, Voice
ABSTRACT
Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.
Subjects
Emotions, Research/instrumentation, Facial Expression, Humans, Voice
ABSTRACT
We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.
Subjects
Expressed Emotion, Facial Expression, Facial Muscles, Female, Humans, Male, Recognition (Psychology), Smiling, Young Adult
ABSTRACT
Emotion recognition ability has been identified as a central component of emotional competence. We describe the development of an instrument that objectively measures this ability on the basis of actor portrayals of dynamic expressions of 10 emotions (2 variants each for 5 emotion families), operationalized as recognition accuracy in 4 presentation modes combining the visual and auditory sense modalities (audio/video, audio only, video only, still picture). Data from a large validation study, including construct validation using related tests (Profile of Nonverbal Sensitivity; Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979; Japanese and Caucasian Facial Expressions of Emotion; Biehl et al., 1997; Diagnostic Analysis of Nonverbal Accuracy; Nowicki & Duke, 1994; Emotion Recognition Index; Scherer & Scherer, 2008), are reported. The results show the utility of a test designed to measure both coarse and fine-grained emotion differentiation and modality-specific skills. Factor analysis of the data suggests 2 separate abilities, visual and auditory recognition, which seem to be largely independent of personality dispositions.
Subjects
Emotions, Facial Expression, Nonverbal Communication, Pattern Recognition, Visual, Personality Inventory/statistics & numerical data, Recognition (Psychology), Voice Quality, Adolescent, Adult, Discrimination (Psychology), Emotional Intelligence, Female, Humans, Male, Psychometrics/statistics & numerical data, Reproducibility of Results, Social Adjustment, Young Adult
ABSTRACT
To examine the basis of emotional changes to the voice, physiological and electroglottal measures were combined with acoustic speech analysis of 30 men performing a computer task in which they lost or gained points under two levels of difficulty. Predictions of the main effects of difficulty and reward on the voice were not borne out by the data. Instead, vocal changes depended largely on interactions between gain versus loss and difficulty. The rate at which the vocal folds open and close (fundamental frequency; fo) was higher for loss than for gain when difficulty was high, but not when difficulty was low. Electroglottal measures revealed that fo changes corresponded to shorter glottal open times for the loss conditions. Longer closed and shorter open phases were consistent with raised laryngeal tension in difficult loss conditions. Similarly, skin conductance indicated higher sympathetic arousal in loss than gain conditions, particularly when difficulty was high. The results provide evidence of the physiological basis of affective vocal responses, confirming the utility of measuring physiology and voice in the study of emotion.