Results 1 - 17 of 17
1.
Cogn Emot ; 28(5): 936-46, 2014.
Article in English | MEDLINE | ID: mdl-24350613

ABSTRACT

In two studies, the robustness of anger recognition from bodily expressions is tested. In the first study, video recordings of an actor expressing four distinct emotions (anger, despair, fear, and joy) were structurally manipulated in terms of image impairment and body segmentation. The results show that anger recognition is more robust than that of other emotions to image impairment and to body segmentation. Moreover, the study showed that arms expressing anger were more robustly recognised than arms expressing other emotions. Study 2 added face blurring as a variable to the bodily expressions and showed that it decreased accurate emotion recognition, but more for recognition of joy and despair than for anger and fear. In sum, the paper indicates the robustness of anger recognition in bodily expressions degraded at multiple levels.


Subjects
Anger/physiology; Expressed Emotion/physiology; Nonverbal Communication/psychology; Recognition, Psychology/physiology; Adolescent; Adult; Facial Expression; Female; Humans; Male; Photic Stimulation/methods; Young Adult
2.
J Intell ; 11(11)2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37998709

ABSTRACT

Emotional intelligence (EI) has gained significant popularity as a scientific construct over the past three decades, yet its conceptualization and measurement still face limitations. Applied EI research often overlooks its components, treating it as a global characteristic, and there are few widely used performance-based tests for assessing ability EI. The present paper proposes avenues for advancing ability EI measurement by connecting the main EI components to models and theories from the emotion science literature and related fields. For emotion understanding and emotion recognition, we discuss the implications of basic emotion theory, dimensional models, and appraisal models of emotion for creating stimuli, scenarios, and response options. For the regulation and management of one's own and others' emotions, we discuss how the process model of emotion regulation and its extensions to interpersonal processes can inform the creation of situational judgment items. In addition, we emphasize the importance of incorporating context, cross-cultural variability, and attentional and motivational factors into future models and measures of ability EI. We hope this article will foster exchange among scholars in the fields of ability EI, basic emotion science, social cognition, and emotion regulation, leading to an enhanced understanding of the individual differences in successful emotional functioning and communication.

3.
Front Psychol ; 14: 1148863, 2023.
Article in English | MEDLINE | ID: mdl-37179889

ABSTRACT

Introduction: According to recent meta-analyses, emotional intelligence can significantly predict academic performance. In this research, we wanted to investigate a particular group of students for whom emotional intelligence should be crucial. Namely, we examined whether emotional intelligence, conceptualized as an ability, uniquely contributes to academic performance in hospitality management education beyond fluid intelligence and personality. Methods: Using a battery of tests and questionnaires in an online survey, we analyzed whether fluid ability, the Big Five personality dimensions, and ability-based emotional intelligence predict six module grades in a sample of N = 330 first-semester students at a Swiss-based hospitality school. Results: We found that the ability to manage other people's emotions is more predictive of module grades than fluid ability if the courses involve substantial parts of interactive work. Complementarily, the more a module focuses on theoretical knowledge or abstract subject material, the more fluid ability predicted performance. Other abilities and factors (emotion understanding, emotion regulation, the students' age, conscientiousness, and openness) predicted performance only in specific modules, hinting that the didactic methods and grading procedures are complex and involve various skills and dispositions of the students. Discussion: Given that hospitality education and the industry alike are filled with interactions with peers and guests, we provide evidence that interpersonal and emotional competencies are vital to hospitality curricula.

4.
J Gerontol B Psychol Sci Soc Sci ; 77(1): 84-93, 2022 01 12.
Article in English | MEDLINE | ID: mdl-33842959

ABSTRACT

OBJECTIVES: It is commonly argued that older adults show difficulties in standardized tasks of emotional expression perception, yet most previous works relied on classic sets of static, decontextualized, and stereotypical facial expressions. In real life, facial expressions are dynamic and embedded in a rich context, 2 key factors that may aid emotion perception. Specifically, body language provides important affective cues that may disambiguate facial movements. METHOD: We compared emotion perception of dynamic faces, bodies, and their combination in a sample of older (age 60-83, n = 126) and young (age 18-30, n = 124) adults. We used the Geneva Multimodal Emotion Portrayals set, which includes a full view of expressers' faces and bodies, displaying a diverse range of positive and negative emotions, portrayed dynamically and holistically in a nonstereotypical, unconstrained manner. Critically, we digitally manipulated the dynamic cue such that perceivers viewed isolated faces (without bodies), isolated bodies (without faces), or faces with bodies. RESULTS: Older adults showed better perception of positive and negative dynamic facial expressions, while young adults showed better perception of positive isolated dynamic bodily expressions. Importantly, emotion perception of faces with bodies was comparable across ages. DISCUSSION: Dynamic emotion perception in young and older adults may be more similar than previously assumed, especially when the task is more realistic and ecological. Our results emphasize the importance of contextualized and ecological tasks in emotion perception across ages.


Subjects
Aging/physiology; Emotions/physiology; Facial Recognition/physiology; Kinesics; Social Perception; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Facial Expression; Female; Humans; Male; Middle Aged; Young Adult
5.
Int J Psychol ; 46(6): 401-35, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22126090

ABSTRACT

Do members of different cultures express (or "encode") emotions in the same fashion? How well can members of distinct cultures recognize (or "decode") each other's emotion expressions? The question of cultural universality versus specificity in emotional expression has been a hot topic of debate for more than half a century, but, despite a sizeable amount of empirical research produced to date, no convincing answers have emerged. We suggest that this unsatisfactory state of affairs is due largely to a lack of concern with the precise mechanisms involved in emotion expression and perception, and propose to use a modified Brunswikian lens model as an appropriate framework for research in this area. On this basis we provide a comprehensive review of the existing literature and point to research paradigms that are likely to provide the evidence required to resolve the debate on universality vs. cultural specificity of emotional expression. Applying this fresh perspective, our analysis reveals that, given the paucity of pertinent data, no firm conclusions can be drawn on actual expression (encoding) patterns across cultures (although there appear to be more similarities than differences), but that there is compelling evidence for intercultural continuity in decoding, or recognition, ability. We also note a growing body of research on the notion of ingroup advantage due to expression "dialects," above and beyond the general encoding or decoding patterns. We furthermore suggest that these empirical patterns could be explained by both universality in the underlying mechanisms and cultural specificity in the input to, and the regulation of, these expression and perception mechanisms. Overall, more evidence is needed, both to further elucidate these mechanisms and to inventory the patterns of cultural effects. We strongly recommend using more solid conceptual and theoretical perspectives, as well as more ecologically valid approaches, in designing future studies in emotion expression and perception research.


Subjects
Cross-Cultural Comparison; Emotions; Interpersonal Relations; Social Perception; Social Values; Child, Preschool; Communication; Expressed Emotion; Facial Expression; Generalization, Psychological; Humans; Infant; Models, Psychological; Personal Construct Theory; Social Identification; Speech Acoustics; Theory of Mind
6.
Emotion ; 21(1): 73-95, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31682143

ABSTRACT

Theory and research on emotion expression, both on production and recognition, has been dominated by a categorical emotion approach suggesting that discrete emotions are elicited and expressed via prototypical facial muscle configurations that can then be recognized by observers, presumably via template matching. This tradition is increasingly challenged by alternative theoretical approaches. In particular, appraisal theorists have suggested that specific elements of facial expressions are directly produced by the result of certain appraisals and have made detailed predictions about the facial patterns to be expected for these appraisal configurations. This approach has been extended to emotion perception, with theorists claiming that observers first infer individual appraisals and only then make categorical emotion judgments from the estimated appraisal patterns, using semantic inference rules. Here we report two studies that empirically examine the two central hypotheses proposed by this theoretical position: (a) that specific appraisals produce predicted patterns of facial muscle expressions and (b) that observers can infer a person's appraisals of ongoing events from the predicted facial expression configurations. The results show that (a) professional actors use many of the predicted facial action unit patterns to enact, in a realistic scenario setting, appraisal outcomes specified by experimental design, and (b) observers systematically infer specific appraisals from ecologically valid video recordings of marketing research participants as they view TV commercials (selected according to the likelihood of eliciting specific appraisals). The patterns of facial action units identified in these studies correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Emotions/physiology; Facial Expression; Adult; Aged; Communication; Female; Humans; Male; Middle Aged
7.
J Intell ; 9(1)2021 Mar 05.
Article in English | MEDLINE | ID: mdl-33807593

ABSTRACT

Drawing upon multidimensional theories of intelligence, the current paper evaluates whether the Geneva Emotional Competence Test (GECo) fits within a higher-order intelligence space and whether emotional intelligence (EI) branches predict distinct criteria related to adjustment and motivation. Using a combination of classical and S-1 bifactor models, we find that (a) first-order oblique and bifactor models provide excellent and comparably fitting representations of an EI structure with self-regulatory skills operating independently of general ability, (b) residualized EI abilities uniquely predict criteria over general cognitive ability as referenced by fluid intelligence, and (c) emotion recognition and regulation incrementally predict grade point average (GPA) and affective engagement in opposing directions, after controlling for fluid general ability and the Big Five personality traits. Results are qualified by psychometric analyses suggesting only emotion regulation has enough determinacy and reliable variance beyond a general ability factor to be treated as a manifest score in analyses and interpretation. Findings call for renewed, albeit tempered, research on EI as a multidimensional intelligence and highlight the need for refined assessment of emotional perception, understanding, and management to allow focused analyses of different EI abilities.

8.
Psychophysiology ; 57(12): e13684, 2020 12.
Article in English | MEDLINE | ID: mdl-32996608

ABSTRACT

When perceiving emotional facial expressions there is an automatic tendency to react with a matching facial expression. A classic explanation of this phenomenon, termed the matched motor hypothesis, highlights the importance of topographic matching, that is, the correspondence in body parts, between perceived and produced actions. More recent studies using mimicry paradigms have challenged this classic account, producing ample evidence against the matched motor hypothesis. However, research using stimulus-response compatibility (SRC) paradigms usually assumed the effect relies on topographic matching. While mimicry and SRC share some characteristics, critical differences between the paradigms suggest conclusions cannot be simply transferred from one to another. Thus, our aim in the present study was to directly test the matched motor hypothesis using SRC. Specifically, we investigated whether observing emotional body postures or hearing emotional vocalizations produces a tendency to respond with one's face, despite completely different motor actions being involved. In three SRC experiments, participants were required to either smile or frown in response to a color cue, presented concurrently with stimuli of happy and angry facial (experiment 1), body (experiment 2), or vocal (experiment 3) expressions. Reaction times were measured using facial EMG. Whether presenting facial, body, or vocal expressions, we found faster responses in compatible, compared to incompatible trials. These results demonstrate that the SRC effect of emotional expressions does not require topographic matching. Our findings question interpretations of previous research and suggest further examination of the matched motor hypothesis.


Subjects
Auditory Perception/physiology; Emotions/physiology; Facial Expression; Facial Recognition/physiology; Gestures; Posture/physiology; Social Perception; Adolescent; Adult; Anger/physiology; Electromyography; Female; Happiness; Humans; Male; Young Adult
9.
J Appl Psychol ; 104(4): 559-580, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30346195

ABSTRACT

Emotional intelligence (EI) has been frequently studied as a predictor of work criteria, but disparate approaches to defining and measuring EI have produced rather inconsistent findings. The conceptualization of EI as an ability to be measured with performance-based tests is considered by many the most appropriate approach, but only a few tests developed in this tradition exist, and none of them is designed to specifically assess EI in the workplace. The present research introduces the Geneva Emotional Competence test (GECo), a new ability EI test measuring emotion recognition (assessed using video clips of actors), emotion understanding, emotion regulation in oneself, and emotion management in others (all assessed with situational judgment items of work-related scenarios). For the situational judgment items, correct and incorrect response options were developed using established theories from the emotion and organizational fields. Five studies (total N = 888) showed that all subtests had high measurement precision (as assessed with Item Response Theory) and correlated in expected ways with other EI tests, cognitive intelligence, personality, and demographic variables. Further, the GECo predicted performance in computerized assessment center tasks in a sample of professionals, and explained academic performance in students incrementally above another ability EI test. Because of its theory-based scoring, good psychometric properties, and focus on the workplace, the GECo represents a promising tool for studying the role of four major EI components in organizational outcomes. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subjects
Aptitude/physiology; Emotional Intelligence/physiology; Emotions/physiology; Employment/psychology; Interpersonal Relations; Psychometrics/instrumentation; Social Perception; Adult; Humans
11.
Front Psychol ; 10: 508, 2019.
Article in English | MEDLINE | ID: mdl-30941073

ABSTRACT

Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.

12.
Front Psychol ; 9: 763, 2018.
Article in English | MEDLINE | ID: mdl-29867704

ABSTRACT

The majority of research on emotion expression has focused on static facial prototypes of a few selected, mostly negative emotions. Implicitly, most researchers seem to have considered all positive emotions as sharing one common signal (namely, the smile), and consequently as being largely indistinguishable from each other in terms of expression. Recently, a new wave of studies has started to challenge the traditional assumption by considering the role of multiple modalities and the dynamics in the expression and recognition of positive emotions. Based on these recent studies, we suggest that positive emotions are better expressed and correctly perceived when (a) they are communicated simultaneously through the face and body and (b) perceivers have access to dynamic stimuli. Notably, we argue that this improvement is comparatively more important for positive emotions than for negative emotions. Our view is that the misperception of positive emotions has fewer immediate and potentially life-threatening consequences than the misperception of negative emotions; therefore, from an evolutionary perspective, there was only limited benefit in the development of clear, quick signals that allow observers to draw fine distinctions between them. Consequently, we suggest that the successful communication of positive emotions requires a stronger signal than that of negative emotions, and that this signal is provided by the use of the body and the way those movements unfold. We hope our contribution to this growing field provides a new direction and a theoretical grounding for the many lines of empirical research on the expression and recognition of positive emotions.

13.
J Pers Soc Psychol ; 114(3): 358-379, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29461080

ABSTRACT

Although research on facial emotion recognition abounds, there has been little attention to the nature of the underlying mechanisms. In this article, using a "reverse engineering" approach, we suggest that emotion inference from facial expression mirrors the expression process. As a strong case can be made for an appraisal theory account of emotional expression, which holds that appraisal results directly determine the nature of facial muscle actions, we claim that observers first detect specific appraisals from different facial muscle actions and then use implicit inference rules to categorize and name specific emotions. We report three experiments in which, guided by theoretical predictions and past empirical evidence, we systematically manipulated specific facial action units individually and in different configurations via synthesized avatar expressions. Large, diverse groups of participants judged the resulting videos for the underlying appraisals and/or the ensuing emotions. The results confirm that participants can infer targeted appraisals and emotions from synthesized facial actions based on appraisal predictions. We also report evidence that the ability to correctly interpret the synthesized stimuli is highly correlated with emotion recognition ability as part of emotional competence. We conclude by highlighting the importance of adopting a theory-based experimental approach in future research, focusing on the dynamic unfolding of facial expressions of emotion.


Subjects
Emotions/physiology; Facial Expression; Facial Muscles/physiology; Facial Recognition/physiology; Social Perception; Adolescent; Adult; Female; Humans; Male; Young Adult
14.
Front Psychol ; 4: 292, 2013.
Article in English | MEDLINE | ID: mdl-23750144

ABSTRACT

Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning either of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow's pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of "the sound that something makes," in order to evaluate the system's auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.

15.
Emotion ; 12(5): 1161-79, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22081890

ABSTRACT

Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.


Subjects
Emotions; Research/instrumentation; Facial Expression; Humans; Voice
16.
Emotion ; 12(5): 1085-101, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22059517

ABSTRACT

Emotion communication research strongly focuses on the face and voice as expressive modalities, leaving the rest of the body relatively understudied. Contrary to the early assumption that body movement only indicates emotional intensity, recent studies have shown that body movement and posture also convey emotion-specific information. However, a deeper understanding of the underlying mechanisms is hampered by a lack of production studies informed by a theoretical framework. In this research we adopted the Body Action and Posture (BAP) coding system to examine the types and patterns of body movement that are employed by 10 professional actors to portray a set of 12 emotions. We investigated to what extent these expression patterns support explicit or implicit predictions from basic emotion theory, bidimensional theory, and componential appraisal theory. The overall results showed partial support for the different theoretical approaches. They revealed that several patterns of body movement systematically occur in portrayals of specific emotions, allowing emotion differentiation. Although a few emotions were prototypically expressed by one particular pattern, most emotions were variably expressed by multiple patterns, many of which can be explained as reflecting functional components of emotion such as modes of appraisal and action readiness. It is concluded that further work in this largely underdeveloped area should be guided by an appropriate theoretical framework to allow a more systematic design of experiments and clear hypothesis testing.


Subjects
Emotions; Movement; Nonverbal Communication; Posture; Facial Expression; Gestures; Humans; Psychological Theory
17.
Emotion ; 12(4): 701-715, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22642350

ABSTRACT

We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.


Subjects
Expressed Emotion; Facial Expression; Facial Muscles; Female; Humans; Male; Recognition, Psychology; Smiling; Young Adult