Results 1 - 20 of 37
1.
Emotion ; 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39298240

ABSTRACT

What does it mean to feel good? Is our experience of gazing in awe at a majestic mountain fundamentally different than erupting with triumph when our favorite team wins the championship? Here, we use a semantic space approach to test which positive emotional experiences are distinct from each other based on in-depth personal narratives of experiences involving 22 positive emotions (n = 165; 3,592 emotional events). A bottom-up computational analysis was applied to the transcribed text, with unsupervised clustering employed to maximize internal granular consistency (i.e., the clusters being maximally different and maximally internally homogeneous). The analysis yielded four emotions that map onto distinct clusters of subjective experiences: amusement, interest, lust, and tenderness. The application of the semantic space approach to in-depth personal accounts yields a nuanced understanding of positive emotional experiences. Moreover, this analytical method allows for the bottom-up development of emotion taxonomies, showcasing its potential for broader applications in the study of subjective experiences. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
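The bottom-up clustering step described above can be sketched as follows. This is a minimal illustration with synthetic stand-in embeddings: the study's actual text-embedding pipeline and clustering method are not specified in the abstract, so k-means plus the silhouette score are used here as a generic proxy for "maximally different and maximally internally homogeneous" clusters.

```python
# Minimal sketch of bottom-up clustering of narrative text, assuming
# generic sentence embeddings. The real study's pipeline is not given in
# the abstract; k-means + silhouette score stand in for the internal
# granular-consistency criterion described there.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for embeddings of emotional-event narratives: 200 points
# drawn around 4 artificial centers in a 50-dimensional space.
centers = rng.normal(size=(4, 50)) * 5.0
X = np.vstack([c + rng.normal(size=(50, 50)) for c in centers])

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)  # high = distinct and homogeneous
    if score > best_score:
        best_k, best_score = k, score

print(best_k)  # recovers the 4 planted clusters in this synthetic setup
```

Selecting the cluster count that maximizes an internal-consistency score, rather than fixing it in advance, is what makes the taxonomy "bottom-up".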

2.
Cogn Emot ; : 1-17, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973174

ABSTRACT

Previous research has demonstrated that individuals from Western cultures exhibit categorical perception (CP) in their judgments of emotional faces. However, the extent to which this phenomenon characterises the judgments of facial expressions among East Asians remains relatively unexplored. Building upon recent findings showing that East Asians are more likely than Westerners to see a mixture of emotions in facial expressions of anger and disgust, the present research aimed to investigate whether East Asians also display CP for angry and disgusted faces. To address this question, participants from Canada and China were recruited to discriminate pairs of faces along the anger-disgust continuum. The results revealed the presence of CP in both cultural groups, as participants consistently exhibited higher accuracy and faster response latencies when discriminating between-category pairs of expressions compared to within-category pairs. Moreover, the magnitude of CP did not vary significantly across cultures. These findings provide novel evidence supporting the existence of CP for facial expressions in both East Asian and Western cultures, suggesting that CP is a perceptual phenomenon that transcends cultural boundaries. This research contributes to the growing literature on cross-cultural perceptions of facial expressions by deepening our understanding of how facial expressions are perceived categorically across cultures.

3.
Emotion ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38884970

ABSTRACT

When in distress, people often seek help in regulating their emotions by sharing them with others. Paradoxically, although people perceive such social sharing as beneficial, it often fails to promote emotional recovery. This may be explained by people seeking, and eliciting, emotional support, which offers only momentary relief. We hypothesized that (1) the type of support sharers seek shapes corresponding support provided by listeners, (2) the intensity of sharers' emotions increases their desire for emotional support and decreases their desire for cognitive support, and (3) listeners' empathic accuracy promotes support provision that matches sharers' desires. In 8-min interactions, participants (N = 208; data collected in 2016-2017) were randomly assigned to the role of sharer (asked to discuss an upsetting situation) or listener (instructed to respond naturally). Next, participants watched their video-recorded interaction in 20-s fragments. Sharers rated their emotional intensity and support desires, and listeners rated the sharer's emotional intensity and their own support provision. First, we found that the desire for support predicted corresponding support provision. Second, the intensity of sharers' emotions was associated with an increased desire for emotional and cognitive support. Third, the more accurately listeners judged sharers' emotional intensity, the more they fulfilled sharers' emotional (but not cognitive) support desire. These findings suggest that people have partial control over the success of their social sharing in bringing about effective interpersonal emotion regulation. People elicit the support they desire at that moment, explaining why they perceive sharing as beneficial even though it may not engender emotional recovery. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

4.
Cogn Emot ; : 1-19, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-37997898

ABSTRACT

When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
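The two accuracy measures named in this abstract can be computed as follows: d′ from signal detection theory for the yes/no matching task, and Wagner's (1993) unbiased hit rate Hu for the forced-choice data. The counts in the example are invented for illustration only.

```python
# Hedged sketch of the two accuracy measures named in the abstract.
# Example numbers are invented, not taken from the study.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate); > 0 means above-chance matching."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def unbiased_hit_rate(correct, stimulus_total, response_total):
    """Wagner's Hu: correct^2 / (stimuli of that category * uses of that response).

    Corrects raw hit rates for response bias (e.g. overusing one category).
    """
    return correct ** 2 / (stimulus_total * response_total)

print(round(d_prime(0.80, 0.30), 3))  # 1.366: sensitivity above chance
print(unbiased_hit_rate(15, 20, 25))  # 0.45
```

Hu is compared against a chance estimate computed the same way from expected cell frequencies, which is why it supports "better-than-chance" claims for each context separately.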

5.
BMC Psychol ; 10(1): 257, 2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36348466

ABSTRACT

BACKGROUND: Syrian refugees comprise the vast majority of refugees in the Netherlands. Although some research has been carried out on factors promoting refugee resilience, there have been few empirical studies on the resilience of Syrian refugees. METHOD: We used a qualitative method to understand adversity, emotion, and the factors contributing to resilience in Syrian refugees. We interviewed eighteen adult Syrian refugees residing in the Netherlands and used thematic analysis to identify the themes. RESULTS: We identified themes and organized them into three main parts describing the challenges (pre- and post-resettlement), key emotions pertaining to those experiences, and resilience factors. We found six primary internal and external protective factors promoting participants' resilience: future orientation, coping strategies, social support, opportunities, religiosity, and cultural identity. In addition, positive emotions constituted a key feature of refugees' resilience. CONCLUSION: The results highlight the challenges and emotions in each stage of the Syrian refugees' journey and the multitude of factors affecting their resilience. Our findings on religiosity and maintaining cultural identity suggest that resilience can be enhanced on a cultural level, so it is worth taking these aspects into account when designing prevention or intervention programs for Syrian refugees.


Subject(s)
Refugees , Adult , Humans , Refugees/psychology , Syria , Netherlands , Emotions , Adaptation, Psychological
6.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200404, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775822

ABSTRACT

Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers' cultural group identity. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.


Subject(s)
Laughter , Auditory Perception , Bayes Theorem , Emotions , Group Processes , Humans
7.
J Nonverbal Behav ; 45(4): 419-454, 2021.
Article in English | MEDLINE | ID: mdl-34744232

ABSTRACT

The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners' (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10919-021-00375-1.
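The acoustic-classification comparison described above can be sketched as training the same classifier on acoustic features of both expression types and comparing cross-validated accuracy. Everything below is synthetic: the features, the separation values, and the logistic-regression model are illustrative assumptions, not the study's actual feature set or classifier.

```python
# Sketch of comparing acoustic distinctiveness of two expression types
# via cross-validated classification accuracy. Features are synthetic;
# a larger `separation` simulates more distinctive acoustic patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def make_set(separation, n_per_class=40, n_classes=4, n_feats=10):
    """Synthetic acoustic features for n_classes emotions."""
    X, y = [], []
    for c in range(n_classes):
        center = rng.normal(size=n_feats) * separation
        X.append(center + rng.normal(size=(n_per_class, n_feats)))
        y += [c] * n_per_class
    return np.vstack(X), np.array(y)

accs = {}
for name, sep in [("nonverbal vocalizations", 2.0), ("speech prosody", 0.5)]:
    X, y = make_set(sep)
    model = LogisticRegression(max_iter=1000)
    accs[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {accs[name]:.2f}")
```

Higher cross-validated accuracy for one expression type indicates that its emotions occupy more separable regions of acoustic space, which is the logic behind the machine-learning result reported here.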

8.
J Intell ; 9(2)2021 May 07.
Article in English | MEDLINE | ID: mdl-34067013

ABSTRACT

Individual differences in understanding other people's emotions have typically been studied with recognition tests using prototypical emotional expressions. These tests have been criticized for the use of posed, prototypical displays, raising the question of whether such tests tell us anything about the ability to understand spontaneous, non-prototypical emotional expressions. Here, we employ the Emotional Accuracy Test (EAT), which uses natural emotional expressions and defines the recognition as the match between the emotion ratings of a target and a perceiver. In two preregistered studies (Ntotal = 231), we compared the performance on the EAT with two well-established tests of emotion recognition ability: the Geneva Emotion Recognition Test (GERT) and the Reading the Mind in the Eyes Test (RMET). We found significant overlap (r > 0.20) between individuals' performance in recognizing spontaneous emotions in naturalistic settings (EAT) and posed (or enacted) non-verbal measures of emotion recognition (GERT, RMET), even when controlling for individual differences in verbal IQ. On average, however, participants reported enjoying the EAT more than the other tasks. Thus, the current research provides a proof-of-concept validation of the EAT as a useful measure for testing the understanding of others' emotions, a crucial feature of emotional intelligence. Further, our findings indicate that emotion recognition tests using prototypical expressions are valid proxies for measuring the understanding of others' emotions in more realistic everyday contexts.
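The key analysis here, relating two test scores while controlling for verbal IQ, can be sketched as a partial correlation: correlate the residuals of each score after regressing out IQ. The data below are simulated and the variable names are illustrative; the study's actual analysis pipeline is not specified in the abstract.

```python
# Sketch of a partial correlation: the association between two emotion-
# recognition scores after removing the linear effect of verbal IQ.
# All data are simulated for illustration.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from each."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 231
ability = rng.normal(size=n)   # shared emotion-recognition ability
iq = rng.normal(size=n)        # verbal IQ
eat = ability + 0.5 * iq + rng.normal(size=n)   # naturalistic test score
gert = ability + 0.5 * iq + rng.normal(size=n)  # posed-expression test score

r = partial_corr(eat, gert, iq)
print(round(r, 2))  # remains clearly positive: overlap is not just IQ
```

A partial correlation that stays above zero after controlling for IQ is what licenses the claim that the two tests share variance specific to emotion understanding.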

9.
Front Psychol ; 12: 579474, 2021.
Article in English | MEDLINE | ID: mdl-34122207

ABSTRACT

Positive emotions are linked to numerous benefits, but not everyone appreciates the same kinds of positive emotional experiences. We examine how distinct positive emotions are perceived and whether individuals' perceptions are linked to how societies evaluate those emotions. Participants from Hong Kong and the Netherlands rated 23 positive emotions on individual perceptions (positivity, arousal, and social engagement) and societal evaluations (appropriateness, value, and approval). We found that (1) there were cultural differences in judgments about all six aspects of positive emotions; (2) positivity, arousal, and social engagement predicted emotions being positively regarded at the societal level in both cultures; and (3) positivity mattered more for the Dutch participants, whereas arousal and social engagement mattered more in Hong Kong for societal evaluations. These findings provide a granular map of the perception and evaluation of distinct positive emotions in two cultures and highlight the role of culture in understanding how positive emotions are perceived and evaluated.

10.
Cogn Emot ; 35(6): 1175-1186, 2021 09.
Article in English | MEDLINE | ID: mdl-34000966

ABSTRACT

The perception of multisensory emotion cues is affected by culture. For example, East Asians rely more on vocal, as compared to facial, affective cues compared to Westerners. However, it is unknown whether these cultural differences exist in childhood, and if not, which processing style is exhibited in children. The present study tested East Asian and Western children, as well as adults from both cultural backgrounds, to probe cross-cultural similarities and differences at different ages, and to establish the weighting of each modality at different ages. Participants were simultaneously shown a face and a voice expressing either congruent or incongruent emotions, and were asked to judge whether the person was happy or angry. Replicating previous research, East Asian adults relied more on vocal cues than did Western adults. Young children from both cultural groups, however, behaved like Western adults, relying primarily on visual information. The proportion of responses based on vocal cues increased with age in East Asian, but not Western, participants. These results suggest that culture is an important factor in developmental changes in the perception of facial and vocal affective information.


Subject(s)
Facial Expression , Voice , Adult , Anger , Child , Child, Preschool , Emotions , Humans , Perception
11.
Proc Biol Sci ; 287(1929): 20201148, 2020 06 24.
Article in English | MEDLINE | ID: mdl-32546102

ABSTRACT

Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options, and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations of most contexts in addition to inferring arousal and valence. Judgments were more accurate for negative as compared to positive vocalizations. An acoustic analysis demonstrated that listeners relied on brightness, duration, and noisiness cues when making context judgements, and on pitch when inferring core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
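The "accurate at matching" claims in forced-choice designs like this one rest on comparing observed correct responses against the chance rate implied by the number of response options (1/10 here). A one-sided exact binomial test makes that comparison; the counts below are invented for illustration.

```python
# Sketch of a chance-level comparison for a 10-alternative forced-choice
# task: chance = 1/10. Counts are invented, not from the study.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact one-sided test vs. chance."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n_trials, n_correct = 155, 40  # e.g. 40/155 correct context matches
p_value = binom_sf(n_correct, n_trials, 1 / 10)
print(p_value < 0.05)  # True: performance exceeds the 10% chance level
```

With 155 trials, chance predicts about 15.5 correct, so 40 correct is many standard deviations above chance and the exact tail probability is vanishingly small.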


Subject(s)
Auditory Perception , Pan troglodytes , Acoustics , Affect , Animals , Cues (Psychology) , Emotions , Female , Humans , Male , Noise
12.
J Exp Soc Psychol ; 87: 103912, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32127724

ABSTRACT

Empathizing with others is widely presumed to increase our understanding of their emotions. Little is known, however, about which empathic processes actually help people recognize others' feelings more accurately. Here, we probed the relationship between emotion recognition and two empathic processes: spontaneously felt similarity (having had a similar experience) and deliberate perspective taking (focusing on the other vs. oneself). We report four studies in which participants (total N = 803) watched videos of targets sharing genuine negative emotional experiences. Participants' multi-scalar ratings of the targets' emotions were compared with the targets' own emotion ratings. In Study 1 we found that having had an experience similar to what the target was sharing was associated with lower recognition of the target's emotions. Study 2 replicated the same pattern and in addition showed that making participants' own imagined reaction to the described event salient further reduced accuracy. Studies 3 and 4 were preregistered replications and extensions of Studies 1 and 2, in which we observed the same outcome using a different stimulus set, indicating the robustness of the finding. Moreover, Study 4 directly investigated the underlying mechanism of the observed effect. Findings showed that perceivers who had had a negative life experience similar to the emotional event described in the video felt greater personal distress after watching the video, which in part explained their reduced accuracy. These results provide the first demonstration that spontaneous empathy, evoked by similarity in negative experiences, may inhibit rather than increase our understanding of others' emotions.

13.
Psychon Bull Rev ; 27(2): 237-265, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31898261

ABSTRACT

Researchers examining nonverbal communication of emotions are becoming increasingly interested in differentiations between different positive emotional states like interest, relief, and pride. But despite the importance of the voice in communicating emotion in general and positive emotion in particular, there is to date no systematic review of what characterizes vocal expressions of different positive emotions. Furthermore, integration and synthesis of current findings are lacking. In this review, we comprehensively review studies (N = 108) investigating acoustic features relating to specific positive emotions in speech prosody and nonverbal vocalizations. We find that happy voices are generally loud with considerable variability in loudness, have high and variable pitch, and are high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate differ across positive emotions, with patterns mapping onto clusters of emotions, so-called emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savouring emotions (contentment and pleasure), and lower for a prosocial emotion (admiration). Some, but not all, of the differences in acoustic patterns also map on to differences in arousal levels. We end by pointing to limitations in extant work and making concrete proposals for future research on positive emotions in the voice.


Subject(s)
Emotions/physiology , Nonverbal Communication/physiology , Speech/physiology , Voice/physiology , Humans
14.
Emotion ; 20(3): 513-517, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30816745

ABSTRACT

Nonverbal vocalizations of some emotions have been found to be recognizable both within and across cultures. However, East Asians tend to suppress socially disengaging emotions because of interdependent views on self-other relationships. Here we tested the possibility that norms in interdependent cultures around socially disengaging emotions may influence nonverbal vocal communication of emotions. Specifically, we predicted that East Asians' vocalizations of socially disengaging emotions would be less recognizable to Westerners than those of other emotions. To test this hypothesis, we performed a balanced cross-cultural experiment in which 30 Dutch and 30 Japanese listeners categorized and rated Dutch and Japanese vocalizations expressing nine emotions including anger and triumph, two socially disengaging emotions. The only condition for which recognition performance failed to exceed chance level was Dutch listeners' judgments of Japanese anger vocalizations, p = .302. The magnitude of the in-group advantage (i.e., enhanced recognition accuracy when producer and perceiver cultures match) was also largest for Japanese anger vocalizations out of all the 18 conditions investigated, p < .001. The second largest in-group advantage was obtained for Japanese triumph vocalizations, p < .001. In addition, Dutch listeners rated Japanese vocalizations of anger and triumph as less intense, negative/positive, and aroused than did Japanese listeners, ps < .001. Taken together, these findings suggest that East Asian-specific cultural norms of interpersonal relationships are associated with specificity in nonverbal vocal communication of socially disengaging emotions, especially anger, to the point that some signals can only be understood by individuals who are culturally familiar with them. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Emotions/physiology , Nonverbal Communication/psychology , Adult , Culture , Female , Humans , Japan , Male , Recognition (Psychology) , Young Adult
15.
Emotion ; 20(8): 1435-1445, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31478724

ABSTRACT

Are emotional expressions shaped by specialized innate mechanisms that guide learning, or do they develop exclusively from learning without innate preparedness? Here we test whether nonverbal affective vocalisations produced by bilaterally congenitally deaf adults contain emotional information that is recognisable to naive listeners. Because these deaf individuals have had no opportunity for auditory learning, the presence of such an association would imply that mappings between emotions and vocalizations are buffered against the absence of input that is typically important for their development and thus at least partly innate. We recorded nonverbal vocalizations expressing 9 emotions from 8 deaf individuals (435 tokens) and 8 matched hearing individuals (536 tokens). These vocalizations were submitted to an acoustic analysis and used in a recognition study in which naive listeners (n = 812) made forced-choice judgments. Our results show that naive listeners can reliably infer many emotional states from nonverbal vocalizations produced by deaf individuals. In particular, deaf vocalizations of fear, disgust, sadness, amusement, sensual pleasure, surprise, and relief were recognized at better-than-chance levels, whereas anger and achievement/triumph vocalizations were not. Differences were found on most acoustic features of the vocalizations produced by deaf as compared with hearing individuals. Our results suggest that there is an innate component to the associations between human emotions and vocalizations. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology , Emotions/physiology , Adult , Aged , Female , Humans , Male , Middle Aged
16.
Cogn Emot ; 33(8): 1587-1598, 2019 12.
Article in English | MEDLINE | ID: mdl-30810482

ABSTRACT

Crying is a common response to emotional distress that elicits support from the environment. People may regulate another's crying in several ways, such as by providing socio-affective support (e.g. comforting) or cognitive support (e.g. reappraisal), or by trying to emotionally disengage the other by suppression or distraction. We examined whether people adapt their interpersonal emotion regulation strategies to the situational context, by manipulating the regulatory demand of the situation in which someone is crying. Participants watched a video of a crying man and provided support by recording a video message. We hypothesised that when immediate down-regulation was required (i.e. high regulatory demand), participants would provide lower levels of socio-affective and cognitive support, and instead distract the crying person or encourage them to suppress their emotions, compared to when there is no such urgency (i.e. low regulatory demand). As predicted, both self-reported and behavioural responses indicated that high (as compared to low) regulatory demand led to a reduction in socio-affective support provision, and a strong increase in suppression and distraction. Cognitive support provision, however, was unaffected by regulatory demand. When the context required more immediate down-regulation, participants thus employed more regulation strategies aimed at disengaging from the emotional experience. This study provides a first step in showing that people take the context into account when attempting to regulate others' emotions.


Subject(s)
Crying/psychology , Emotional Regulation/physiology , Interpersonal Relations , Social Support , Adult , Female , Humans , Male , Self Report , Young Adult
17.
Emotion ; 19(1): 53-69, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29504800

ABSTRACT

Recent work has challenged the previously widely accepted belief that affective processing does not require awareness and can be carried out with more limited resources than semantic processing. This debate has focused exclusively on visual perception, even though evidence from both human and animal studies suggests that nonconscious affective processing would be physiologically more feasible in the auditory system. Here we contrast affective and semantic processing of nonverbal emotional vocalizations under different levels of awareness in three experiments, using explicit (two-alternative forced-choice masked affective and semantic categorization tasks, Experiments 1 and 2) and implicit (masked affective and semantic priming, Experiment 3) measures. Identical stimuli and design were used in the semantic and affective tasks. Awareness was manipulated by altering the stimulus-mask signal-to-noise ratio during continuous auditory masking, and stimulus awareness was measured on each trial using a four-point perceptual awareness scale. In the explicit tasks, neither affective nor semantic categorization could be performed in the complete absence of awareness, while both tasks could be performed above chance level when stimuli were consciously perceived. Semantic categorization was faster than affective evaluation, and when the stimuli were partially perceived, semantic categorization accuracy exceeded affective evaluation accuracy. In the implicit tasks, neither affective nor semantic priming occurred in the complete absence of awareness, whereas both affective and semantic priming emerged when participants were aware of the primes. We conclude that auditory semantic processing is faster than affective processing, and that both affective and semantic auditory processing are dependent on awareness. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Awareness/physiology , Emotions/physiology , Reaction Time/physiology , Adult , Female , Humans , Male , Young Adult
18.
Cogn Emot ; 33(6): 1129-1143, 2019 09.
Article in English | MEDLINE | ID: mdl-30345872

ABSTRACT

When in emotional distress, people often turn to others for support. Paradoxically, even when people perceive social support to be beneficial, it often does not result in emotional recovery. This paradox may be explained by the fact that the sharing process disproportionately centres on support that is not helpful in the long run. A distinction has been made between two types of support that are differentially effective: Whereas socio-affective support alleviates momentary emotional distress, cognitive support fosters long-term recovery. But can listeners tell what support the sharer needs? The present study examines the hypothesis that sharers communicate their support goals by sharing in such a way that it allows listeners to infer the sharer's needs. In Experiment 1, we manipulated participants' support goals, and showed that socio-affective support goals led participants to express more emotions, whereas cognitive support goals resulted in greater use of appraisals. In Experiments 2 and 3, we tested whether these differential expressions would affect the support goals that listeners inferred. We found no evidence for such an effect: Listeners consistently perceived the sharer to predominantly want socio-affective support. These findings help explain why many social sharing instances revolve around socio-affective support, leading to subjectively experienced benefits, but not to genuine recovery.


Subject(s)
Affect/physiology , Auditory Perception/physiology , Psychological Distress , Social Support , Adolescent , Adult , Female , Humans , Male , Middle Aged , Young Adult
19.
J Cross Cult Psychol ; 49(1): 130-148, 2018 01.
Article in English | MEDLINE | ID: mdl-29386689

ABSTRACT

Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression.

20.
Cogn Emot ; 32(8): 1597-1610, 2018 12.
Article in English | MEDLINE | ID: mdl-29388471

ABSTRACT

Dynamic changes in emotional expressions are a valuable source of information in social interactions. As the expressive behaviour of a person changes, the inferences drawn from the behaviour may also change. Here, we test the possibility that dynamic changes in emotional expressions affect person perception in terms of stable trait attributions. Across three experiments, we examined perceivers' inferences about others' personality traits from changing emotional expressions. Expressions changed from one emotion ("start emotion") to another emotion ("end emotion"), allowing us to disentangle potential primacy, recency, and averaging effects. Drawing on three influential models of person perception, we examined perceptions of dominance and affiliation (Experiment 1a), competence and warmth (Experiment 1b), and dominance and trustworthiness (Experiment 2). A strong recency effect was consistently found across all trait judgments, that is, the end emotion of dynamic expressions had a strong impact on trait ratings. Evidence for a primacy effect was also observed (i.e. the information of start emotions was integrated), but less pronounced, and only for trait ratings relating to affiliation, warmth, and trustworthiness. Taken together, these findings suggest that, when making trait judgements about others, observers weigh the most recently displayed emotion in dynamic expressions more heavily than the preceding emotion.


Subject(s)
Emotions , Facial Expression , Interpersonal Relations , Social Perception , Adult , Cluster Analysis , Female , Humans , Judgment , Male , Netherlands , Personality , Students/psychology , Young Adult