Results 1 - 20 of 62
1.
Proc Natl Acad Sci U S A ; 120(37): e2218593120, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37676911

ABSTRACT

Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.


Subject(s)
Cultural Evolution, Music, Humans, Language, Linguistics, Acoustics
2.
Proc Biol Sci ; 291(2027): 20240958, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39013420

ABSTRACT

Darwin proposed that blushing, the reddening of the face owing to heightened self-awareness, is 'the most human of all expressions'. Yet relatively little is known about the underlying mechanisms of blushing. Theories diverge on whether it is a rapid, spontaneous emotional response that does not involve reflection upon the self or whether it results from higher-order socio-cognitive processes. Investigating the neural substrates of blushing can shed light on the mental processes underlying blushing and the mechanisms involved in self-awareness. To reveal neural activity associated with blushing, 16- to 20-year-old participants (n = 40) watched pre-recorded videos of themselves (versus other people, as a control condition) singing karaoke in a magnetic resonance imaging scanner. We measured participants' cheek temperature increase, an indicator of blushing, and their brain activity. The results showed that blushing was higher when participants watched themselves sing than when they watched others. Those who blushed more while watching themselves sing had, on average, higher activation in the cerebellum (lobule V) and the left paracentral lobe, and exhibited more time-locked processing of the videos in early visual cortices. These findings show that blushing is associated with the activation of brain areas involved in emotional arousal, suggesting that it may occur independently of higher-order socio-cognitive processes. Our results provide new avenues for future research on self-awareness in infants and non-human animals.


Subject(s)
Cheek, Emotions, Magnetic Resonance Imaging, Humans, Male, Young Adult, Adolescent, Female, Cheek/physiology, Brain/physiology, Singing
3.
Cogn Emot ; : 1-17, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973174

ABSTRACT

Previous research has demonstrated that individuals from Western cultures exhibit categorical perception (CP) in their judgments of emotional faces. However, the extent to which this phenomenon characterises the judgments of facial expressions among East Asians remains relatively unexplored. Building upon recent findings showing that East Asians are more likely than Westerners to see a mixture of emotions in facial expressions of anger and disgust, the present research aimed to investigate whether East Asians also display CP for angry and disgusted faces. To address this question, participants from Canada and China were recruited to discriminate pairs of faces along the anger-disgust continuum. The results revealed the presence of CP in both cultural groups, as participants consistently exhibited higher accuracy and faster response latencies when discriminating between-category pairs of expressions compared to within-category pairs. Moreover, the magnitude of CP did not vary significantly across cultures. These findings provide novel evidence supporting the existence of CP for facial expressions in both East Asian and Western cultures, suggesting that CP is a perceptual phenomenon that transcends cultural boundaries. This research contributes to the growing literature on cross-cultural perceptions of facial expressions by deepening our understanding of how facial expressions are perceived categorically across cultures.

4.
Proc Natl Acad Sci U S A ; 117(4): 1924-1934, 2020 01 28.
Article in English | MEDLINE | ID: mdl-31907316

ABSTRACT

What is the nature of the feelings evoked by music? We investigated how people represent the subjective experiences associated with Western and Chinese music and the form in which these representational processes are preserved across different cultural groups. US (n = 1,591) and Chinese (n = 1,258) participants listened to 2,168 music samples and reported on the specific feelings (e.g., "angry," "dreamy") or broad affective features (e.g., valence, arousal) that the samples made them feel. Using large-scale statistical tools, we uncovered 13 distinct types of subjective experience associated with music in both cultures. Specific feelings such as "triumphant" were better preserved across the 2 cultures than levels of valence and arousal, contrasting with theoretical claims that valence and arousal are building blocks of subjective experience. This held true even for music selected on the basis of its valence and arousal levels and for traditional Chinese music. Furthermore, the feelings associated with music were found to occupy continuous gradients, contradicting discrete emotion theories. Our findings, visualized within an interactive map (https://www.ocf.berkeley.edu/~acowen/music.html), reveal a complex, high-dimensional space of subjective experience associated with music in multiple cultures. These findings can inform inquiries ranging from the etiology of affective disorders to the neurological basis of emotion.
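
For readers curious what a cross-cultural preservation analysis of this kind might look like in practice, the sketch below correlates, for each rated dimension, the mean rating each music sample received from US and Chinese listeners. It is an illustrative reconstruction, not the authors' code; the file name and the column names (sample_id, culture, dimension, rating) are assumptions.

```python
# Hedged sketch: estimating how well each rated dimension is "preserved"
# across cultures by correlating, per dimension, the mean rating each music
# sample received from US vs. Chinese listeners. Data layout is hypothetical.
import pandas as pd
from scipy.stats import pearsonr

# Assumed long-format table: sample_id, culture ("US"/"China"),
# dimension (e.g., "triumphant", "valence"), rating (numeric).
ratings = pd.read_csv("ratings.csv")

# Mean rating of each sample on each dimension, separately per culture.
means = (ratings
         .groupby(["dimension", "culture", "sample_id"])["rating"]
         .mean()
         .unstack("culture"))  # rows: (dimension, sample_id); columns: cultures

preservation = {}
for dim, sub in means.groupby(level="dimension"):
    sub = sub.dropna()
    r, _ = pearsonr(sub["US"], sub["China"])
    preservation[dim] = r  # higher r = better preserved across cultures

for dim, r in sorted(preservation.items(), key=lambda kv: -kv[1]):
    print(f"{dim:>12s}: r = {r:.2f}")
```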


Subject(s)
Affect/physiology, Arousal/physiology, Cultural Evolution, Emotions/physiology, Theoretical Models, Music/psychology, Auditory Perception, China, Cross-Cultural Comparison, Humans, United States
5.
Cogn Emot ; : 1-19, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-37997898

ABSTRACT

When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
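
The two accuracy measures named in this abstract, signal-detection sensitivity for the yes/no matching task and Wagner's unbiased hit rate for forced-choice categorisation, can be computed as in the minimal sketch below. The counts are invented for illustration and the snippet is not the authors' analysis code.

```python
# Hedged sketch of the two accuracy measures mentioned in the abstract.
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity with a log-linear correction for 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def unbiased_hit_rates(confusion):
    """Wagner's (1993) Hu per category: hits^2 / (row total * column total)."""
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    return hits ** 2 / (confusion.sum(axis=1) * confusion.sum(axis=0))

# Example: one context, aggregated yes/no responses (counts are illustrative).
print(d_prime(hits=70, misses=30, false_alarms=20, correct_rejections=80))

# Example: 3 x 3 confusion matrix (rows = true context, columns = chosen context).
conf = [[40, 10, 10],
        [ 8, 45,  7],
        [12,  9, 39]]
print(unbiased_hit_rates(conf))
```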

6.
Cogn Emot ; 36(3): 388-401, 2022 05.
Article in English | MEDLINE | ID: mdl-35639090

ABSTRACT

Social Functionalist Theory (SFT) emerged 20 years ago to orient emotion science to the social nature of emotion. Here we expand upon SFT and make the case for how emotions, relationships, and culture constitute one another. First, we posit that emotions enable the individual to meet six "relational needs" within social interactions: security, commitment, status, trust, fairness, and belongingness. Building upon this new theorising, we detail four principles concerning emotional experience, cognition, expression, and the cultural archiving of emotion. We conclude by considering the bidirectional influences between culture, relationships, and emotion, outlining areas of future inquiry.


Subject(s)
Cognition, Emotions, Humans
7.
Psychol Sci ; 32(12): 2035-2041, 2021 12.
Article in English | MEDLINE | ID: mdl-34788164

ABSTRACT

Older age is characterized by more positive and less negative emotional experience. Recent work by Carstensen et al. (2020) demonstrated that the age advantages in emotional experience have persisted during the COVID-19 pandemic. In two studies, we replicated and extended this work. In Study 1, we conducted a large-scale test of the robustness of Carstensen and colleagues' findings using data from 23,350 participants in 63 countries. Our results confirm that age advantages in emotions have persisted during the COVID-19 pandemic. In Study 2, we directly compared the age advantages before and during the COVID-19 pandemic in a within-participants study (N = 4,370). We found that the age advantages in emotions decreased during the pandemic. These findings are consistent with theoretical proposals that the age advantages reflect older adults' ability to avoid situations that are likely to cause negative emotions, which is challenging under conditions of sustained unavoidable stress.


Subject(s)
COVID-19, Pandemics, Aged, Aging, Emotions, Humans, SARS-CoV-2
8.
Biol Lett ; 17(9): 20210319, 2021 09.
Article in English | MEDLINE | ID: mdl-34464539

ABSTRACT

Human adult laughter is characterized by vocal bursts produced predominantly during exhalation, yet apes laugh while exhaling and inhaling. The current study investigated our hypothesis that laughter of human infants changes from laughter similar to that of apes to increasingly resemble that of human adults over early development. We further hypothesized that the more laughter is produced on the exhale, the more positively it is perceived. To test these predictions, novice (n = 102) and expert (phonetician, n = 15) listeners judged the extent to which human infant laughter (n = 44) was produced during inhalation or exhalation, and the extent to which they found the laughs pleasant and contagious. Support was found for both hypotheses, which were further confirmed in two pre-registered replication studies. Likely through social learning and the anatomical development of the vocal production system, infants' initial ape-like laughter transforms into laughter similar to that of adult humans over the course of ontogeny.


Subject(s)
Hominidae, Laughter, Voice, Adult, Animals, Emotions, Humans, Infant
9.
Cogn Emot ; 35(6): 1175-1186, 2021 09.
Article in English | MEDLINE | ID: mdl-34000966

ABSTRACT

The perception of multisensory emotion cues is affected by culture. For example, East Asians rely more on vocal, as compared to facial, affective cues compared to Westerners. However, it is unknown whether these cultural differences exist in childhood, and if not, which processing style is exhibited in children. The present study tested East Asian and Western children, as well as adults from both cultural backgrounds, to probe cross-cultural similarities and differences at different ages, and to establish the weighting of each modality at different ages. Participants were simultaneously shown a face and a voice expressing either congruent or incongruent emotions, and were asked to judge whether the person was happy or angry. Replicating previous research, East Asian adults relied more on vocal cues than did Western adults. Young children from both cultural groups, however, behaved like Western adults, relying primarily on visual information. The proportion of responses based on vocal cues increased with age in East Asian, but not Western, participants. These results suggest that culture is an important factor in developmental changes in the perception of facial and vocal affective information.


Subject(s)
Facial Expression, Voice, Adult, Anger, Child, Preschool Child, Emotions, Humans, Perception
10.
Proc Biol Sci ; 287(1929): 20201148, 2020 06 24.
Article in English | MEDLINE | ID: mdl-32546102

ABSTRACT

Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options, and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations of most contexts in addition to inferring arousal and valence. Judgments were more accurate for negative as compared to positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness and duration cues, relied on noisiness when making context judgements, and used pitch to infer core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
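
The acoustic cues named here (brightness, duration, noisiness, pitch) map onto standard audio descriptors. The sketch below shows one plausible way to extract them with librosa; the authors' exact feature definitions may differ, and the file name is a placeholder.

```python
# Hedged sketch of how the acoustic cues named in the abstract could be
# quantified for a single vocalization recording. Illustrative only.
import numpy as np
import librosa

y, sr = librosa.load("call.wav", sr=None)  # placeholder file name

duration = librosa.get_duration(y=y, sr=sr)                        # duration cue
brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()  # "brightness"
noisiness = librosa.feature.spectral_flatness(y=y).mean()          # noisiness proxy
f0 = librosa.yin(y, fmin=60, fmax=1500, sr=sr)                     # pitch track (Hz)
pitch = np.nanmedian(f0)

print(f"duration={duration:.2f}s  centroid={brightness:.0f}Hz  "
      f"flatness={noisiness:.3f}  median_f0={pitch:.0f}Hz")
```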


Subject(s)
Auditory Perception, Pan troglodytes, Acoustics, Affect, Animals, Cues (Psychology), Emotions, Female, Humans, Male, Noise
11.
Cogn Emot ; 34(6): 1112-1122, 2020 09.
Article in English | MEDLINE | ID: mdl-32046586

ABSTRACT

Theories on empathy have argued that feeling empathy for others is related to accurate recognition of their emotions. Previous research that tested this assumption, however, has reported inconsistent findings. We suggest that this inconsistency may be due to a lack of consideration of the fact that empathy has two facets: empathic concern, namely the compassion for unfortunate others, and personal distress, the experience of discomfort in response to others' distress. We test the hypothesis that empathic concern is positively related to emotion recognition, whereas personal distress is negatively related to emotion recognition. Individual tendencies to respond with concern or distress were measured with the standard IRI (Interpersonal Reactivity Index) self-report questionnaire. Emotion recognition performance was assessed with three standard tests of nonverbal emotion recognition. Across two studies (total N = 431) and different emotion recognition tests, we found that these two facets of affective empathy have opposite relations to recognition of facial expressions of emotions: empathic concern was positively related, while personal distress was negatively related, to accurate emotion recognition. These findings fit with existing motivational models of empathy, suggesting that empathic concern and personal distress have opposing impacts on the likelihood that empathy makes one a better emotion observer.
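
As a minimal illustration of the reported pattern of associations, the sketch below correlates each IRI facet with recognition accuracy across participants. It assumes a hypothetical per-participant table and is not the authors' analysis code.

```python
# Hedged sketch: per-participant correlations between IRI facets and accuracy.
# Column names and file name are assumptions for illustration.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("participants.csv")  # empathic_concern, personal_distress, accuracy
for facet in ["empathic_concern", "personal_distress"]:
    r, p = pearsonr(df[facet], df["accuracy"])
    print(f"{facet}: r = {r:.2f}, p = {p:.3f}")  # reported pattern: opposite signs
```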


Subject(s)
Emotions, Empathy, Recognition (Psychology), Adult, Female, Humans, Male, Self Report, Surveys and Questionnaires, Young Adult
12.
Hum Brain Mapp ; 40(12): 3561-3574, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31062899

ABSTRACT

In the present fMRI study, we aimed to obtain insight into the key brain networks involved in the experience of awe, a complex emotion that is typically elicited by perceptually vast stimuli. Participants were presented with awe-eliciting, positive, and neutral videos, while they were instructed to get fully absorbed in the scenery or to count the number of perspective changes. Using a whole-brain analysis, we found that several brain regions considered part of the default mode network (DMN), including the frontal pole, the angular gyrus, and the posterior cingulate cortex, were more strongly activated in the absorption condition, but less so when participants were watching awe videos. We suggest that participants were deeply immersed in the awe videos and that levels of self-reflective thought were reduced as much during the awe videos as during the perspective-counting condition. In contrast, key regions of the fronto-parietal network (FPN), including the supramarginal gyrus, the medial frontal gyrus, and the insula, were most strongly activated in the analytical condition when participants were watching awe videos compared to positive and neutral videos. This finding underlines the captivating, immersive, and attention-grabbing nature of awe stimuli, which is considered to be responsible for reductions in self-reflective thought. Together, these findings suggest that a key feature of the experience of awe is reduced engagement in self-referential processing, in line with the subjective self-report measures (i.e., participants perceived their self to be smaller).


Subject(s)
Brain/diagnostic imaging, Brain/physiology, Emotions/physiology, Nerve Net/diagnostic imaging, Nerve Net/physiology, Photic Stimulation/methods, Adolescent, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Video Recording/methods, Young Adult
13.
Cogn Emot ; 33(7): 1461-1471, 2019 11.
Article in English | MEDLINE | ID: mdl-30734635

ABSTRACT

Previous research has found that individuals vary greatly in emotion differentiation, that is, the extent to which they distinguish between different emotions when reporting on their own feelings. Building on previous work that has shown that emotion differentiation is associated with individual differences in intrapersonal functions, the current study asks whether emotion differentiation is also related to interpersonal skills. Specifically, we examined whether individuals who are high in emotion differentiation would be more accurate in recognising others' emotional expressions. We report two studies in which we used an established paradigm tapping negative emotion differentiation and several emotion recognition tasks. In Study 1 (N = 363), we found that individuals high in emotion differentiation were more accurate in recognising others' emotional facial expressions. Study 2 (N = 217), replicated this finding using emotion recognition tasks with varying amounts of emotional information. These findings suggest that the knowledge we use to understand our own emotional experience also helps us understand the emotions of others.


Subject(s)
Emotions/physiology, Empathy/physiology, Facial Expression, Recognition (Psychology)/physiology, Adult, Female, Humans, Interpersonal Relations, Male, Social Skills, Young Adult
14.
Cogn Emot ; 33(3): 391-403, 2019 05.
Article in English | MEDLINE | ID: mdl-29607731

ABSTRACT

Adults perceive emotional expressions categorically, with discrimination being faster and more accurate between expressions from different emotion categories (i.e. blends with two different predominant emotions) than between two stimuli from the same category (i.e. blends with the same predominant emotion). The current study sought to test whether facial expressions of happiness and fear are perceived categorically by pre-verbal infants, using a new stimulus set that was shown to yield categorical perception in adult observers (Experiments 1 and 2). These stimuli were then used with 7-month-old infants (N = 34) using a habituation and visual preference paradigm (Experiment 3). Infants were first habituated to an expression of one emotion, then presented with the same expression paired with a novel expression either from the same emotion category or from a different emotion category. After habituation to fear, infants displayed a novelty preference for pairs of between-category expressions, but not within-category ones, showing categorical perception. However, infants showed no novelty preference when they were habituated to happiness. Our findings provide evidence for categorical perception of emotional expressions in pre-verbal infants, while the asymmetrical effect challenges the notion of a bias towards negative information in this age group.
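
A conventional way to quantify the novelty preference described here is the proportion of test-phase looking time directed at the novel expression, tested against chance (0.5). The sketch below illustrates this with made-up looking times; the authors' exact scoring may differ.

```python
# Hedged sketch of a standard novelty-preference score for a habituation /
# visual-preference design. Looking times (seconds) are invented.
import numpy as np
from scipy.stats import ttest_1samp

# Per-infant looking times at test: [to the novel expression, to the familiar one]
looking = np.array([
    [7.2, 4.1],
    [5.8, 6.0],
    [9.1, 3.3],
    [6.4, 5.9],
])

novelty_pref = looking[:, 0] / looking.sum(axis=1)  # proportion looking at novel
t, p = ttest_1samp(novelty_pref, 0.5)               # chance level = no preference
print(novelty_pref.round(2), f"t = {t:.2f}, p = {p:.3f}")
```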


Subject(s)
Discrimination (Psychology), Facial Expression, Visual Perception, Adult, Fear, Female, Happiness, Humans, Infant, Male, Photic Stimulation, Verbal Behavior, Young Adult
15.
Cogn Emot ; 33(8): 1587-1598, 2019 12.
Article in English | MEDLINE | ID: mdl-30810482

ABSTRACT

Crying is a common response to emotional distress that elicits support from the environment. People may regulate another's crying in several ways, such as by providing socio-affective support (e.g. comforting) or cognitive support (e.g. reappraisal), or by trying to emotionally disengage the other by suppression or distraction. We examined whether people adapt their interpersonal emotion regulation strategies to the situational context, by manipulating the regulatory demand of the situation in which someone is crying. Participants watched a video of a crying man and provided support by recording a video message. We hypothesised that when immediate down-regulation was required (i.e. high regulatory demand), participants would provide lower levels of socio-affective and cognitive support, and instead distract the crying person or encourage them to suppress their emotions, compared to when there is no such urgency (i.e. low regulatory demand). As predicted, both self-reported and behavioural responses indicated that high (as compared to low) regulatory demand led to a reduction in socio-affective support provision, and a strong increase in suppression and distraction. Cognitive support provision, however, was unaffected by regulatory demand. When the context required more immediate down-regulation, participants thus employed more regulation strategies aimed at disengaging from the emotional experience. This study provides a first step in showing that people take the context into account when attempting to regulate others' emotions.


Subject(s)
Crying/psychology, Emotional Regulation/physiology, Interpersonal Relations, Social Support, Adult, Female, Humans, Male, Self Report, Young Adult
16.
Cogn Emot ; 33(6): 1129-1143, 2019 09.
Article in English | MEDLINE | ID: mdl-30345872

ABSTRACT

When in emotional distress, people often turn to others for support. Paradoxically, even when people perceive social support to be beneficial, it often does not result in emotional recovery. This paradox may be explained by the fact that the sharing process disproportionately centres on support that is not helpful in the long run. A distinction has been made between two types of support that are differentially effective: Whereas socio-affective support alleviates momentary emotional distress, cognitive support fosters long-term recovery. But can listeners tell what support the sharer needs? The present study examines the hypothesis that sharers communicate their support goals by sharing in such a way that it allows listeners to infer the sharer's needs. In Experiment 1, we manipulated participants' support goals, and showed that socio-affective support goals led participants to express more emotions, whereas cognitive support goals resulted in greater use of appraisals. In Experiments 2 and 3, we tested whether these differential expressions would affect the support goals that listeners inferred. We found no evidence for such an effect: Listeners consistently perceived the sharer to predominantly want socio-affective support. These findings help explain why many social sharing instances revolve around socio-affective support, leading to subjectively experienced benefits, but not to genuine recovery.


Subject(s)
Affect/physiology, Auditory Perception/physiology, Psychological Distress, Social Support, Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult
17.
Cogn Emot ; 32(3): 504-515, 2018 05.
Article in English | MEDLINE | ID: mdl-28447544

ABSTRACT

Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to that of posed stimulus sets.


Subject(s)
Emotions, Recognition (Psychology), Auditory Perception, Cues (Psychology), Facial Expression, Female, Humans, Judgment, Male, Visual Perception, Young Adult
18.
Cogn Emot ; 32(8): 1597-1610, 2018 12.
Article in English | MEDLINE | ID: mdl-29388471

ABSTRACT

Dynamic changes in emotional expressions are a valuable source of information in social interactions. As the expressive behaviour of a person changes, the inferences drawn from the behaviour may also change. Here, we test the possibility that dynamic changes in emotional expressions affect person perception in terms of stable trait attributions. Across three experiments, we examined perceivers' inferences about others' personality traits from changing emotional expressions. Expressions changed from one emotion ("start emotion") to another emotion ("end emotion"), allowing us to disentangle potential primacy, recency, and averaging effects. Drawing on three influential models of person perception, we examined perceptions of dominance and affiliation (Experiment 1a), competence and warmth (Experiment 1b), and dominance and trustworthiness (Experiment 2). A strong recency effect was consistently found across all trait judgments, that is, the end emotion of dynamic expressions had a strong impact on trait ratings. Evidence for a primacy effect was also observed (i.e. the information of start emotions was integrated), but less pronounced, and only for trait ratings relating to affiliation, warmth, and trustworthiness. Taken together, these findings suggest that, when making trait judgements about others, observers weigh the most recently displayed emotion in dynamic expressions more heavily than the preceding emotion.


Subject(s)
Emotions, Facial Expression, Interpersonal Relations, Social Perception, Adult, Cluster Analysis, Female, Humans, Judgment, Male, Netherlands, Personality, Students/psychology, Young Adult
19.
Cogn Emot ; 32(6): 1247-1264, 2018 09.
Article in English | MEDLINE | ID: mdl-29119854

ABSTRACT

When in emotional distress, people often turn to others for social support. A general distinction has been made between two types of support that are differentially effective: Whereas socio-affective support temporarily alleviates emotional distress, cognitive support may contribute to better long-term recovery. In the current studies, we examine what type of support individuals seek. We first confirmed in a pilot study that these two types of support can be reliably distinguished. Then, in Study 1, we experimentally tested participants' support evaluations in response to different emotional situations using a vignette methodology. Findings showed that individuals perceived any type of reaction that included socio-affective support as preferable. The evaluation of cognitive support, however, was dependent on the specific emotion: Unlike worry and regret, anger and sadness were characterised by a strong dislike for purely cognitive support. Using different materials, Study 2 replicated these findings. Taken together, the findings suggest that individuals evaluate different types of support in a way that is unlikely to benefit emotional recovery in the long run.


Subject(s)
Cognition, Emotions, Social Support, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Young Adult
20.
J Cross Cult Psychol ; 49(1): 130-148, 2018 01.
Article in English | MEDLINE | ID: mdl-29386689

ABSTRACT

Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression.
