ABSTRACT
Emotional facial expressions have a communicative function. Besides information about the internal states (emotions) and the intentions (action tendencies) of the expresser, they also communicate what the expresser wants the observer to do (appeals). Yet there is very little research on the association of appeals with specific emotions. The present study examined the mental association between appeals and expressions through reverse correlation: we estimated observer-specific internal representations of the expressions associated with four different appeals, and a second group of participants rated the resulting expressions. As predicted, the appeal to celebrate was uniquely associated with a happy expression and the appeal to empathize with a sad expression. A pleading appeal to stop was more strongly associated with sadness than with anger, whereas a command to stop was comparatively more strongly associated with anger. The results show that observers internally represent appeals as specific emotional expressions.
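The reverse-correlation step itself is simple to state computationally. Below is a minimal sketch of the classification-image computation, assuming a two-image forced-choice task in which random noise fields are overlaid on a base face; the function name, array shapes, and data are hypothetical illustrations, not details taken from the study.

```python
import numpy as np

def classification_image(noise_patterns, chosen):
    """Estimate an internal representation from 2AFC reverse-correlation trials.

    noise_patterns : array of shape (n_trials, 2, height, width), the random
        noise fields overlaid on the base face on each trial.
    chosen : array of shape (n_trials,), index (0 or 1) of the image the
        participant selected as better matching the appeal.
    """
    n_trials = noise_patterns.shape[0]
    picked = noise_patterns[np.arange(n_trials), chosen]        # selected noise
    rejected = noise_patterns[np.arange(n_trials), 1 - chosen]  # unselected noise
    # The classification image is the mean selected noise minus the mean
    # rejected noise; added to the base face, it visualizes the observer's
    # internal template for that appeal.
    return picked.mean(axis=0) - rejected.mean(axis=0)

# Hypothetical usage: 500 trials with two noise fields over a 256x256 base face
rng = np.random.default_rng(0)
noise = rng.standard_normal((500, 2, 256, 256))
choices = rng.integers(0, 2, size=500)
ci = classification_image(noise, choices)
```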
ABSTRACT
Humans can produce thousands of different facial expressions. This variability rests on the ability to modulate emotional expressions voluntarily or involuntarily, which in turn depends on two anatomically separate pathways: the voluntary pathway (VP) and the involuntary pathway (IP) mediate the production of posed and spontaneous facial expressions, respectively, and may affect the left and right sides of the face differently. This aspect is neglected in the emotion literature, where posed rather than genuine expressions are often used as stimuli. Two experiments with different induction methods were designed to investigate how spontaneous and posed facial expressions of happiness unfold on the left and right sides of the face, recorded with a high-definition 3-D optoelectronic system. In both experiments, spontaneous expressions were distinguished from posed facial movements by reliable spatial and speed kinematic patterns. Moreover, VP activation produced a lateralization effect: compared with the felt smile, the posed smile involved an initial acceleration of the left corner of the mouth, while an early deceleration of the right corner occurred in the second phase of the movement, after the velocity peak.
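The kinematic measures described here reduce to differentiating marker trajectories. A minimal sketch of how velocity peaks and accelerations could be derived from such recordings, assuming 3-D marker positions sampled at a fixed rate; the sampling rate, names, and data are illustrative, not the study's:

```python
import numpy as np

def mouth_corner_kinematics(traj, fs):
    """Tangential velocity and acceleration of one mouth-corner marker.

    traj : array of shape (n_samples, 3), 3-D marker positions in mm.
    fs   : sampling rate of the optoelectronic system in Hz.
    """
    vel = np.gradient(traj, 1.0 / fs, axis=0)   # per-axis velocity in mm/s
    speed = np.linalg.norm(vel, axis=1)         # tangential velocity
    accel = np.gradient(speed, 1.0 / fs)        # signed acceleration
    peak = int(np.argmax(speed))                # sample index of velocity peak
    return speed, accel, peak

# Hypothetical usage: 2 s of a left mouth-corner trajectory at 120 Hz
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.05, (240, 3)), axis=0)
speed, accel, peak = mouth_corner_kinematics(traj, fs=120)
```

Comparing the left- and right-corner `accel` traces before and after each side's `peak` is one way a lateralized pattern like the one reported (early left acceleration, post-peak right deceleration) would show up.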
ABSTRACT
Compared to other animals, humans supposedly excel at voluntarily controlling and strategically displaying emotional signals. Yet new data show that nonhuman great apes' emotion expressions may also be subject to voluntary control. A key context in which to explore this further is post-conflict (PC) periods, where signalling by distressed victims may influence bystander responses, including the offering of consolation. To address this, our study investigates the signalling behaviour of sanctuary-living bonobo victims following aggression and its relation to audience composition and PC interactions. Results show that the production of paedomorphic signals by victims (regardless of age) increased their chances of receiving consolation. In adults, the production of such signals additionally reduced the risk of renewed aggression from opponents. Signal production also increased with audience size, yet strategies differed by age: while immatures reduced signalling in the proximity of close social partners, adults did so especially after receiving consolation. These results suggest that bonobos can flexibly adjust their emotion signalling to influence the outcome of PC events, and that this tendency has a developmental trajectory. Overall, these findings highlight the potential role that flexible emotion communication played in the sociality of our last common ancestor with Pan. This article is part of the theme issue 'Cognition, communication and social bonds in primates'.
Subjects
Empathy, Pan paniscus, Aggression/psychology, Animals, Emotions, Humans, Pan paniscus/psychology, Social Behavior
ABSTRACT
Decades of research show that contextual information from the body, visual scene, and voice can facilitate judgments of facial expressions of emotion. To date, most research suggests that bodily expressions of emotion offer context for interpreting facial expressions, but not vice versa. The present research investigated the conditions under which mutual processing of facial and bodily displays of emotion facilitates and/or interferes with emotion recognition. In two studies, we examined whether body and face emotion recognition are enhanced through the integration of shared emotion cues and/or hindered by mixed signals (i.e., interference). We tested whether faces and bodies facilitate or interfere with emotion processing by pairing briefly presented (33 ms), backward-masked faces with supraliminally presented bodies (Experiment 1) and vice versa (Experiment 2). Both studies revealed strong support for integration effects, but not interference. Integration effects were most pronounced for facial and bodily expressions of low emotional clarity, suggesting that when more information is needed in one channel, the other channel is recruited to disentangle the ambiguity. That this occurs even for briefly presented, backward-masked stimuli reveals low-level visual integration of shared emotional signal value.
Subjects
Emotions, Facial Recognition, Cues, Facial Expression, Humans, Photic Stimulation
ABSTRACT
In computer-mediated communication, emoticons are conventionally rendered in yellow. Previous studies demonstrated that colors evoke certain affective meanings and that face color modulates perceived emotion. We investigated whether color variation affects the recognition of emoticon expressions. Japanese participants were presented with emoticons depicting four basic emotions (Happy, Sad, Angry, Surprised) and a Neutral expression, each rendered in eight colors. Four conditions (E1-E4) were employed in a lab-based experiment; E5, with an additional participant sample, was an online replication of the critical E4. In E1, colored emoticons were categorized in a five-alternative forced-choice (5AFC) task. In E2-E5, the affective meaning of the stimuli was assessed using visual scales with anchors corresponding to each emotion; the conditions varied in their stimulus arrays (E2: light gray emoticons; E3: colored circles; E4 and E5: colored emoticons). The affective meaning of Angry and Sad emoticons was stronger when they were rendered in warm and cool colors, respectively, a pattern highly consistent between E4 and E5. Regressing the affective meaning of colored emoticons onto that of their achromatic counterparts and of decontextualized colors provides evidence that affective congruency between the emoticon expression and the color it is rendered in facilitates recognition of the depicted emotion, augmenting the conveyed emotional message.
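The final analysis corresponds to an ordinary least-squares model predicting ratings of colored emoticons (E4) from ratings of the achromatic emoticon (E2) and of the bare color (E3). A minimal sketch with simulated ratings; all numbers, sample sizes, and scales are hypothetical:

```python
import numpy as np

# Hypothetical per-stimulus ratings on one affective scale (0-100):
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 100, 40)                     # achromatic emoticon (E2)
x2 = rng.uniform(0, 100, 40)                     # decontextualized color (E3)
y = 0.6 * x1 + 0.3 * x2 + rng.normal(0, 5, 40)   # colored emoticon (E4)

# Design matrix: intercept plus the two predictors
X = np.column_stack([np.ones_like(x1), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept={beta[0]:.2f}, "
      f"expression weight={beta[1]:.2f}, color weight={beta[2]:.2f}")
```

Under this kind of model, a reliable positive weight on both predictors is what would indicate that expression shape and color each contribute to the perceived affective meaning.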
ABSTRACT
Emotion recognition plays an important role in children's socio-emotional development. Research on children's emotion recognition has relied heavily on stimulus sets of photographs of adults posing stereotyped facial configurations. The Child Affective Facial Expression set (CAFE) is a relatively new stimulus set that provides researchers with photographs of a diverse group of children's facial configurations in seven emotional categories: angry, sad, happy, fearful, disgusted, surprised, and neutral. However, the large size of the full CAFE set makes it less than ideal for research with children. Here, we introduce two subsets of CAFE, each containing 140 photographs of children's facial configurations, diverse in the race and ethnicity of the models and designed to produce response variability in naïve observers. The subsets have been validated with 1,000 adult participants.
ABSTRACT
The human body is an important source of information for inferring a person's emotional state. Research with adult observers indicates that the posture of the torso, arms, and hands provides important perceptual cues for recognising angry, fearful, and happy expressions. Much less is known about whether infants process body regions differently for different body expressions. To address this issue, we used eye tracking to investigate whether infants' visual exploration patterns differed when viewing body expressions. Forty-eight 7-month-old infants were randomly presented with static images of adult female bodies expressing anger, fear, and happiness, as well as an emotionally neutral posture; facial cues to emotional state were removed by masking the faces. We measured the proportion of looking time, the proportion and number of fixations, and the duration of fixations on the head, upper-body, and lower-body regions for the different expressions. We showed that infants explored the upper body more than the lower body. Importantly, infants at this age fixated on different body regions depending on the expression of the body posture. In particular, infants spent a larger proportion of their looking time, and had longer fixation durations, on the upper body for fear relative to the other expressions. These results replicate and extend findings on infant processing of emotional expressions displayed by human bodies, and they support the hypothesis that infants' visual exploration of human bodies is driven by the upper body.
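The looking-time measures amount to assigning fixations to areas of interest (AOIs) and normalizing by total looking time. A minimal sketch, assuming rectangular AOIs and per-fixation coordinates and durations; the AOI boundaries and all data are hypothetical:

```python
import numpy as np

def aoi_proportions(fix_x, fix_y, fix_dur, aois):
    """Proportion of looking time falling in each AOI.

    fix_x, fix_y : fixation coordinates in screen pixels.
    fix_dur      : fixation durations in ms.
    aois         : dict mapping AOI name to (x_min, y_min, x_max, y_max).
    """
    total = fix_dur.sum()
    out = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = (fix_x >= x0) & (fix_x <= x1) & (fix_y >= y0) & (fix_y <= y1)
        out[name] = fix_dur[inside].sum() / total
    return out

# Hypothetical AOIs for a masked-face body stimulus (pixels)
aois = {"head": (300, 0, 500, 200),
        "upper_body": (250, 200, 550, 500),
        "lower_body": (250, 500, 550, 900)}

rng = np.random.default_rng(0)
fx, fy = rng.uniform(250, 550, 30), rng.uniform(0, 900, 30)
dur = rng.uniform(100, 400, 30)   # fixation durations in ms
print(aoi_proportions(fx, fy, dur, aois))
```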
Subjects
Emotions/physiology, Exploratory Behavior/physiology, Eye Movements/physiology, Eye-Tracking Technology, Gestures, Posture/physiology, Adult, Facial Expression, Female, Humans, Infant, Male, Photic Stimulation/methods, Random Allocation, Recognition (Psychology)/physiology
ABSTRACT
BACKGROUND: People with Prader-Willi syndrome (PWS) experience great difficulties in social adaptation that could be explained by disturbances in emotional competencies. However, current knowledge about the emotional functioning of people with PWS is incomplete. In particular, despite being a foundation of social adaptation, their emotional expression abilities have never been investigated. In addition, the motor and cognitive difficulties characteristic of PWS could further impair these abilities. METHOD: To explore the expression abilities of children with PWS, twenty-five children with PWS aged 5 to 10 years were assessed for (1) their emotional facial reactions to a funny video clip and (2) their ability to produce on demand the facial and bodily expressions of joy, anger, fear, and sadness. Their productions were compared to those of two groups of typically developing children, matched to the PWS children by chronological age and by developmental age. The analyses focused on the proportion of expressive patterns relating to the target emotion and to untargeted emotions in the children's productions. RESULTS: The facial and bodily emotional expressions of children with PWS were particularly difficult to interpret, involving a pronounced mixture of different emotional patterns. In addition, the emotions produced on demand by children with PWS were particularly poor and equivocal. CONCLUSIONS: To our knowledge, this study is the first to highlight particularities in the expression of emotions by children with PWS. These results shed new light on emotional dysfunction in PWS and, consequently, on the adaptive abilities of those affected in daily life.
Subjects
Prader-Willi Syndrome, Child, Emotions, Humans, Social Adjustment
ABSTRACT
Emotional expressions provide strong signals in social interactions and can function as emotion inducers in a perceiver. Although speech is one of the most important channels for human communication, its physiological correlates, such as activation of the autonomic nervous system (ANS) while listening to spoken utterances, have received far less attention than other domains of emotion processing. Our study aimed to fill this gap by investigating autonomic activation in response to spoken utterances embedded in larger semantic contexts. Emotional salience was manipulated by providing information on alleged speaker similarity. We compared these autonomic responses to activations triggered by affective sounds, such as exploding bombs and applause, which had been rated and validated as positive, negative, or neutral. As physiological markers of ANS activity, we recorded skin conductance responses (SCRs) and changes in pupil size while participants classified both prosodic and sound stimuli according to their hedonic valence. As expected, affective sounds elicited increased arousal in the receiver, reflected in increased SCR and pupil size. In contrast, SCRs to angry and joyful prosodic expressions did not differ from responses to neutral ones. Pupil size, however, was modulated by affective prosody, with greater dilation for angry and joyful than for neutral prosody, although the similarity manipulation had no effect. These results indicate that the cues provided by emotional prosody in semantically neutral spoken utterances may be too subtle to trigger SCRs, although variation in pupil size indicated the salience of the stimulus variation. Our findings further demonstrate a functional dissociation between pupil dilation and skin conductance that presumably originates from their differential innervation.
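Both ANS markers are conventionally scored as baseline-corrected changes within a post-stimulus window. A minimal sketch of such scoring, assuming event-locked one-dimensional signals; the window bounds and function names are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def scr_amplitude(eda, fs, onset_s, window=(1.0, 4.0)):
    """Peak-to-baseline skin conductance response after a stimulus onset.

    eda : 1-D skin conductance signal; fs : sampling rate in Hz;
    onset_s : stimulus onset in seconds; window : response window in s.
    """
    base = eda[int((onset_s - 1.0) * fs): int(onset_s * fs)].mean()  # 1 s baseline
    seg = eda[int((onset_s + window[0]) * fs): int((onset_s + window[1]) * fs)]
    return max(seg.max() - base, 0.0)   # SCR amplitudes are scored as >= 0

def pupil_dilation(pupil, fs, onset_s, window=(0.5, 3.0)):
    """Mean baseline-corrected pupil size change after a stimulus onset."""
    base = pupil[int((onset_s - 0.5) * fs): int(onset_s * fs)].mean()
    seg = pupil[int((onset_s + window[0]) * fs): int((onset_s + window[1]) * fs)]
    return seg.mean() - base
```

Scoring both channels the same way is what makes the reported dissociation interpretable: the same events can yield a pupil response without a measurable SCR.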
ABSTRACT
Emotion expressions convey valuable information about others' internal states and likely behaviours. Accurately identifying expressions is critical for social interactions, but so is perceiver confidence when decoding expressions: even if a perceiver correctly labels an expression, uncertainty may impair appropriate behavioural responses and create uncomfortable interactions. Past research has found that perceivers report greater confidence when identifying emotions displayed by cultural ingroup members, an effect attributed to greater perceptual skill with, and familiarity with, own-culture than other-culture faces. However, the current research presents novel evidence for an ingroup advantage in emotion decoding confidence across arbitrary group boundaries that hold culture constant. In two experiments using different stimulus sets, participants not only labeled the expressions of minimal-group ingroup members more accurately, but also did so with greater confidence. These results offer novel evidence that ingroup advantages in emotion decoding confidence stem partly from social-cognitive processes.
Subjects
Emotions, Social Identification, Social Perception, Adult, Female, Humans, Male, Recognition (Psychology), Young Adult
ABSTRACT
When interacting with other people, both children's biological predispositions and their past experiences shape how they process and respond to social-emotional cues. Children may partly differ in their reactions to such cues because they differ in their general threshold for perceiving them. Theoretically, perceptual sensitivity (i.e., the detection of slight, low-intensity stimuli from the external environment, independent of visual and auditory ability) might therefore provide specific information on individual differences in susceptibility to the environment. However, the temperament trait of perceptual sensitivity is highly understudied. In an experiment, we tested whether school-aged children's (N = 521, 52.5% boys, Mage = 9.72 years, SD = 1.51) motor (facial electromyography) and affective (self-report) reactivity to dynamic facial expressions and vocalizations is predicted by their parent-reported perceptual sensitivity. Our results indicate that children's perceptual sensitivity predicts their motor reactivity to both happy and angry expressions and vocalizations. In addition, perceptual sensitivity interacted with positive (but not negative) parenting behavior in predicting children's motor reactivity to these emotions. Our findings suggest that perceptual sensitivity might indeed provide information on individual differences in reactivity to social-emotional cues, both alone and in interaction with parenting behavior. Because perceptual sensitivity concerns specifically whether children perceive cues from their environment, and not whether these cues cause arousal or whether children are able to regulate that arousal, it may lie at the root of such individual differences.
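The reported interaction corresponds to a moderated regression with a sensitivity-by-parenting product term. A minimal sketch using a standard formula interface; the variable names and simulated data are hypothetical, not the study's measures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-child data: EMG reactivity score, parent-reported
# perceptual sensitivity, and observed positive parenting (standardized).
rng = np.random.default_rng(2)
n = 521
df = pd.DataFrame({
    "sensitivity": rng.normal(0, 1, n),
    "parenting": rng.normal(0, 1, n),
})
df["emg"] = (0.3 * df.sensitivity
             + 0.2 * df.sensitivity * df.parenting
             + rng.normal(0, 1, n))

# 'sensitivity * parenting' expands to both main effects plus their product,
# so the product term's coefficient tests the moderation directly.
model = smf.ols("emg ~ sensitivity * parenting", data=df).fit()
print(model.summary())
```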
Subjects
Cues, Emotions/physiology, Facial Expression, Parenting/psychology, Visual Perception/physiology, Adolescent, Adult, Child, Electromyography, Female, Humans, Individuality, Male, Photic Stimulation, Young Adult
ABSTRACT
When we do not know how to behave in a new context, the emotions that people familiar with the context show in response to the behaviors of others can help us understand what to do or not to do. The present study examined cross-cultural differences in how group emotional expressions (anger, sadness, neutral) can be used to deduce a norm violation in four cultures (Germany, Israel, Greece, and the US) that differ in their decoding rules for negative emotions. As expected, in all four countries anger was a stronger norm-violation signal than sadness or neutral expressions. However, angry and sad expressions were perceived as more intense, and the relevant norm was learned better, in Germany and Israel than in Greece and the US. Participants in Greece were comparatively better at using sadness as a sign of a likely norm violation. The results demonstrate both cultural universality and cultural differences in the use of group emotion expressions in norm learning. With regard to the differences, they underscore that the social signal value of emotional expressions may vary across cultures as a function of differences both in emotion perception and in the use of emotions.
ABSTRACT
The current study tested whether men and women receive different degrees of social punishment for violating norms of emotional expression. Participants watched videos of male and female targets (whose reactions were pre-tested to be equivalent in expressivity and valence) viewing either a positive or a negative slideshow, with the targets' emotional reactions manipulated to be affectively congruent, affectively incongruent, or flat. Participants then rated each target on a number of social evaluation measures. Displaying an incongruent emotional expression, relative to a congruent one, harmed judgments of women more than of men. Because women are expected to be more emotionally expressive than men, an incongruent expression is more deviant for a woman. These results highlight the importance of social norms in construing another person's emotion displays, which can subsequently determine acceptance or rejection of that person.
Subjects
Emotions, Facial Expression, Social Perception, Adult, Female, Humans, Male, Sex Factors
ABSTRACT
Previous studies have shown that the amygdala (AMG) plays a role in how affective signals are processed. Animal research has allowed this role to be better understood and has assigned the basolateral amygdala (BLA) an important role in threat perception. Here we show that, when passively exposed to bodily threat signals during a facial expression recognition task, humans with bilateral BLA damage but a functional central-medial amygdala (CMA) have a profound deficit in ignoring task-irrelevant bodily threat signals.
Subjects
Amygdala/physiology, Fear/physiology, Recognition (Psychology)/physiology, Social Behavior, Social Perception, Adult, Facial Expression, Female, Humans, Young Adult
ABSTRACT
Humans excel at assessing conspecific emotional valence and intensity based solely on non-verbal vocal bursts, which are also common in other mammals. It is not known, however, whether human listeners rely on similar acoustic cues to assess emotional content in conspecific and heterospecific vocalizations, and which acoustic parameters affect their performance. Here, for the first time, we directly compared the perception of emotional valence and intensity in dog and human non-verbal vocalizations. We found similar relationships between acoustic features and the emotional valence and intensity ratings of human and dog vocalizations: vocalizations with shorter call lengths were rated as more positive, whereas those with a higher pitch were rated as more intense. Our findings demonstrate that humans rate conspecific emotional vocalizations according to basic acoustic rules, and that they apply similar rules when processing dog vocal expressions. This suggests that humans may use similar mental mechanisms to recognize human and heterospecific vocal emotions.
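The two acoustic predictors named here (call length and pitch) map onto a simple per-call feature extraction. A minimal sketch, assuming the `librosa` library for fundamental-frequency tracking; the file handling and pitch-range bounds are illustrative assumptions, not the study's analysis pipeline:

```python
import numpy as np
import librosa

def call_features(path):
    """Call length (s) and mean fundamental frequency (Hz) of one vocalization."""
    y, sr = librosa.load(path, sr=None)       # keep the file's native sample rate
    length = len(y) / sr
    # pyin returns NaN for unvoiced frames, so nanmean yields the voiced mean f0
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch = float(np.nanmean(f0))
    return length, pitch

# Per the reported pattern, valence ratings should fall with `length`
# (shorter calls rated more positive) and intensity ratings should rise
# with `pitch`, for both human and dog vocalizations.
```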