Results 1 - 20 of 48
1.
Emotion ; 24(6): 1376-1385, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38512197

ABSTRACT

Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others' expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects authenticity perception in vocal emotions. Participants listened to authentic and posed laughs and cries, while holding a pen between the teeth and lips to inhibit orofacial responses (n = 75), or while responding freely without a pen (n = 75). They made authenticity judgments and rated how much they felt the conveyed emotions (emotional contagion). Mimicry inhibition decreased the accuracy of authenticity perception in laughter and crying, and in posed and authentic vocalizations. It did not affect contagion ratings, however, nor performance in a cognitive control task, ruling out the effort of holding the pen as an explanation for the decrements in authenticity perception. Laughter was more contagious than crying, and authentic vocalizations were more contagious than posed ones, regardless of whether mimicry was inhibited or not. These findings confirm the role of mimicry in emotional understanding and extend it to auditory emotions. They also imply that perceived emotional contagion can be unrelated to mimicry. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Crying , Emotions , Facial Expression , Laughter , Social Perception , Humans , Female , Male , Adult , Young Adult , Laughter/physiology , Crying/physiology , Emotions/physiology , Imitative Behavior/physiology , Auditory Perception/physiology
2.
Neurosci Lett ; 825: 137690, 2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38373631

ABSTRACT

We present a questionnaire exploring everyday laughter experience. We developed a 30-item questionnaire, the Laughter Production and Perception Questionnaire (LPPQ), in English and collected data from an English-speaking sample (N = 823). Based on Principal Component Analysis (PCA), we identified four dimensions that accounted for variation in people's experiences of laughter: laughter frequency ('Frequency'), social usage of laughter ('Usage'), understanding of other people's laughter ('Understanding'), and feelings towards laughter ('Liking'). Reliability and validity of the LPPQ were assessed. To explore potential similarities and differences based on culture and language, we also collected data from a Mandarin Chinese-speaking sample (N = 574). A PCA suggested the extraction of the same four dimensions, with some item differences between the English and Chinese versions. The LPPQ will advance research into the experience of human laughter, which has a potentially crucial role in everyday life.
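
As a concrete illustration of the dimensionality analysis described above, here is a minimal sketch of a PCA over questionnaire items using scikit-learn. The responses are simulated and the retention of exactly four components is an illustrative assumption; the original analysis may have used different extraction or rotation criteria.

```python
# Hedged sketch: PCA over simulated 7-point questionnaire responses.
# 823 respondents x 30 items mirrors only the English sample's dimensions;
# the data themselves are random, not the study's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(823, 30)).astype(float)

z = StandardScaler().fit_transform(responses)    # standardize items
pca = PCA(n_components=4).fit(z)                 # retain four components

print("variance explained:", pca.explained_variance_ratio_.round(3))
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("item 1 loadings:", loadings[0].round(2))  # item-component loadings
```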


Subject(s)
Laughter , Humans , Emotions , Reproducibility of Results , Surveys and Questionnaires
3.
Annu Rev Psychol ; 75: 87-128, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37738514

ABSTRACT

Music training is generally assumed to improve perceptual and cognitive abilities. Although correlational data highlight positive associations, experimental results are inconclusive, raising questions about causality. Does music training have far-transfer effects, or do preexisting factors determine who takes music lessons? All behavior reflects genetic and environmental influences, but differences in emphasis (nature versus nurture) have been a source of tension throughout the history of psychology. After reviewing the recent literature, we conclude that the evidence that music training causes nonmusical benefits is weak or nonexistent, and that researchers routinely overemphasize contributions from experience while neglecting those from nature. The literature is also largely exploratory rather than theory driven. It fails to explain mechanistically how music-training effects could occur and ignores evidence that far transfer is rare. Instead of focusing on elusive perceptual or cognitive benefits, we argue that it is more fruitful to examine the social-emotional effects of engaging with music, particularly in groups, and that music-based interventions may be effective mainly for clinical or atypical populations.


Subject(s)
Music , Humans , Cognition , Emotions
5.
Cortex ; 172: 254-270, 2024 03.
Article in English | MEDLINE | ID: mdl-38123404

ABSTRACT

The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body cues, and must instead rely on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-onset blindness. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.


Subject(s)
Laughter , Voice , Humans , Emotions/physiology , Blindness , Laughter/physiology , Social Perception , Electroencephalography , Evoked Potentials/physiology
7.
J Exp Psychol Hum Percept Perform ; 49(7): 1083-1089, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37261743

ABSTRACT

Many claims have been made about links between musical expertise and language ability. Rhythm ability, in particular, has been shown to predict phonological, grammatical, and second-language (L2) abilities, whereas music training often predicts reading and speech-perception skills. Here, we asked whether musical expertise (musical ability and/or music training) relates to the L2 (English) abilities of Portuguese native speakers. Participants (N = 154) rated their L2 ability on seven 7-point scales, one each for speaking, reading, writing, comprehension, vocabulary, fluency, and accent. They also completed a test of general cognitive ability, an objective test of musical ability with melody and rhythm subtests, and a questionnaire that measured music training and other aspects of musical behaviors. L2 ability correlated positively with education and cognitive ability but not with music training. It also had no association with musical ability or with self-reports of musical behaviors. Moreover, Bayesian analyses provided evidence for the null hypotheses (i.e., no link between L2 and rhythm ability, no link between L2 and years of music lessons). In short, our findings, based on participants' self-reports of L2 ability, raise doubts about proposed associations between musical and second-language abilities, which may be limited to specific populations or measures. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
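
The Bayesian support for the null correlations reported above can be illustrated with a Bayes factor for a Pearson correlation. Below is a minimal sketch using pingouin; the r and n values are hypothetical stand-ins, not the study's estimates.

```python
# Hedged sketch: Bayes factor for a near-zero correlation (e.g., L2 ability
# vs. rhythm ability). Values are illustrative, not taken from the study.
import pingouin as pg

r, n = 0.05, 154                       # hypothetical correlation and sample size
bf10 = pg.bayesfactor_pearson(r, n)    # evidence for H1 relative to H0
print(f"BF10 = {bf10:.3f}; BF01 = {1 / bf10:.2f}")
# BF01 > 3 is conventionally interpreted as moderate evidence for the null.
```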


Subject(s)
Music , Humans , Music/psychology , Self Report , Bayes Theorem , Language , Cognition
8.
Q J Exp Psychol (Hove) ; 76(7): 1585-1598, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36114609

ABSTRACT

Good musical abilities are typically considered to be a consequence of music training, and they are accordingly studied in samples of formally trained individuals. Here, we asked what predicts musical abilities in the absence of music training. Participants with no formal music training (N = 190) completed the Goldsmiths Musical Sophistication Index, measures of personality and cognitive ability, and the Musical Ear Test (MET). The MET is an objective test of musical abilities that provides a Total score and separate scores for its two subtests (Melody and Rhythm), which require listeners to determine whether standard and comparison auditory sequences are identical. MET scores had no associations with personality traits. They correlated positively, however, with informal musical experience and cognitive abilities. Informal musical experience was a better predictor of Melody than of Rhythm scores. Some participants (12%) had Total scores higher than the mean from a sample of musically trained individuals (≥6 years of formal training), tested previously by Correia et al. Untrained participants with particularly good musical abilities (top 25%, n = 51) scored higher than trained participants on the Rhythm subtest and similarly on the Melody subtest. High-ability untrained participants were also similar to trained ones in cognitive ability, but lower in the personality trait openness-to-experience. These results imply that formal music training is not required to achieve musician-like performance on tests of musical and cognitive abilities. They also suggest that informal music practice and music-related predispositions should be considered in studies of musical expertise.


Subject(s)
Music , Humans , Adult , Music/psychology , Individuality , Cognition , Personality , Aptitude , Auditory Perception
9.
Brain Sci ; 12(11)2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36421891

ABSTRACT

Using the arousal and mood hypothesis as a theoretical framework, we examined whether community-dwelling older adults (N = 132) exhibited cognitive benefits after listening to music. Participants listened to shorter (≈2.5 min) or longer (≈8 min) excerpts from recordings of happy- or sad-sounding music or from a spoken-word recording. Before and after listening, they completed tasks measuring visuospatial working memory (WM), cognitive flexibility and speed, verbal fluency, and mathematical ability, as well as measures of arousal and mood. In general, older adults improved from pre- to post-test on the cognitive tasks. For the test of WM, the increase was greater for participants who heard happy-sounding music compared to those in the other two groups. The happy-sounding group also exhibited larger increases in arousal and mood, although improvements in mood were evident only for the long-duration condition. At the individual level, however, improvements in WM were unrelated to changes in arousal or mood. In short, the results were partially consistent with the arousal and mood hypothesis. For older adults, listening to happy-sounding music may optimize arousal levels and mood, and improve performance on some cognitive tasks (i.e., WM), even though there is no direct link between changes in arousal/mood and changes in WM.

10.
Neurosci Biobehav Rev ; 140: 104777, 2022 09.
Article in English | MEDLINE | ID: mdl-35843347

ABSTRACT

It is often claimed that music training improves auditory and linguistic skills. Results of individual studies are mixed, however, and most evidence is correlational, precluding inferences of causation. Here, we evaluated data from 62 longitudinal studies that examined whether music training programs affect behavioral and brain measures of auditory and linguistic processing (N = 3928). For the behavioral data, a multivariate meta-analysis revealed a small positive effect of music training on both auditory and linguistic measures, regardless of the type of assignment (random vs. non-random), training (instrumental vs. non-instrumental), and control group (active vs. passive). The trim-and-fill method provided suggestive evidence of publication bias, but meta-regression methods (PET-PEESE) did not. For the brain data, a narrative synthesis also documented benefits of music training, namely for measures of auditory processing and for measures of speech and prosody processing. Thus, the available literature provides evidence that music training produces small neurobehavioral enhancements in auditory and linguistic processing, although future studies are needed to confirm that such enhancements are not due to publication bias.
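
The PET-PEESE procedure mentioned above is, at its core, a meta-regression of effect sizes on their standard errors (PET) or variances (PEESE), with the intercept read as the bias-corrected effect. Here is a minimal univariate sketch with simulated data; the actual analysis was multivariate and would use dedicated meta-analytic tooling.

```python
# Hedged sketch: conditional PET-PEESE estimator over simulated study effects.
# The 62 studies and d = 0.15 echo only the review's scale; data are random.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.4, size=62)     # per-study standard errors (simulated)
d = 0.15 + rng.normal(0, se)             # simulated effects around d = 0.15

def pet_peese(effects, ses):
    w = 1.0 / ses**2                     # inverse-variance weights
    pet = sm.WLS(effects, sm.add_constant(ses), weights=w).fit()
    # If the PET intercept differs from zero (p < .10 two-sided, roughly
    # a one-tailed .05 test), switch to PEESE (regression on variances).
    if pet.pvalues[0] < 0.10:
        peese = sm.WLS(effects, sm.add_constant(ses**2), weights=w).fit()
        return peese.params[0]
    return pet.params[0]

print("bias-corrected estimate:", round(pet_peese(d, se), 3))
```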


Subject(s)
Music , Auditory Perception , Brain , Humans , Linguistics , Speech
11.
Cogn Affect Behav Neurosci ; 22(5): 1044-1062, 2022 10.
Article in English | MEDLINE | ID: mdl-35501427

ABSTRACT

Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds was also distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians did. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.


Subject(s)
Music , Singing , Voice , Acoustic Stimulation , Auditory Perception/physiology , Electroencephalography , Humans
12.
Cortex ; 151: 116-132, 2022 06.
Article in English | MEDLINE | ID: mdl-35405538

ABSTRACT

Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000-1400 msec), with earlier effects for laughs (700-1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.
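
The windowed ERP effects reported above (e.g., 700-1000 and 1000-1400 msec) correspond to mean amplitudes computed over fixed time windows of epoched data. Below is a minimal sketch of that extraction step; the array shapes and sampling rate are assumptions for illustration, not the study's recording parameters.

```python
# Hedged sketch: mean ERP amplitude within a latency window, over simulated
# epoched data (participants x channels x samples, 0-2000 ms post-onset).
import numpy as np

sfreq = 500.0                                        # sampling rate in Hz (assumed)
rng = np.random.default_rng(5)
epochs = rng.standard_normal((43, 64, int(2.0 * sfreq)))

def window_mean(data, tmin, tmax, sfreq):
    """Mean amplitude in [tmin, tmax) seconds, averaged over time."""
    i0, i1 = int(tmin * sfreq), int(tmax * sfreq)
    return data[..., i0:i1].mean(axis=-1)            # participants x channels

late_laugh = window_mean(epochs, 0.7, 1.0, sfreq)    # 700-1000 ms window
late_cry = window_mean(epochs, 1.0, 1.4, sfreq)      # 1000-1400 ms window
print(late_laugh.shape, late_cry.shape)              # (43, 64) each
```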


Subject(s)
Laughter , Voice , Auditory Perception/physiology , Emotions/physiology , Evoked Potentials , Humans , Laughter/physiology
13.
Neurosci Conscious ; 2022(1): niac002, 2022.
Article in English | MEDLINE | ID: mdl-35145758

ABSTRACT

Auditory verbal hallucinations (AVHs), or hearing voices, occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects (i) a sensitivity to explicit modulation of prior knowledge or (ii) a pre-existing tendency to spontaneously use such knowledge in ambiguous contexts. Four experiments were conducted to examine this question in healthy participants listening to ambiguous speech stimuli. In experiments 1a (n = 60) and 1b (n = 60), participants discriminated intelligible and unintelligible sine-wave speech before and after exposure to the original language templates (i.e. a modulation of expectation). No relationship was observed between top-down modulation and two common measures of hallucination-proneness. Experiment 2 (n = 99) confirmed this pattern with a different stimulus, sine-vocoded speech (SVS), that was designed to minimize ceiling effects in discrimination and more closely model previous top-down effects reported in psychosis. In Experiment 3 (n = 134), participants were exposed to SVS without prior knowledge that it contained speech (i.e. naïve listening). AVH-proneness significantly predicted both pre-exposure identification of speech and successful recall for words hidden in SVS, indicating that participants could actually decode the hidden signal spontaneously. Altogether, these findings support a pre-existing tendency to spontaneously draw upon prior knowledge in healthy people prone to AVH, rather than a sensitivity to temporary modulations of expectation. We propose a model of clinical and non-clinical hallucinations, across auditory and visual modalities, with testable predictions for future research.

14.
J Int Neuropsychol Soc ; 28(1): 48-61, 2022 01.
Article in English | MEDLINE | ID: mdl-33660594

ABSTRACT

OBJECTIVE: The ability to recognize others' emotions is a central aspect of socioemotional functioning. Emotion recognition impairments are well documented in Alzheimer's disease and other dementias, but it is less understood whether they are also present in mild cognitive impairment (MCI). Results on facial emotion recognition are mixed, and crucially, it remains unclear whether the potential impairments are specific to faces or extend across sensory modalities. METHOD: In the current study, 32 MCI patients and 33 cognitively intact controls completed a comprehensive neuropsychological assessment and two forced-choice emotion recognition tasks, including visual and auditory stimuli. The emotion recognition tasks required participants to categorize emotions in facial expressions and in nonverbal vocalizations (e.g., laughter, crying) expressing neutrality, anger, disgust, fear, happiness, pleasure, surprise, or sadness. RESULTS: MCI patients performed worse than controls for both facial expressions and vocalizations. The effect was large, similar across tasks and individual emotions, and it was not explained by sensory losses or affective symptomatology. Emotion recognition impairments were more pronounced among patients with lower global cognitive performance, but they did not correlate with the ability to perform activities of daily living. CONCLUSIONS: These findings indicate that MCI is associated with emotion recognition difficulties and that such difficulties extend beyond vision, plausibly reflecting a failure at supramodal levels of emotional processing. This highlights the importance of considering emotion recognition abilities as part of standard neuropsychological testing in MCI, and as a target of interventions aimed at improving social cognition in these patients.


Subject(s)
Cognitive Dysfunction , Facial Recognition , Activities of Daily Living , Emotions , Facial Expression , Humans , Neuropsychological Tests , Recognition, Psychology
15.
Behav Res Methods ; 54(2): 955-969, 2022 04.
Article in English | MEDLINE | ID: mdl-34382202

ABSTRACT

We sought to determine whether an objective test of musical ability could be successfully administered online. A sample of 754 participants was tested with an online version of the Musical Ear Test (MET), which had Melody and Rhythm subtests. Both subtests had 52 trials, each of which required participants to determine whether standard and comparison auditory sequences were identical. The testing session also included the Goldsmiths Musical Sophistication Index (Gold-MSI), a test of general cognitive ability, and self-report questionnaires that measured basic demographics (age, education, gender), mind-wandering, and personality. Approximately 20% of the participants were excluded for incomplete responding or failing to finish the testing session. For the final sample (N = 608), findings were similar to those from in-person testing in many respects: (1) the internal reliability of the MET was maintained, (2) construct validity was confirmed by strong associations with Gold-MSI scores, (3) correlations with other measures (e.g., openness to experience, cognitive ability, mind-wandering) were as predicted, (4) mean levels of performance were similar for individuals with no music training, and (5) musical sophistication was a better predictor of performance on the Melody than on the Rhythm subtest. In sum, online administration of the MET proved to be a reliable and valid way to measure musical ability.
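
The internal-reliability result in point (1) above is typically quantified with Cronbach's alpha over per-trial scores. Here is a minimal sketch with simulated binary trial data; pingouin's pg.cronbach_alpha offers an off-the-shelf alternative.

```python
# Hedged sketch: Cronbach's alpha over simulated pass/fail trial scores.
# 608 participants x 52 trials mirrors only the final sample and subtest
# length; the scores are generated from a toy latent-ability model.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(size=(608, 1))              # latent ability per participant
scores = (rng.normal(size=(608, 52)) + ability > 0).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```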


Subject(s)
Music , Cognition , Humans , Music/psychology , Personality , Reproducibility of Results
16.
Emotion ; 22(5): 894-906, 2022 Aug.
Article in English | MEDLINE | ID: mdl-32718172

ABSTRACT

Music training is widely assumed to enhance several nonmusical abilities, including speech perception, executive functions, reading, and emotion recognition. This assumption is based primarily on cross-sectional comparisons between musicians and nonmusicians. It remains unclear, however, whether training itself is necessary to explain the musician advantages, or whether factors such as innate predispositions and informal musical experience could produce similar effects. Here, we sought to clarify this issue by examining the association between music training, music perception abilities and vocal emotion recognition. The sample (N = 169) comprised musically trained and untrained listeners who varied widely in their musical skills, as assessed through self-report and performance-based measures. The emotion recognition tasks required listeners to categorize emotions in nonverbal vocalizations (e.g., laughter, crying) and in speech prosody. Music training was associated positively with emotion recognition across tasks, but the effect was small. We also found a positive association between music perception abilities and emotion recognition in the entire sample, even with music training held constant. In fact, untrained participants with good musical abilities were as good as highly trained musicians at recognizing vocal emotions. Moreover, the association between music training and emotion recognition was fully mediated by auditory and music perception skills. Thus, in the absence of formal music training, individuals who were "naturally" musical showed musician-like performance at recognizing vocal emotions. These findings highlight an important role for factors other than music training (e.g., predispositions and informal musical experience) in associations between musical and nonmusical domains. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
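
The full mediation reported above (music training to emotion recognition via auditory and music perception skills) follows the classic indirect-effect logic: the predictor's effect should vanish once the mediator is controlled. Below is a minimal sketch with simulated data and a single mediator; the study's own analysis will have differed in estimation details.

```python
# Hedged sketch: simple mediation via two regressions (indirect effect a*b).
# Variable names and data are hypothetical; N = 169 echoes the sample size only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
training = rng.normal(size=169)                     # predictor: music training
perception = 0.6 * training + rng.normal(size=169)  # mediator: music perception skill
emotion = 0.5 * perception + rng.normal(size=169)   # outcome: emotion recognition

a = sm.OLS(perception, sm.add_constant(training)).fit().params[1]
full = sm.OLS(emotion, sm.add_constant(np.column_stack([perception, training]))).fit()
b, c_prime = full.params[1], full.params[2]

print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
# Full mediation: the indirect path carries the effect and c' is near zero.
```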


Subject(s)
Music , Speech Perception , Auditory Perception , Cross-Sectional Studies , Emotions , Humans , Music/psychology , Recognition, Psychology
17.
Philos Trans R Soc Lond B Biol Sci ; 376(1840): 20200402, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34719249

ABSTRACT

The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
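
The acoustic-feature analyses above used Bayesian mixed models; as a rough frequentist analogue, a random-intercept mixed model predicting authenticity ratings from an acoustic feature can be sketched with statsmodels. All variable names and data below are hypothetical.

```python
# Hedged sketch: random-intercept mixed model (listener as grouping factor)
# relating a simulated acoustic feature to simulated authenticity ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_listeners, n_stimuli = 137, 40
df = pd.DataFrame({
    "listener": np.repeat(np.arange(n_listeners), n_stimuli),
    "pitch": np.tile(rng.normal(size=n_stimuli), n_listeners),
})
df["authenticity"] = (0.4 * df["pitch"]
                      + np.repeat(rng.normal(0, 0.5, n_listeners), n_stimuli)
                      + rng.normal(size=len(df)))

model = smf.mixedlm("authenticity ~ pitch", df, groups=df["listener"]).fit()
print(model.summary())                 # fixed effect of pitch on ratings
```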


Subject(s)
Laughter , Voice , Bayes Theorem , Emotions , Humans , Laughter/psychology , Sociological Factors
18.
R Soc Open Sci ; 8(11): 211412, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34804582

ABSTRACT

The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness, plus neutrality), as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.

19.
Cortex ; 141: 280-292, 2021 08.
Article in English | MEDLINE | ID: mdl-34102411

ABSTRACT

The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.


Subject(s)
Emotions , Laughter , Auditory Perception , Bayes Theorem , Electromyography , Facial Expression , Facial Muscles , Humans
20.
Sci Rep ; 11(1): 3733, 2021 02 12.
Article in English | MEDLINE | ID: mdl-33580104

ABSTRACT

The ability to infer the authenticity of others' emotional expressions is a social cognitive process at play in all human interactions. Although the neurocognitive correlates of authenticity recognition have been probed, it remains unknown whether it recruits the peripheral autonomic nervous system. In this work, we asked participants to rate the authenticity of authentic and acted laughs and cries while we simultaneously recorded their pupil size, taken as a proxy for cognitive effort and arousal. We report, for the first time, that acted laughs elicited greater pupil dilation than authentic ones and that, conversely, authentic cries elicited greater pupil dilation than acted ones. We tentatively suggest that the lack of authenticity in others' laughs increases pupil dilation by demanding greater cognitive effort, whereas authenticity in cries increases pupil dilation by eliciting higher emotional arousal. We also show that authentic vocalizations and laughs (i.e., main effects of authenticity and emotion) were perceived as more authentic, arousing, and contagious than acted vocalizations and cries, respectively. In conclusion, we provide new evidence that the recognition of emotional authenticity is manifested at the level of the autonomic nervous system in humans. Given the novelty of these findings, however, further independent research is warranted to ascertain their psychological meaning.
