Results 1-20 of 116
1.
Emotion ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512197

ABSTRACT

Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others' expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects authenticity perception in vocal emotions. Participants listened to authentic and posed laughs and cries, while holding a pen between the teeth and lips to inhibit orofacial responses (n = 75), or while responding freely without a pen (n = 75). They made authenticity judgments and rated how much they felt the conveyed emotions (emotional contagion). Mimicry inhibition decreased the accuracy of authenticity perception in laughter and crying, and in posed and authentic vocalizations. It did not affect contagion ratings, however, nor performance in a cognitive control task, ruling out the effort of holding the pen as an explanation for the decrements in authenticity perception. Laughter was more contagious than crying, and authentic vocalizations were more contagious than posed ones, regardless of whether mimicry was inhibited or not. These findings confirm the role of mimicry in emotional understanding and extend it to auditory emotions. They also imply that perceived emotional contagion can be unrelated to mimicry. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

2.
Neurosci Lett ; 825: 137690, 2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38373631

ABSTRACT

We present a questionnaire exploring everyday laughter experience: the Laughter Production and Perception Questionnaire (LPPQ). We developed the 30-item questionnaire in English and collected data from an English-speaking sample (N = 823). Based on Principal Component Analysis (PCA), we identified four dimensions that accounted for variation in people's experiences of laughter: laughter frequency ('Frequency'), social usage of laughter ('Usage'), understanding of other people's laughter ('Understanding'), and feelings towards laughter ('Liking'). Reliability and validity of the LPPQ were assessed. To explore potential similarities and differences based on culture and language, we also collected data from a Mandarin Chinese-speaking sample (N = 574). A PCA suggested the extraction of the same four dimensions, with some item differences between the English and Chinese versions. The LPPQ will advance research into the experience of human laughter, which has a potentially crucial role in everyday life.
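The dimension-extraction step described above can be sketched in a few lines. This is a purely illustrative reconstruction, not the authors' analysis code: the questionnaire responses are simulated from four invented latent factors, and PCA is computed directly from the singular value decomposition of the centred data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 823, 30  # sample size and item count from the abstract

# Simulate Likert-style responses driven by four latent dimensions plus noise,
# mirroring the four-component structure reported for the LPPQ.
latent = rng.normal(size=(n_respondents, 4))
loadings = rng.normal(size=(4, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

# PCA via SVD of the mean-centred data matrix.
centred = responses - responses.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance ratio per component
scores = centred @ Vt[:4].T       # respondent scores on the first 4 components

print(scores.shape)               # (823, 4)
print(explained[:4].sum())        # proportion of variance the 4 components capture
```

With four strong latent factors in the simulation, the first four components capture most of the variance; with real questionnaire data the retained proportion would be lower and scree or parallel analysis would guide the choice of dimensionality.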


Subjects
Laughter, Humans, Emotions, Reproducibility of Results, Surveys and Questionnaires
3.
Cortex ; 172: 254-270, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38123404

ABSTRACT

The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, and must rely on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-onset blindness. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness is associated with facilitated brain processing of authenticity in voices, at both early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, is associated with decreased sensitivity to authenticity at the behavioral and brain levels.


Subjects
Laughter, Voice, Humans, Emotions/physiology, Blindness, Laughter/physiology, Social Perception, Electroencephalography, Evoked Potentials/physiology
4.
Nat Rev Neurosci ; 24(11): 711-722, 2023 11.
Article in English | MEDLINE | ID: mdl-37783820

ABSTRACT

Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.


Subjects
Auditory Cortex, Humans, Auditory Perception/physiology, Speech/physiology, Brain/physiology, Acoustic Stimulation
6.
Philos Trans R Soc Lond B Biol Sci ; 377(1863): 20210178, 2022 11 07.
Article in English | MEDLINE | ID: mdl-36126667

ABSTRACT

Robert Provine made several critically important contributions to science, and in this paper, we will elaborate some of his research into laughter and behavioural contagion. To do this, we will employ Provine's observational methods and use a recorded example of naturalistic laughter to frame our discussion of Provine's work. The laughter is from a cricket commentary broadcast by the British Broadcasting Corporation in 1991, in which Jonathan Agnew and Brian Johnston attempted to summarize that day's play, at one point becoming overwhelmed by laughter. We will use this laughter to demonstrate some of Provine's key points about laughter and contagious behaviour, and we will finish with some observations about the importance and implications of the differences between humans and other mammals in their use of contagious laughter. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.


Subjects
Laughter, Neurosciences, Animals, Humans, Laughter/psychology, Mammals
7.
Nat Rev Neurosci ; 23(8): 453-454, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35760905
8.
Cognition ; 225: 105171, 2022 08.
Article in English | MEDLINE | ID: mdl-35598405

ABSTRACT

The effect of non-speech sounds, such as breathing noise, on the perception of speech timing is currently unclear. In this paper we report the results of three studies investigating participants' ability to detect a silent gap located adjacent to breath sounds during naturalistic speech. Experiment 1 (n = 24, in-person) asked whether participants could either detect or locate a silent gap that was added adjacent to breath sounds during speech. In Experiment 2 (n = 182; online), we investigated whether different placements within an utterance were more likely to elicit successful detection of gaps. In Experiment 3 (n = 102; online), we manipulated the breath sounds themselves to examine the effect of breath-specific characteristics on gap identification. Across the three experiments, we document consistent effects of gap duration and gap placement. Moreover, in Experiment 2, whether a gap was positioned before or after an interjected breath significantly predicted accuracy, as well as the duration threshold at which gaps were detected, suggesting that nonverbal aspects of audible speech production specifically shape listeners' temporal expectations. We also describe influences of the breath sounds themselves, and of the surrounding speech context, that can disrupt objective gap detection performance. We conclude by contextualising our findings within the literature, arguing that the verbal acoustic signal is not "speech itself", but rather one part of an integrated percept that includes speech-related respiration, which could be more fully explored in speech perception studies.
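The core stimulus manipulation, splicing a silent gap into a recording adjacent to a breath sound, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function name, sampling rate, and gap position are assumptions, and white noise stands in for a real speech recording.

```python
import numpy as np

def insert_silent_gap(signal, gap_start_s, gap_ms, sr=44100):
    """Insert a silent gap of gap_ms milliseconds at gap_start_s seconds.

    The waveform is split at the gap position and zeros are spliced in,
    lengthening the signal rather than overwriting any speech material.
    """
    start = int(round(gap_start_s * sr))
    gap = np.zeros(int(round(gap_ms / 1000 * sr)), dtype=signal.dtype)
    return np.concatenate([signal[:start], gap, signal[start:]])

sr = 44100
# 2 s of white noise as a stand-in for a naturalistic speech recording.
speech = np.random.default_rng(1).normal(size=sr * 2).astype(np.float32)

# Place a 120 ms gap just after a (hypothetical) breath offset at 0.8 s.
with_gap = insert_silent_gap(speech, gap_start_s=0.8, gap_ms=120, sr=sr)

print(len(with_gap) - len(speech))  # 5292 samples = 120 ms at 44.1 kHz
```

Varying `gap_ms` and `gap_start_s` across trials is the kind of duration and placement manipulation whose effects the experiments above document.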


Assuntos
Percepção da Fala , Fala , Estimulação Acústica , Humanos , Ruído , Respiração , Fatores de Tempo
9.
J Acoust Soc Am ; 151(3): 2002, 2022 03.
Article in English | MEDLINE | ID: mdl-35364952

ABSTRACT

The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper, therefore, evaluates several speech envelope extraction techniques, such as the Hilbert transform, by comparing different acoustic landmarks (e.g., peaks in the speech envelope) with manual phonetic annotation in a naturalistic and diverse dataset. Joint speech tasks are also introduced to determine which acoustic landmarks are most closely coordinated when voices are aligned. Finally, the acoustic landmarks are evaluated as predictors for the temporal characterisation of speaking style using classification tasks. The landmark that performed most closely to annotated vowel onsets was peaks in the first derivative of a human audition-informed envelope, consistent with converging evidence from neural and behavioural data. However, differences also emerged based on language and speaking style. Overall, the results show that both the choice of speech envelope extraction technique and the form of speech under study affect how sensitive an engineered feature is at capturing aspects of speech rhythm, such as the timing of vowels.
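As an illustration of the envelope-extraction idea discussed above, a minimal Hilbert-transform amplitude envelope can be computed with an FFT (the same construction `scipy.signal.hilbert` uses). The test signal, a slow amplitude modulation on a fast carrier, is an invented stand-in for speech; peaks in the first derivative of the envelope are the kind of acoustic landmark the paper compares with annotated vowel onsets.

```python
import numpy as np

def amplitude_envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)       # spectral weights that zero out negative frequencies
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(X * h)
    return np.abs(analytic)

sr = 1000
t = np.arange(0, 1, 1 / sr)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 5 * t))   # slow 5 Hz "syllable rate"
signal = modulation * np.sin(2 * np.pi * 200 * t)    # fast 200 Hz carrier

env = amplitude_envelope(signal)
rate_of_change = np.diff(env)   # peaks here mark steep envelope rises
```

For this toy signal the envelope tracks the imposed modulation closely; with real speech, the paper's point is that the choice of extraction technique (and any audition-informed filtering) changes how well such landmarks line up with phonetic events.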


Assuntos
Percepção da Fala , Voz , Humanos , Idioma , Fonética , Fala
10.
Cortex ; 151: 116-132, 2022 06.
Article in English | MEDLINE | ID: mdl-35405538

ABSTRACT

Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000-1400 msec), with earlier effects for laughs (700-1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.
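For readers unfamiliar with the ERP method used above, the sketch below shows its basic logic with synthetic data: averaging many EEG epochs time-locked to stimulus onset attenuates non-phase-locked noise, leaving time-locked components such as the N1 and P2. All numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
sr = 250                           # sampling rate in Hz
t = np.arange(400) / sr - 0.2      # epoch from -200 ms to about +1400 ms

# A fixed, time-locked "evoked" waveform: a negative deflection near 100 ms
# (N1-like) and a positive one near 200 ms (P2-like).
evoked = (-2 * np.exp(-((t - 0.1) ** 2) / 0.001)
          + 3 * np.exp(-((t - 0.2) ** 2) / 0.002))

# Single trials = the evoked response buried in non-phase-locked noise.
n_trials = 80
epochs = evoked + rng.normal(scale=3, size=(n_trials, t.size))

# Averaging time-locked epochs shrinks the noise by sqrt(n_trials),
# leaving the ERP components visible.
erp = epochs.mean(axis=0)

n1_window = erp[(t > 0.05) & (t < 0.15)]
print(erp.shape, n1_window.min())
```

Amplitude differences between conditions (e.g., spontaneous versus volitional vocalizations) are then tested on such averaged waveforms within component-specific time windows, which is how effects like the reduced N1 for spontaneous laughter are quantified.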


Subjects
Laughter, Voice, Auditory Perception/physiology, Emotions/physiology, Evoked Potentials, Humans, Laughter/physiology
11.
Neurosci Conscious ; 2022(1): niac002, 2022.
Article in English | MEDLINE | ID: mdl-35145758

ABSTRACT

Auditory verbal hallucinations (AVHs), or hearing voices, occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects (i) a sensitivity to explicit modulation of prior knowledge or (ii) a pre-existing tendency to spontaneously use such knowledge in ambiguous contexts. Four experiments were conducted to examine this question in healthy participants listening to ambiguous speech stimuli. In experiments 1a (n = 60) and 1b (n = 60), participants discriminated intelligible and unintelligible sine-wave speech before and after exposure to the original language templates (i.e. a modulation of expectation). No relationship was observed between top-down modulation and two common measures of hallucination-proneness. Experiment 2 (n = 99) confirmed this pattern with a different stimulus, sine-vocoded speech (SVS), designed to minimize ceiling effects in discrimination and to model more closely previous top-down effects reported in psychosis. In Experiment 3 (n = 134), participants were exposed to SVS without prior knowledge that it contained speech (i.e. naïve listening). AVH-proneness significantly predicted both pre-exposure identification of speech and successful recall for words hidden in SVS, indicating that participants could spontaneously decode the hidden signal. Altogether, these findings support a pre-existing tendency to spontaneously draw upon prior knowledge in healthy people prone to AVH, rather than a sensitivity to temporary modulations of expectation. We propose a model of clinical and non-clinical hallucinations, across auditory and visual modalities, with testable predictions for future research.

13.
Front Neurosci ; 16: 1076374, 2022.
Article in English | MEDLINE | ID: mdl-36590301

ABSTRACT

Sound is processed in primate brains along anatomically and functionally distinct streams: this pattern can be seen in both human and non-human primates. We have previously proposed a general auditory processing framework in which these different perceptual profiles are associated with different computational characteristics. In this paper we consider how recent work supports our framework.

14.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200404, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775822

ABSTRACT

Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers' cultural group identity. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.


Subjects
Laughter, Auditory Perception, Bayes Theorem, Emotions, Group Processes, Humans
15.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200395, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775825

ABSTRACT

The networks of cortical and subcortical fields that contribute to speech production have benefitted from many years of detailed study, and have been used as a framework for human volitional vocal production more generally. In this article, I will argue that we need to consider speech production as an expression of the human voice in a more general sense. I will also argue that the neural control of the voice can and should be considered to be a flexible system, into which more right hemispheric networks are differentially recruited, based on the factors that are modulating vocal production. I will explore how this flexible network is recruited to express aspects of non-verbal information in the voice, such as identity and social traits. Finally, I will argue that we need to widen out the kinds of vocal behaviours that we explore, if we want to understand the neural underpinnings of the true range of sound-making capabilities of the human voice. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.


Assuntos
Fala , Voz , Humanos
16.
Philos Trans R Soc Lond B Biol Sci ; 376(1840): 20200402, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34719249

ABSTRACT

The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.


Subjects
Laughter, Voice, Bayes Theorem, Emotions, Humans, Laughter/psychology, Sociological Factors
17.
Cortex ; 143: 57-68, 2021 10.
Article in English | MEDLINE | ID: mdl-34388558

ABSTRACT

Functional near-infrared spectroscopy and behavioural methods were used to examine the neural basis of the behavioural contagion and authenticity of laughter. We demonstrate that the processing of laughter sounds recruits networks previously shown to be related to empathy and auditory-motor mirror networks. Additionally, we found that the differences in the levels of activation in response to volitional and spontaneous laughter could predict an individual's perception of how contagious they found the laughter to be.


Subjects
Laughter, Auditory Perception, Empathy, Humans, Sound, Volition
18.
Cortex ; 142: 186-203, 2021 09.
Article in English | MEDLINE | ID: mdl-34273798

ABSTRACT

Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective processing of laughter in forty-seven patients representing all major syndromes of frontotemporal dementia, a disease spectrum characterised by severe socio-emotional dysfunction (twenty-two with behavioural variant frontotemporal dementia, twelve with semantic variant primary progressive aphasia, thirteen with nonfluent-agrammatic variant primary progressive aphasia), in relation to fifteen patients with typical amnestic Alzheimer's disease and twenty healthy age-matched individuals. We assessed cognitive labelling (identification) and valence rating (affective evaluation) of samples of spontaneous (mirthful and hostile) and volitional (posed) laughter versus two auditory control conditions (a synthetic laughter-like stimulus and spoken numbers). Neuroanatomical associations of laughter processing were assessed using voxel-based morphometry of patients' brain MR images. While all dementia syndromes were associated with impaired identification of laughter subtypes relative to healthy controls, this was significantly more severe overall in frontotemporal dementia than in Alzheimer's disease and particularly in the behavioural and semantic variants, which also showed abnormal affective evaluation of laughter. Over the patient cohort, laughter identification accuracy was correlated with measures of daily-life socio-emotional functioning. 
Certain striking syndromic signatures emerged, including enhanced liking for hostile laughter in behavioural variant frontotemporal dementia, impaired processing of synthetic laughter in the nonfluent-agrammatic variant (consistent with a generic complex auditory perceptual deficit) and enhanced liking for numbers ('numerophilia') in the semantic variant. Across the patient cohort, overall laughter identification accuracy correlated with regional grey matter in a core network encompassing inferior frontal and cingulo-insular cortices; and more specific correlates of laughter identification accuracy were delineated in cortical regions mediating affective disambiguation (identification of hostile and posed laughter in orbitofrontal cortex) and authenticity (social intent) decoding (identification of mirthful and posed laughter in anteromedial prefrontal cortex) (all p < .05 after correction for multiple voxel-wise comparisons over the whole brain). These findings reveal a rich diversity of cognitive and affective laughter phenotypes in canonical dementia syndromes and suggest that laughter is an informative probe of neural mechanisms underpinning socio-emotional dysfunction in neurodegenerative disease.


Subjects
Frontotemporal Dementia, Laughter, Neurodegenerative Diseases, Primary Progressive Nonfluent Aphasia, Emotions, Frontotemporal Dementia/diagnostic imaging, Humans, Magnetic Resonance Imaging, Neuropsychological Tests
19.
Trends Cogn Sci ; 25(8): 645-647, 2021 08.
Article in English | MEDLINE | ID: mdl-34144894

ABSTRACT

There are anatomical and functional links between auditory and somatosensory processing. We suggest that these links form the basis for the popular internet phenomenon where people enjoy a sense of touch from auditory (and often audiovisual) stimuli.


Assuntos
Percepção do Tato , Tato , Percepção Auditiva , Emoções , Humanos
20.
Cortex ; 141: 280-292, 2021 08.
Article in English | MEDLINE | ID: mdl-34102411

ABSTRACT

The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.


Assuntos
Emoções , Riso , Percepção Auditiva , Teorema de Bayes , Eletromiografia , Expressão Facial , Músculos Faciais , Humanos