Results 1 - 20 of 118
1.
Sci Rep ; 14(1): 11590, 2024 05 21.
Article in English | MEDLINE | ID: mdl-38773178

ABSTRACT

Human interaction is immersed in laughter; though genuine and posed laughter are acoustically distinct, they are both crucial socio-emotional signals. In this novel study, autistic and non-autistic adults explicitly rated the affective properties of genuine and posed laughter. Additionally, we explored whether their self-reported everyday experiences with laughter differ. Both groups could differentiate between these two types of laughter. However, autistic adults rated posed laughter as more authentic and emotionally arousing than non-autistic adults, perceiving it to be similar to genuine laughter. Autistic adults reported laughing less, deriving less enjoyment from laughter, and experiencing difficulty in understanding the social meaning of other people's laughter compared to non-autistic people. Despite these differences, autistic adults reported using laughter socially as often as non-autistic adults, leveraging it to mediate social contexts. Our findings suggest that autistic adults show subtle differences in their perception of laughter, which may be associated with their struggles in comprehending the social meaning of laughter, as well as their diminished frequency and enjoyment of laughter in everyday scenarios. By combining experimental evidence with first-person experiences, this study suggests that autistic adults likely employ different strategies to understand laughter in everyday contexts, potentially leaving them socially vulnerable in communication.


Subject(s)
Autistic Disorder , Laughter , Humans , Laughter/psychology , Male , Adult , Female , Autistic Disorder/psychology , Autistic Disorder/physiopathology , Young Adult , Emotions/physiology , Middle Aged
2.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38752979

ABSTRACT

Spontaneous and conversational laughter are important socio-emotional communicative signals. Neuroimaging findings suggest that non-autistic people engage in mentalizing to understand the meaning behind conversational laughter. Autistic people may thus face specific challenges in processing conversational laughter, due to their mentalizing difficulties. Using fMRI, we explored neural differences during implicit processing of these two types of laughter. Autistic and non-autistic adults passively listened to funny words, followed by spontaneous laughter, conversational laughter, or noise-vocoded vocalizations. Behaviourally, words plus spontaneous laughter were rated as funnier than words plus conversational laughter, and the groups did not differ. However, neuroimaging results showed that non-autistic adults exhibited greater medial prefrontal cortex activation when listening to words plus conversational laughter than to words plus spontaneous laughter, whereas autistic adults showed no difference in medial prefrontal cortex activity between these two laughter types. Our findings suggest a crucial role for the medial prefrontal cortex in understanding socio-emotionally ambiguous laughter via mentalizing. Our study also highlights the possibility that autistic people may face challenges in understanding the laughter they frequently encounter in everyday life, especially conversational laughter, which carries complex meaning and social ambiguity, potentially leading to social vulnerability. We therefore advocate for clearer communication with autistic people.


Subject(s)
Autistic Disorder , Brain Mapping , Brain , Laughter , Magnetic Resonance Imaging , Humans , Laughter/physiology , Laughter/psychology , Male , Female , Adult , Autistic Disorder/physiopathology , Autistic Disorder/diagnostic imaging , Autistic Disorder/psychology , Young Adult , Brain/diagnostic imaging , Brain/physiopathology , Brain/physiology , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiopathology , Prefrontal Cortex/physiology , Acoustic Stimulation
3.
Emotion ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512197

ABSTRACT

Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others' expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects authenticity perception in vocal emotions. Participants listened to authentic and posed laughs and cries, while holding a pen between the teeth and lips to inhibit orofacial responses (n = 75), or while responding freely without a pen (n = 75). They made authenticity judgments and rated how much they felt the conveyed emotions (emotional contagion). Mimicry inhibition decreased the accuracy of authenticity perception in laughter and crying, and in posed and authentic vocalizations. It did not affect contagion ratings, however, nor performance in a cognitive control task, ruling out the effort of holding the pen as an explanation for the decrements in authenticity perception. Laughter was more contagious than crying, and authentic vocalizations were more contagious than posed ones, regardless of whether mimicry was inhibited or not. These findings confirm the role of mimicry in emotional understanding and extend it to auditory emotions. They also imply that perceived emotional contagion can be unrelated to mimicry. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

4.
Neurosci Lett ; 825: 137690, 2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38373631

ABSTRACT

We present a questionnaire exploring everyday laughter experience: the Laughter Production and Perception Questionnaire (LPPQ). We developed the 30-item questionnaire in English and collected data from an English-speaking sample (N = 823). Based on a Principal Component Analysis (PCA), we identified four dimensions that accounted for variation in people's experiences of laughter: laughter frequency ('Frequency'), social usage of laughter ('Usage'), understanding of other people's laughter ('Understanding'), and feelings towards laughter ('Liking'). The reliability and validity of the LPPQ were assessed. To explore potential similarities and differences based on culture and language, we also collected data from a Mandarin Chinese-speaking population (N = 574). A PCA suggested the extraction of the same four dimensions, with some item differences between the English and Chinese versions. The LPPQ will advance research into the experience of human laughter, which plays a potentially crucial role in everyday life.
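The dimension-extraction step described in this abstract can be sketched in a few lines. The following is an illustrative simulation only, not the authors' analysis: the item structure, loadings, and factor names are made up to show how a PCA with the Kaiser criterion recovers the number of latent dimensions from questionnaire data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation (structure and names are illustrative, not the
# actual LPPQ items): 823 respondents answer 8 items loading on two
# latent dimensions, e.g. laughter "Frequency" and social "Usage".
n = 823
frequency = rng.normal(size=n)
usage = rng.normal(size=n)
items = np.column_stack(
    [frequency + 0.3 * rng.normal(size=n) for _ in range(4)]
    + [usage + 0.3 * rng.normal(size=n) for _ in range(4)]
)

# PCA on the correlation matrix (equivalent to PCA on standardized items)
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, descending
explained = eigvals / eigvals.sum()           # proportion of variance

# Kaiser criterion: retain components with eigenvalue > 1
n_components = int((eigvals > 1).sum())
```

With two strongly correlated item clusters, the first two eigenvalues dominate and the Kaiser rule retains two components; real questionnaire data (as in the LPPQ study) is noisier and typically also checked with scree plots and parallel analysis.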


Subject(s)
Laughter , Humans , Emotions , Reproducibility of Results , Surveys and Questionnaires
5.
Cortex ; 172: 254-270, 2024 03.
Article in English | MEDLINE | ID: mdl-38123404

ABSTRACT

The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.


Subject(s)
Laughter , Voice , Humans , Emotions/physiology , Blindness , Laughter/physiology , Social Perception , Electroencephalography , Evoked Potentials/physiology
6.
Nat Rev Neurosci ; 24(11): 711-722, 2023 11.
Article in English | MEDLINE | ID: mdl-37783820

ABSTRACT

Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature on the neural and physiological mechanisms of song production and perception and show that it provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special, in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.


Subject(s)
Auditory Cortex , Humans , Auditory Perception/physiology , Speech/physiology , Brain/physiology , Acoustic Stimulation
8.
Philos Trans R Soc Lond B Biol Sci ; 377(1863): 20210178, 2022 11 07.
Article in English | MEDLINE | ID: mdl-36126667

ABSTRACT

Robert Provine made several critically important contributions to science, and in this paper, we will elaborate some of his research into laughter and behavioural contagion. To do this, we will employ Provine's observational methods and use a recorded example of naturalistic laughter to frame our discussion of Provine's work. The laughter is from a cricket commentary broadcast by the British Broadcasting Corporation in 1991, in which Jonathan Agnew and Brian Johnston attempted to summarize that day's play, at one point becoming overwhelmed by laughter. We will use this laughter to demonstrate some of Provine's key points about laughter and contagious behaviour, and we will finish with some observations about the importance and implications of the differences between humans and other mammals in their use of contagious laughter. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.


Subject(s)
Laughter , Neurosciences , Animals , Humans , Laughter/psychology , Mammals
9.
Nat Rev Neurosci ; 23(8): 453-454, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35760905
10.
Cognition ; 225: 105171, 2022 08.
Article in English | MEDLINE | ID: mdl-35598405

ABSTRACT

The effect of non-speech sounds, such as breathing noise, on the perception of speech timing is currently unclear. In this paper we report the results of three studies investigating participants' ability to detect a silent gap located adjacent to breath sounds during naturalistic speech. Experiment 1 (n = 24, in-person) asked whether participants could either detect or locate a silent gap that was added adjacent to breath sounds during speech. In Experiment 2 (n = 182; online), we investigated whether different placements within an utterance were more likely to elicit successful detection of gaps. In Experiment 3 (n = 102; online), we manipulated the breath sounds themselves to examine the effect of breath-specific characteristics on gap identification. Across the study, we document consistent effects of gap duration, as well as gap placement. Moreover, in Experiment 2, whether a gap was positioned before or after an interjected breath significantly predicted accuracy as well as the duration threshold at which gaps were detected, suggesting that nonverbal aspects of audible speech production specifically shape listeners' temporal expectations. We also describe the influences of the breath sounds themselves, as well as the surrounding speech context, that can disrupt objective gap detection performance. We conclude by contextualising our findings within the literature, arguing that the verbal acoustic signal is not "speech itself" per se, but rather one part of an integrated percept that includes speech-related respiration, which could be more fully explored in speech perception studies.


Subject(s)
Speech Perception , Speech , Acoustic Stimulation , Humans , Noise , Respiration , Time Factors
11.
J Acoust Soc Am ; 151(3): 2002, 2022 03.
Article in English | MEDLINE | ID: mdl-35364952

ABSTRACT

The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper, therefore, evaluates several speech envelope extraction techniques, such as the Hilbert transform, by comparing different acoustic landmarks (e.g., peaks in the speech envelope) with manual phonetic annotation in a naturalistic and diverse dataset. Joint speech tasks are also introduced to determine which acoustic landmarks are most closely coordinated when voices are aligned. Finally, the acoustic landmarks are evaluated as predictors for the temporal characterisation of speaking style using classification tasks. The landmark that performed most closely to annotated vowel onsets was peaks in the first derivative of a human audition-informed envelope, consistent with converging evidence from neural and behavioural data. However, differences also emerged based on language and speaking style. Overall, the results show that both the choice of speech envelope extraction technique and the form of speech under study affect how sensitive an engineered feature is at capturing aspects of speech rhythm, such as the timing of vowels.
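The envelope-extraction approach this abstract evaluates can be illustrated with a minimal sketch: compute the amplitude envelope of a signal via the Hilbert transform, smooth it, and take peaks in its first derivative as candidate landmarks. This is an illustrative toy on a synthetic amplitude-modulated tone, not the paper's audition-informed envelope or its datasets.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

fs = 16000                                  # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic "speech-like" signal: a 500 Hz carrier amplitude-modulated
# at a syllable-like rate of 4 Hz
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
signal = modulator * np.sin(2 * np.pi * 500 * t)

# Amplitude envelope: magnitude of the analytic signal (Hilbert transform)
envelope = np.abs(hilbert(signal))

# Low-pass filter (10 Hz cutoff) to keep the slow, syllable-rate fluctuations
b, a = butter(2, 10 / (fs / 2), btype="low")
smooth_env = filtfilt(b, a, envelope)

# Candidate acoustic landmarks: peaks in the first derivative of the
# envelope, which tend to align with vowel onsets
d_env = np.diff(smooth_env) * fs            # rate of change per second
landmarks, _ = find_peaks(d_env, prominence=1.0)
landmark_times = landmarks / fs
```

On this toy input the derivative peaks fall on the rising flanks of the 4 Hz modulation; on real speech, the choice of envelope (plain Hilbert versus an audition-informed one, as the paper finds) and of landmark changes how well these events line up with annotated vowel onsets.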


Subject(s)
Speech Perception , Voice , Humans , Language , Phonetics , Speech
12.
Cortex ; 151: 116-132, 2022 06.
Article in English | MEDLINE | ID: mdl-35405538

ABSTRACT

Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000-1400 msec), with earlier effects for laughs (700-1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.


Subject(s)
Laughter , Voice , Auditory Perception/physiology , Emotions/physiology , Evoked Potentials , Humans , Laughter/physiology
14.
Neurosci Conscious ; 2022(1): niac002, 2022.
Article in English | MEDLINE | ID: mdl-35145758

ABSTRACT

Auditory verbal hallucinations (AVHs)-or hearing voices-occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects (i) a sensitivity to explicit modulation of prior knowledge or (ii) a pre-existing tendency to spontaneously use such knowledge in ambiguous contexts. Four experiments were conducted to examine this question in healthy participants listening to ambiguous speech stimuli. In experiments 1a (n = 60) and 1b (n = 60), participants discriminated intelligible and unintelligible sine-wave speech before and after exposure to the original language templates (i.e. a modulation of expectation). No relationship was observed between top-down modulation and two common measures of hallucination-proneness. Experiment 2 (n = 99) confirmed this pattern with a different stimulus-sine-vocoded speech (SVS)-that was designed to minimize ceiling effects in discrimination and more closely model previous top-down effects reported in psychosis. In Experiment 3 (n = 134), participants were exposed to SVS without prior knowledge that it contained speech (i.e. naïve listening). AVH-proneness significantly predicted both pre-exposure identification of speech and successful recall for words hidden in SVS, indicating that participants could actually decode the hidden signal spontaneously. Altogether, these findings support a pre-existing tendency to spontaneously draw upon prior knowledge in healthy people prone to AVH, rather than a sensitivity to temporary modulations of expectation. We propose a model of clinical and non-clinical hallucinations, across auditory and visual modalities, with testable predictions for future research.

15.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200404, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775822

ABSTRACT

Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers' cultural group identity. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.


Subject(s)
Laughter , Auditory Perception , Bayes Theorem , Emotions , Group Processes , Humans
16.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200395, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775825

ABSTRACT

The networks of cortical and subcortical fields that contribute to speech production have benefitted from many years of detailed study, and have been used as a framework for human volitional vocal production more generally. In this article, I will argue that we need to consider speech production as an expression of the human voice in a more general sense. I will also argue that the neural control of the voice can and should be considered to be a flexible system, into which more right hemispheric networks are differentially recruited, based on the factors that are modulating vocal production. I will explore how this flexible network is recruited to express aspects of non-verbal information in the voice, such as identity and social traits. Finally, I will argue that we need to widen out the kinds of vocal behaviours that we explore, if we want to understand the neural underpinnings of the true range of sound-making capabilities of the human voice. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.


Subject(s)
Speech , Voice , Humans
17.
Front Neurosci ; 16: 1076374, 2022.
Article in English | MEDLINE | ID: mdl-36590301

ABSTRACT

Sound is processed in primate brains along anatomically and functionally distinct streams: this pattern can be seen in both human and non-human primates. We have previously proposed a general auditory processing framework in which these distinct streams are associated with different computational characteristics. In this paper we consider how recent work supports our framework.

18.
Philos Trans R Soc Lond B Biol Sci ; 376(1840): 20200402, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34719249

ABSTRACT

The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.


Subject(s)
Laughter , Voice , Bayes Theorem , Emotions , Humans , Laughter/psychology , Sociological Factors
19.
Cortex ; 143: 57-68, 2021 10.
Article in English | MEDLINE | ID: mdl-34388558

ABSTRACT

Functional near-infrared spectroscopy and behavioural methods were used to examine the neural basis of the behavioural contagion and authenticity of laughter. We demonstrate that the processing of laughter sounds recruits networks previously shown to be related to empathy and auditory-motor mirror networks. Additionally, we found that the differences in the levels of activation in response to volitional and spontaneous laughter could predict an individual's perception of how contagious they found the laughter to be.


Subject(s)
Laughter , Auditory Perception , Empathy , Humans , Sound , Volition
20.
Cortex ; 142: 186-203, 2021 09.
Article in English | MEDLINE | ID: mdl-34273798

ABSTRACT

Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective processing of laughter in forty-seven patients representing all major syndromes of frontotemporal dementia, a disease spectrum characterised by severe socio-emotional dysfunction (twenty-two with behavioural variant frontotemporal dementia, twelve with semantic variant primary progressive aphasia, thirteen with nonfluent-agrammatic variant primary progressive aphasia), in relation to fifteen patients with typical amnestic Alzheimer's disease and twenty healthy age-matched individuals. We assessed cognitive labelling (identification) and valence rating (affective evaluation) of samples of spontaneous (mirthful and hostile) and volitional (posed) laughter versus two auditory control conditions (a synthetic laughter-like stimulus and spoken numbers). Neuroanatomical associations of laughter processing were assessed using voxel-based morphometry of patients' brain MR images. While all dementia syndromes were associated with impaired identification of laughter subtypes relative to healthy controls, this was significantly more severe overall in frontotemporal dementia than in Alzheimer's disease and particularly in the behavioural and semantic variants, which also showed abnormal affective evaluation of laughter. Over the patient cohort, laughter identification accuracy was correlated with measures of daily-life socio-emotional functioning. 
Certain striking syndromic signatures emerged, including enhanced liking for hostile laughter in behavioural variant frontotemporal dementia, impaired processing of synthetic laughter in the nonfluent-agrammatic variant (consistent with a generic complex auditory perceptual deficit) and enhanced liking for numbers ('numerophilia') in the semantic variant. Across the patient cohort, overall laughter identification accuracy correlated with regional grey matter in a core network encompassing inferior frontal and cingulo-insular cortices; and more specific correlates of laughter identification accuracy were delineated in cortical regions mediating affective disambiguation (identification of hostile and posed laughter in orbitofrontal cortex) and authenticity (social intent) decoding (identification of mirthful and posed laughter in anteromedial prefrontal cortex) (all p < .05 after correction for multiple voxel-wise comparisons over the whole brain). These findings reveal a rich diversity of cognitive and affective laughter phenotypes in canonical dementia syndromes and suggest that laughter is an informative probe of neural mechanisms underpinning socio-emotional dysfunction in neurodegenerative disease.


Subject(s)
Frontotemporal Dementia , Laughter , Neurodegenerative Diseases , Primary Progressive Nonfluent Aphasia , Emotions , Frontotemporal Dementia/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neuropsychological Tests