Results 1 - 4 of 4
1.
Cereb Cortex; 34(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38752979

ABSTRACT

Spontaneous and conversational laughter are important socio-emotional communicative signals. Neuroimaging findings suggest that non-autistic people engage in mentalizing to understand the meaning behind conversational laughter. Autistic people may thus face specific challenges in processing conversational laughter because of their difficulties with mentalizing. Using fMRI, we explored neural differences during implicit processing of these two types of laughter. Autistic and non-autistic adults passively listened to funny words followed by spontaneous laughter, conversational laughter, or noise-vocoded vocalizations. Behaviourally, words followed by spontaneous laughter were rated as funnier than words followed by conversational laughter, and the groups did not differ. Neuroimaging results, however, showed that non-autistic adults exhibited greater medial prefrontal cortex activation when listening to words followed by conversational laughter than by spontaneous laughter, whereas autistic adults showed no difference in medial prefrontal cortex activity between the two laughter types. Our findings suggest a crucial role for the medial prefrontal cortex in understanding socio-emotionally ambiguous laughter via mentalizing. They also highlight that autistic people may face challenges in interpreting the laughter frequently encountered in everyday life, particularly conversational laughter, which carries complex meaning and social ambiguity and can contribute to social vulnerability. We therefore advocate for clearer communication with autistic people.
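
The key result above is a group-by-laughter-type interaction in medial prefrontal cortex activity. The Python sketch below shows one conventional way such an interaction could be tested on region-of-interest estimates; it is not the authors' pipeline, and the data arrays, group sizes, and effect sizes are hypothetical placeholders.

# Minimal sketch (not the study's actual analysis) of testing a
# group-by-laughter-type interaction on hypothetical mPFC ROI estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical mPFC betas: rows = participants, columns = conditions
# (words + conversational laughter, words + spontaneous laughter).
non_autistic = rng.normal([0.8, 0.3], 0.4, size=(25, 2))
autistic = rng.normal([0.5, 0.5], 0.4, size=(25, 2))

# Within-participant contrast: conversational minus spontaneous.
contrast_na = non_autistic[:, 0] - non_autistic[:, 1]
contrast_aut = autistic[:, 0] - autistic[:, 1]

# The group-by-laughter-type interaction reduces to a between-group
# comparison of this within-participant contrast.
t, p = stats.ttest_ind(contrast_na, contrast_aut)
print(f"interaction t = {t:.2f}, p = {p:.3f}")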


Subject(s)
Autistic Disorder, Brain Mapping, Brain, Laughter, Magnetic Resonance Imaging, Humans, Laughter/physiology, Laughter/psychology, Male, Female, Adult, Autistic Disorder/physiopathology, Autistic Disorder/diagnostic imaging, Autistic Disorder/psychology, Young Adult, Brain/diagnostic imaging, Brain/physiopathology, Brain/physiology, Prefrontal Cortex/diagnostic imaging, Prefrontal Cortex/physiopathology, Prefrontal Cortex/physiology, Acoustic Stimulation
3.
Philos Trans R Soc Lond B Biol Sci; 377(1863): 20210178, 2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36126667

ABSTRACT

Robert Provine made several critically important contributions to science, and in this paper we elaborate on some of his research into laughter and behavioural contagion. To do this, we employ Provine's observational methods and use a recorded example of naturalistic laughter to frame our discussion of his work. The laughter comes from a cricket commentary broadcast by the British Broadcasting Corporation in 1991, in which Jonathan Agnew and Brian Johnston attempted to summarize that day's play and at one point became overwhelmed by laughter. We use this recording to demonstrate some of Provine's key points about laughter and contagious behaviour, and we finish with some observations on the importance and implications of the differences between humans and other mammals in their use of contagious laughter. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.


Subject(s)
Laughter, Neurosciences, Animals, Humans, Laughter/psychology, Mammals
4.
J Acoust Soc Am; 151(3): 2002, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35364952

ABSTRACT

The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper therefore evaluates several speech envelope extraction techniques, such as the Hilbert transform, by comparing different acoustic landmarks (e.g., peaks in the speech envelope) with manual phonetic annotation in a naturalistic and diverse dataset. Joint speech tasks are also introduced to determine which acoustic landmarks are most closely coordinated when voices are aligned. Finally, the acoustic landmarks are evaluated as predictors for the temporal characterisation of speaking style using classification tasks. The landmark that aligned most closely with annotated vowel onsets was the set of peaks in the first derivative of a human audition-informed envelope, consistent with converging evidence from neural and behavioural data. However, differences also emerged depending on language and speaking style. Overall, the results show that both the choice of speech envelope extraction technique and the form of speech under study affect how sensitive an engineered feature is at capturing aspects of speech rhythm, such as the timing of vowels.
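
As an illustration of the kind of pipeline described above, the Python sketch below extracts a broadband amplitude envelope with the Hilbert transform and picks peaks in its first derivative, one of the acoustic landmarks the abstract compares against annotated vowel onsets. This is a minimal sketch, not the paper's exact method: the file name, low-pass cutoff, and peak-picking thresholds are illustrative assumptions.

# Minimal sketch: Hilbert-transform envelope and peaks in its first derivative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt, find_peaks

fs, audio = wavfile.read("speech.wav")       # hypothetical recording
audio = audio.astype(float)
if audio.ndim > 1:                           # fold stereo to mono if needed
    audio = audio.mean(axis=1)
audio /= np.max(np.abs(audio)) + 1e-12

# Amplitude envelope: magnitude of the analytic signal.
envelope = np.abs(hilbert(audio))

# Smooth with a low-pass filter (10 Hz cutoff, roughly syllable rate).
b, a = butter(4, 10 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# First derivative of the envelope; its peaks mark rapid amplitude rises,
# which tend to sit near vowel onsets.
derivative = np.gradient(envelope) * fs
peaks, _ = find_peaks(derivative,
                      height=np.percentile(derivative, 95),
                      distance=int(0.05 * fs))   # at least 50 ms apart
onset_times = peaks / fs
print(onset_times[:10])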


Subject(s)
Speech Perception , Voice , Humans , Language , Phonetics , Speech