ABSTRACT
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Subjects
Auditory Cortex, Humans, Auditory Perception/physiology, Speech/physiology, Brain/physiology, Acoustic Stimulation
ABSTRACT
Spontaneous and conversational laughter are important socio-emotional communicative signals. Neuroimaging findings suggest that non-autistic people engage in mentalizing to understand the meaning behind conversational laughter. Autistic people may thus face specific challenges in processing conversational laughter due to their mentalizing difficulties. Using fMRI, we explored neural differences during implicit processing of these two types of laughter. Autistic and non-autistic adults passively listened to funny words, followed by spontaneous laughter, conversational laughter, or noise-vocoded vocalizations. Behaviourally, words plus spontaneous laughter were rated as funnier than words plus conversational laughter, and the groups did not differ. However, neuroimaging results showed that non-autistic adults exhibited greater medial prefrontal cortex activation when listening to words plus conversational laughter than to words plus spontaneous laughter, whereas autistic adults showed no difference in medial prefrontal cortex activity between the two laughter types. Our findings suggest a crucial role for the medial prefrontal cortex in understanding socio-emotionally ambiguous laughter via mentalizing. Our study also highlights the possibility that autistic people may face challenges in understanding the laughter they frequently encounter in everyday life, especially conversational laughter, which carries complex meaning and social ambiguity, potentially leading to social vulnerability. We therefore advocate for clearer communication with autistic people.
Subjects
Autistic Disorder, Brain Mapping, Brain, Laughter, Magnetic Resonance Imaging, Humans, Laughter/physiology, Laughter/psychology, Male, Female, Adult, Autistic Disorder/physiopathology, Autistic Disorder/diagnostic imaging, Autistic Disorder/psychology, Young Adult, Brain/diagnostic imaging, Brain/physiopathology, Brain/physiology, Prefrontal Cortex/diagnostic imaging, Prefrontal Cortex/physiopathology, Prefrontal Cortex/physiology, Acoustic Stimulation
ABSTRACT
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Animals, Humans
ABSTRACT
The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper therefore evaluates several speech envelope extraction techniques, such as the Hilbert transform, by comparing different acoustic landmarks (e.g., peaks in the speech envelope) with manual phonetic annotation in a naturalistic and diverse dataset. Joint speech tasks are also introduced to determine which acoustic landmarks are most closely coordinated when voices are aligned. Finally, the acoustic landmarks are evaluated as predictors for the temporal characterisation of speaking style using classification tasks. The landmark that aligned most closely with annotated vowel onsets was peaks in the first derivative of an envelope informed by human audition, consistent with converging evidence from neural and behavioural data. However, differences also emerged based on language and speaking style. Overall, the results show that both the choice of speech envelope extraction technique and the form of speech under study affect how sensitive an engineered feature is at capturing aspects of speech rhythm, such as the timing of vowels.
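As a concrete illustration of the kind of pipeline compared in this paper, below is a minimal sketch of one common approach: a Hilbert-transform envelope, low-pass smoothed, with peaks in its first derivative taken as candidate vowel-onset landmarks. The cutoff frequency and peak threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

def amplitude_envelope(x, fs, cutoff_hz=10.0):
    """Hilbert-transform envelope, low-pass smoothed.

    The 10 Hz cutoff is an illustrative choice; the paper compares
    several extraction techniques, including audition-informed ones.
    """
    env = np.abs(hilbert(x))                # analytic-signal magnitude
    b, a = butter(2, cutoff_hz / (fs / 2))  # 2nd-order Butterworth low-pass
    return filtfilt(b, a, env)              # zero-phase smoothing

def onset_landmarks(x, fs):
    """Candidate vowel-onset landmarks: peaks in the envelope's first
    derivative, i.e. points of fastest amplitude rise."""
    env = amplitude_envelope(x, fs)
    d_env = np.gradient(env) * fs           # derivative in amplitude/s
    peaks, _ = find_peaks(d_env, height=0.1 * d_env.max())  # threshold is arbitrary
    return peaks / fs                       # landmark times in seconds

# Usage: times = onset_landmarks(signal, 16000)
```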
Subjects
Speech Perception, Voice, Humans, Language, Phonetics, Speech
ABSTRACT
Evidence for perceptual processing in models of speech production is often drawn from investigations in which the sound of a talker's voice is altered in real time to induce "errors." Methods of acoustic manipulation vary but are assumed to engage the same neural network and psychological processes. This paper aims to review fMRI and PET studies of altered auditory feedback and assess the strength of the evidence these studies provide for a speech error correction mechanism. Studies included were functional neuroimaging studies of speech production in neurotypical adult humans, using natural speech errors or one of three predefined speech manipulation techniques (frequency-altered feedback, delayed auditory feedback, and masked auditory feedback). Seventeen studies met the inclusion criteria. In a systematic review, we evaluated whether each study (1) used an ecologically valid speech production task, (2) controlled for auditory activation caused by hearing the perturbation, (3) statistically controlled for multiple comparisons, and (4) measured behavioral compensation correlating with perturbation. None of the studies met all four criteria. We then conducted an activation likelihood estimation meta-analysis of brain coordinates from 16 studies that reported brain responses to manipulated over unmanipulated speech feedback, using the GingerALE toolbox. These foci clustered in bilateral superior temporal gyri, anterior to cortical fields typically linked to error correction. Within the limits of our analysis, we conclude that existing neuroimaging evidence is insufficient to determine whether error monitoring occurs in the posterior superior temporal gyrus regions proposed by models of speech production.
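For readers unfamiliar with activation likelihood estimation, the sketch below illustrates the core of the ALE statistic: each reported focus is modelled as a 3D Gaussian, foci are combined into a per-study modelled-activation map, and the ALE value at each voxel is the probability that at least one study activates it. This is a toy illustration of the principle only, not GingerALE itself, which additionally uses sample-size-dependent kernels and permutation-based thresholding; the grid size and FWHM here are arbitrary assumptions.

```python
import numpy as np

def gaussian_ma_map(foci_mm, shape, fwhm_mm=10.0, voxel_mm=2.0):
    """Modelled-activation map for one study: each focus becomes a 3D
    Gaussian; a voxelwise max combines foci within the study."""
    sigma = (fwhm_mm / voxel_mm) / 2.355        # FWHM -> sigma, in voxels
    grid = np.indices(shape).astype(float)      # (3, x, y, z) voxel coordinates
    ma = np.zeros(shape)
    for f in foci_mm:
        d2 = sum((grid[i] - f[i] / voxel_mm) ** 2 for i in range(3))
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma**2)))
    return ma

def ale_map(studies, shape):
    """ALE statistic: probability that at least one study activates a
    voxel, i.e. 1 - prod(1 - MA_i) across studies."""
    ale = np.zeros(shape)
    for foci in studies:
        ale = 1.0 - (1.0 - ale) * (1.0 - gaussian_ma_map(foci, shape))
    return ale

# Toy usage: two "studies", each a list of (x, y, z) foci in mm.
studies = [[(20, 30, 24)], [(22, 28, 26), (40, 10, 12)]]
print(ale_map(studies, shape=(48, 56, 40)).max())
```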
Subjects
Speech Perception, Speech, Acoustic Stimulation, Adult, Brain Mapping, Humans, Likelihood Functions, Magnetic Resonance Imaging, Temporal Lobe/diagnostic imaging
ABSTRACT
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians-beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing.
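The independent component analysis step might look something like the following sketch: spatial ICA applied to a time-by-voxel data matrix, yielding independent spatial maps with associated time courses. This is a generic illustration using scikit-learn's FastICA on random stand-in data, not the study's group-ICA pipeline; the component count is an arbitrary assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy data: 200 time points x 5000 voxels. In the real study the input
# would be concatenated, preprocessed group fMRI data.
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 5000))

# Spatial ICA: transpose so voxels are samples; the estimated sources
# are then independent spatial maps, and the mixing matrix holds one
# time course per map.
ica = FastICA(n_components=20, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(bold.T)   # (voxels, components)
timecourses = ica.mixing_                  # (time, components)

# Expertise-selective enhancement could then be tested by comparing
# component time courses or loadings across groups and conditions.
print(spatial_maps.shape, timecourses.shape)
```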
Subjects
Auditory Perception/physiology, Music, Neuronal Plasticity, Sensorimotor Cortex/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Professional Competence
ABSTRACT
Auditory verbal hallucinations (hearing voices) are typically associated with psychosis, but a minority of the general population also experience them frequently and without distress. Such 'non-clinical' experiences offer a rare and unique opportunity to study hallucinations apart from confounding clinical factors, thus allowing for the identification of symptom-specific mechanisms. Recent theories propose that hallucinations result from an imbalance of prior expectation and sensory information, but whether such an imbalance also influences auditory-perceptual processes remains unknown. We examine for the first time the cortical processing of ambiguous speech in people without psychosis who regularly hear voices. Twelve non-clinical voice-hearers and 17 matched controls completed a functional magnetic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech) that was either potentially intelligible or unintelligible. Voice-hearers reported recognizing the presence of speech in the stimuli before controls, and before being explicitly informed of its intelligibility. Across both groups, intelligible sine-wave speech engaged a typical left-lateralized speech processing network. Notably, however, voice-hearers showed stronger intelligibility responses than controls in the dorsal anterior cingulate cortex and in the superior frontal gyrus. This suggests an enhanced involvement of attention and sensorimotor processes, selectively when speech was potentially intelligible. Altogether, these behavioural and neural findings indicate that people with hallucinatory experiences show distinct responses to meaningful auditory stimuli. A greater weighting towards prior knowledge and expectation might cause non-veridical auditory sensations in these individuals, but it might also spontaneously facilitate perceptual processing where such knowledge is required. This has implications for the understanding of hallucinations in clinical and non-clinical populations, and is consistent with current 'predictive processing' theories of psychosis.
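Sine-wave speech, the degraded stimulus used here, replaces the formants of a sentence with a small number of time-varying sinusoids. A minimal synthesis sketch is given below; it assumes formant frequency and amplitude tracks are already available (in practice they would come from a formant tracker, e.g. LPC analysis), and the toy tracks shown are static placeholders.

```python
import numpy as np

def sine_wave_speech(formant_tracks, amp_tracks, fs):
    """Synthesise sine-wave speech from formant tracks.

    formant_tracks: array (n_formants, n_samples) of frequencies in Hz;
    amp_tracks: matching amplitudes. Both would normally be estimated
    from the original sentence.
    """
    out = np.zeros(formant_tracks.shape[1])
    for freqs, amps in zip(formant_tracks, amp_tracks):
        phase = 2 * np.pi * np.cumsum(freqs) / fs  # integrate frequency -> phase
        out += amps * np.sin(phase)
    return out / max(1e-9, np.abs(out).max())      # normalise to +/-1

# Toy usage: three static "formants" for half a second at 16 kHz.
fs, n = 16000, 8000
tracks = np.tile(np.array([[500.0], [1500.0], [2500.0]]), (1, n))
sws = sine_wave_speech(tracks, np.ones_like(tracks), fs)
```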
Subjects
Gyrus Cinguli/physiology, Hallucinations/physiopathology, Prefrontal Cortex/physiology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Case-Control Studies, Female, Humans, Magnetic Resonance Imaging, Male, Uncertainty, Young Adult
ABSTRACT
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohesion. In the present study, we measured neural activity using fMRI while participants spoke simultaneously with another person. We manipulated whether the couple spoke the same sentence (allowing synchrony) or different sentences (preventing synchrony), and also whether the voice the participant heard was "live" (allowing rich reciprocal interaction) or prerecorded (with no such mutual influence). Synchronous speech was associated with increased activity in posterior and anterior auditory fields. When, and only when, participants spoke with a partner who was both synchronous and "live," we observed an absence of the auditory cortex suppression that is commonly seen as a neural correlate of speech production. Instead, auditory cortex responded as though it were processing another talker's speech. Our results suggest that detecting synchrony leads to a change in the perceptual consequences of one's own actions: they are processed as though they were other-, rather than self-produced. This may contribute to our understanding of synchronized behavior as a group-bonding tool. SIGNIFICANCE STATEMENT: Synchronized human behaviors, such as chanting, dancing, and singing, are cultural universals with functional significance: these activities increase group cohesion and cause participants to like each other and behave more prosocially toward each other. Here we use fMRI brain imaging to investigate the neural basis of one common form of cohesive synchronized behavior: joint speaking (e.g., the synchronous speech seen in chants, prayers, pledges). Results showed that joint speech recruits additional right hemisphere regions outside the classic speech production network. Additionally, we found that a neural marker of self-produced speech, suppression of sensory cortices, did not occur during joint synchronized speech, suggesting that joint synchronized behavior may alter self-other distinctions in sensory processing.
Subjects
Brain/physiology, Social Perception, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male
ABSTRACT
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young, healthy adults with continuous fMRI while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more strongly. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right-lateralized frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
Subjects
Attention/physiology, Auditory Cortex/physiology, Executive Function/physiology, Nerve Net/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
ABSTRACT
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual-motor interactions for processing heard and internally generated auditory information.
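For context, the representational similarity analysis step can be summarised in a few lines: compute a representational dissimilarity matrix (RDM) from condition-wise activity patterns, then rank-correlate it with a model RDM. The sketch below uses random stand-in data and correlation distance; the ROI size, condition count, and model RDM are placeholders, not the study's.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy data: voxel patterns for 8 sound conditions in an SMA-like ROI
# (conditions x voxels); real input would be condition-wise beta maps.
rng = np.random.default_rng(1)
patterns = rng.standard_normal((8, 300))

# Neural RDM: 1 - Pearson r between each pair of condition patterns,
# returned as a condensed (upper-triangle) vector.
neural_rdm = pdist(patterns, metric="correlation")

# A model RDM, e.g. predicted dissimilarities between sound types.
model_rdm = rng.random(neural_rdm.shape)

# Second-order comparison: rank-correlate the two RDMs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA correlation: rho={rho:.2f}, p={p:.3f}")
```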
Subjects
Auditory Perception/physiology, Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Imagination/physiology, Individuality, Noise, Acoustic Stimulation, Adult, Aged, Aged, 80 and over, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/blood supply, Neural Pathways/physiology, Oxygen/blood, Regression Analysis, Young Adult
ABSTRACT
When talkers speak in masking sounds, their speech undergoes a variety of acoustic and phonetic changes, known collectively as the Lombard effect. Most behavioural and neuroimaging research in this area has concentrated on the effect of energetic maskers such as white noise on Lombard speech. Previous fMRI studies have argued that neural responses to speaking in noise are driven by the quality of auditory feedback, that is, the audibility of the speaker's voice over the masker. However, we also frequently produce speech in the presence of informational maskers such as another talker. Here, speakers read sentences over a range of maskers varying in their informational and energetic content: speech, rotated speech, speech-modulated noise, and white noise. Subjects also spoke in quiet and listened to the maskers without speaking. When subjects spoke in masking sounds, their vocal intensity increased in line with the energetic content of the masker. However, the opposite pattern was found neurally. In the superior temporal gyrus, activation was most strongly associated with increases in informational, rather than energetic, masking. This suggests that the neural activations associated with speaking in noise are more complex than a simple feedback response.
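One of the maskers named here, speech-modulated noise, is straightforward to construct: white noise carrying the broadband amplitude envelope of a speech recording, so that it matches speech energetically while carrying no linguistic information. A minimal sketch follows, with an assumed envelope-smoothing cutoff rather than the study's exact parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_modulated_noise(speech, fs, cutoff_hz=16.0, seed=0):
    """White noise modulated by the broadband amplitude envelope of a
    speech signal: energetically speech-like, informationally empty."""
    env = np.abs(hilbert(speech))            # amplitude envelope
    b, a = butter(2, cutoff_hz / (fs / 2))
    env = filtfilt(b, a, env)                # smooth the envelope
    noise = np.random.default_rng(seed).standard_normal(len(speech))
    masker = env * noise
    # Match RMS level to the original speech.
    return masker * (np.sqrt(np.mean(speech**2)) / np.sqrt(np.mean(masker**2)))
```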
Subjects
Perceptual Masking/physiology, Speech Production Measurement, Speech/physiology, Diffusion Magnetic Resonance Imaging, Humans, Noise, Phonetics
ABSTRACT
It is well established that the categorisation of emotional facial expressions can differ depending on contextual information. Whether this malleability extends to the auditory domain and to genuine emotion expressions remains poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad facial expressions. Participants rated the vocalisations on separate unipolar scales of happiness and sadness and on arousal. Although they were instructed to focus exclusively on the vocalisations, consistent context effects were found: for both laughter and crying, emotion judgements were shifted towards the information expressed by the face. These modulations were independent of response latencies and were larger for more emotionally ambiguous vocalisations. No effects of context were found for arousal ratings. These findings suggest that the automatic encoding of contextual information during emotion perception generalises across modalities, to purely non-verbal vocalisations, and is not confined to acted expressions.
Subjects
Auditory Perception/physiology, Crying, Facial Expression, Laughter, Adolescent, Adult, Arousal, Female, Humans, Male, Middle Aged, Reaction Time, Young Adult
ABSTRACT
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise.
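The abstract does not specify how speech reception thresholds were measured, but a common approach is an adaptive track that converges on the SNR yielding 50% correct sentence report. The sketch below shows one simple one-down/one-up variant, purely for illustration; the step size, trial count, and averaging rule are assumptions, not the study's procedure.

```python
import numpy as np

def adaptive_srt(trial_correct, start_snr=0.0, step_db=2.0, n_trials=30):
    """One-down/one-up adaptive track converging on the SNR for 50%
    sentence recognition. trial_correct(snr) should run one trial and
    return True if the listener reported the sentence correctly."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = trial_correct(snr)
        track.append(snr)
        snr += -step_db if correct else step_db  # harder after a hit
    return np.mean(track[-10:])                  # estimate from the final trials

# Toy usage with a simulated listener whose true SRT is -4 dB SNR.
rng = np.random.default_rng(2)
simulated = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr + 4)))
print(f"Estimated SRT: {adaptive_srt(simulated):.1f} dB SNR")
```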
Subjects
Music, Perceptual Masking/physiology, Speech Intelligibility/physiology, Speech Perception/physiology, Adult, Attention, Auditory Threshold/physiology, Female, Humans, Intelligence, Male, Noise, Occupations, Pitch Discrimination/physiology, Psychoacoustics, Psychomotor Performance, Signal-To-Noise Ratio, Stroop Test, Surveys and Questionnaires, Time Perception/physiology, Wechsler Scales, Young Adult
ABSTRACT
The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extraction of an intelligible linguistic message, whereas the right anterior temporal lobe showed an overall preference for signals clearly conveying dynamic pitch information [Johnsrude, I. S., Penhune, V. B., & Zatorre, R. J. Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain, 123, 155-163, 2000; Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000]. The current study combined modulations of overall intelligibility (through vocoding and spectral inversion) with a manipulation of pitch contour (normal vs. falling) to investigate the processing of spoken sentences in functional MRI. Our overall findings replicate and extend those of Scott et al. [Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000], where greater sentence intelligibility was predominantly associated with increased activity in the left STS, and the greatest response to normal sentence melody was found in right superior temporal gyrus. These data suggest a spatial distinction between brain areas associated with intelligibility and those involved in the processing of dynamic pitch information in speech. By including a set of complexity-matched unintelligible conditions created by spectral inversion, this is additionally the first study reporting a fully factorial exploration of spectrotemporal complexity and spectral inversion as they relate to the neural processing of speech intelligibility. Perhaps surprisingly, there was little evidence for an interaction between the two factors-we discuss the implications for the processing of sound and speech in the dorsolateral temporal lobes.
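Noise-vocoding, one of the two intelligibility manipulations used here, divides the signal into frequency bands, extracts each band's amplitude envelope, and uses it to modulate band-limited noise (spectral inversion, the other manipulation, instead rotates the spectrum about a centre frequency). A minimal vocoder sketch follows; the channel count and band edges are illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=6, lo=100.0, hi=5000.0):
    """Noise-vocode a signal: split into log-spaced bands, extract each
    band's envelope, and use it to modulate noise in the same band."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(3)
    out = np.zeros_like(speech)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))                # band envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += env * noise                         # envelope-modulated noise band
    return out / max(1e-9, np.abs(out).max())
```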
Subjects
Brain Mapping/methods, Pitch Perception/physiology, Speech Intelligibility/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adolescent, Adult, Humans, Magnetic Resonance Imaging, Middle Aged, Young Adult
ABSTRACT
The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in the motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor cortex activation is essential in joint speech, particularly for the timing of turn taking.
Subjects
Brain Mapping, Communication, Motor Cortex/physiology, Speech Perception/physiology, Nerve Net/physiology, Speech/physiology
ABSTRACT
Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others' expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects authenticity perception in vocal emotions. Participants listened to authentic and posed laughs and cries, while holding a pen between the teeth and lips to inhibit orofacial responses (n = 75), or while responding freely without a pen (n = 75). They made authenticity judgments and rated how much they felt the conveyed emotions (emotional contagion). Mimicry inhibition decreased the accuracy of authenticity perception in laughter and crying, and in posed and authentic vocalizations. It did not affect contagion ratings, however, nor performance in a cognitive control task, ruling out the effort of holding the pen as an explanation for the decrements in authenticity perception. Laughter was more contagious than crying, and authentic vocalizations were more contagious than posed ones, regardless of whether mimicry was inhibited or not. These findings confirm the role of mimicry in emotional understanding and extend it to auditory emotions. They also imply that perceived emotional contagion can be unrelated to mimicry.
Subjects
Crying, Emotions, Facial Expression, Laughter, Social Perception, Humans, Female, Male, Adult, Young Adult, Laughter/physiology, Crying/physiology, Emotions/physiology, Imitative Behavior/physiology, Auditory Perception/physiology
ABSTRACT
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-onset blindness. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
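As a rough illustration of how component amplitudes such as the N1, P2, and LPP are quantified, the sketch below averages single-trial epochs, baseline-corrects, and takes mean amplitudes in a priori time windows. The window boundaries shown are typical textbook values, not necessarily those used in this study.

```python
import numpy as np

def component_amplitudes(epochs, fs, baseline_s=0.2,
                         windows={"N1": (0.08, 0.13),
                                  "P2": (0.15, 0.25),
                                  "LPP": (0.40, 0.80)}):
    """Mean ERP amplitudes in a priori time windows.

    epochs: array (n_trials, n_samples) for one electrode/condition,
    with each epoch starting `baseline_s` seconds before stimulus onset.
    """
    erp = epochs.mean(axis=0)                  # average across trials
    erp -= erp[: int(baseline_s * fs)].mean()  # baseline correction
    out = {}
    for name, (t0, t1) in windows.items():
        i0 = int((baseline_s + t0) * fs)
        i1 = int((baseline_s + t1) * fs)
        out[name] = erp[i0:i1].mean()          # mean amplitude in window
    return out

# Toy usage: 40 trials, 1 s epochs sampled at 250 Hz.
rng = np.random.default_rng(4)
print(component_amplitudes(rng.standard_normal((40, 250)), fs=250))
```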
Subjects
Laughter, Voice, Humans, Emotions/physiology, Blindness, Laughter/physiology, Social Perception, Electroencephalography, Evoked Potentials/physiology
ABSTRACT
Human interaction is immersed in laughter; though genuine and posed laughter are acoustically distinct, they are both crucial socio-emotional signals. In this novel study, autistic and non-autistic adults explicitly rated the affective properties of genuine and posed laughter. Additionally, we explored whether their self-reported everyday experiences with laughter differ. Both groups could differentiate between these two types of laughter. However, autistic adults rated posed laughter as more authentic and emotionally arousing than non-autistic adults, perceiving it to be similar to genuine laughter. Autistic adults reported laughing less, deriving less enjoyment from laughter, and experiencing difficulty in understanding the social meaning of other people's laughter compared to non-autistic people. Despite these differences, autistic adults reported using laughter socially as often as non-autistic adults, leveraging it to mediate social contexts. Our findings suggest that autistic adults show subtle differences in their perception of laughter, which may be associated with their struggles in comprehending the social meaning of laughter, as well as their diminished frequency and enjoyment of laughter in everyday scenarios. By combining experimental evidence with first-person experiences, this study suggests that autistic adults likely employ different strategies to understand laughter in everyday contexts, potentially leaving them socially vulnerable in communication.
Subjects
Autistic Disorder, Laughter, Humans, Laughter/psychology, Male, Adult, Female, Autistic Disorder/psychology, Autistic Disorder/physiopathology, Young Adult, Emotions/physiology, Middle Aged
ABSTRACT
We present a questionnaire exploring everyday laughter experience: the Laughter Production and Perception Questionnaire (LPPQ). We developed the 30-item questionnaire in English and collected data from an English-speaking sample (N = 823). Based on Principal Component Analysis (PCA), we identified four dimensions which accounted for variation in people's experiences of laughter: laughter frequency ('Frequency'), social usage of laughter ('Usage'), understanding of other people's laughter ('Understanding'), and feelings towards laughter ('Liking'). Reliability and validity of the LPPQ were assessed. To explore potential similarities and differences based on culture and language, we also collected data from a Mandarin Chinese-speaking sample (N = 574). A PCA suggested the extraction of the same four dimensions, with some item differences between the English and Chinese versions. The LPPQ will advance research into the experience of human laughter, which plays a potentially crucial role in everyday life.
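A minimal sketch of the dimensionality-reduction step described here, using scikit-learn's PCA on stand-in data (the real LPPQ responses are not reproduced): standardise the 30 items, extract four components, and inspect loadings to name the dimensions. Rotation choices and extraction criteria in the actual analysis may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data standing in for the questionnaire: 823 respondents x 30
# items rated on a 1-7 scale.
rng = np.random.default_rng(5)
responses = rng.integers(1, 8, size=(823, 30)).astype(float)

# Standardise items, then extract four components, mirroring the
# four-dimension structure reported for the LPPQ.
z = (responses - responses.mean(0)) / responses.std(0)
pca = PCA(n_components=4)
scores = pca.fit_transform(z)        # per-respondent dimension scores
loadings = pca.components_.T         # (items, components) loadings

print("Variance explained:", np.round(pca.explained_variance_ratio_, 2))
# Items loading strongly on each component would be inspected to name
# the dimensions (Frequency, Usage, Understanding, Liking).
```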