ABSTRACT
Laughter conveys a wide range of information relevant for social interaction. In previous research we have shown that laughter can convey information about the sender's emotional state; however, other research did not find such an effect. This paper aims to replicate our previous study using participant samples of diverse cultural backgrounds. A total of 161 participants from Poland, the UK, India, Hong Kong, and other countries classified 121 spontaneously emitted German laughter sounds according to laughter type, i.e., joyful, schadenfreude, and tickling laughter. Results showed that all participant groups classified the laughter sounds above chance level, and that there was a slight ingroup advantage for Western listeners. This suggests that classification of laughter according to the sender's emotional state is possible across different cultures, and that there may be a small advantage for classifying laughter of close cultural proximity.
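As an illustration of the above-chance criterion used here, the sketch below tests whether classification accuracy exceeds the 1/3 chance level of a three-alternative task. It is a minimal example with hypothetical counts (55 correct of 121 trials); it does not reproduce the study's actual statistics.

```python
# Minimal sketch: test classification accuracy against chance (hypothetical data).
# With three laughter types (joy, schadenfreude, tickling), chance level is 1/3.
from scipy.stats import binomtest

n_trials = 121   # number of laughter sounds presented (from the abstract)
n_correct = 55   # hypothetical number of correct classifications

# One-sided exact binomial test: is accuracy greater than chance (p = 1/3)?
result = binomtest(n_correct, n_trials, p=1/3, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```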
Subjects
Laughter , Humans , Laughter/psychology , Emotions , Happiness , Sensation , Recognition (Psychology)
ABSTRACT
It has been shown that the acoustic signal of posed laughter can convey affective information to the listener. However, because posed and spontaneous laughter differ in a number of significant aspects, it is unclear whether affective communication generalises to spontaneous laughter. To answer this question, we created a stimulus set of 381 spontaneous laughter audio recordings, produced by 51 different speakers and representing different types of laughter. In Experiment 1, 159 participants were presented with these audio recordings without any further information about the situational context of the speakers and asked to classify the laughter sounds. Results showed that joyful, tickling, and schadenfreude laughter could be classified significantly above chance level. In Experiment 2, 209 participants were presented with a subset of 121 laughter recordings correctly classified in Experiment 1 and asked to rate the laughter according to four emotional dimensions, i.e., arousal, dominance, sender's valence, and receiver-directed valence. Results showed that the laughter types differed significantly in their ratings on all dimensions. Joyful laughter and tickling laughter both showed a positive sender's valence and receiver-directed valence, with tickling laughter showing particularly high arousal. Schadenfreude laughter had a negative receiver-directed valence and a high dominance, thus providing empirical evidence for the existence of a dark side in spontaneous laughter. The present results suggest that, with the evolution of human social communication, laughter diversified from the ancestral play signal of non-human primates into a much more fine-grained signal that can serve a multitude of social functions in order to regulate group structure and hierarchy.
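A minimal sketch of the kind of dimensional comparison described in Experiment 2, assuming a hypothetical long-format ratings table (a `laughter_type` column plus one column per dimension); the abstract does not specify the study's actual analysis pipeline.

```python
# Sketch: one-way ANOVA per emotional dimension across laughter types
# (hypothetical data layout; not the study's actual pipeline).
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical ratings table: one row per rating, e.g. loaded from a CSV.
df = pd.read_csv("laughter_ratings.csv")  # columns: laughter_type, arousal, dominance, ...

for dimension in ["arousal", "dominance", "sender_valence", "receiver_valence"]:
    groups = [g[dimension].values for _, g in df.groupby("laughter_type")]
    f_stat, p_val = f_oneway(*groups)
    print(f"{dimension}: F = {f_stat:.2f}, p = {p_val:.4f}")
```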
Subjects
Laughter , Voice , Animals , Arousal , Emotions/physiology , Humans , Laughter/physiology , Laughter/psychology , Sensation
ABSTRACT
Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter, using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network, irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation, and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increased connectivity between auditory association cortices, the right dorsolateral prefrontal cortex, and brain areas associated with mentalizing, as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, the direction of attention to informative aspects of the laughter signal, and the retention of these aspects in working memory during evaluation. These processes may be associated with visual imagery supporting the formation of inferences about the intentions of our social counterparts. Here, the right dorsolateral prefrontal cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter.
Subjects
Auditory Cortex/metabolism , Laughter/physiology , Mental Processes/physiology , Nerve Net/physiology , Prefrontal Cortex/physiology , Social Behavior , Acoustic Stimulation , Adult , Brain Mapping , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
ABSTRACT
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustic correlates that encode different emotional dimensions. In this study we examined the acoustic structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlating 43 acoustic parameters with the individual emotional dimensions revealed that each dimension was associated with a number of vocal cues. The patterns of cues overlapped with those found for emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.
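A minimal sketch of this correlational approach, assuming a hypothetical table with one row per laughter sound, columns for the acoustic parameters, and a mean listener rating per dimension; the parameter names are illustrative, not the study's actual 43-feature set.

```python
# Sketch: correlate acoustic parameters with rated emotional dimensions
# (hypothetical data and column names).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("laughter_acoustics.csv")  # one row per laughter sound

acoustic_params = ["f0_mean", "f0_range", "intensity_mean", "duration"]  # illustrative subset
dimensions = ["arousal", "dominance", "sender_valence", "receiver_valence"]

for dim in dimensions:
    for param in acoustic_params:
        r, p = pearsonr(df[param], df[dim])
        if p < 0.05:  # a full analysis would also correct for multiple comparisons
            print(f"{dim} ~ {param}: r = {r:.2f}, p = {p:.4f}")
```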
Subjects
Cues , Emotions , Laughter/psychology , Speech Perception , Acoustic Stimulation , Adult , Arousal , Female , Humans , Male , Neuropsychological Tests
ABSTRACT
Although laughter is an important aspect of nonverbal vocalization, its acoustic properties are still not fully understood. Extreme articulation during laughter production, such as wide jaw opening, suggests that laughter can have very high first formant (F1) frequencies. We measured the fundamental frequency and formant frequencies of the vowels produced in the vocalic segments of laughter. Vocalic segments showed higher average F1 frequencies than those previously reported, and individual values could be as high as 1100 Hz for male speakers and 1500 Hz for female speakers. To our knowledge, these are the highest F1 frequencies reported to date for human vocalizations, exceeding even the F1 frequencies reported for trained soprano singers. These exceptionally high F1 values are likely to result from the extreme positions adopted by the vocal tract during laughter, in combination with physiological constraints accompanying the production of a "pressed" voice.
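A minimal sketch of how F0 and F1 can be measured from a vocalic segment, using Praat's algorithms via the parselmouth library (an assumption; the abstract does not state which analysis software was used). The file name and time point are hypothetical.

```python
# Sketch: measure fundamental frequency (F0) and first formant (F1)
# of a vocalic laughter segment with Praat via parselmouth (hypothetical file/time).
import parselmouth

snd = parselmouth.Sound("laughter_segment.wav")

# F0 from Praat's pitch analysis; F1 from Burg-method formant tracking.
pitch = snd.to_pitch()
formants = snd.to_formant_burg(maximum_formant=5500)  # 5500 Hz ceiling suits female voices

t = 0.15  # hypothetical time (s) inside the vocalic segment
f0 = pitch.get_value_at_time(t)
f1 = formants.get_value_at_time(1, t)  # formant number 1 at time t
print(f"F0 = {f0:.0f} Hz, F1 = {f1:.0f} Hz")
```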
Subjects
Larynx/physiology , Laughter , Phonation , Speech Acoustics , Female , Humans , Larynx/anatomy & histology , Male , Sex Factors , Signal Processing, Computer-Assisted , Sound Spectrography , Time Factors
ABSTRACT
Laughter is highly relevant for social interaction in human beings and non-human primates. In humans as well as in non-human primates, laughter can be induced by tickling. Human laughter, however, has further diversified and encompasses emotional laughter types with various communicative functions, e.g., joyful and taunting laughter. Here, we evaluated whether this evolutionary diversification of ecological functions is associated with distinct cerebral responses underlying laughter perception. Functional MRI revealed a double dissociation of cerebral responses during the perception of tickling laughter and emotional laughter (joy and taunt), with higher activations in the anterior rostral medial frontal cortex (arMFC) when emotional laughter was perceived, and stronger responses in the right superior temporal gyrus (STG) during the perception of tickling laughter. The enhanced activation of the arMFC for emotional laughter presumably reflects increasing demands on social cognition processes arising from the greater social salience of these laughter types. The activation increase in the STG for tickling laughter may be linked to the higher acoustic complexity of this laughter type. The observed dissociation of cerebral responses for emotional laughter and tickling laughter was independent of task-directed focusing of attention. These findings support the postulated diversification of human laughter in the course of evolution from an unequivocal play signal to laughter with distinct emotional contents subserving complex social functions.
Subjects
Auditory Perception/physiology , Brain Mapping , Brain/physiology , Laughter/physiology , Adult , Brain/anatomy & histology , Emotions/physiology , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male
ABSTRACT
Although listeners are able to decode the underlying emotions embedded in acoustic laughter sounds, little is known about the acoustic cues that differentiate between the emotions. This study investigated the acoustic correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation, whereas prosodic parameters were more effective. Emotions are thus expressed by similar prosodic parameters in both laughter and speech.
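A minimal sketch of discriminating emotion categories from a small set of acoustic parameters, using cross-validated linear discriminant analysis (an illustrative choice; the abstract does not name the discrimination method used). The data layout and feature names are hypothetical.

```python
# Sketch: discriminate four laughter emotions from acoustic parameters
# (illustrative classifier; hypothetical data and feature names).
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

df = pd.read_csv("laughter_parameters.csv")  # one row per laughter sound

features = ["f0_mean", "f0_range", "tempo", "intensity_mean"]  # small illustrative set
X = df[features]
y = df["emotion"]  # joy, tickling, taunting, or schadenfreude

# Cross-validated accuracy; chance level for four balanced classes is 0.25.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"mean accuracy = {scores.mean():.2f} (chance = 0.25)")
```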
Subjects
Emotions , Laughter , Acoustics , Analysis of Variance , Female , Happiness , Humans , Male , Phonetics , Sound Spectrography , Young Adult
ABSTRACT
Although laughter is important in human social interaction, its role as a communicative signal is poorly understood. Because laughter is expressed in various emotional contexts, the question arises as to whether different emotions are communicated. In the present study, participants had to appraise 4 types of laughter sounds (joy, tickling, taunting, schadenfreude), either by classifying them according to the underlying emotion or by rating them on different emotional dimensions. The authors found that emotions in laughter (a) can be classified into different emotional categories and (b) can have distinctive profiles on W. Wundt's (1905) emotional dimensions. This shows that laughter is a multifaceted social behavior that can adopt various emotional connotations. The findings support the postulated function of laughter in establishing group structure, whereby laughter is used either to include individuals in the group or to exclude them from it.
Subjects
Emotions/physiology , Laughter/physiology , Social Behavior , Humans , Interpersonal Relations , Nonverbal Communication
ABSTRACT
Both intonation (affective prosody) and the lexical meaning of verbal utterances contribute to the vocal expression of a speaker's emotional state, an important aspect of human communication. However, it is still a matter of debate how the information from these two 'channels' is integrated during speech perception. To further analyze the impact of affective prosody on lexical access, we investigated so-called interjections, i.e., short verbal emotional utterances. The results of a series of psychoacoustic studies indicate that the processing of emotional interjections is mediated by a divided cognitive mechanism encompassing both lexical access and the encoding of prosodic data. Emotional interjections could be separated into elements with high or low lexical content. For the high-lexical items, both prosodic and propositional cues had a significant influence on recognition rates, whereas the processing of the low-lexical items depended almost solely on prosodic information. Incongruities between lexical and prosodic data structures compromised stimulus identification. Thus, the analysis of utterances characterized by a dissociation between the prosodic and lexical dimensions revealed that prosody exerts a stronger impact on listeners' judgments than lexicality. Taken together, these findings indicate that propositional and prosodic speech components closely interact during speech perception.