1.
Cogn Emot ; : 1-19, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-37997898

ABSTRACT

When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, such as sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed chance, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
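
As a rough illustration of the signal detection analysis used in Experiment 1, the sketch below computes a sensitivity index (d') from yes/no matching data, with a standard log-linear correction for extreme rates. It is a minimal sketch, not the authors' analysis code; the function and the example rates are hypothetical.

```python
# Minimal sketch of a signal-detection sensitivity analysis for yes/no
# matching data. Illustrative only; the rates below are made-up examples.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float, n: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 stay finite."""
    hr = (hit_rate * n + 0.5) / (n + 1)
    fa = (fa_rate * n + 0.5) / (n + 1)
    return norm.ppf(hr) - norm.ppf(fa)

# Hypothetical example: listeners accepted 75% of vocalisations in their
# true context (hits) and 30% of mismatched ones (false alarms).
print(d_prime(0.75, 0.30, n=20))  # d' > 0 indicates above-chance matching
```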

2.
Proc Biol Sci ; 287(1929): 20201148, 2020 06 24.
Article in English | MEDLINE | ID: mdl-32546102

ABSTRACT

Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options, and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations of most contexts, in addition to inferring arousal and valence. Judgments were more accurate for negative than for positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness, duration, and noisiness cues in making context judgements, and of pitch to infer core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
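
The cues named in the acoustic analysis (duration, pitch, brightness, noisiness) can be approximated with standard audio features. Below is a hedged sketch using the librosa library on a synthetic placeholder signal; it is not the authors' pipeline, and the feature choices (spectral centroid for brightness, spectral flatness for noisiness) are assumptions.

```python
# Sketch of extracting cues comparable to those in the acoustic analysis.
# A synthetic tone stands in for a recorded call; not the authors' pipeline.
import numpy as np
import librosa

sr = 22050
y = 0.5 * np.sin(2 * np.pi * 440 * np.arange(int(0.5 * sr)) / sr)

duration = len(y) / sr                                      # seconds
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=1500, sr=sr)  # fundamental frequency
pitch_mean = np.nanmean(f0)                                 # mean F0 in Hz
brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()  # "brightness"
noisiness = librosa.feature.spectral_flatness(y=y).mean()   # near 1.0 = noise-like
print(duration, pitch_mean, brightness, noisiness)
```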


Subjects
Auditory Perception, Pan troglodytes, Acoustics, Affect, Animals, Cues (Psychology), Emotions, Female, Humans, Male, Noise
3.
Chem Senses ; 43(6): 419-426, 2018 07 05.
Article in English | MEDLINE | ID: mdl-29796589

ABSTRACT

In a double-blind experiment, participants were exposed to facial images of anger, disgust, fear, and neutral expressions under 2 body odor conditions: fear and neutral sweat. They had to indicate the valence of the gradually emerging facial image. Two alternative hypotheses were tested, namely a "general negative evaluative state" hypothesis and a "discrete emotion" hypothesis. These hypotheses predict 2 distinct data patterns for muscle activation and classification speed of facial expressions. A pattern of results supporting the "discrete emotion" perspective would show significantly increased activity in the medial frontalis (eyebrow raiser) and corrugator supercilii (frown) muscles associated with fear, and significantly decreased reaction times (RTs) to fear faces only in the fear odor condition. Conversely, a pattern characterized only by significantly increased corrugator supercilii activity, together with decreased RTs for fear, disgust, and anger faces in the fear odor condition, would support the "general negative evaluative state" perspective. The data support the discrete emotion account for facial affect perception primed with fear odor. This study provides a first demonstration of perception of discrete negative facial expressions using olfactory priming.
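
The two hypotheses are distinguished by which odor x expression pattern emerges in the RT data. A minimal sketch of that kind of repeated-measures test is below, run on simulated placeholder data rather than the study's; the variable names are hypothetical.

```python
# Sketch of an odor x expression repeated-measures ANOVA on RTs; the
# discrete-emotion account predicts an interaction driven by fear faces.
# Data are simulated placeholders, not the study's measurements.
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = [(s, o, e, rng.normal(650, 40))        # one mean RT per cell
        for s, o, e in itertools.product(
            range(12), ["fear_sweat", "neutral_sweat"],
            ["anger", "disgust", "fear", "neutral"])]
df = pd.DataFrame(rows, columns=["subject", "odor", "expression", "rt"])

res = AnovaRM(df, depvar="rt", subject="subject",
              within=["odor", "expression"]).fit()
print(res)  # inspect the odor:expression interaction term
```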


Subjects
Facial Expression, Fear, Odorants, Sweat, Double-Blind Method, Humans
4.
Emotion ; 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39298240

ABSTRACT

What does it mean to feel good? Is our experience of gazing in awe at a majestic mountain fundamentally different from erupting with triumph when our favorite team wins the championship? Here, we use a semantic space approach to test which positive emotional experiences are distinct from each other based on in-depth personal narratives of experiences involving 22 positive emotions (n = 165; 3,592 emotional events). A bottom-up computational analysis was applied to the transcribed text, with unsupervised clustering employed to maximize internal granular consistency (i.e., the clusters being maximally different and maximally internally homogeneous). The analysis yielded four emotions that map onto distinct clusters of subjective experiences: amusement, interest, lust, and tenderness. The application of the semantic space approach to in-depth personal accounts yields a nuanced understanding of positive emotional experiences. Moreover, this analytical method allows for the bottom-up development of emotion taxonomies, showcasing its potential for broader applications in the study of subjective experiences. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
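
In the spirit of the clustering criterion described above (clusters maximally different and maximally internally homogeneous), the sketch below vectorises a handful of placeholder narratives and picks the cluster count that maximises the silhouette score. It is one plausible reading of the approach, not the authors' actual pipeline.

```python
# Sketch of bottom-up clustering of narrative text: TF-IDF features plus
# k-means, selecting k by silhouette score (a separation/homogeneity
# criterion). Texts are placeholders, not the study's narratives.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

texts = [
    "we laughed until we cried at the party",
    "the comedian had everyone in stitches",
    "the lecture sparked my curiosity about physics",
    "I could not stop reading about the discovery",
    "holding my newborn felt warm and tender",
    "we hugged quietly, full of affection",
]
X = TfidfVectorizer(stop_words="english").fit_transform(texts)

best_k, best_score = 2, -1.0
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```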

5.
Emotion ; 24(2): 397-411, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37616109

ABSTRACT

The COVID-19 pandemic presents challenges to psychological well-being, but how can we predict when people suffer or cope during sustained stress? Here, we test the prediction that specific types of momentary emotional experiences are differently linked to psychological well-being during the pandemic. Study 1 used survey data collected from 24,221 participants in 51 countries during the COVID-19 outbreak. We show that, across countries, well-being is linked to individuals' recent emotional experiences, including calm, hope, anxiety, loneliness, and sadness. Consistent results are found in two age-, sex-, and ethnicity-representative samples in the United Kingdom (n = 971) and the United States (n = 961) with preregistered analyses (Study 2). A prospective 30-day daily diary study conducted in the United Kingdom (n = 110) confirms the key role of these five emotions and demonstrates that emotional experiences precede changes in well-being (Study 3). Our findings highlight differential relationships between specific types of momentary emotional experiences and well-being and point to the cultivation of calm and hope as candidate routes for well-being interventions during periods of sustained stress. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
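
One simple way to model the link between the five emotions and well-being is an ordinary least squares regression. The sketch below runs such a model on simulated placeholder data; the variable names mirror the emotions listed above but are otherwise hypothetical, and the study's own models may differ.

```python
# Sketch of regressing well-being on five recent emotional experiences.
# Simulated placeholder data; not the study's dataset or exact model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(1, 7, size=(200, 5)),
                  columns=["calm", "hope", "anxiety", "loneliness", "sadness"])
df["wellbeing"] = (df["calm"] + df["hope"] - df["anxiety"]
                   - df["loneliness"] - df["sadness"]
                   + rng.normal(0, 1, size=200))

model = smf.ols("wellbeing ~ calm + hope + anxiety + loneliness + sadness",
                data=df).fit()
print(model.summary())  # positive coefficients expected for calm and hope
```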


Subjects
COVID-19, Pandemics, Humans, Psychological Well-Being, Prospective Studies, Emotions
7.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200404, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775822

ABSTRACT

Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers' cultural group identity. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
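
As a hedged illustration of the Bayesian side of the analysis, the sketch below checks whether group-membership judgements exceed the 50% chance level using a Beta-Binomial posterior. The counts are invented examples, and the paper's actual Bayesian models may be quite different.

```python
# Sketch of a Bayesian above-chance check: a flat Beta(1, 1) prior updated
# with binary accuracy counts. Counts are made-up, not the study's data.
from scipy.stats import beta

correct, total = 170, 273                 # hypothetical in/out-group judgements
posterior = beta(1 + correct, 1 + total - correct)
print(posterior.mean())                   # posterior mean accuracy
print(posterior.cdf(0.5))                 # P(accuracy <= chance)
print(posterior.ppf([0.025, 0.975]))      # 95% credible interval
```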


Subjects
Laughter, Auditory Perception, Bayes Theorem, Emotions, Group Processes, Humans
8.
J Nonverbal Behav ; 45(4): 419-454, 2021.
Article in English | MEDLINE | ID: mdl-34744232

ABSTRACT

The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners' (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10919-021-00375-1.
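
The logic of the acoustic classification experiments is that more distinctive acoustic patterns should let a classifier recover emotion labels more accurately. A minimal sketch of that comparison with a cross-validated linear SVM is below, on random placeholder features; the models and features used in the paper may differ.

```python
# Sketch of the acoustic-classification comparison: run the same
# cross-validated classifier on vocalization features and on prosody
# features and compare accuracies. Features here are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_vocal = rng.normal(size=(880, 12))   # placeholder acoustic features
y = rng.integers(0, 22, size=880)      # placeholder labels, 22 emotions

acc = cross_val_score(SVC(kernel="linear"), X_vocal, y, cv=5).mean()
print(acc)  # repeat with prosody features and compare the two accuracies
```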

9.
Psychon Bull Rev ; 27(2): 237-265, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31898261

ABSTRACT

Researchers examining nonverbal communication of emotions are becoming increasingly interested in differentiating between positive emotional states like interest, relief, and pride. But despite the importance of the voice in communicating emotion in general and positive emotion in particular, there is to date no systematic review of what characterizes vocal expressions of different positive emotions. Furthermore, integration and synthesis of current findings are lacking. In this review, we comprehensively review studies (N = 108) investigating acoustic features relating to specific positive emotions in speech prosody and nonverbal vocalizations. We find that happy voices are generally loud with considerable variability in loudness, have high and variable pitch, and are high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate differ across positive emotions, with patterns mapping onto clusters of emotions, so-called emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savouring emotions (contentment and pleasure), and lower for a prosocial emotion (admiration). Some, but not all, of the differences in acoustic patterns also map onto differences in arousal levels. We end by pointing to limitations in extant work and making concrete proposals for future research on positive emotions in the voice.
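
The family-level pitch pattern described above could be tested with a one-way ANOVA across emotion families. The sketch below does so on invented pitch means; the numbers are illustrative only and are not drawn from the reviewed studies.

```python
# Sketch of a family-level pitch comparison. Values are invented
# illustrations, not data from the reviewed studies.
from scipy.stats import f_oneway

epistemological = [310, 295, 330, 305]  # e.g. amusement, interest, relief
savouring = [250, 240, 265, 255]        # e.g. contentment, pleasure
prosocial = [210, 200, 220, 205]        # e.g. admiration
print(f_oneway(epistemological, savouring, prosocial))
```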


Subjects
Emotions/physiology, Nonverbal Communication/physiology, Speech/physiology, Voice/physiology, Humans
10.
Emotion ; 20(8): 1435-1445, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31478724

ABSTRACT

Are emotional expressions shaped by specialized innate mechanisms that guide learning, or do they develop exclusively from learning without innate preparedness? Here we test whether nonverbal affective vocalizations produced by bilaterally congenitally deaf adults contain emotional information that is recognizable to naive listeners. Because these deaf individuals have had no opportunity for auditory learning, the presence of such an association would imply that mappings between emotions and vocalizations are buffered against the absence of input that is typically important for their development and thus at least partly innate. We recorded nonverbal vocalizations expressing 9 emotions from 8 deaf individuals (435 tokens) and 8 matched hearing individuals (536 tokens). These vocalizations were submitted to an acoustic analysis and used in a recognition study in which naive listeners (n = 812) made forced-choice judgments. Our results show that naive listeners can reliably infer many emotional states from nonverbal vocalizations produced by deaf individuals. In particular, deaf vocalizations of fear, disgust, sadness, amusement, sensual pleasure, surprise, and relief were recognized at better-than-chance levels, whereas anger and achievement/triumph vocalizations were not. Differences were found on most acoustic features of the vocalizations produced by deaf as compared with hearing individuals. Our results suggest that there is an innate component to the associations between human emotions and vocalizations. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
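
A standard way to establish better-than-chance recognition for a single emotion category is a binomial test against the guessing rate, here assumed to be 1/9 given nine response options. The sketch below uses invented counts, not the study's data.

```python
# Sketch of a better-than-chance test for one emotion category in a
# 9-alternative forced-choice task. Counts are hypothetical.
from scipy.stats import binomtest

correct, total = 42, 90                      # hypothetical fear judgements
result = binomtest(correct, total, p=1/9, alternative="greater")
print(result.pvalue)  # small p: recognition exceeds the 1/9 chance level
```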


Subjects
Auditory Perception/physiology, Emotions/physiology, Adult, Aged, Female, Humans, Male, Middle Aged