Results 1 - 20 of 42
1.
Cortex ; 175: 1-11, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691922

ABSTRACT

Studies have reported substantial variability in emotion recognition ability (ERA) - an important social skill - but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) during previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional stimuli contrasted with neutral stimuli, individuals with high (vs low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.
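For readers who want to see what the group contrast boils down to computationally, here is a minimal sketch, assuming each participant's emotional-versus-neutral contrast estimate has already been extracted per ROI. The data, group sizes, and effect sizes below are simulated stand-ins, not the study's values.

```python
# Minimal sketch of a between-group ROI comparison, assuming each
# participant's emotional-vs-neutral contrast estimate has already been
# extracted per region of interest. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rois = ["mSTG", "pSTS", "IFC"]

# Simulated contrast estimates (one value per participant per ROI).
high_era = {roi: rng.normal(loc=0.6, scale=0.4, size=25) for roi in rois}
low_era = {roi: rng.normal(loc=0.3, scale=0.4, size=24) for roi in rois}

for roi in rois:
    t, p = stats.ttest_ind(high_era[roi], low_era[roi])
    print(f"{roi}: t = {t:.2f}, p = {p:.3f}")
```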


Subjects
Brain Mapping, Emotions, Individuality, Magnetic Resonance Imaging, Recognition (Psychology), Humans, Male, Female, Emotions/physiology, Young Adult, Adult, Recognition (Psychology)/physiology, Brain/physiology, Brain/diagnostic imaging, Facial Expression, Photic Stimulation/methods, Facial Recognition/physiology
2.
PeerJ ; 11: e16235, 2023.
Article in English | MEDLINE | ID: mdl-38099307

ABSTRACT

The ability to recognize and work with patients' emotions is considered an important part of most psychotherapy approaches. Surprisingly, there is little systematic research on psychotherapists' ability to recognize other people's emotional expressions. In this study, we compared trainee psychotherapists' nonverbal emotion recognition accuracy to that of a control group of undergraduate students at two time points: at the beginning and at the end of one and a half years of theoretical and practical psychotherapy training. Emotion recognition accuracy (ERA) was assessed using two standardized computer tasks, one for recognition of dynamic multimodal (facial, bodily, vocal) expressions and one for recognition of facial micro expressions. Initially, 154 participants enrolled in the study; 72 also took part in the follow-up. The trainee psychotherapists were moderately better at recognizing multimodal expressions, and slightly better at recognizing facial micro expressions, than the control group at the first test occasion. However, mixed multilevel modeling indicated that the ERA change trajectories of the two groups differed significantly. While the control group improved in its ability to recognize multimodal emotional expressions from pretest to follow-up, the trainee psychotherapists did not. Both groups improved their micro expression recognition accuracy, but the slope for the control group was significantly steeper than that for the trainee psychotherapists. These results suggest that psychotherapy education and clinical training do not always contribute to improved emotion recognition accuracy beyond what could be expected due to time or other factors. Possible reasons for this finding, as well as implications for psychotherapy education, are discussed.


Subjects
Psychotherapists, Psychotherapy, Humans, Psychotherapy/education, Emotions, Students, Facial Expression
3.
Front Psychol ; 14: 1188634, 2023.
Article in English | MEDLINE | ID: mdl-37546436

ABSTRACT

Introduction: Psychotherapists' emotional and empathic competencies have a positive influence on psychotherapy outcome and alliance. However, it is doubtful whether psychotherapy education in itself improves trainee psychotherapists' emotion recognition accuracy (ERA), which is an essential part of these competencies. Methods: In a randomized, controlled, double-blind study (N = 68), we trained trainee psychotherapists (57% psychodynamic therapy, 43% cognitive behavioral therapy) to detect nonverbal emotional expressions in others using standardized computerized training programs - one for multimodal emotion recognition accuracy and one for micro expression recognition accuracy - and compared their results to those of an active control group one week after the training (n = 60) and at the one-year follow-up (n = 55). The participants trained once weekly over a three-week period. As outcome measures, we used a multimodal emotion recognition accuracy task, a micro expression recognition accuracy task, and an emotion recognition accuracy task for combined verbal and nonverbal emotional expressions in medical settings. Results: Mixed multilevel analyses suggest that the multimodal emotion recognition accuracy training led to significantly steeper increases than the other two conditions from pretest to the posttest one week after the last training session. When comparing pretest-to-follow-up differences in slopes, the superiority of the multimodal training group was still detectable in the unimodal audio and unimodal video modalities (in comparison to the control training group), but not in the multimodal audio-video modality or the total score of the multimodal emotion recognition accuracy measure. The micro expression training group showed a significantly steeper change trajectory from pretest to posttest compared to the control training group, but not compared to the multimodal training group. However, this effect had vanished by the one-year follow-up. There were no differences in change trajectories for the measure of emotion recognition accuracy in medical settings. Discussion: We conclude that trainee psychotherapists' emotion recognition accuracy can be effectively trained, especially multimodal emotion recognition accuracy, and suggest that the changes in unimodal emotion recognition accuracy (audio-only and video-only) are long-lasting. Implications of these findings for psychotherapy education are discussed.
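The slope comparisons described in the Results are the kind of analysis typically fit as a linear mixed model with random intercepts per participant and a group x time interaction. A minimal sketch with simulated data; the column names, group sizes, and effect sizes are hypothetical, not taken from the study.

```python
# Sketch of a mixed multilevel model testing whether ERA change
# trajectories differ by training condition (group x time interaction),
# with a random intercept per participant. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for group, slope in [("multimodal", 0.10), ("micro", 0.05), ("control", 0.02)]:
    for subj in range(20):
        intercept = rng.normal(0.55, 0.05)   # subject-specific baseline ERA
        for time in (0, 1, 2):               # pretest, posttest, follow-up
            rows.append({"subject": f"{group}_{subj}", "group": group,
                         "time": time,
                         "era": intercept + slope * time + rng.normal(0, 0.03)})
df = pd.DataFrame(rows)

# The time x group terms carry the group-specific change trajectories.
model = smf.mixedlm("era ~ time * group", df, groups=df["subject"])
print(model.fit().summary())
```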

4.
Front Psychiatry ; 14: 1111896, 2023.
Article in English | MEDLINE | ID: mdl-37426085

ABSTRACT

Background: Psychopathic traits have been associated with impaired emotion recognition in criminal, clinical and community samples. A recent study, however, suggested that cognitive impairment reduced the relationship between psychopathy and emotion recognition. We therefore investigated whether reasoning ability and psychomotor speed influenced emotion recognition more than self-rated psychopathy (Triarchic Psychopathy Measure; TriPM) did, in individuals with psychotic spectrum disorders (PSD) with and without a history of aggression, as well as in healthy individuals. Methods: Eighty individuals with PSD (schizophrenia, schizoaffective disorder, delusional disorder, other psychoses, psychotic bipolar disorder) and a documented history of aggression (PSD+Agg) were compared with 54 individuals with PSD without prior aggression (PSD-Agg) and with 86 healthy individuals on the Emotion Recognition Assessment in Multiple Modalities (ERAM test). Individuals were psychiatrically stable and in remission from possible substance use disorders. Scaled scores on matrix reasoning, averages of dominant-hand psychomotor speed, and self-rated TriPM scores were obtained. Results: Low reasoning ability, low psychomotor speed, patient status, and prior aggression were all associated with total accuracy on the ERAM test. PSD groups performed worse than the healthy group. Whole-group correlations between TriPM total and subscale scores and ERAM performance were found, but no associations with TriPM scores remained within each group or in general linear models accounting for reasoning ability, psychomotor speed, understanding of emotion words, and prior aggression. Conclusion: Self-rated psychopathy was not independently linked to emotion recognition in PSD groups when considering prior aggression, patient status, reasoning ability, psychomotor speed and emotion word understanding.
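The "general linear models accounting for..." step corresponds to regressing ERAM accuracy on TriPM scores plus the covariates and checking whether the TriPM coefficient survives. A minimal sketch with simulated data; variable names and magnitudes are illustrative only.

```python
# Sketch of the covariate-adjusted model: does self-rated psychopathy
# (TriPM) still predict emotion recognition accuracy once reasoning
# ability and psychomotor speed are in the model? Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 220
reasoning = rng.normal(10, 3, n)
speed = rng.normal(50, 10, n)
tripm = rng.normal(60, 15, n)
# Accuracy driven by reasoning and speed, not psychopathy, in this simulation.
eram = 0.3 + 0.02 * reasoning + 0.002 * speed + rng.normal(0, 0.05, n)

df = pd.DataFrame({"eram": eram, "tripm": tripm,
                   "reasoning": reasoning, "speed": speed})
fit = smf.ols("eram ~ tripm + reasoning + speed", df).fit()
print(fit.params, fit.pvalues, sep="\n")
```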

5.
Pers Soc Psychol Bull ; 48(7): 1087-1104, 2022 07.
Article in English | MEDLINE | ID: mdl-34296644

ABSTRACT

The current study investigated what can be understood from another person's tone of voice. Participants from five English-speaking nations (Australia, India, Kenya, Singapore, and the United States) listened to vocal expressions of nine positive and nine negative affective states recorded by actors from their own nation. In response, they wrote open-ended judgments of what they believed the actor was trying to express. Responses cut across the chronological emotion process and included descriptions of situations, cognitive appraisals, feeling states, physiological arousal, expressive behaviors, emotion regulation, and attempts at social influence. Accuracy in terms of emotion categories was overall modest, whereas accuracy in terms of valence and arousal was more substantial. Coding participants' 57,380 responses yielded a taxonomy of 56 categories, which included affective states as well as person descriptors, communication behaviors, and abnormal states. Open-ended responses thus reveal a wide range of ways in which people spontaneously perceive the intent behind emotional speech prosody.


Subjects
Speech, Voice, Arousal/physiology, Emotions/physiology, Humans, Judgment/physiology
6.
Acta Psychol (Amst) ; 220: 103422, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34592586

ABSTRACT

Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia. Women also had higher ERA than men. Study 2 was conducted online and replicated the recognition rates from Study 1 (which was conducted in the lab) in a different sample (N = 106). Study 2 also showed that participants who had higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. Results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.


Subjects
Facial Expression, Individuality, Affective Symptoms, Emotions, Female, Humans, Male, Recognition (Psychology)
7.
Front Psychol ; 12: 708867, 2021.
Article in English | MEDLINE | ID: mdl-34475841

ABSTRACT

Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs - one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task assessing recognition of patients' emotional cues. Post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA than micro expression training or the control training, and micro expression training was significantly more effective in improving micro expression ERA than the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the outcome measure assessing recognition of patients' emotional cues. There were no transfer effects of the training programs: participants only improved significantly on the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
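The Aligned Rank Transform (ART) mentioned above aligns the response for one effect at a time, ranks the aligned values, and then runs a standard ANOVA on the ranks. A minimal sketch for one effect in a two-factor design with simulated data; a full ART, as in the R package ARTool, repeats the alignment for every main effect and interaction.

```python
# Minimal sketch of the Aligned Rank Transform (ART) for a group x time
# interaction: align the response for that effect, rank the aligned
# values, then run a standard ANOVA on the ranks. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = [{"group": g, "time": t,
         "score": rng.normal(0.5 + (0.1 if g == "multimodal" and t == "post" else 0.0), 0.05)}
        for g in ("multimodal", "micro", "control")
        for t in ("pre", "post")
        for _ in range(20)]
df = pd.DataFrame(rows)

grand = df["score"].mean()
cell = df.groupby(["group", "time"])["score"].transform("mean")
g_mean = df.groupby("group")["score"].transform("mean")
t_mean = df.groupby("time")["score"].transform("mean")

# Align for the group x time interaction: residual plus the estimated
# interaction effect; both main effects are stripped out.
df["aligned"] = (df["score"] - cell) + (cell - g_mean - t_mean + grand)
df["ranked"] = df["aligned"].rank()

fit = smf.ols("ranked ~ C(group) * C(time)", df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # interpret only the interaction row
```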

8.
Sci Rep ; 11(1): 2647, 2021 01 29.
Article in English | MEDLINE | ID: mdl-33514829

ABSTRACT

Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions shows age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.


Subjects
Aging/physiology, Anger, Facial Expression, Happiness, Recognition (Psychology)/physiology, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged
9.
Emotion ; 21(6): 1281-1301, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32940485

ABSTRACT

Emotional expression is crucial for social interaction. Yet researchers disagree about whether nonverbal expressions truly reflect felt emotions and whether they convey discrete emotions to perceivers in everyday life. In the present study, 384 clips of vocal expression recorded in a field setting were rated by the speakers themselves and by naïve listeners with regard to their emotional contents. Results suggested that most expressions in everyday life are reflective of felt emotions in speakers. Seventy-three percent of the voice clips involved moderate to high emotion intensity. Speaker-listener agreement concerning expressed emotions was 5 times higher than would be expected from chance alone, and agreement was significantly higher for voice clips with high emotion intensity than for clips with low intensity. Acoustic analysis of the clips revealed emotion-specific patterns of voice cues. "Mixed emotions" occurred in 41% of the clips. Such expressions were typically interpreted by listeners as conveying one or the other of the two felt emotions. Mixed emotions were rarely recognized as such. The results are discussed regarding their implications for the domain of emotional expression in general, and vocal expression in particular.
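The "5 times higher than chance" comparison can be made concrete by contrasting observed speaker-listener agreement with the agreement expected from the marginal label frequencies (the chance term used in Cohen's kappa). A minimal sketch with simulated labels; the emotion set and rates are illustrative, not the study's.

```python
# Sketch of comparing observed speaker-listener emotion agreement with
# chance agreement derived from the marginal label frequencies.
import numpy as np

rng = np.random.default_rng(4)
emotions = ["anger", "joy", "sadness", "fear", "neutral"]
speaker = rng.choice(emotions, size=384)
# Listeners agree with speakers ~40% of the time in this simulation.
listener = np.where(rng.random(384) < 0.4, speaker, rng.choice(emotions, 384))

observed = np.mean(speaker == listener)
# Chance agreement: probability of a match if listeners guessed according
# to the marginal label frequencies (as in Cohen's kappa).
p_speaker = np.array([np.mean(speaker == e) for e in emotions])
p_listener = np.array([np.mean(listener == e) for e in emotions])
chance = float(p_speaker @ p_listener)

print(f"observed = {observed:.2f}, chance = {chance:.2f}, "
      f"ratio = {observed / chance:.1f}x")
```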


Subjects
Recognition (Psychology), Voice, Cues, Emotions, Expressed Emotion, Humans
10.
PeerJ Comput Sci ; 7: e804, 2021.
Article in English | MEDLINE | ID: mdl-35036530

ABSTRACT

We investigated emotion classification from brief video recordings from the GEMEP database wherein actors portrayed 18 emotions. Vocal features consisted of acoustic parameters related to frequency, intensity, spectral distribution, and durations. Facial features consisted of facial action units. We first performed a series of person-independent supervised classification experiments. Best performance (AUC = 0.88) was obtained by merging the output from the best unimodal vocal (Elastic Net, AUC = 0.82) and facial (Random Forest, AUC = 0.80) classifiers using a late fusion approach and the product rule method. All 18 emotions were recognized with above-chance recall, although recognition rates varied widely across emotions (e.g., high for amusement, anger, and disgust; and low for shame). Multimodal feature patterns for each emotion are described in terms of the vocal and facial features that contributed most to classifier performance. Next, a series of exploratory unsupervised classification experiments were performed to gain more insight into how emotion expressions are organized. Solutions from traditional clustering techniques were interpreted using decision trees in order to explore which features underlie clustering. Another approach utilized various dimensionality reduction techniques paired with inspection of data visualizations. Unsupervised methods did not cluster stimuli in terms of emotion categories, but several explanatory patterns were observed. Some could be interpreted in terms of valence and arousal, but actor- and gender-specific aspects also contributed to clustering. Identifying explanatory patterns holds great potential as a meta-heuristic when unsupervised methods are used in complex classification tasks.
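A minimal sketch of the late-fusion scheme described above: train separate vocal and facial classifiers, then combine their predicted class probabilities with the product rule. Features here are simulated; the estimators are plausible stand-ins (scikit-learn's elastic-net logistic regression and random forest), and the hyperparameters are illustrative, not the paper's.

```python
# Late fusion with the product rule: multiply per-class probabilities
# from a vocal and a facial classifier, renormalize, and score with AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, n_emotions = 540, 18
y = rng.integers(0, n_emotions, n)
X_voice = rng.normal(size=(n, 88)) + y[:, None] * 0.05  # acoustic features
X_face = rng.normal(size=(n, 20)) + y[:, None] * 0.05   # facial action units

idx_train, idx_test = train_test_split(np.arange(n), random_state=0, stratify=y)

# Elastic Net classification via logistic regression with an elasticnet
# penalty; the paper's exact estimator and settings may differ.
voice_clf = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.5, max_iter=5000)
face_clf = RandomForestClassifier(n_estimators=300, random_state=0)

voice_clf.fit(X_voice[idx_train], y[idx_train])
face_clf.fit(X_face[idx_train], y[idx_train])

# Product rule: multiply the two probability estimates, then renormalize.
proba = voice_clf.predict_proba(X_voice[idx_test]) * \
        face_clf.predict_proba(X_face[idx_test])
proba /= proba.sum(axis=1, keepdims=True)

print("fused AUC:", roc_auc_score(y[idx_test], proba,
                                  multi_class="ovr", average="macro"))
```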

11.
Psychol Aging ; 34(5): 686-697, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31157537

ABSTRACT

In everyday life throughout the life span, people frequently evaluate faces to obtain information crucial for social interactions. We investigated age-related differences in judgments of a wide range of social attributes based on facial appearance. Seventy-one younger and 60 older participants rated 196 computer-generated faces that systematically varied in facial features such as shape and reflectance to convey different intensity levels of seven social attributes (i.e., attractiveness, competence, dominance, extraversion, likeability, threat, and trustworthiness). Older compared to younger participants consistently gave higher attractiveness ratings to faces representing both high and low levels of attractiveness. Older participants were also less sensitive to the likeability of faces and tended to evaluate faces representing low likeability as more likable. The age groups did not, however, differ substantially in their evaluations of the other social attributes. Results are in line with previous research showing that aging is associated with preference toward positive and away from negative information and extend this positivity effect to social perception of faces.


Subjects
Face/physiology, Interpersonal Relations, Adolescent, Adult, Age Factors, Female, Humans, Male, Social Perception, Sociological Factors, Young Adult
12.
J Acoust Soc Am ; 145(5): 3058, 2019 05.
Article in English | MEDLINE | ID: mdl-31153307

ABSTRACT

The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 instead used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music vs speech. This, in turn, suggests that acoustic cues that develop over time also play a role in emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
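The gating manipulation itself is simple to express in code: each gate is the first k milliseconds of the stimulus. A minimal sketch with a synthetic signal; a real study would slice recorded speech or music, and the gate durations below are illustrative.

```python
# Sketch of constructing cumulative "gates" from an audio signal, as in
# the gating paradigm: each stimulus is the first k ms of the utterance.
import numpy as np

sr = 44_100                                   # sample rate in Hz
signal = np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)  # 2 s test tone

gate_ms = [50, 100, 250, 500, 1000, 2000]     # gate durations in ms
gates = {ms: signal[: int(sr * ms / 1000)] for ms in gate_ms}

for ms, clip in gates.items():
    print(f"{ms:>5} ms gate -> {clip.size} samples")
```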


Subjects
Auditory Perception/physiology, Emotions/physiology, Music/psychology, Speech Perception, Speech/physiology, Adult, Anger/physiology, Female, Happiness, Humans, Male, Speech Perception/physiology, Time Factors
13.
Nat Hum Behav ; 3(4): 369-382, 2019 04.
Article in English | MEDLINE | ID: mdl-30971794

ABSTRACT

Central to emotion science is the degree to which categories, such as Awe, or broader affective features, such as Valence, underlie the recognition of emotional expression. To explore the processes by which people recognize emotion from prosody, US and Indian participants were asked to judge the emotion categories or affective features communicated by 2,519 speech samples produced by 100 actors from 5 cultures. With large-scale statistical inference methods, we find that prosody can communicate at least 12 distinct kinds of emotion that are preserved across the 2 cultures. Analyses of the semantic and acoustic structure of the recognition of emotions reveal that emotion categories drive the recognition of emotions more so than affective features, including Valence. In contrast to discrete emotion theories, however, emotion categories are bridged by gradients representing blends of emotions. Our findings, visualized within an interactive map, reveal a complex, high-dimensional space of emotional states recognized cross-culturally in speech prosody.


Subjects
Emotions, Psycholinguistics, Recognition (Psychology), Social Perception, Speech Perception, Adult, Cross-Cultural Comparison, Female, Humans, India, Male, Semantics, Speech Acoustics, United States
14.
Am Psychol ; 74(6): 698-712, 2019 09.
Article in English | MEDLINE | ID: mdl-30570267

ABSTRACT

Emotional vocalizations are central to human social life. Recent studies have documented that people recognize at least 13 emotions in brief vocalizations. This capacity emerges early in development, is preserved in some form across cultures, and informs how people respond emotionally to music. What is poorly understood is how emotion recognition from vocalization is structured within what we call a semantic space, the study of which addresses questions critical to the field: How many distinct kinds of emotions can be expressed? Do expressions convey emotion categories or affective appraisals (e.g., valence, arousal)? Is the recognition of emotion expressions discrete or continuous? Guided by a new theoretical approach to emotion taxonomies, we apply large-scale data collection and analysis techniques to judgments of 2,032 emotional vocal bursts produced in laboratory settings (Study 1) and 48 found in the real world (Study 2) by U.S. English speakers (N = 1,105). We find that vocal bursts convey at least 24 distinct kinds of emotion. Emotion categories (sympathy, awe), more so than affective appraisals (including valence and arousal), organize emotion recognition. In contrast to discrete emotion theories, the emotion categories conveyed by vocal bursts are bridged by smooth gradients with continuously varying meaning. We visualize the complex, high-dimensional space of emotion conveyed by brief human vocalization within an online interactive map.


Subjects
Communication, Emotions/classification, Recognition (Psychology)/physiology, Social Perception, Voice/physiology, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Semantics, Young Adult
15.
Soc Cogn Affect Neurosci ; 13(9): 921-932, 2018 09 11.
Article in English | MEDLINE | ID: mdl-30137550

ABSTRACT

Intranasal oxytocin (OT) has previously been found to increase spirituality, an effect moderated by OT-related genotypes. This pre-registered study sought to conceptually replicate and extend those findings. Using a single dose of intranasal OT vs placebo (PL), we investigated experimental treatment effects, and moderation by OT-related genotypes on spirituality, mystical experiences, and the sensed presence of a sentient being. A more exploratory aim was to test for interactions between treatment and the personality disposition absorption on these spirituality-related outcomes. A priming plus sensory deprivation procedure that has facilitated spiritual experiences in previous studies was used. The sample (N = 116) contained both sexes and was drawn from a relatively secular context. Results failed to conceptually replicate both the main effects of treatment and the treatment by genotype interactions on spirituality. Similarly, there were no such effects on mystical experiences or sensed presence. However, the data suggested an interaction between treatment and absorption. Relative to PL, OT seemed to enhance spiritual experiences in participants scoring low in absorption and dampen spirituality in participants scoring high in absorption.


Subjects
Oxytocin/pharmacology, Spirituality, Intranasal Administration, Adult, DNA/genetics, Female, Genotype, Humans, Individuality, Male, Nasal Mucosa/metabolism, Oxytocin/administration & dosage, Oxytocin/pharmacokinetics, Oxytocin Receptors/genetics, Young Adult
16.
J Nonverbal Behav ; 42(1): 1-40, 2018.
Article in English | MEDLINE | ID: mdl-29497220

ABSTRACT

It has been the subject of much debate in the study of vocal expression of emotions whether posed expressions (e.g., actor portrayals) are different from spontaneous expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose-response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.

17.
Scand J Psychol ; 59(2): 105-112, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29411386

ABSTRACT

It has been a matter of much debate whether perceivers are able to distinguish spontaneous vocal expressions of emotion from posed vocal expressions (e.g., emotion portrayals). In this experiment, we show that such discrimination can be manifested in the autonomic arousal of listeners during implicit processing of vocal emotions. Participants (N = 21, age: 20-55 years) listened to two consecutive blocks of brief voice clips and judged the gender of the speaker in each clip, while we recorded three measures of sympathetic arousal of the autonomic nervous system (skin conductance level, mean arterial blood pressure, pulse rate). Unbeknownst to the listeners, the blocks consisted of two types of emotional speech: spontaneous and posed clips. As predicted, spontaneous clips yielded higher arousal levels than posed clips, suggesting that listeners implicitly distinguished between the two kinds of expression, even in the absence of any requirement to retrieve emotional information from the voice. We discuss the results with regard to theories of emotional contagion and the use of posed stimuli in studies of emotions.


Subjects
Blood Pressure/physiology, Emotions/physiology, Galvanic Skin Response/physiology, Heart Rate/physiology, Social Perception, Speech Perception/physiology, Sympathetic Nervous System/physiology, Adult, Female, Humans, Male, Middle Aged, Young Adult
18.
Soc Cogn Affect Neurosci ; 13(2): 173-181, 2018 02 01.
Article in English | MEDLINE | ID: mdl-29194499

ABSTRACT

The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects, genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor that participates in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio-visual stimuli in women (n = 309) that survived correction for multiple testing. This study provides evidence for an association that further extends previous findings of oxytocin and vasopressin involvement in emotion recognition.
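The multiple-testing step implied above can be sketched as follows: test each of the 25 SNPs for association with an emotion recognition score, then adjust the p-values. The sketch below uses simulated genotypes and a Bonferroni adjustment; the paper's actual statistical procedure may differ.

```python
# Sketch of per-SNP association tests with correction for multiple
# testing. Genotypes (0/1/2 minor-allele counts) and scores are simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(6)
n_subjects, n_snps = 492, 25
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))
score = rng.normal(size=n_subjects) + 0.15 * genotypes[:, 0]  # one true signal

pvals = [stats.linregress(genotypes[:, i], score).pvalue for i in range(n_snps)]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print("SNPs significant after correction:", np.flatnonzero(reject))
```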


Subjects
Aryl Hydrocarbon Receptor Nuclear Translocator/genetics, Basic Helix-Loop-Helix Transcription Factors/genetics, Emotions, Neural Pathways/physiology, Oxytocin/physiology, Recognition (Psychology)/physiology, Acoustic Stimulation, Adolescent, Adult, Facial Expression, Female, Genotype, Humans, Male, Oxytocin/genetics, Photic Stimulation, Single Nucleotide Polymorphism, Psychomotor Performance/physiology, Vasopressins/genetics, Vasopressins/physiology, Voice, Young Adult
19.
Sleep ; 40(11)2017 11 01.
Article in English | MEDLINE | ID: mdl-28958084

ABSTRACT

Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task. Methods: Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Results: Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1). Conclusions: The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.


Subjects
Emotions, Facial Expression, Recognition (Psychology), Sleep Deprivation/psychology, Adolescent, Adult, Cross-Sectional Studies, Female, Humans, Male, Self Report, Surveys and Questionnaires, Sweden, Time Factors, Young Adult
20.
PLoS One ; 12(6): e0178423, 2017.
Article in English | MEDLINE | ID: mdl-28570691

ABSTRACT

We investigated how memory for faces and voices (presented separately and in combination) varies as a function of sex and emotional expression (anger, disgust, fear, happiness, sadness, and neutral). At encoding, participants judged the expressed emotion of items in forced-choice tasks, followed by incidental Remember/Know recognition tasks. Results from 600 participants showed that accuracy (hits minus false alarms) was consistently higher for neutral compared to emotional items, whereas accuracy for specific emotions varied across the presentation modalities (i.e., faces, voices, and face-voice combinations). For the subjective sense of recollection ("remember" hits), neutral items received the highest hit rates only for faces, whereas for voices and face-voice combinations, anger and fear expressions instead received the highest recollection rates. We also observed better accuracy for items from female expressers, and an own-sex bias whereby female participants displayed a memory advantage for female faces and face-voice combinations. Results further suggest that this own-sex bias can be explained by recollection, rather than familiarity, rates. Overall, results show that memory for faces and voices may be influenced by the expressions they carry, as well as by the sex of both items and participants. Emotion expressions may also enhance the subjective sense of recollection without enhancing memory accuracy.
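The accuracy measure used here, hits minus false alarms, is straightforward to compute per condition. A minimal sketch with simulated old/new responses; item counts and response rates are illustrative only.

```python
# Sketch of the corrected recognition score: hit rate minus
# false-alarm rate, computed from simulated recognition responses.
import numpy as np

rng = np.random.default_rng(7)
n_old, n_new = 48, 48                        # studied vs unstudied items
hits = rng.random(n_old) < 0.75              # "old" responses to old items
false_alarms = rng.random(n_new) < 0.20      # "old" responses to new items

accuracy = hits.mean() - false_alarms.mean()
print(f"hit rate = {hits.mean():.2f}, FA rate = {false_alarms.mean():.2f}, "
      f"accuracy = {accuracy:.2f}")
```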


Subjects
Emotions, Facial Recognition, Memory, Sex Factors, Adolescent, Adult, Female, Humans, Male, Young Adult