Results 1 - 20 of 30
1.
Biol Lett ; 19(11): 20230326, 2023 11.
Article in English | MEDLINE | ID: mdl-37935372

ABSTRACT

Music is a human communicative art whose evolutionary origins may lie in capacities that support cooperation and/or competition. A mixed account favouring simultaneous cooperation and competition draws on analogous interactive displays produced by collectively signalling non-human animals (e.g. crickets and frogs). In these displays, rhythmically coordinated calls serve as a beacon whereby groups of males 'cooperatively' attract potential female mates, while the likelihood of each male competitively attracting an actual mate depends on the precedence of his signal. Human behaviour consistent with the mixed account was previously observed in a renowned boys choir, where the basses-the oldest boys with the deepest voices-boosted their acoustic prominence by increasing energy in a high-frequency band of the vocal spectrum when girls were in an otherwise male audience. The current study tested female and male sensitivity and preferences for this subtle vocal modulation in online listening tasks. Results indicate that while female and male listeners are similarly sensitive to enhanced high-spectral energy elicited by the presence of girls in the audience, only female listeners exhibit a reliable preference for it. Findings suggest that human chorusing is a flexible form of social communicative behaviour that allows simultaneous group cohesion and sexually motivated competition.


Subject(s)
Music, Voice, Humans, Male, Female, Acoustics, Social Behavior
2.
Cogn Emot ; 37(5): 863-873, 2023.
Article in English | MEDLINE | ID: mdl-37310161

ABSTRACT

Emotion recognition - a prerequisite for social interactions - varies among individuals. Sex differences have been proposed as a central source of individual differences, although the existing evidence is rather heterogeneous. In the current study (N = 426), we investigated the potential moderating effects of stimulus features, including modality, emotion specificity, and the sex of the encoder (referring to the sex of the actor), on the magnitude of sex differences in emotion recognition. Our findings replicated women's overall better emotion recognition compared to men, which was particularly evident for negative expressions (fear and anger). This outperformance was observed across all modalities, with the largest differences for audiovisually expressed emotions, while the sex of the encoder had no impact. Given our findings, future studies should consider these and other potential moderator variables to better estimate sex differences.


Subject(s)
Facial Recognition, Sex Characteristics, Humans, Female, Male, Facial Expression, Emotions, Anger, Fear
3.
Cogn Affect Behav Neurosci ; 21(1): 74-92, 2021 02.
Article in English | MEDLINE | ID: mdl-33420711

ABSTRACT

In social interactions, speakers often use their tone of voice ("prosody") to communicate their interpersonal stance and to pragmatically mark an ironic intention (e.g., sarcasm). The neurocognitive effects of prosody as listeners process ironic statements in real time are still poorly understood. In this study, 30 participants judged the friendliness of literal and ironic criticisms and compliments in the absence of context while their electrical brain activity was recorded. Event-related potentials reflecting the uptake of prosodic information were tracked at two time points in the utterance. Prosody robustly modulated P200 and late positivity amplitudes from utterance onset. These early neural responses registered both the speaker's stance (positive/negative) and their intention (literal/ironic). At a later time point (You are such a great/horrible cook), P200, N400, and P600 amplitudes were all greater when the critical word valence was congruent with the speaker's vocal stance, suggesting that irony was contextually facilitated by early effects from prosody. Our results demonstrate that rapid uptake of salient prosodic features allows listeners to make online predictions about the speaker's ironic intent. This process can constrain their representation of an utterance to uncover nonliteral meanings without violating contextual expectations held about the speaker, as described by parallel constraint-satisfaction models.


Subject(s)
Speech Perception, Voice, Comprehension, Electroencephalography, Evoked Potentials, Female, Humans, Intention, Male
4.
Annu Rev Psychol ; 70: 719-745, 2019 01 04.
Article in English | MEDLINE | ID: mdl-30110576

ABSTRACT

Much emotion research has focused on the end result of the emotion process, categorical emotions, as reported by the protagonist or diagnosed by the researcher, with the aim of differentiating these discrete states. In contrast, this review concentrates on the emotion process itself by examining how (a) elicitation, or the appraisal of events, leads to (b) differentiation, in particular, action tendencies accompanied by physiological responses and manifested in facial, vocal, and gestural expressions, before (c) conscious representation or experience of these changes (feeling) and (d) categorizing and labeling these changes according to the semantic profiles of emotion words. The review focuses on empirical, particularly experimental, studies from emotion research and neighboring domains that contribute to a better understanding of the unfolding emotion process and the underlying mechanisms, including the interactions among emotion components.


Subject(s)
Autonomic Nervous System/physiology, Emotions/physiology, Facial Expression, Motivation/physiology, Social Behavior, Humans
5.
Neuroimage ; 181: 582-597, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30031933

ABSTRACT

In spoken language, verbal cues (what we say) and vocal cues (how we say it) contribute to person perception, the process for interpreting information and making inferences about other people. When someone has an accent, forming impressions from the speaker's voice may be influenced by social categorization processes (i.e., activating stereotypical traits of members of a perceived 'out-group') and by processes that differentiate the speaker based on their individual attributes (e.g., registering the vocal confidence level of the speaker in order to make a trust decision). The neural systems for using vocal cues that refer to the speaker's identity and to qualities of their vocal expression to generate inferences about others are not known. Here, we used functional magnetic resonance imaging (fMRI) to investigate how speaker categorization influences brain activity as Canadian-English listeners judged whether they believe statements produced by in-group (native) and out-group (regional, foreign) speakers. Each statement was expressed in a confident, doubtful, and neutral tone of voice. In-group speakers were perceived as more believable than speakers with out-group accents overall, confirming social categorization of speakers based on their accent. Superior parietal and middle temporal regions were uniquely activated when listening to out-group compared to in-group speakers, suggesting that they may be involved in extracting attributes of speaker believability from lower-level acoustic variations. Basal ganglia, left cuneus and right fusiform gyrus were activated by confident expressions produced by out-group speakers. These regions appear to participate in abstracting more ambiguous believability attributes from accented speakers (where a conflict arises between the tendency to disbelieve an out-group speaker and the tendency to believe a confident voice). For out-group speakers, stronger impressions of believability selectively modulated activity in the bilateral superior and middle temporal regions. Moreover, the right superior temporal gyrus, a region that was associated with perceived speaker confidence, was found to be functionally connected to the left lingual gyrus and right middle temporal gyrus when out-group speakers were judged as more believable. These findings suggest that identity-related voice characteristics and associated biases may influence underlying neural activities for making social attributions about out-group speakers, affecting decisions about believability and trust. Specifically, inferences about out-group speakers seem to be mediated to a greater extent by stimulus-related features (i.e., vocal confidence cues) than for in-group speakers. Our approach highlights how the voice can be studied to advance models of person perception.


Subject(s)
Basal Ganglia/physiology, Cerebral Cortex/physiology, Connectome/methods, Social Perception, Speech Perception/physiology, Adolescent, Adult, Basal Ganglia/diagnostic imaging, Cerebral Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Trust, Young Adult
6.
Scand J Psychol ; 59(2): 105-112, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29411386

ABSTRACT

It has been a matter of much debate whether perceivers are able to distinguish spontaneous vocal expressions of emotion from posed vocal expressions (e.g., emotion portrayals). In this experiment, we show that such discrimination can be manifested in the autonomic arousal of listeners during implicit processing of vocal emotions. Participants (N = 21, age: 20-55 years) listened to two consecutive blocks of brief voice clips and judged the gender of the speaker in each clip, while we recorded three measures of sympathetic arousal of the autonomic nervous system (skin conductance level, mean arterial blood pressure, pulse rate). Unbeknownst to the listeners, the blocks consisted of two types of emotional speech: spontaneous and posed clips. As predicted, spontaneous clips yielded higher arousal levels than posed clips, suggesting that listeners implicitly distinguished between the two kinds of expression, even in the absence of any requirement to retrieve emotional information from the voice. We discuss the results with regard to theories of emotional contagion and the use of posed stimuli in studies of emotions.


Subject(s)
Blood Pressure/physiology, Emotions/physiology, Galvanic Skin Response/physiology, Heart Rate/physiology, Social Perception, Speech Perception/physiology, Sympathetic Nervous System/physiology, Adult, Female, Humans, Male, Middle Aged, Young Adult
7.
BMC Psychiatry ; 16: 218, 2016 07 07.
Article in English | MEDLINE | ID: mdl-27388011

ABSTRACT

BACKGROUND: Impaired interpretation of nonverbal emotional cues in patients with schizophrenia has been reported in several studies, and a clinical relevance of these deficits for social functioning has been assumed. However, it is unclear to what extent the impairments depend on specific emotions or specific channels of nonverbal communication. METHODS: Here, the effect of cue modality and emotional category on accuracy of emotion recognition was evaluated in 21 patients with schizophrenia and compared to a healthy control group (n = 21). To this end, dynamic stimuli comprising speakers of both genders in three different sensory modalities (auditory, visual and audiovisual) and five emotional categories (happy, alluring, neutral, angry and disgusted) were used. RESULTS: Patients with schizophrenia were found to be impaired in emotion recognition in comparison to the control group across all stimuli. Considering specific emotions, more severe deficits were revealed in the recognition of alluring stimuli and less severe deficits in the recognition of disgusted stimuli as compared to all other emotions. Regarding cue modality, the extent of the impairment in emotion recognition did not significantly differ between auditory and visual cues across all emotional categories. However, patients with schizophrenia showed significantly more severe disturbances for vocal as compared to facial cues when sexual interest was expressed (alluring stimuli), whereas more severe disturbances for facial as compared to vocal cues were observed when happiness or anger was expressed. CONCLUSION: Our results confirmed that perceptual impairments can be observed for vocal as well as facial cues conveying various social and emotional connotations. The observed differences in severity of impairments, with most severe deficits for alluring expressions, might be related to specific difficulties in recognizing the complex social emotional information of interpersonal intentions as compared to "basic" emotional states. Therefore, future studies evaluating perception of nonverbal cues should consider a broader range of social and emotional signals beyond basic emotions, including attitudes and interpersonal intentions. Identifying specific domains of social perception particularly prone to misunderstandings in patients with schizophrenia might allow for a refinement of interventions aiming at improving social functioning.


Subject(s)
Emotions, Nonverbal Communication/psychology, Recognition (Psychology), Schizophrenic Psychology, Acoustic Stimulation, Adult, Aged, Case-Control Studies, Cues, Facial Expression, Female, Humans, Male, Middle Aged, Photic Stimulation, Young Adult
8.
Cortex ; 175: 1-11, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691922

ABSTRACT

Studies have reported substantial variability in emotion recognition ability (ERA) - an important social skill - but possible neural underpinnings for such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) during previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy for all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional stimuli contrasted with neutral stimuli, individuals with high (vs low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.


Subject(s)
Brain Mapping, Emotions, Individuality, Magnetic Resonance Imaging, Recognition (Psychology), Humans, Male, Female, Emotions/physiology, Young Adult, Adult, Recognition (Psychology)/physiology, Brain/physiology, Brain/diagnostic imaging, Facial Expression, Photic Stimulation/methods, Facial Recognition/physiology
9.
Autism Res ; 17(2): 395-409, 2024 02.
Article in English | MEDLINE | ID: mdl-38151701

ABSTRACT

In this study, we sought to objectively and quantitatively characterize the prosodic features of autism spectrum disorder (ASD) in a newly developed structured speech experiment. Male adults with high-functioning ASD and age/intelligence-matched men with typical development (TD) were asked to read 29 brief scripts aloud in response to preceding auditory stimuli. To investigate whether (1) highly structured acting-out tasks can uncover prosodic differences between individuals with ASD and those with TD, and (2) prosodic stability and flexibility can be used for objective automatic assessment of ASD, we compared prosodic features such as fundamental frequency, intensity, and mora duration. The results indicate that individuals with ASD exhibit stable pitch registers or volume levels in some affective vocal-expression scenarios, such as those involving anger or sadness, compared with those with TD. However, unstable prosody was observed in some timing-control or emphasis tasks in the participants with ASD. Automatic classification of the ASD and TD groups using a support vector machine (SVM) with speech features exhibited an accuracy of 90.4%. A machine learning-based assessment of the degree of ASD core symptoms using support vector regression (SVR) also had good performance. These results may inform the development of a new easy-to-use assessment tool for ASD core symptoms using recorded audio signals.


Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Speech Perception, Voice, Adult, Humans, Male, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/psychology, Speech/physiology, Speech Perception/physiology
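The classification step reported in this record lends itself to a compact illustration. Below is a minimal, hypothetical sketch of an SVM/SVR pipeline over utterance-level prosodic features (fundamental frequency, intensity, mora duration); the toy data, feature layout, and hyperparameters are assumptions for demonstration and do not reproduce the authors' actual pipeline or its reported 90.4% accuracy.

```python
# Hypothetical sketch: SVM classification of ASD vs. TD and SVR estimation of
# symptom severity from utterance-level prosodic features. All data here are
# random placeholders standing in for extracted acoustic measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Toy feature matrix: one row per recording; columns might hold mean/SD of
# fundamental frequency, intensity, and mora duration.
X = rng.normal(size=(60, 6))
y_group = rng.integers(0, 2, size=60)   # 0 = TD, 1 = ASD (toy labels)
y_severity = rng.normal(size=60)        # toy symptom-severity scores

# Group classification with a support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y_group, cv=5, scoring="accuracy")
print("Mean cross-validated accuracy:", acc.mean())

# Symptom-severity estimation with support vector regression.
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
r2 = cross_val_score(reg, X, y_severity, cv=5, scoring="r2")
print("Mean cross-validated R^2:", r2.mean())
```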
10.
Brain Struct Funct ; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39096390

ABSTRACT

Emotional arousal is caused by the activity of two parallel ascending systems targeting mostly the subcortical limbic regions and the prefrontal cortex. The aversive, negative arousal system is initiated by the activity of the mesolimbic cholinergic system, and the hedonic, appetitive arousal system is initiated by the activity of the mesolimbic dopaminergic system. Both ascending projections have a diffuse nature and arise from the rostral, tegmental part of the brain reticular activating system. The mesolimbic cholinergic system originates in the laterodorsal tegmental nucleus and the mesolimbic dopaminergic system in the ventral tegmental area. Cholinergic and dopaminergic arousal systems have converging input to the medial prefrontal cortex. The arousal system can modulate cortical EEG with alpha rhythms, which enhance synaptic strength as shown by an increase in long-term potentiation (LTP), whereas delta frequencies are associated with decreased arousal and a decrease in synaptic strength as shown by an increase in long-term depotentiation (LTD). It is postulated that the medial prefrontal cortex is an adaptable node with decision-making capability that may control the switch between positive and negative affect and is responsible for modifying or changing emotional state and its expression.

11.
Int J Psychophysiol ; 183: 32-40, 2023 01.
Article in English | MEDLINE | ID: mdl-36375630

ABSTRACT

Previous studies have suggested that emotional primes, presented as visual stimuli, influence face memory (e.g., encoding and recognition). However, due to stimulus-associated issues, whether emotional primes affect face encoding when the priming stimuli are presented in an auditory modality remains controversial. Moreover, no studies have investigated whether the effects of emotional auditory primes are maintained in later stages of face memory, such as face recognition. To address these issues, participants in the present study were asked to memorize angry and neutral faces. The faces were presented after a simple nonlinguistic interjection expressed with angry or neutral prosodies. Subsequently, participants completed an old/new recognition task in which only faces were presented. Event-related potential (ERP) results showed that during the encoding phase, all faces preceded by an angry vocal expression elicited larger N170 responses than faces preceded by a neutral vocal expression. Angry vocal expression also enhanced the late positive potential (LPP) responses specifically to angry faces. In the subsequent recognition phase, preceding angry vocal primes reduced early LPP responses to both angry and neutral faces and late LPP responses specifically to neutral faces. These findings suggest that the negative emotion of auditory primes influenced face encoding and recognition.


Subject(s)
Electroencephalography, Evoked Potentials, Humans, Reaction Time/physiology, Evoked Potentials/physiology, Emotions/physiology, Recognition (Psychology)/physiology, Facial Expression
12.
Pers Soc Psychol Bull ; 48(7): 1087-1104, 2022 07.
Article in English | MEDLINE | ID: mdl-34296644

ABSTRACT

The current study investigated what can be understood from another person's tone of voice. Participants from five English-speaking nations (Australia, India, Kenya, Singapore, and the United States) listened to vocal expressions of nine positive and nine negative affective states recorded by actors from their own nation. In response, they wrote open-ended judgments of what they believed the actor was trying to express. Responses cut across the chronological emotion process and included descriptions of situations, cognitive appraisals, feeling states, physiological arousal, expressive behaviors, emotion regulation, and attempts at social influence. Accuracy in terms of emotion categories was overall modest, whereas accuracy in terms of valence and arousal was more substantial. Coding participants' 57,380 responses yielded a taxonomy of 56 categories, which included affective states as well as person descriptors, communication behaviors, and abnormal states. Open-ended responses thus reveal a wide range of ways in which people spontaneously perceive the intent behind emotional speech prosody.


Subject(s)
Speech, Voice, Arousal/physiology, Emotions/physiology, Humans, Judgment/physiology
13.
Affect Sci ; 2(3): 301-310, 2021.
Article in English | MEDLINE | ID: mdl-33870212

ABSTRACT

Learners use the distributional properties of stimuli to identify environmentally relevant categories in a range of perceptual domains, including words, shapes, faces, and colors. We examined whether similar processes may also operate on affective information conveyed through the voice. In Experiment 1, we tested how adults (18-22-year-olds) and children (8-10-year-olds) categorized affective states communicated by vocalizations varying continuously from "calm" to "upset." We found that the threshold for categorizing both verbal (i.e., spoken word) and nonverbal (i.e., a yell) vocalizations as "upset" depended on the statistical distribution of the stimuli participants encountered. In Experiment 2, we replicated and extended these findings in adults using vocalizations that conveyed multiple negative affect states. These results suggest that perceivers flexibly and rapidly update their interpretation of affective vocal cues based upon context. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s42761-021-00038-w.

14.
Soc Neurosci ; 16(4): 423-438, 2021 08.
Article in English | MEDLINE | ID: mdl-34102955

ABSTRACT

Information in the tone of voice alters social impressions and underlying brain activity as listeners evaluate the interpersonal relevance of utterances. Here, we presented requests that expressed politeness distinctions through the voice (polite/rude) and explicit linguistic markers (half of the requests began with Please). Thirty participants performed a social perception task (rating friendliness) while their electroencephalogram was recorded. Behaviorally, vocal politeness strategies had a much stronger influence on the perceived friendliness than the linguistic marker. Event-related potentials revealed rapid effects of (im)polite voices on cortical activity prior to ~300 ms; P200 amplitudes increased for polite versus rude voices, suggesting that the speaker's polite stance was registered as more salient in our task. At later stages, politeness distinctions encoded by the speaker's voice and their use of Please interacted, modulating activity in the N400 (300-500 ms) and late positivity (600-800 ms) time windows. Patterns of results suggest that initial attention deployment to politeness cues is rapidly influenced by the motivational significance of a speaker's voice. At later stages, processes for integrating vocal and lexical information resulted in increased cognitive effort to reevaluate utterances with ambiguous/contradictory cues. The potential influence of social anxiety on the P200 effect is also discussed.


Subject(s)
Speech Perception, Voice, Electroencephalography, Evoked Potentials, Female, Humans, Male, Social Perception
15.
PeerJ Comput Sci ; 7: e804, 2021.
Article in English | MEDLINE | ID: mdl-35036530

ABSTRACT

We investigated emotion classification from brief video recordings from the GEMEP database wherein actors portrayed 18 emotions. Vocal features consisted of acoustic parameters related to frequency, intensity, spectral distribution, and durations. Facial features consisted of facial action units. We first performed a series of person-independent supervised classification experiments. Best performance (AUC = 0.88) was obtained by merging the output from the best unimodal vocal (Elastic Net, AUC = 0.82) and facial (Random Forest, AUC = 0.80) classifiers using a late fusion approach and the product rule method. All 18 emotions were recognized with above-chance recall, although recognition rates varied widely across emotions (e.g., high for amusement, anger, and disgust; and low for shame). Multimodal feature patterns for each emotion are described in terms of the vocal and facial features that contributed most to classifier performance. Next, a series of exploratory unsupervised classification experiments were performed to gain more insight into how emotion expressions are organized. Solutions from traditional clustering techniques were interpreted using decision trees in order to explore which features underlie clustering. Another approach utilized various dimensionality reduction techniques paired with inspection of data visualizations. Unsupervised methods did not cluster stimuli in terms of emotion categories, but several explanatory patterns were observed. Some could be interpreted in terms of valence and arousal, but actor and gender specific aspects also contributed to clustering. Identifying explanatory patterns holds great potential as a meta-heuristic when unsupervised methods are used in complex classification tasks.
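The late-fusion result reported above can be illustrated with a short, hypothetical sketch: train separate vocal and facial classifiers, then combine their predicted class probabilities with the product rule. The toy data, feature dimensions, and hyperparameters below are assumptions for demonstration; they stand in for the study's GEMEP acoustic parameters and facial action units rather than reproducing them.

```python
# Hypothetical sketch of late fusion with the product rule: multiply per-class
# probabilities from unimodal vocal and facial classifiers, renormalize, and
# take the argmax. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes = 200, 4                  # toy setup: 4 emotion categories
X_vocal = rng.normal(size=(n, 10))     # stand-in acoustic features per clip
X_facial = rng.normal(size=(n, 8))     # stand-in facial action unit features
y = rng.integers(0, n_classes, size=n)

# Unimodal classifiers, loosely mirroring the abstract (elastic-net logistic
# regression for the voice, random forest for the face).
vocal_clf = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.5, max_iter=5000).fit(X_vocal, y)
facial_clf = RandomForestClassifier(n_estimators=200,
                                    random_state=0).fit(X_facial, y)

# Late fusion by the product rule.
proba = vocal_clf.predict_proba(X_vocal) * facial_clf.predict_proba(X_facial)
proba /= proba.sum(axis=1, keepdims=True)
fused_pred = proba.argmax(axis=1)
print("Fused accuracy on the toy training data:", (fused_pred == y).mean())
```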

16.
Front Psychol ; 12: 702106, 2021.
Article in English | MEDLINE | ID: mdl-34484051

ABSTRACT

Due to the COVID-19 pandemic, the significance of online research has been rising in the field of psychology. However, online experiments with child participants are rare compared to those with adults. In this study, we investigated the validity of web-based experiments with child participants aged 4-12 years and adult participants. They performed simple emotional perception tasks in an experiment designed and conducted on the Gorilla Experiment Builder platform. After brief communication with each participant via Zoom videoconferencing software, participants performed the auditory task (judging emotion from vocal expression) and the visual task (judging emotion from facial expression). The data collected were compared with data collected in our previous, similar laboratory experiment, and similar tendencies were found. For the auditory task in particular, we replicated the differences between age groups in the accuracy of perceiving vocal expressions and also found the same native language advantage. Furthermore, we discuss the possibility of using online cognitive studies for future developmental studies.

17.
Psychon Bull Rev ; 27(2): 237-265, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31898261

ABSTRACT

Researchers examining nonverbal communication of emotions are becoming increasingly interested in differentiations between different positive emotional states like interest, relief, and pride. But despite the importance of the voice in communicating emotion in general and positive emotion in particular, there is to date no systematic review of what characterizes vocal expressions of different positive emotions. Furthermore, integration and synthesis of current findings are lacking. In this review, we comprehensively review studies (N = 108) investigating acoustic features relating to specific positive emotions in speech prosody and nonverbal vocalizations. We find that happy voices are generally loud with considerable variability in loudness, have high and variable pitch, and are high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate differ across positive emotions, with patterns mapping onto clusters of emotions, so-called emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savouring emotions (contentment and pleasure), and lower for a prosocial emotion (admiration). Some, but not all, of the differences in acoustic patterns also map on to differences in arousal levels. We end by pointing to limitations in extant work and making concrete proposals for future research on positive emotions in the voice.


Subject(s)
Emotions/physiology, Nonverbal Communication/physiology, Speech/physiology, Voice/physiology, Humans
18.
PeerJ ; 8: e9118, 2020.
Article in English | MEDLINE | ID: mdl-32435540

ABSTRACT

The ability to accurately identify and label emotions in the self and others is crucial for successful social interactions and good mental health. In the current study we tested the longitudinal relationship between early language skills and recognition of facial and vocal emotion cues in a representative UK population cohort with diverse language and cognitive skills (N = 369), including a large sample of children that met criteria for Developmental Language Disorder (DLD, N = 97). Language skills, but not non-verbal cognitive ability, at age 5-6 predicted emotion recognition at age 10-12. Children that met the criteria for DLD showed a large deficit in recognition of facial and vocal emotion cues. The results highlight the importance of language in supporting identification of emotions from non-verbal cues. Impairments in emotion identification may be one mechanism by which language disorder in early childhood predisposes children to later adverse social and mental health outcomes.

19.
J Nonverbal Behav ; 43(2): 195-201, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31404243

ABSTRACT

Basic emotion theory (BET) has been, perhaps, the central narrative in the science of emotion. As Crivelli and Fridlund (J Nonverbal Behav 125:1-34, 2019, this issue) would have it, however, BET is ready to be put to rest, facing "last stands" and "fatal" empirical failures. Nothing could be further from the truth. Crivelli and Fridlund's outdated treatment of BET, narrow focus on facial expressions of six emotions, inattention to robust empirical literatures, and overreliance on singular "critical tests" of a multifaceted theory, undermine their critique and belie the considerable advances guided by basic emotion theory.

20.
Neuropsychologia ; 132: 107147, 2019 09.
Article in English | MEDLINE | ID: mdl-31325481

ABSTRACT

It has been shown that stimulus memory (e.g., encoding and recognition) is influenced by emotion. In terms of face memory, event-related potential (ERP) studies have shown that the encoding of emotional faces is influenced by the emotion of the concomitant context when contextual stimuli are presented in the visual modality. Behavioral studies have also investigated the effect of contextual emotion on subsequent recognition of neutral faces. However, no studies appear to have investigated the context effect on face encoding and recognition when contextual stimuli are presented in another sensory modality (e.g., the auditory modality). Additionally, little is known about the neural mechanisms underlying context effects on the recognition of emotional faces. Therefore, the present study aimed to use vocal expressions as contexts to investigate whether contextual emotion influences ERP responses during face encoding and recognition. To this end, participants in the present study were asked to memorize angry and neutral faces. The faces were presented concomitant with either angry or neutral vocal expressions. Subsequently, participants were asked to perform an old/new recognition task, in which only faces were presented. In the encoding phase, ERP results showed that compared to neutral vocal expressions, angry vocal expressions led to smaller P1 and N170 responses to both angry and neutral faces. For angry faces, however, late positive potential (LPP) responses were increased in the angry voice condition. In the later recognition phase, N170 responses were larger for neutral-encoded faces that had been presented with angry compared to neutral vocal expressions. Preceding angry vocal expressions increased FN400 and LPP responses to both neutral-encoded and angry-encoded faces when the faces showed the encoded expression. Therefore, the present study indicates that contextual emotion conveyed by vocal expressions influences neural responses during face encoding and subsequent recognition.


Subject(s)
Anger/physiology, Auditory Perception/physiology, Evoked Potentials/physiology, Facial Expression, Facial Recognition/physiology, Social Perception, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Voice, Young Adult