Results 1 - 20 of 35
1.
Cogn Emot ; 38(1): 23-43, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37715528

ABSTRACT

There is debate within the literature as to whether emotion dysregulation (ED) in Attention-Deficit Hyperactivity Disorder (ADHD) reflects deviant attentional mechanisms or atypical perceptual emotion processing. Previous reviews have reliably examined the nature of facial, but not vocal, emotion recognition accuracy in ADHD. The present meta-analysis quantified vocal emotion recognition (VER) accuracy scores in ADHD and controls using robust variance estimation, gathered from 21 published and unpublished papers. Additional moderator analyses were carried out to determine whether the nature of VER accuracy in ADHD varied depending on emotion type. Findings revealed a medium effect size for the presence of VER deficits in ADHD, and moderator analyses showed VER accuracy in ADHD did not differ due to emotion type. These results support the theories which implicate the role of attentional mechanisms in driving VER deficits in ADHD. However, there is insufficient data within the behavioural VER literature to support the presence of emotion processing atypicalities in ADHD. Future neuro-imaging research could explore the interaction between attention and emotion processing in ADHD, taking into consideration ADHD subtypes and comorbidities.
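
For readers unfamiliar with the pooling method named above, the following is a minimal sketch of robust variance estimation (RVE) for a meta-analytic mean with dependent effect sizes. The effect sizes and variances are hypothetical placeholders, not data from this review, and the sketch does not reproduce the authors' actual analysis.

```python
# Minimal sketch of robust variance estimation (RVE) for a meta-analytic mean,
# following the basic cluster-robust (sandwich) logic of Hedges, Tipton & Johnson.
# The effect sizes below are hypothetical, not data from the review.
import numpy as np

# (study_id, effect size g, sampling variance) -- illustrative values only
effects = [
    (1, 0.55, 0.04), (1, 0.48, 0.05),   # two dependent outcomes from study 1
    (2, 0.62, 0.03),
    (3, 0.40, 0.06), (3, 0.51, 0.06),
]
study = np.array([e[0] for e in effects])
g = np.array([e[1] for e in effects])
v = np.array([e[2] for e in effects])

w = 1.0 / v                                   # simple inverse-variance weights
b = np.sum(w * g) / np.sum(w)                 # pooled effect size
resid = g - b

# Cluster-robust variance: residuals are summed within each study before
# squaring, so dependent effect sizes from one study are not treated as independent.
num = sum(np.sum(w[study == j] * resid[study == j]) ** 2 for j in np.unique(study))
se_robust = np.sqrt(num) / np.sum(w)

print(f"pooled g = {b:.3f}, robust SE = {se_robust:.3f}")
```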


Subject(s)
Attention Deficit Disorder with Hyperactivity, Voice, Humans, Attention Deficit Disorder with Hyperactivity/psychology, Emotions/physiology, Recognition (Psychology), Facial Expression
2.
BMC Psychiatry ; 23(1): 760, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848849

ABSTRACT

BACKGROUND: Cognitive and emotional impairment are among the core features of schizophrenia; assessment of vocal emotion recognition may facilitate the detection of schizophrenia. We explored the differences between cognitive and social aspects of emotion using vocal emotion recognition and detailed clinical characterization. METHODS: Clinical symptoms and social and cognitive functioning were assessed by trained clinical psychiatrists. A vocal emotion perception test, including an assessment of emotion recognition and emotional intensity, was conducted. One-hundred-six patients with schizophrenia (SCZ) and 230 healthy controls (HCs) were recruited. RESULTS: Considering emotion recognition, scores for all emotion categories were significantly lower in SCZ compared to HC. Considering emotional intensity, scores for anger, calmness, sadness, and surprise were significantly lower in the SCZs. Vocal recognition patterns showed a trend of unification and simplification in SCZs. A direct correlation was confirmed between vocal recognition impairment and cognition. In diagnostic tests, only the total score of vocal emotion recognition was a reliable index for the presence of schizophrenia. CONCLUSIONS: This study shows that patients with schizophrenia are characterized by impaired vocal emotion perception. Furthermore, explicit and implicit vocal emotion perception processing in individuals with schizophrenia are viewed as distinct entities. This study provides a voice recognition tool to facilitate and improve the diagnosis of schizophrenia.
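
The diagnostic claim above (that the total vocal emotion recognition score indexes the presence of schizophrenia) is the kind of statement typically supported by a receiver operating characteristic analysis. A minimal sketch follows, using simulated scores rather than the study's data.

```python
# Minimal ROC sketch for a diagnostic index, with hypothetical scores
# (not the study's data): lower recognition totals in the patient group.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
scores_hc = rng.normal(70, 8, 230)    # hypothetical healthy-control totals
scores_scz = rng.normal(55, 10, 106)  # hypothetical patient totals

y = np.r_[np.ones(106), np.zeros(230)]   # 1 = schizophrenia, 0 = control
x = np.r_[scores_scz, scores_hc]
auc = roc_auc_score(y, -x)               # lower score -> higher predicted risk
print(f"AUC = {auc:.2f}")
```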


Subject(s)
Schizophrenia, Humans, Schizophrenia/diagnosis, Emotions, Cognition, Anger, Perception, Facial Expression, Social Perception
3.
J Child Lang ; : 1-11, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37391267

ABSTRACT

Infant-directed speech often has hyperarticulated features, such as point vowels whose formants are further apart than in adult-directed speech. This increased "vowel space" may reflect the caretaker's effort to speak more clearly to infants, thus benefiting language processing. However, hyperarticulation may also result from more positive valence (e.g., speaking with positive vocal emotion) often found in mothers' speech to infants. This study was designed to replicate others who have found hyperarticulation in maternal speech to their 6-month-olds, but also to examine their speech to a non-human infant (i.e., a puppy). We rated both kinds of maternal speech for their emotional valence and recorded mothers' speech to a human adult. We found that mothers produced more positively valenced utterances and some hyperarticulation in both their infant- and puppy-directed speech, compared to their adult-directed speech. This finding promotes looking at maternal speech from a multi-faceted perspective that includes emotional state.
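
The "vowel space" mentioned above is commonly quantified as the area of the triangle spanned by the point vowels /i/, /a/, /u/ in F1-F2 space. The short sketch below illustrates that computation with hypothetical formant values, not the study's measurements.

```python
# Vowel space area from point-vowel formants (shoelace formula for a triangle).
# Formant values are hypothetical, for illustration only.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# (F1, F2) in Hz for /i/, /a/, /u/: adult-directed vs infant-directed (made up)
ads = [(300, 2300), (850, 1300), (350, 900)]
ids = [(280, 2500), (900, 1250), (320, 800)]

print("ADS vowel space area:", triangle_area(*ads))   # smaller triangle
print("IDS vowel space area:", triangle_area(*ids))   # expanded vowel space
```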

4.
Cogn Affect Behav Neurosci ; 22(5): 1030-1043, 2022 10.
Article in English | MEDLINE | ID: mdl-35474566

ABSTRACT

There is growing evidence that both the basal ganglia and the cerebellum play functional roles in emotion processing, either directly or indirectly, through their connections with cortical and subcortical structures. However, the lateralization of this complex processing in emotion recognition remains unclear. To address this issue, we investigated emotional prosody recognition in individuals with Parkinson's disease (a model of basal ganglia dysfunction), in patients with cerebellar stroke, and in matched healthy controls (n = 24 in each group). We analysed performance according to the lateralization of the predominant brain degeneration/lesion. Results showed that right-sided (basal ganglia and cerebellar) dysfunction was likely to induce greater deficits than left-sided dysfunction. Moreover, deficits following left hemispheric dysfunction were observed only in cerebellar stroke patients, and these deficits resembled those observed after degeneration of the right basal ganglia. Additional analyses taking disease duration/time since stroke into consideration revealed a worsening of performance over time in patients with predominantly right-sided lesions. These results point to the differential, but complementary, involvement of the cerebellum and basal ganglia in emotional prosody decoding, with a probable hemispheric specialization according to the level of cognitive integration.


Subject(s)
Parkinson Disease, Stroke, Basal Ganglia, Cerebellum, Emotions, Humans, Stroke/complications
5.
Sensors (Basel) ; 22(19)2022 Oct 06.
Article in English | MEDLINE | ID: mdl-36236658

ABSTRACT

Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields including clinical diagnosis and intervention, social interaction research and Human-Computer Interaction (HCI) increasingly benefit from efficient VER algorithms. Several feature sets have been used with machine-learning (ML) algorithms for discrete emotion classification, but there is no consensus on which low-level descriptors and classifiers are optimal. We therefore compared the performance of several machine-learning algorithms across different feature sets. Concretely, seven ML algorithms were compared on the Berlin Database of Emotional Speech: Multilayer Perceptron Neural Network (MLP), J48 Decision Tree (DT), Support Vector Machine with Sequential Minimal Optimization (SMO), Random Forest (RF), k-Nearest Neighbor (KNN), Simple Logistic Regression (LOG) and Multinomial Logistic Regression (MLR), with 10-fold cross-validation using four openSMILE feature sets (IS-09, emobase, GeMAPS and eGeMAPS). Results indicated that SMO, MLP and LOG performed better (reaching accuracies of 87.85%, 84.00% and 83.74%, respectively) than RF, DT, MLR and KNN (with minimum accuracies of 73.46%, 53.08%, 70.65% and 58.69%, respectively). Overall, the emobase feature set performed best. We discuss the implications of these findings for applications in diagnosis, intervention and HCI.
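
A minimal sketch of this kind of classifier comparison is shown below, using scikit-learn approximations of the WEKA classifiers named in the abstract and assuming the openSMILE features have already been exported to a CSV file; the file name and column layout are placeholders, not the authors' setup.

```python
# Sketch: compare several classifiers with 10-fold CV on pre-extracted
# acoustic features (e.g., an emobase feature table exported from openSMILE).
# 'features.csv' and its "emotion" label column are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("features.csv")                 # one row per utterance
X, y = df.drop(columns=["emotion"]), df["emotion"]

models = {
    "SVM (approx. SMO)": SVC(kernel="rbf"),
    "MLP": MLPClassifier(max_iter=2000),
    "Logistic regression": LogisticRegression(max_iter=2000),
    "Random forest": RandomForestClassifier(),
    "k-NN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=cv)
    print(f"{name}: mean accuracy = {acc.mean():.3f}")
```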


Subject(s)
Machine Learning, Speech, Algorithms, Emotions, Humans, Neural Networks (Computer), Support Vector Machine
6.
Alcohol Clin Exp Res ; 42(9): 1715-1724, 2018 09.
Article in English | MEDLINE | ID: mdl-30175417

ABSTRACT

BACKGROUND: Alcoholism is associated with difficulties in perceiving emotions through nonverbal channels, including prosody. Whether these difficulties persist into long-term abstinence has, however, received little attention. METHODS: In a two-part investigation, emotional prosody production was examined in long-term abstinent alcoholics and in age- and education-matched healthy controls. First, participants were asked to produce semantically neutral sentences in different emotional tones of voice, and the samples were acoustically analysed. Next, naïve listeners were asked to recognize the emotional intention of the speakers from a randomly selected subset of these samples; the listeners also rated voice quality. RESULTS: Findings revealed differences in emotional prosody production between the two groups, which were particularly apparent in the use of pitch: alcoholics' mean pitch and pitch variability differed significantly from those of controls. The use of loudness was affected to a lesser extent. Crucially, the naïve raters confirmed that the intended emotion was more difficult to recognize in exemplars produced by alcoholics. Differences between the two groups were also found in voice quality. CONCLUSIONS: These results suggest that emotional communication difficulties can persist long after alcoholics have quit drinking.
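
A minimal sketch of the kind of acoustic analysis described above (mean pitch, pitch variability, and an energy proxy for loudness) is shown below using librosa; the audio file name is a placeholder and the sketch is not the authors' analysis pipeline.

```python
# Sketch: extract pitch (F0) mean/variability and an RMS-energy proxy for
# loudness from one utterance. 'utterance.wav' is a placeholder file name.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)
f0, voiced_flag, _ = librosa.pyin(y,
                                  fmin=librosa.note_to_hz("C2"),
                                  fmax=librosa.note_to_hz("C7"),
                                  sr=sr)
f0_voiced = f0[voiced_flag & ~np.isnan(f0)]       # keep voiced frames only

print("mean F0 (Hz):", np.mean(f0_voiced))
print("F0 SD (Hz):  ", np.std(f0_voiced))          # pitch variability
print("mean RMS:    ", np.mean(librosa.feature.rms(y=y)))  # loudness proxy
```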


Subject(s)
Alcohol Abstinence/psychology, Alcohol Abstinence/trends, Alcoholics/psychology, Alcoholism/psychology, Communication, Emotions/physiology, Adult, Alcoholism/diagnosis, Female, Humans, Male, Middle Aged, Speech Acoustics, Time Factors, Young Adult
7.
J Psycholinguist Res ; 47(1): 195-213, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29080117

ABSTRACT

This study investigates the discrepancy between the literal emotional content of speech and emotional tone in the identification of speakers' vocal emotions in both the listeners' native language (Japanese), and in an unfamiliar language (random-spliced Japanese). Both experiments involve a "congruent condition," in which the emotion contained in the literal meaning of speech (words and phrases) was compatible with vocal emotion, and an "incongruent condition," in which these forms of emotional information were discordant. Results for Japanese indicated that performance in identifying emotions did not differ significantly between the congruent and incongruent conditions. However, the results for random-spliced Japanese indicated that vocal emotion was correctly identified more often in the congruent than in the incongruent condition. The different results for Japanese and random-spliced Japanese suggested that the literal meaning of emotional phrases influences the listener's perception of the speaker's emotion, and that Japanese participants could infer speakers' intended emotions in the incongruent condition.


Subject(s)
Auditory Perception, Emotions, Speech Perception, Speech, Voice, Female, Humans, Japan, Male, Young Adult
8.
J Neurovirol ; 23(2): 304-312, 2017 04.
Article in English | MEDLINE | ID: mdl-27943048

ABSTRACT

We aimed to explore the brain imaging correlates of vocal emotion processing in a group of HIV+ individuals and to compare their vocal emotion processing with that of healthy adults. We conducted multiple linear regressions to determine the cerebral correlates of a newly designed vocal emotion processing test in the subgroup of HIV+ individuals who completed cerebral magnetic resonance imaging (n = 36). We separately tested whether the association between test scores and each cerebral measure persisted regardless of the presence of neurocognitive impairment. We also compared average test scores between the total HIV+ group (n = 100) and a group of healthy adults (n = 46). We found a positive association between test scores and the volumes of several brain areas: the right frontal, temporal and parietal lobes, the bilateral thalamus, and the left hippocampus. We found a negative association between inflammatory markers in frontal white matter and test scores. After controlling for neurocognitive impairment, several brain area volumes remained positively associated with the prosody test scores. Moreover, compared with the healthy adults, test scores were significantly poorer only in the subset of HIV+ individuals with neurocognitive impairment. For the first time, our results suggest that dysfunction of particular brain areas involved in the processing of emotional auditory stimuli may occur in HIV+ individuals. These results highlight the need for a broad characterization of the neuropsychological consequences of HIV-related brain damage.
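
A minimal sketch of one such regression is shown below using statsmodels, regressing a regional volume on the prosody test score while adjusting for neurocognitive impairment; the data file and column names are hypothetical placeholders, not the study's variables.

```python
# Sketch: regress a regional brain volume on the vocal emotion test score,
# adjusting for a neurocognitive-impairment indicator. 'hiv_mri.csv' and the
# column names are placeholder assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hiv_mri.csv")   # one row per participant (placeholder file)
model = smf.ols("right_frontal_volume ~ prosody_score + impaired", data=df).fit()
print(model.summary())
```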


Subject(s)
Affective Symptoms/physiopathology, Auditory Perception, Cognitive Dysfunction/physiopathology, HIV Infections/physiopathology, Adult, Affective Symptoms/complications, Affective Symptoms/diagnostic imaging, Affective Symptoms/virology, Brain Mapping, Case-Control Studies, Cognitive Dysfunction/complications, Cognitive Dysfunction/diagnostic imaging, Cognitive Dysfunction/virology, Female, Frontal Lobe/diagnostic imaging, Frontal Lobe/pathology, Frontal Lobe/virology, HIV Infections/complications, HIV Infections/diagnostic imaging, HIV Infections/virology, Hippocampus/diagnostic imaging, Hippocampus/pathology, Hippocampus/virology, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Neuropsychological Tests, Parietal Lobe/diagnostic imaging, Parietal Lobe/pathology, Parietal Lobe/virology, Speech, Temporal Lobe/diagnostic imaging, Temporal Lobe/pathology, Temporal Lobe/virology, Thalamus/diagnostic imaging, Thalamus/pathology, Thalamus/virology, White Matter/diagnostic imaging, White Matter/pathology, White Matter/virology
9.
J Neurosci ; 34(24): 8098-105, 2014 Jun 11.
Article in English | MEDLINE | ID: mdl-24920615

ABSTRACT

The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.


Subject(s)
Physiological Adaptation/physiology, Auditory Perception/physiology, Brain/physiology, Emotions/physiology, Voice, Acoustic Stimulation, Adolescent, Adult, Brain/blood supply, Brain Mapping, Female, Humans, Image Processing (Computer-Assisted), Magnetic Resonance Imaging, Male, Oxygen/blood, Reaction Time/physiology, Young Adult
10.
Brain Cogn ; 92C: 92-100, 2014 12.
Article in English | MEDLINE | ID: mdl-25463143

ABSTRACT

The relevance of emotional perception in interpersonal relationships and social cognition has been well documented. Although brain diseases might impair emotional processing, studies concerning emotional recognition in patients with brain tumours are relatively rare. The aim of this study was to explore emotional recognition in patients with gliomas in three conditions (visual, auditory and crossmodal) and to analyse how tumour-related variables (notably, tumour localisation) and patient-related variables influence emotion recognition. Twenty six patients with gliomas and 26 matched healthy controls were instructed to identify 5 basic emotions and a neutral expression, which were displayed through visual, auditory and crossmodal stimuli. Relative to the controls, recognition was weakly impaired in the patient group under both visual and auditory conditions, but the performances were comparable in the crossmodal condition. Additional analyses using the 'race model' suggest differences in multisensory emotional integration abilities across the groups, which were potentially correlated with the executive disorders observed in the patients. These observations support the view of compensatory mechanisms in the case of gliomas that might preserve the quality of life and help maintain the normal social and professional lives often observed in these patients.
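
One common formulation of the "race model" benchmark referred to above compares observed crossmodal accuracy with the accuracy expected if the two unimodal channels were processed independently; whether this matches the authors' exact implementation is not stated in the abstract. A small illustrative calculation with hypothetical accuracies:

```python
# Sketch of a probability-summation ('race model') benchmark for crossmodal
# accuracy: under independent processing, expected crossmodal accuracy is
# 1 - (1 - pA)(1 - pV). All values below are hypothetical.
p_visual, p_auditory = 0.72, 0.68      # unimodal recognition accuracies
p_crossmodal_observed = 0.95           # observed crossmodal accuracy

p_race = 1 - (1 - p_visual) * (1 - p_auditory)
print(f"race-model prediction: {p_race:.3f}")
print("integration beyond independent race:", p_crossmodal_observed > p_race)
```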

11.
J Exp Child Psychol ; 126: 68-79, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24892883

ABSTRACT

Even in the absence of facial information, adults are able to efficiently extract emotions from bodies and voices. Although prior research indicates that 6.5-month-old infants match emotional body movements to vocalizations, the developmental origins of this function are unknown. Moreover, it is not clear whether infants perceive emotion conveyed in static body postures and match them to vocalizations. In the current experiments, 6.5-month-olds matched happy and angry static body postures to corresponding vocalizations in upright images but not in inverted images. However, 3.5-month-olds failed to match. The younger infants also failed to match when tested with videos of emotional body movements that older infants had previously matched. Thus, whereas 6.5-month-olds process emotional cues from body images and match them to emotional vocalizations, 3.5-month-olds do not exhibit such emotion knowledge. These results indicate developmental changes that lead to sophisticated emotion processing from bodies and voices early in life.


Subject(s)
Child Development, Emotional Intelligence, Kinesics, Voice, Age Factors, Emotions, Female, Humans, Infant, Male
12.
Sci Rep ; 14(1): 16462, 2024 07 16.
Article in English | MEDLINE | ID: mdl-39014043

ABSTRACT

The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
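
A minimal sketch of the mediation logic described above (musical aptitude predicting prosodic discrimination, which in turn predicts vocal emotion recognition), with a bootstrapped indirect effect, is shown below; the data file and column names are hypothetical placeholders, not the authors' analysis code.

```python
# Sketch: simple regression-based mediation with a bootstrapped indirect effect.
# 'study3.csv' and the columns 'aptitude', 'prosody', 'ver' are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3.csv")     # placeholder data file, one row per person

def indirect(d):
    # a-path: aptitude -> prosodic discrimination
    a = smf.ols("prosody ~ aptitude", data=d).fit().params["aptitude"]
    # b-path: prosodic discrimination -> vocal emotion recognition, controlling aptitude
    b = smf.ols("ver ~ prosody + aptitude", data=d).fit().params["prosody"]
    return a * b

boot = [indirect(df.sample(len(df), replace=True)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```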


Subject(s)
Aptitude, Emotions, Music, Humans, Music/psychology, Male, Female, Emotions/physiology, Aptitude/physiology, Adult, Young Adult, Speech Perception/physiology, Auditory Perception/physiology, Adolescent, Recognition (Psychology)/physiology, Voice/physiology
13.
Br J Psychol ; 115(2): 206-225, 2024 May.
Article in English | MEDLINE | ID: mdl-37851369

ABSTRACT

Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition indicating that musicians excel at perceiving the melody (F0), but not the timbre of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.


Subject(s)
Music, Voice, Humans, Acoustic Stimulation, Emotions, Fear, Recognition (Psychology), Music/psychology, Auditory Perception
14.
Cereb Cortex Commun ; 4(1): tgad002, 2023.
Article in English | MEDLINE | ID: mdl-36726795

ABSTRACT

Vocal emotion recognition, a key determinant to analyzing a speaker's emotional state, is known to be impaired following cerebellar dysfunctions. Nevertheless, its possible functional integration in the large-scale brain network subtending emotional prosody recognition has yet to be explored. We administered an emotional prosody recognition task to patients with right versus left-hemispheric cerebellar lesions and a group of matched controls. We explored the lesional correlates of vocal emotion recognition in patients through a network-based analysis by combining a neuropsychological approach for lesion mapping with normative brain connectome data. Results revealed impaired recognition among patients for neutral or negative prosody, with poorer sadness recognition performances by patients with right cerebellar lesion. Network-based lesion-symptom mapping revealed that sadness recognition performances were linked to a network connecting the cerebellum with left frontal, temporal, and parietal cortices. Moreover, when focusing solely on a subgroup of patients with right cerebellar damage, sadness recognition performances were associated with a more restricted network connecting the cerebellum to the left parietal lobe. As the left hemisphere is known to be crucial for the processing of short segmental information, these results suggest that a corticocerebellar network operates on a fine temporal scale during vocal emotion decoding.

15.
Brain Sci ; 13(11)2023 Nov 07.
Article in English | MEDLINE | ID: mdl-38002523

ABSTRACT

Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
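
Mean amplitudes in time windows such as those named above (P200: 150-250 ms; late positive potential: 500-1000 ms) can be extracted from epoched EEG with MNE-Python. The sketch below assumes a pre-existing, baseline-corrected epochs file and placeholder channel picks; it is not the authors' pipeline.

```python
# Sketch: mean ERP amplitude in predefined windows using MNE-Python.
# 'epochs-epo.fif' and the channel picks are placeholder assumptions.
import mne

epochs = mne.read_epochs("epochs-epo.fif")   # epoched, baseline-corrected data

def mean_amplitude(epo, tmin, tmax, picks):
    # average over epochs, channels, and time points within the window
    data = epo.copy().crop(tmin=tmin, tmax=tmax).get_data(picks=picks)
    return data.mean()

p200 = mean_amplitude(epochs, 0.150, 0.250, ["Fz", "Cz"])   # fronto-central window
lpp = mean_amplitude(epochs, 0.500, 1.000, ["Cz", "Pz"])    # centro-parietal window
print("P200 mean amplitude:", p200, "LPP mean amplitude:", lpp)
```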

16.
Soc Cogn Affect Neurosci ; 17(10): 890-903, 2022 10 03.
Article in English | MEDLINE | ID: mdl-35323933

ABSTRACT

Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth's neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants' age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.


Subject(s)
Emotions, Voice, Adolescent, Adult, Brain Mapping, Child, Emotions/physiology, Facial Expression, Humans, Magnetic Resonance Imaging, Prefrontal Cortex, Recognition (Psychology)/physiology, Young Adult
17.
Disabil Rehabil Assist Technol ; : 1-8, 2022 Aug 23.
Article in English | MEDLINE | ID: mdl-35997772

ABSTRACT

PURPOSE: As humans convey information about emotions through speech signals, emotion recognition via auditory information is often employed to assess one's affective states. There are numerous ways of applying knowledge of emotional vocal expressions to system designs that adequately accommodate users' needs. Yet, little is known about how people with visual disabilities infer emotions from speech stimuli, especially via online platforms (e.g., Zoom). This study examined the degree to which they perceive emotions strongly or weakly (i.e., perceived intensity) and investigated the degree to which their sociodemographic backgrounds affect the intensity levels of emotion they perceive when exposed to a set of emotional speech stimuli via Zoom. MATERIALS AND METHODS: A convenience sample of 30 individuals with visual disabilities participated in Zoom interviews. Participants were given a set of emotional speech stimuli and reported the intensity level of the perceived emotions on a rating scale from 1 (weak) to 8 (strong). RESULTS: When the participants were exposed to the emotional speech stimuli (calm, happy, fearful, sad, and neutral), they reported that neutral was the dominant emotion they perceived with the greatest intensity. Individual differences were also observed in the perceived intensity of emotions, associated with sociodemographic backgrounds such as health, vision, job, and age. CONCLUSIONS: The results of this study are anticipated to contribute to the fundamental knowledge that will be helpful to many stakeholders, such as voice technology engineers, user experience designers, health professionals, and social workers providing support to people with visual disabilities. IMPLICATIONS FOR REHABILITATION: Technologies equipped with alternative user interfaces (e.g., Siri, Alexa, and Google Voice Assistant) that meet the needs of people with visual disabilities can promote independent living and quality of life. Such technologies can also be equipped with systems that recognize emotions from users' voices, so that users can obtain services customized to their emotional needs or that adequately address their emotional challenges (e.g., early detection of onset, provision of advice, and so on). The results of this study can also benefit health professionals (e.g., social workers) who work closely with clients who have visual disabilities (e.g., in virtual telehealth sessions), as they could gain insight into, and learn to recognize and understand, clients' emotional struggles by hearing their voices, thereby enhancing emotional intelligence. They can thus provide better services to their clients, building a strong bond and trust between health professionals and clients with visual disabilities even when they meet virtually (e.g., via Zoom).

18.
Neuroimage Clin ; 34: 102966, 2022.
Article in English | MEDLINE | ID: mdl-35182929

ABSTRACT

Epilepsy has been associated with deficits in the social cognitive ability to decode others' nonverbal cues to infer their emotional intent (emotion recognition). Studies have begun to identify potential neural correlates of these deficits, but have focused primarily on one type of nonverbal cue (facial expressions) to the detriment of other crucial social signals that inform the tenor of social interactions (e.g., tone of voice). Less is known about how individuals with epilepsy process these forms of social stimuli, with a particular gap in knowledge about representation of vocal cues in the developing brain. The current study compared vocal emotion recognition skills and functional patterns of neural activation to emotional voices in youth with and without refractory focal epilepsy. We made novel use of inter-subject pattern analysis to determine brain areas in which activation to emotional voices was predictive of epilepsy status. Results indicated that youth with epilepsy were comparatively less able to infer emotional intent in vocal expressions than their typically developing peers. Activation to vocal emotional expressions in regions of the mentalizing and/or default mode network (e.g., right temporo-parietal junction, right hippocampus, right medial prefrontal cortex, among others) differentiated youth with and without epilepsy. These results are consistent with emerging evidence that pediatric epilepsy is associated with altered function in neural networks subserving social cognitive abilities. Our results contribute to ongoing efforts to understand the neural markers of social cognitive deficits in pediatric epilepsy, in order to better tailor and funnel interventions to this group of youth at risk for poor social outcomes.


Subject(s)
Drug Resistant Epilepsy, Epilepsy, Voice, Adolescent, Child, Emotions/physiology, Facial Expression, Humans, Voice/physiology
19.
Soc Cogn Affect Neurosci ; 17(12): 1145-1154, 2022 12 01.
Article in English | MEDLINE | ID: mdl-35522247

ABSTRACT

Our ability to infer a speaker's emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre. We used these stimuli together with fully modulated vocal stimuli in an event-related potential (ERP) study in which participants listened to and identified stimulus emotion. ERPs (P200 and N400) and behavioral data converged in showing that both F0 and timbre support emotion processing but do so differently for different emotions: Whereas F0 was most relevant for responses to happy, fearful and sad voices, timbre was most relevant for responses to voices expressing pleasure. Together, these findings offer original insights into the relative significance of different acoustic parameters for early neuronal representations of speaker emotion and show that such representations are predictive of subsequent evaluative judgments.


Subject(s)
Speech Perception, Voice, Humans, Male, Female, Electroencephalography, Evoked Potentials, Emotions/physiology, Auditory Perception/physiology, Speech Perception/physiology
20.
Cognition ; 219: 104967, 2022 02.
Article in English | MEDLINE | ID: mdl-34875400

ABSTRACT

While the human perceptual system constantly adapts to the environment, some of the underlying mechanisms are still poorly understood. For instance, although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional voices and ambiguous voices were used as reference conditions. All adaptor stimuli were either of the same (Experiment 1) or opposite speaker gender (Experiment 2) of subsequently presented target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptation were much larger than following F0 adaptation and were only marginally smaller than those following full adaptation. In Experiment 2, adaptation aftereffects appeared massively and proportionally reduced, with differences between morph types being no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender-correspondence between adaptor and target stimuli. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.


Subject(s)
Voice, Physiological Adaptation, Cues (Psychology), Emotions, Female, Humans, Male, Visual Perception