Results 1 - 4 of 4
1.
Psychol Sci; 31(9): 1129-1139, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32846109

ABSTRACT

Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.


Subject(s)
Sound Localization; Visually Impaired Persons; Blindness; Hearing; Humans; Space Perception; Vision, Ocular
2.
Emotion; 24(5): 1312-1321, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38407120

ABSTRACT

The ability to reliably discriminate vocal expressions of emotion is crucial for successful social interactions. This process is arguably even more crucial for blind individuals, who cannot extract social information from faces and bodies and therefore chiefly rely on voices to infer the emotional state of their interlocutors. Blind individuals have demonstrated superior abilities in several aspects of auditory perception, but research on their ability to discriminate vocal features is still scarce and has yielded unclear results. Here, we used a gating psychophysical paradigm to test whether early blind people differ from individually matched sighted controls in the recognition of emotional expressions. We presented segments of nonlinguistic emotional vocalizations of increasing duration (100-400 ms), portraying five basic emotions (fear, happiness, sadness, disgust, and anger), and asked participants to perform an explicit emotion-categorization task. We then computed sensitivity indices and confusion patterns from their performance. Surprisingly, blind people showed lower performance than controls in discriminating specific vocal emotions: the sighted group was better at discriminating angry and fearful expressions, with no between-group differences for the other emotions. This result supports the view that vision plays a calibrating role specifically for threat-related emotions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
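The sensitivity indices mentioned above are standard signal-detection d' scores. As an illustrative sketch (not the authors' actual analysis code), d' for one emotion category can be computed from hit and false-alarm counts; the log-linear correction shown here is one common way to avoid infinite z-scores when a rate is exactly 0 or 1:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate).

    Adds 0.5 to each cell (log-linear correction) so that rates of
    exactly 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one emotion: 27 hits, 3 misses,
# 6 false alarms, 24 correct rejections.
sensitivity = d_prime(27, 3, 6, 24)
```

Equal hit and false-alarm rates yield d' = 0 (chance-level discrimination); larger values indicate better discrimination.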


Subject(s)
Emotions; Voice; Humans; Male; Female; Adult; Emotions/physiology; Voice/physiology; Auditory Perception/physiology; Fear/physiology; Young Adult; Blindness/physiopathology; Middle Aged; Discrimination, Psychological/physiology
3.
Drug Alcohol Depend; 213: 108079, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-32554170

ABSTRACT

BACKGROUND: Severe alcohol use disorder (SAUD) is associated with impaired discrimination of emotional expressions. This deficit appears larger in crossmodal settings, when simultaneous inputs from different sensory modalities are presented. However, studies exploring emotional crossmodal processing in SAUD have so far relied on static faces and unmatched face/voice pairs, thus offering limited ecological validity. Our aim was therefore to assess emotional processing using a validated and ecological paradigm relying on dynamic audio-visual stimuli, manipulating the amount of emotional information available. METHOD: Thirty individuals with SAUD and 30 matched healthy controls performed an emotional discrimination task requiring them to identify five emotions (anger, disgust, fear, happiness, sadness) expressed as visual, auditory, or auditory-visual segments of varying length. Sensitivity indices (d') were computed to obtain an unbiased measure of emotional discrimination and entered in a generalized linear mixed model. Incorrect emotional attributions were also scrutinized through confusion matrices. RESULTS: Discrimination levels varied across sensory modalities and emotions, and increased with stimulus duration. Crucially, performance also improved from unimodal to crossmodal conditions in both groups, but discrimination of crossmodal anger stimuli and of crossmodal and visual fear stimuli remained selectively impaired in SAUD. These deficits were not influenced by stimulus duration, suggesting that they were not modulated by the amount of emotional information available. Moreover, they were not associated with systematic error patterns reflecting specific confusions between emotions. CONCLUSIONS: These results clarify the nature and extent of crossmodal impairments in SAUD and converge with earlier findings to ascribe a specific role to anger and fear in this pathology.
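The confusion matrices used above to scrutinize incorrect emotional attributions can be sketched minimally as follows. The five-emotion label set comes from the abstract; the function and the example trial data are hypothetical illustrations, not the study's pipeline:

```python
from collections import Counter

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]

def confusion_matrix(presented, responded):
    """Tally how often each presented emotion (row) received each
    response (column). Off-diagonal cells are incorrect attributions,
    revealing systematic confusions between specific emotion pairs."""
    counts = Counter(zip(presented, responded))
    return [[counts[(p, r)] for r in EMOTIONS] for p in EMOTIONS]

# Hypothetical trials: two anger, two fear, one sadness stimulus.
shown = ["anger", "anger", "fear", "fear", "sadness"]
said = ["anger", "disgust", "fear", "anger", "sadness"]
matrix = confusion_matrix(shown, said)
```

Row sums give the number of presentations per emotion, and the diagonal divided by the row sum gives per-emotion accuracy.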

4.
Cortex; 119: 184-194, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31151087

ABSTRACT

Humans seamlessly extract and integrate the emotional content delivered by the face and the voice of others. It is, however, poorly understood how perceptual decisions unfold in time when people discriminate expressions of emotion transmitted through dynamic facial and vocal signals, as in natural social contexts. In this study, we relied on a gating paradigm to track how the recognition of emotion expressions across the senses unfolds over exposure time. We first demonstrate that, across all emotions tested, a discriminatory decision is reached earlier with faces than with voices. Importantly, multisensory stimulation consistently reduced the accumulation of perceptual evidence needed to reach a correct discrimination (the isolation point). We also observed that expressions with different emotional content provide cumulative evidence at different speeds, with fear showing the fastest isolation point across the senses. Finally, the lack of correlation between the confusion patterns in response to facial and vocal signals across time suggests distinct relations between the discriminative features extracted from the two signals. Altogether, these results provide a comprehensive view of how auditory, visual, and audiovisual information related to different emotion expressions accumulates in time, highlighting how a multisensory context can speed up the discrimination process when minimal information is available.
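The isolation point in a gating paradigm is the shortest exposure from which a participant's response is correct and stays correct at every longer gate. A minimal sketch of that rule, with hypothetical gate durations and responses (the function name and data are illustrative, not the study's code):

```python
def isolation_point(gate_durations, responses, target):
    """Return the shortest gate duration from which the response equals
    `target` and remains correct at all longer gates, or None if the
    participant never settles on the correct emotion.

    `gate_durations` and `responses` are ordered from shortest to
    longest gate.
    """
    ip = None
    for duration, response in zip(gate_durations, responses):
        if response == target:
            if ip is None:
                ip = duration  # candidate isolation point
        else:
            ip = None  # an error at a longer gate resets the candidate
    return ip

# Hypothetical trial: gates of increasing exposure (ms) for a "fear"
# vocalization; the participant locks onto the correct answer at 200 ms.
gates = [100, 200, 300, 400]
ip = isolation_point(gates, ["sadness", "fear", "fear", "fear"], "fear")
```

A shorter isolation point means less accumulated perceptual evidence was needed, which is how the multisensory advantage described above would show up in this measure.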


Subject(s)
Emotions/physiology; Recognition, Psychology/physiology; Time Factors; Voice/physiology; Adolescent; Adult; Expressed Emotion/physiology; Facial Expression; Female; Humans; Male; Photic Stimulation/methods; Young Adult