Results 1 - 2 of 2
1.
J Acoust Soc Am; 146(2): EL177, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31472570

ABSTRACT

Visual calibration of auditory space requires re-alignment of representations differing in (1) format (auditory hemispheric channels vs visual maps) and (2) reference frames (head-centered vs eye-centered). Here, a ventriloquism paradigm from Kopco, Lin, Shinn-Cunningham, and Groh [J. Neurosci. 29, 13809-13814 (2009)] was used to examine these processes in humans for ventriloquism induced within one spatial hemifield. Results show that (1) the auditory representation can be adapted even by aligned audio-visual stimuli, and (2) the spatial reference frame is primarily head-centered, with a weak eye-centered modulation. These results support the view that the ventriloquism aftereffect is driven by multiple spatially non-uniform, hemisphere-specific processes.
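The two reference frames contrasted in the abstract can be illustrated with a minimal sketch. The function names, the linear coordinate shift, and the adaptation gain below are illustrative assumptions, not the paradigm's actual analysis.

```python
# Hypothetical sketch (all names and values assumed, not from the paper):
# head-centered vs. eye-centered azimuths, and a simple ventriloquism
# shift of a perceived auditory location toward a visual stimulus.

def head_to_eye_centered(target_azimuth_head: float, gaze_azimuth: float) -> float:
    """Eye-centered azimuth = head-centered azimuth minus gaze direction (degrees)."""
    return target_azimuth_head - gaze_azimuth

def ventriloquism_shift(auditory_azimuth: float, visual_azimuth: float,
                        gain: float = 0.5) -> float:
    """Shift the perceived auditory azimuth part-way toward the visual
    stimulus; `gain` is an assumed adaptation strength, not a fitted value."""
    return auditory_azimuth + gain * (visual_azimuth - auditory_azimuth)

# A sound at +10 deg (head-centered) with gaze at +20 deg lies at
# -10 deg in eye-centered coordinates.
print(head_to_eye_centered(10.0, 20.0))  # -10.0
print(ventriloquism_shift(10.0, 16.0))   # 13.0
```

The point of the sketch is only that the same physical sound has different coordinates in the two frames, which is why the aftereffect's reference frame can be probed by varying gaze.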


Subject(s)
Figural Aftereffect, Functional Laterality, Sound Localization, Brain/physiology, Cues, Eye Movements, Humans, Illusions/physiology, Speech Perception, Young Adult
2.
Trends Hear; 27: 23312165231201020, 2023.
Article in English | MEDLINE | ID: mdl-37715636

ABSTRACT

The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment, since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames or a predominantly head-centered frame. Here, a computational model is introduced to examine the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and that the visual signals are converted to the head-centered frame before they induce adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) an additional presence of visual signals in the eye-centered frame, and (2) gaze-direction-dependent attenuation of the VAE when the eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of the VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used to respond to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When the model attempts to capture all the experimentally observed phenomena simultaneously, its predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than the model assumes.
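The two candidate mechanisms in the abstract can be sketched in a few lines. Everything below is an illustrative assumption (function names, the linear adaptation rule, the Gaussian falloff and its width), not the published model's actual parameterization.

```python
import math

def adapt_head_centered(auditory_map: dict, trained_azimuth: float,
                        visual_azimuth_head: float, rate: float = 0.1) -> dict:
    """Baseline mechanism (assumed form): nudge the stored head-centered
    estimate at the trained location toward the head-centered visual signal."""
    updated = dict(auditory_map)
    old = updated.get(trained_azimuth, trained_azimuth)
    updated[trained_azimuth] = old + rate * (visual_azimuth_head - old)
    return updated

def gaze_attenuation(test_gaze: float, train_gaze: float,
                     sigma: float = 15.0) -> float:
    """Mechanism (2), sketched: the aftereffect weakens as the eyes move
    away from the training fixation. Gaussian falloff and sigma are
    assumptions for illustration."""
    return math.exp(-0.5 * ((test_gaze - train_gaze) / sigma) ** 2)
```

Under this sketch, a head-centered frame with gaze-dependent attenuation can mimic a partially eye-centered aftereffect, which is the abstract's explanation for the mixed-frame data.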


Subject(s)
Sound Localization, Humans, Sound Localization/physiology, Acoustic Stimulation/methods, Saccades, Sound, Photic Stimulation/methods, Visual Perception/physiology