Results 1 - 5 of 5

1.
Sci Rep ; 9(1): 7892, 2019 05 27.
Article in English | MEDLINE | ID: mdl-31133688

ABSTRACT

Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (point to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.


Subject(s)
Hearing/physiology , Sound Localization/physiology , Spatial Learning/physiology , Visual Perception/physiology , Acoustic Stimulation/instrumentation , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Photic Stimulation/instrumentation , Photic Stimulation/methods , Space Perception , Young Adult
2.
Proc Natl Acad Sci U S A ; 114(31): E6437-E6446, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28652333

ABSTRACT

Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show an enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.


Subject(s)
Auditory Cortex/physiology , Deafness/physiopathology , Facial Recognition/physiology , Neuronal Plasticity/physiology , Visual Pathways/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neuroimaging/methods , Photic Stimulation , Sensory Deprivation/physiology , Visual Perception/physiology
3.
Exp Brain Res ; 214(3): 373-80, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21901453

ABSTRACT

The question of the arbitrariness of language is among the oldest in the cognitive sciences, and it relates to the nature of the associations between vocal sounds and their meaning. Growing evidence seems to support sound symbolism, arguing for a naturally constrained mapping of meaning onto sounds. Most of this evidence, however, comes from studies based on the interpretation of pseudowords, and to date there is little empirical evidence that sound symbolism can affect phonatory behavior. In the present study, we asked participants to utter the letter /a/ in response to visual stimuli varying in shape, luminance, and size, and we observed consistent sound-symbolic effects on vocalizations. Utterance loudness was modulated by stimulus shape and luminance. Moreover, stimulus shape consistently modulated the frequency of the third formant (F3). This finding reveals an automatic mapping of specific visual attributes onto phonological features of vocalizations. Furthermore, it suggests that sound-meaning associations are reciprocal, affecting active (production) as well as passive (comprehension) linguistic behavior.


Subject(s)
Auditory Perception/physiology , Phonation/physiology , Semantics , Speech Perception/physiology , Symbolism , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Imagination/physiology , Language Tests/standards , Male , Middle Aged , Photic Stimulation/methods , Young Adult
4.
Hear Res ; 255(1-2): 91-8, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19539018

ABSTRACT

We assessed the sound-localisation abilities of late-implanted adults fitted with a single cochlear implant (CI) and examined whether these abilities are affected by the duration of implant use. Ten prelingually and four postlingually deafened adults who received a unilateral CI were tested in a sound-source identification task. Above-chance performance was observed in those prelingual CI recipients who had worn their implant for a longer time (9 years on average), revealing some monaural sound-localisation ability in this population, but only after extensive CI use. In contrast, the four postlingual recipients performed equal to or better than the best prelingual participants despite shorter experience with the monaural implant (11 months on average). Our findings reveal that some sound-localisation ability can emerge in prelingually deafened adults fitted with a single implant, at least in a controlled laboratory setting. This ability, however, appears to emerge only after several years of CI use. Furthermore, the results of the four postlingually deafened adults suggest that early experience with auditory cues may result in more rapid acquisition of spatial hearing with a single CI.


Subject(s)
Cochlear Implants , Sound Localization/physiology , Acoustic Stimulation , Adolescent , Adult , Deafness/physiopathology , Deafness/therapy , Female , Humans , Male , Middle Aged , Speech , Time Factors , Young Adult
5.
Curr Biol ; 12(18): 1584-90, 2002 Sep 17.
Article in English | MEDLINE | ID: mdl-12372250

ABSTRACT

Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues differs for perception of vertical versus horizontal motion, with spectrotemporal analysis likely to be more important for perceiving vertical shifts. In humans, functional imaging studies have shown that sound movement in the horizontal plane activates brain areas distinct from the primary auditory cortex, in the parietal and frontal lobes and in the planum temporale. However, no previous work has examined activations for vertical sound movement. It is therefore difficult to generalize previous imaging studies, based on horizontal movement only, to multidimensional auditory space perception. To investigate this, we used externalized virtual-space sounds in a functional magnetic resonance imaging (fMRI) paradigm to compare vertical and horizontal shifts in sound location. A common bilateral network of brain areas was activated in response to both horizontal and vertical sound movement. This included the planum temporale, superior parietal cortex, and premotor cortex. Sounds perceived laterally in virtual space were associated with contralateral activation of the auditory cortex. These results demonstrate that sound movement in the vertical and horizontal dimensions engages a common processing network in the human cerebral cortex and show that multidimensional spatial properties of sounds are processed at this level.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation , Adult , Auditory Cortex/anatomy & histology , Female , Functional Laterality , Humans , Magnetic Resonance Imaging , Male , Models, Neurological , Sound Localization/physiology