Results 1 - 5 of 5
1.
Neuroimage ; 128: 373-384, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26777479

ABSTRACT

Development typically leads to optimized and adaptive neural mechanisms for the processing of voice and speech. In this fMRI study we investigated how this adaptive processing reaches its mature efficiency by examining the effects of task, age and phonological skills on cortical responses to voice and speech in children (8-9 years), adolescents (14-15 years) and adults. Participants listened to vowels (/a/, /i/, /u/) spoken by different speakers (boy, girl, man) and performed delayed-match-to-sample tasks on vowel and speaker identity. Across age groups, similar behavioral accuracy and comparable sound-evoked auditory cortical fMRI responses were observed. Analysis of task-related modulations indicated a developmental enhancement of responses in the (right) superior temporal cortex during the processing of speaker information. This effect was most evident through an analysis based on individually determined voice-sensitive regions. Analysis of age effects indicated that the recruitment of regions in the temporal-parietal cortex and posterior cingulate/cingulate gyrus decreased with development. Beyond age-related changes, the strength of speech-evoked activity in left posterior and right middle superior temporal regions significantly scaled with individual differences in phonological skills. Together, these findings suggest a prolonged development of the cortical functional network for speech and voice processing. This development includes a progressive refinement of the neural mechanisms for the selection and analysis of auditory information relevant to the ongoing behavioral task.


Subject(s)
Brain/growth & development; Brain/physiology; Child Development/physiology; Speech Perception/physiology; Voice; Acoustic Stimulation; Adolescent; Adult; Brain Mapping; Child; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Young Adult
2.
J Neurosci ; 32(38): 13273-80, 2012 Sep 19.
Article in English | MEDLINE | ID: mdl-22993443

ABSTRACT

The formation of new sound categories is fundamental to everyday goal-directed behavior. Categorization requires the abstraction of discrete classes from continuous physical features as required by context and task. Electrophysiology in animals has shown that learning to categorize novel sounds alters their spatiotemporal neural representation at the level of early auditory cortex. However, functional magnetic resonance imaging (fMRI) studies have so far not yielded insight into the effects of category learning on sound representations in human auditory cortex. This may be due to the use of overlearned speech-like categories and fMRI subtraction paradigms, leading to insufficient sensitivity to distinguish the responses to learning-induced, novel sound categories. Here, we used fMRI pattern analysis to investigate changes in human auditory cortical response patterns induced by category learning. We created complex novel sound categories and analyzed distributed activation patterns during passive listening to a sound continuum before and after category learning. We show that only after training, sound categories could be successfully decoded from early auditory areas and that learning-induced pattern changes were specific to the category-distinctive sound feature (i.e., pitch). Notably, the similarity between fMRI response patterns for the sound continuum mirrored the sigmoid shape of the behavioral category identification function. Our results indicate that perceptual representations of novel sound categories emerge from neural changes at early levels of the human auditory processing hierarchy.
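The pattern-decoding logic this abstract describes can be sketched on simulated data: a classifier fails to read out category labels from pre-learning activation patterns but succeeds afterwards, even though the mean response amplitude is unchanged. Everything below (voxel and trial counts, effect size, the choice of a linear SVM) is an assumption for illustration, not the study's actual analysis pipeline.

```python
# Hypothetical sketch of decoding learned sound categories from simulated
# fMRI activation patterns. All numbers are invented for illustration.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)  # two sound categories

# Before learning: activation patterns carry no category information.
before = rng.normal(0.0, 1.0, (n_trials, n_voxels))

# After learning: a distributed pattern difference emerges for the
# category-distinctive feature, without changing the mean amplitude.
signature = rng.normal(0.0, 1.0, n_voxels)
signature -= signature.mean()          # zero net amplitude change
after = before + 0.8 * np.outer(2 * labels - 1, signature)

clf = LinearSVC()
acc_before = cross_val_score(clf, before, labels, cv=5).mean()
acc_after = cross_val_score(clf, after, labels, cv=5).mean()
print(f"decoding accuracy before learning: {acc_before:.2f}")
print(f"decoding accuracy after learning:  {acc_after:.2f}")
```

Because the added signature has zero mean across voxels, a region-average (subtraction-style) analysis would see no difference between categories, which mirrors the abstract's argument for pattern-based over subtraction-based analyses.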


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Mapping; Learning/physiology; Sound; Acoustic Stimulation/classification; Adult; Analysis of Variance; Auditory Cortex/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Normal Distribution; Oxygen/blood; Psychoacoustics; Spectrum Analysis; Young Adult
3.
Neuroimage ; 83: 739-50, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23867553

ABSTRACT

We study the developmental trajectory of morphology and function of the superior temporal cortex (STC) in children (8-9 years), adolescents (14-15 years) and young adults. We analyze cortical surface landmarks and functional MRI (fMRI) responses to voices, other natural categories and tones and examine how hemispheric asymmetry and inter-subject variability change across age. Our results show stable morphological asymmetries across age groups, including a larger left planum temporale and a deeper right superior temporal sulcus. fMRI analyses show that a rightward lateralization for voice-selective responses is present in all groups but decreases with age. Furthermore, STC responses to voices change from being less selective and more spatially diffuse in children to highly selective and focal in adults. Interestingly, the analysis of morphological landmarks reveals that inter-subject variability increases during development in the right, but not in the left, STC. Similarly, inter-subject variability of cortically-realigned functional responses to voices, other categories and tones increases with age in the right STC. Our findings reveal asymmetric developmental changes in brain regions crucial for auditory and voice perception. The age-related increase of inter-subject variability in right STC suggests that anatomy and function of this region are shaped by unique individual developmental experiences.


Subject(s)
Child Development/physiology; Temporal Lobe/growth & development; Temporal Lobe/physiology; Adolescent; Auditory Perception/physiology; Child; Female; Functional Laterality/physiology; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Young Adult
4.
Front Neurosci ; 10: 254, 2016.
Article in English | MEDLINE | ID: mdl-27375416

ABSTRACT

This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary role of electrophysiological and hemodynamic measures in addressing brain processing of complex stimuli.

5.
Front Neurosci ; 8: 132, 2014.
Article in English | MEDLINE | ID: mdl-24917783

ABSTRACT

The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes, even in the absence of changes in overall signal level, these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations.
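The review's closing point, that MVPA is sensitive to distributed activation changes even when the overall signal level does not change, can be illustrated on simulated data: two conditions share the same mean regional amplitude, yet a classifier separates them from the voxel pattern alone. All numbers and the linear-SVM choice below are assumptions for the sketch, not taken from any study in this list.

```python
# Hypothetical illustration: distributed pattern information without a
# change in overall signal level. All values are simulated.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 60, 40
labels = np.tile([0, 1], n_trials // 2)

# Condition-specific voxel pattern with zero mean across voxels, so the
# overall regional signal level is identical for both conditions.
pattern = rng.normal(0.0, 1.0, n_voxels)
pattern -= pattern.mean()

X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X += 0.6 * np.outer(2 * labels - 1, pattern)

# Univariate view: the mean amplitude difference between conditions is ~0.
mean_diff = abs(X[labels == 1].mean() - X[labels == 0].mean())

# Multivariate view: the distributed pattern difference is decodable.
acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
print(f"mean amplitude difference: {mean_diff:.3f}")
print(f"decoding accuracy:         {acc:.2f}")
```

A region-average contrast would miss this effect entirely, which is exactly why pattern-based analyses are attractive for studying category representations that need not alter overall activation strength.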
