ABSTRACT
Tinnitus is a clinical condition where a sound is perceived without an external sound source. Homeostatic plasticity (HSP), serving to increase neural activity as compensation for the reduced input to the auditory pathway after hearing loss, has been proposed as a mechanism underlying tinnitus. In support, animal models of tinnitus show evidence of increased neural activity after hearing loss, including increased spontaneous and sound-driven firing rate, as well as increased neural noise throughout the auditory processing pathway. Bridging these findings to human tinnitus, however, has proven to be challenging. Here we implement hearing loss-induced HSP in a Wilson-Cowan Cortical Model of the auditory cortex to predict how homeostatic principles operating at the microscale translate to the meso- to macroscale accessible through human neuroimaging. We observed HSP-induced response changes in the model that were previously proposed as neural signatures of tinnitus, but that have also been reported as correlates of hearing loss and hyperacusis. As expected, HSP increased spontaneous and sound-driven responsiveness in hearing-loss affected frequency channels of the model. We furthermore observed evidence of increased neural noise and the appearance of spatiotemporal modulations in neural activity, which we discuss in light of recent human neuroimaging findings. Our computational model makes quantitative predictions that require experimental validation, and may thereby serve as the basis of future human studies of hearing loss, tinnitus, and hyperacusis.
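As a minimal sketch of the mechanism described in this abstract (not the published model; the weights, sigmoid nonlinearity, and set-point controller below are illustrative assumptions), a single Wilson-Cowan excitatory-inhibitory pair can be driven with reduced peripheral input and its input gain homeostatically raised until the pre-loss mean rate is restored:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_rate(drive, gain, steps=2000, dt=0.01):
    """Integrate one excitatory (E) / inhibitory (I) Wilson-Cowan pair
    and return the mean E rate over the second half of the simulation."""
    E = I = 0.0
    trace = []
    for _ in range(steps):
        E += dt * (-E + sigmoid(gain * drive + 1.5 * E - 1.0 * I))
        I += dt * (-I + sigmoid(1.0 * E - 0.5 * I))
        trace.append(E)
    return float(np.mean(trace[steps // 2:]))

target = mean_rate(drive=1.0, gain=1.0)  # activity before hearing loss
lost = mean_rate(drive=0.3, gain=1.0)    # reduced input lowers activity

# Homeostatic plasticity as a set-point controller: adjust the input gain
# (here by bisection, since the mean rate increases with gain) until the
# mean rate returns to the pre-loss target.
lo, hi = 1.0, 10.0
for _ in range(40):
    gain = 0.5 * (lo + hi)
    if mean_rate(drive=0.3, gain=gain) < target:
        lo = gain
    else:
        hi = gain
```

Because only the product of gain and drive enters the dynamics, the recovered gain converges to the inverse of the input reduction (1/0.3), mirroring the multiplicative scaling attributed to homeostatic plasticity.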
Subject(s)
Auditory Cortex , Deafness , Hearing Loss , Tinnitus , Animals , Humans , Hyperacusis , Auditory Pathways , Acoustic Stimulation/methods
ABSTRACT
Selective attention enables the preferential processing of relevant stimulus aspects. Invasive animal studies have shown that attending to a sound feature rapidly modifies neuronal tuning throughout the auditory cortex. Human neuroimaging studies have reported enhanced auditory cortical responses with selective attention. To date, it remains unclear how the results obtained with functional magnetic resonance imaging (fMRI) in humans relate to the electrophysiological findings in animal models. Here we aim to narrow the gap between animal and human research by combining a selective attention task similar in design to those used in animal electrophysiology with high spatial resolution ultra-high field fMRI at 7 Tesla. Specifically, human participants performed a detection task in which the probability of target occurrence varied with sound frequency. Contrary to previous fMRI studies, we show that selective attention resulted in population receptive field sharpening, and consequently reduced responses, at the attended sound frequencies. The difference between our results and those of previous fMRI studies supports the notion that the influence of selective attention on auditory cortex is diverse and may depend on context, stimulus, and task.
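A hypothetical illustration of why sharpening can reduce measured responses (all numbers below are illustrative, not taken from the study): model a voxel as a Gaussian population receptive field over log-frequency with fixed peak gain; narrowing the tuning width lowers the aggregate response to a tone ensemble centered on the attended frequency.

```python
import numpy as np

def prf_response(stim_freqs, mu, sigma):
    """Gaussian pRF over log2-frequency (fixed peak gain of 1);
    return the response summed over a set of presented tones."""
    x = np.log2(stim_freqs)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2).sum()

# Tones spaced one octave apart around the voxel's preferred 1 kHz.
tones = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
broad = prf_response(tones, mu=np.log2(1000.0), sigma=1.0)  # unattended
sharp = prf_response(tones, mu=np.log2(1000.0), sigma=0.5)  # attended
```

With the peak gain held constant, the sharpened pRF collects less energy from off-preferred tones, so its summed response to the same stimulus set is smaller than that of the broad pRF.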
Subject(s)
Auditory Cortex , Sound Localization , Animals , Humans , Auditory Cortex/physiology , Acoustic Stimulation/methods , Sound Localization/physiology , Sound , Magnetic Resonance Imaging/methods , Attention/physiology , Auditory Perception/physiology
ABSTRACT
Tinnitus is an auditory sensation in the absence of actual external stimulation. Different clinical interventions are used in tinnitus treatment, but only a few patients respond to the available options. The lack of successful tinnitus treatment is partly due to the limited knowledge about the mechanisms underlying tinnitus. Recently, the auditory part of the thalamus has gained attention as a central structure in the neuropathophysiology of tinnitus. Increased thalamic spontaneous firing rate, bursting activity, and oscillations, alongside increased GABAergic tonic inhibition, have been shown in the auditory thalamus in animal models of tinnitus. In addition, clinical neuroimaging studies have shown structural and functional thalamic changes with tinnitus. This review provides a systematic overview and discussion of these observations that support a central role of the auditory thalamus in tinnitus. Based on this approach, a neuromodulative treatment option for tinnitus is proposed.
Subject(s)
Deep Brain Stimulation , Geniculate Bodies/physiopathology , Tinnitus/physiopathology , Tinnitus/therapy , Transcranial Direct Current Stimulation , Humans
ABSTRACT
Recent studies have highlighted the possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84% correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55 and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely-detectable visual stimuli on the response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences only occurred for specific phase differences (at onset) between the modulated audiovisual stimuli. We discuss our findings in light of a possible role of direct interactions between early visual and auditory areas, along with contributions from the higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery, and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.
ABSTRACT
Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
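The hemodynamic stage of such a forward model can be sketched with the classic balloon model; the specific P-DCM formulation differs, and the parameter values below are common literature defaults, not those of the paper.

```python
import numpy as np

def balloon_bold(u, dt=0.01, kappa=0.65, gamma=0.41, tau=0.98,
                 alpha=0.32, E0=0.34, V0=0.02):
    """Map a neural drive time course u(t) onto a BOLD time course via
    the balloon model: vasodilatory signal s, blood flow f, blood
    volume v, and deoxyhemoglobin content q (Euler integration)."""
    k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
    s, f, v, q = 0.0, 1.0, 1.0, 1.0
    bold = np.empty(len(u))
    for i, ut in enumerate(u):
        s += dt * (ut - kappa * s - gamma * (f - 1.0))
        f += dt * s
        E = 1.0 - (1.0 - E0) ** (1.0 / f)                  # O2 extraction
        v += dt * (f - v ** (1.0 / alpha)) / tau           # volume
        q += dt * (f * E / E0 - (q / v) * v ** (1.0 / alpha)) / tau  # dHb
        bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    return bold

dt = 0.01
t = np.arange(0.0, 30.0, dt)
u = ((t >= 1.0) & (t < 3.0)).astype(float)  # 2 s block of neural activity
bold = balloon_bold(u, dt=dt)
```

For a brief stimulus block, the simulated BOLD response rises and peaks a few seconds after onset before returning toward baseline, which is the sluggish, indirect coupling that such a forward model has to invert when relating fMRI data to neuronal dynamics.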
Subject(s)
Auditory Cortex/physiology , Hemodynamics/physiology , Neurons/physiology , Bayes Theorem , Feedback, Physiological , Feedback, Psychological , Humans , Magnetic Resonance Imaging , Models, Neurological , Sensation , Sound , Temporal Lobe/physiology
ABSTRACT
Following rapid methodological advances, ultra-high field (UHF) functional and anatomical magnetic resonance imaging (MRI) has been repeatedly and successfully used for the investigation of the human auditory system in recent years. Here, we review this work and argue that UHF MRI is uniquely suited to shed light on how sounds are represented throughout the network of auditory brain regions. That is, the provided gain in spatial resolution at UHF can be used to study the functional role of the small subcortical auditory processing stages and details of cortical processing. Further, by combining high spatial resolution with the versatility of MRI contrasts, UHF MRI has the potential to localize the primary auditory cortex in individual hemispheres. This is a prerequisite to study how sound representation in higher-level auditory cortex evolves from that in early (primary) auditory cortex. Finally, the access to independent signals across auditory cortical depths, as afforded by UHF, may reveal the computations that underlie the emergence of an abstract, categorical sound representation based on low-level acoustic feature processing. Efforts on these research topics are underway. Here we discuss promises as well as challenges that come with studying these research questions using UHF MRI, and provide a future outlook.
Subject(s)
Auditory Cortex , Magnetic Resonance Imaging , Auditory Cortex/diagnostic imaging , Auditory Perception , Brain/diagnostic imaging , Brain Mapping/methods , Humans , Magnetic Resonance Imaging/methods
ABSTRACT
The human superior temporal plane, the site of the auditory cortex, displays high inter-individual macro-anatomical variation. This variation calls into question the validity of curvature-based alignment (CBA) methods for in vivo imaging data. Here, we address this issue by developing CBA+, a cortical surface registration method that uses prior macro-anatomical knowledge. We validate this method using cytoarchitectonic areas on 10 individual brains (which we make publicly available). Compared to volumetric and standard surface registration, CBA+ results in a more accurate cytoarchitectonic auditory atlas. The improved correspondence of micro-anatomy following the improved alignment of macro-anatomy validates the superiority of CBA+ compared to CBA. In addition, we use CBA+ to align in vivo and postmortem data. This allows projection of functional and anatomical information collected in vivo onto the cytoarchitectonic areas, which has the potential to contribute to the ongoing debate on the parcellation of the human auditory cortex.
Subject(s)
Auditory Cortex/cytology , Brain Mapping/methods , Humans
ABSTRACT
Tinnitus is a clinical condition defined by hearing a sound in the absence of an objective source. Early experiments in animal models have suggested that tinnitus stems from an alteration of processing in the auditory system. However, translating these results to humans has proven challenging. One limiting factor has been the insufficient spatial resolution of non-invasive measurement techniques to investigate responses in subcortical auditory nuclei, like the inferior colliculus and the medial geniculate body (MGB). Here we employed ultra-high field functional magnetic resonance imaging (UHF-fMRI) at 7 Tesla to investigate frequency-specific processing in subcortical and cortical regions in a cohort of six tinnitus patients and six hearing-loss-matched controls. We used task-based fMRI to perform tonotopic mapping and compared the magnitude and tuning of frequency-specific responses between the two groups. Additionally, we used resting-state fMRI to investigate functional connectivity. Our results indicate frequency-unspecific reductions in the selectivity of frequency tuning that start at the level of the MGB and continue in the auditory cortex, as well as reduced thalamocortical and cortico-cortical connectivity with tinnitus. These findings suggest that tinnitus may be associated with reduced inhibition in the auditory pathway, potentially leading to increased neural noise and reduced functional connectivity. Moreover, these results indicate the relevance of high spatial resolution UHF-fMRI for the investigation of the role of subcortical auditory regions in tinnitus.
Subject(s)
Auditory Cortex/physiopathology , Auditory Pathways/physiopathology , Cerebral Cortex/physiopathology , Connectome/methods , Nerve Net/physiopathology , Thalamus/physiopathology , Tinnitus/physiopathology , Adult , Auditory Cortex/diagnostic imaging , Auditory Pathways/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Thalamus/diagnostic imaging , Tinnitus/diagnostic imaging
ABSTRACT
Studying the human subcortical auditory system non-invasively is challenging due to its small, densely packed structures deep within the brain. Additionally, the elaborate three-dimensional (3-D) structure of the system can be difficult to understand based on currently available 2-D schematics and animal models. We addressed these issues using a combination of histological data, post mortem magnetic resonance imaging (MRI), and in vivo MRI at 7 Tesla. We created anatomical atlases based on state-of-the-art human histology (BigBrain) and post mortem MRI (50 µm). We measured functional MRI (fMRI) responses to natural sounds and demonstrate that the functional localization of subcortical structures is reliable within individual participants who were scanned in two different experiments. Further, a group functional atlas derived from the functional data locates these structures with a median distance below 2 mm. Using diffusion MRI tractography, we revealed structural connectivity maps of the human subcortical auditory pathway both in vivo (1050 µm isotropic resolution) and post mortem (200 µm isotropic resolution). This work captures current MRI capabilities for investigating the human subcortical auditory system, describes challenges that remain, and contributes novel, openly available data, atlases, and tools for researching the human auditory system.
Subject(s)
Auditory Pathways/anatomy & histology , Brain Mapping , Adult , Female , Histocytochemistry , Humans , Magnetic Resonance Imaging , Male
ABSTRACT
Sensory thalami are central sensory pathway stations for information processing. Their role for human cognition and perception, however, remains unclear. Recent evidence suggests an involvement of the sensory thalami in speech recognition. In particular, the auditory thalamus (medial geniculate body, MGB) response is modulated by speech recognition tasks and the amount of this task-dependent modulation is associated with speech recognition abilities. Here, we tested the specific hypothesis that this behaviorally relevant modulation is present in the MGB subsection that corresponds to the primary auditory pathway (i.e., the ventral MGB [vMGB]). We used ultra-high field 7T fMRI to identify the vMGB, and found a significant positive correlation between the amount of task-dependent modulation and the speech recognition performance across participants within left vMGB, but not within the other MGB subsections. These results imply that modulation of thalamic driving input to the auditory cortex facilitates speech recognition.
Subject(s)
Auditory Pathways/physiology , Geniculate Bodies/physiology , Speech Perception , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
ABSTRACT
The layers of the neocortex each have a unique anatomical connectivity and functional role. Their exploration in the human brain, however, has been severely restricted by the limited spatial resolution of non-invasive measurement techniques. Here, we exploit the sensitivity and specificity of ultra-high field fMRI at 7 Tesla to investigate responses to natural sounds at deep, middle, and superficial cortical depths of the human auditory cortex. Specifically, we compare the performance of computational models that represent different hypotheses on sound processing inside and outside the primary auditory cortex (PAC). We observe that while BOLD responses in deep and middle PAC layers are equally well represented by a simple frequency model and a more complex spectrotemporal modulation model, responses in superficial PAC are better represented by the more complex model. This indicates an increase in processing complexity in superficial PAC, which remains present throughout cortical depths in the non-primary auditory cortex. These results suggest that a relevant transformation in sound processing takes place between the thalamo-recipient middle PAC layers and superficial PAC. This transformation may be a first computational step towards sound abstraction and perception, serving to form an increasingly more complex representation of the physical input.
Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Image Processing, Computer-Assisted/methods , Neocortex/physiology , Adult , Brain Mapping , Female , Healthy Volunteers , Humans , Magnetic Resonance Imaging , Male , Sound Localization , Young Adult
ABSTRACT
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing, respectively) areas, differing in terms of their spectral and temporal response properties. First, we simulated the responses to amplitude-modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) to a rate code when moving from low to high modulation rates. Simulated neural responses in a task of amplitude modulation detection suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to be dependent on the carrier frequency. Second, we simulated the responses to complex tones with missing fundamentals and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream coded, with high spectral precision, the aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch).
Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
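The synchronization-to-rate transition for AM stimuli can be reduced to its simplest ingredient (a single leaky firing-rate unit with an assumed 20 ms time constant, not the four-area model of the abstract): the unit's response modulation follows slow envelopes and attenuates fast ones.

```python
import numpy as np

def response_modulation(fm, tau=0.02, dur=2.0, dt=1e-4):
    """Drive a leaky rate unit with a fully modulated (100% AM) input and
    return the relative modulation depth of its steady-state response."""
    t = np.arange(0.0, dur, dt)
    drive = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t))  # AM envelope
    r = 0.0
    out = np.empty_like(t)
    for i, d in enumerate(drive):
        r += dt * (d - r) / tau   # first-order low-pass dynamics
        out[i] = r
    ss = out[len(t) // 2:]        # discard the onset transient
    return (ss.max() - ss.min()) / (2.0 * ss.mean())

slow = response_modulation(fm=4.0)   # 4 Hz AM: response locks to envelope
fast = response_modulation(fm=64.0)  # 64 Hz AM: modulation is attenuated
```

At 4 Hz the output modulation depth stays near that of the input (synchronization code), while at 64 Hz it is strongly attenuated, so only the mean rate carries information (rate code), consistent with the low-pass corner at 1/(2πτ) ≈ 8 Hz.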
ABSTRACT
Using ultra-high field fMRI, we explored the cortical depth-dependent stability of acoustic feature preference in human auditory cortex. We collected responses from human auditory cortex (subjects from either sex) to a large number of natural sounds at submillimeter spatial resolution, and observed that these responses were well explained by a model that assumes neuronal population tuning to frequency-specific spectrotemporal modulations. We observed a relatively stable (columnar) tuning to frequency and temporal modulations. However, spectral modulation tuning was variable throughout the cortical depth. This difference in columnar stability between feature maps could not be explained by a difference in map smoothness, as the preference along the cortical sheet varied in a similar manner for the different feature maps. Furthermore, tuning to all three features was more columnar in primary than nonprimary auditory cortex. The observed overall lack of overlapping columnar regions across acoustic feature maps suggests, especially for primary auditory cortex, a coding strategy in which across cortical depths tuning to some features is kept stable, whereas tuning to other features systematically varies.
SIGNIFICANCE STATEMENT
In the human auditory cortex, sound aspects are processed in large-scale maps. Invasive animal studies show that an additional processing organization may be implemented orthogonal to the cortical sheet (i.e., in the columnar direction), but it is unknown whether observed organizational principles apply to the human auditory cortex. Combining ultra-high field fMRI with natural sounds, we explore the columnar organization of various sound aspects. Our results suggest that the human auditory cortex contains a modular coding strategy, where, for each module, several sound aspects act as an anchor along which computations are performed while the processing of another sound aspect undergoes a transformation.
This strategy may serve to optimally represent the content of our complex acoustic natural environment.
Subject(s)
Auditory Cortex/diagnostic imaging , Auditory Perception/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Auditory Cortex/physiology , Brain Mapping/methods , Female , Functional Neuroimaging/methods , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult
ABSTRACT
Timbre, or sound quality, is a crucial but poorly understood dimension of auditory perception that is important in describing speech, music, and environmental sounds. The present study investigates the cortical representation of different timbral dimensions. Encoding models have typically incorporated the physical characteristics of sounds as features when attempting to understand their neural representation with functional MRI. Here we test an encoding model that is based on five subjectively derived dimensions of timbre to predict cortical responses to natural orchestral sounds. Results show that this timbre model can outperform other models based on spectral characteristics, and can perform as well as a complex joint spectrotemporal modulation model. In cortical regions at the medial border of Heschl's gyrus bilaterally, and in posteriorly adjacent regions in the right hemisphere, the timbre model outperforms even the complex joint spectrotemporal modulation model. These findings suggest that the responses of cortical neuronal populations in auditory cortex may reflect the encoding of perceptual timbre dimensions.
Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Functional Neuroimaging/methods , Music , Adult , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
ABSTRACT
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height or salience alone. Second, using model-based decoding, we showed that multi-voxel response patterns of the identified regions are more informative of perceived pitch than those of the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns.
In sum, this work extends model-based fMRI encoding and decoding methods (previously employed to examine the representation and processing of acoustic sound features in the human auditory system) to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept.
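The pitch-extraction step cited above (de Cheveigné and Kawahara's YIN algorithm) can be approximated by its core operation, the cumulative-mean-normalized difference function; the sketch below omits YIN's absolute-threshold and parabolic-interpolation refinements, and the test signal and search range are illustrative.

```python
import numpy as np

def estimate_f0(x, sr, fmin=150.0, fmax=500.0):
    """Simplified YIN-style F0 estimate: compute the difference function
    over candidate lags, normalize by its cumulative mean, and return
    the frequency of the minimizing lag."""
    max_lag = int(sr / fmin)
    min_lag = int(sr / fmax)
    d = np.array([np.sum((x[:-lag] - x[lag:]) ** 2) if lag else 0.0
                  for lag in range(max_lag + 1)])
    cum = np.cumsum(d[1:])
    dprime = d[1:] * np.arange(1, max_lag + 1) / np.maximum(cum, 1e-12)
    lag = min_lag + int(np.argmin(dprime[min_lag - 1:]))  # lags are 1-indexed
    return sr / lag

sr = 16000
t = np.arange(0.0, 0.1, 1.0 / sr)
# Missing-fundamental complex: harmonics 2-4 of 200 Hz, no 200 Hz component.
x = sum(np.sin(2 * np.pi * 200.0 * h * t) for h in (2, 3, 4))
f0 = estimate_f0(x, sr)
```

On this missing-fundamental complex the minimum of the normalized difference function falls at the 200 Hz period, matching the perceived (periodicity) pitch even though no energy is present at the fundamental.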
Subject(s)
Auditory Cortex/physiology , Brain Mapping/methods , Models, Neurological , Pitch Perception/physiology , Acoustic Stimulation , Adult , Evoked Potentials, Auditory/physiology , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male
ABSTRACT
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2* weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2 weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2* weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2* weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth-dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post-processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity.
However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2 weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked.
Subject(s)
Auditory Cortex/diagnostic imaging , Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Humans , Sensitivity and Specificity
ABSTRACT
The ability to measure functional brain responses non-invasively with ultra-high field MRI (7 T and above) represents a unique opportunity in advancing our understanding of the human brain. Compared to lower fields (3 T and below), ultra-high field MRI has increased sensitivity, which can be used to acquire functional images with greater spatial resolution, and greater specificity of the blood oxygen level dependent (BOLD) signal to the underlying neuronal responses. Together, increased resolution and specificity enable investigating brain functions at a submillimeter scale, which so far could only be done with invasive techniques. At this mesoscopic spatial scale, perception, cognition and behavior can be probed at the level of fundamental units of neural computations, such as cortical columns, cortical layers, and subcortical nuclei. This represents a unique and distinctive advantage that differentiates ultra-high from lower field imaging and that can foster a tighter link between fMRI and computational modeling of neural networks. So far, functional brain mapping at submillimeter scale has focused on the processing of sensory information and on well-known systems for which extensive information is available from invasive recordings in animals. It remains an open challenge to extend this methodology to uniquely human functions and, more generally, to systems for which animal models may be problematic. To succeed, the possibility to acquire high-resolution functional data with large spatial coverage, the availability of computational models of neural processing as well as accurate biophysical modeling of neurovascular coupling at mesoscopic scale all appear necessary.
Subject(s)
Brain/diagnostic imaging , Brain/physiology , Functional Neuroimaging/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Mental Processes/physiology , Models, Theoretical , Neurovascular Coupling/physiology , Brain/anatomy & histology , Humans
ABSTRACT
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are finely optimized to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (~2-4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<~2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general-purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice).
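The reconstruction logic can be sketched with synthetic data (the dimensions, ridge penalty, and linear generative model below are illustrative assumptions, not the study's pipeline): fit an encoding model from modulation features to response patterns, then invert it on held-out patterns to reconstruct the stimulus features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sounds, n_feat, n_vox = 80, 12, 30
S = rng.standard_normal((n_sounds, n_feat))   # modulation features per sound
W = rng.standard_normal((n_feat, n_vox))      # simulated voxel tuning
Y = S @ W + 0.1 * rng.standard_normal((n_sounds, n_vox))  # noisy "fMRI" data

train, test = slice(0, 60), slice(60, 80)
lam = 1.0

# Encoding: ridge regression from stimulus features to voxel responses.
W_hat = np.linalg.solve(S[train].T @ S[train] + lam * np.eye(n_feat),
                        S[train].T @ Y[train])

# Reconstruction: invert the fitted encoding model on held-out patterns,
# i.e. regularized least squares for S given Y ~ S @ W_hat.
S_hat = np.linalg.solve(W_hat @ W_hat.T + lam * np.eye(n_feat),
                        W_hat @ Y[test].T).T
r = np.corrcoef(S_hat.ravel(), S[test].ravel())[0, 1]
```

With responses that are (noisily) linear in the features, the reconstructed feature values correlate strongly with the true ones on held-out sounds; reconstruction fidelity per feature then indicates which modulations a region's patterns are informative about.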
Subject(s)
Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Magnetic Resonance Imaging , Pitch Perception/physiology , Speech Perception/physiology , Adult , Female , Humans , Male
ABSTRACT
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Subject(s)
Attention/physiology , Auditory Cortex/physiology , Cerebral Cortex/physiology , Sound , Acoustic Stimulation , Adult , Auditory Cortex/anatomy & histology , Auditory Perception/physiology , Brain Mapping , Cerebral Cortex/anatomy & histology , Female , Humans , Magnetic Resonance Imaging/methods , Sound Localization/physiology
ABSTRACT
To date it remains largely unknown how fundamental aspects of natural sounds, such as their spectral content and location in space, are processed in human subcortical structures. Here we exploited the high sensitivity and specificity of high field fMRI (7 Tesla) to examine the human inferior colliculus (IC) and medial geniculate body (MGB). Subcortical responses to natural sounds were well explained by an encoding model of sound processing that represented frequency and location jointly. Frequency tuning was organized in one tonotopic gradient in the IC, whereas two tonotopic maps characterized the MGB, reflecting two MGB subdivisions. In contrast, no topographic pattern of preferred location was detected, beyond an overall preference for peripheral (as opposed to central) and contralateral locations. Our findings characterize the functional organization of frequency and location processing in human subcortical auditory structures, and pave the way for studying the subcortical to cortical interaction required to create coherent auditory percepts.