Results 1-20 of 53
1.
Nat Neurosci; 26(4): 664-672, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36928634

ABSTRACT

Recognizing sounds involves the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploited a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses as well as acoustic models do but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that the STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
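
As an illustration of the model-comparison logic described above, here is a minimal Python sketch: it rank-correlates pairwise distances in each candidate feature space with perceived dissimilarity, RSA-style. All feature matrices and behavioral data are random placeholders, not the study's models or measurements.

```python
# Hypothetical sketch of the model-comparison logic: correlate pairwise
# distances in each candidate feature space with perceived dissimilarity.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds = 40

# Placeholder feature spaces (rows = sounds); in the study these would be
# acoustic, semantic, and DNN-layer representations of the same sounds.
models = {
    "acoustic":  rng.normal(size=(n_sounds, 128)),
    "semantic":  rng.normal(size=(n_sounds, 50)),
    "dnn_layer": rng.normal(size=(n_sounds, 256)),
}
perceived = pdist(rng.normal(size=(n_sounds, 10)))  # stand-in behavioral dissimilarities

for name, feats in models.items():
    model_rdm = pdist(feats, metric="correlation")   # pairwise model distances
    rho, _ = spearmanr(model_rdm, perceived)         # rank-correlate with behavior
    print(f"{name}: Spearman rho = {rho:.3f}")
```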


Subjects
Auditory Cortex, Semantics, Humans, Acoustic Stimulation/methods, Auditory Cortex/physiology, Acoustics, Magnetic Resonance Imaging, Auditory Perception/physiology, Brain Mapping/methods
2.
Neuroimage; 228: 117670, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thus decreasing their perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
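
A reduced sketch of the envelope-tracking comparison, assuming a simple forward model: ridge regression from time-lagged speech envelopes to a single EEG channel, contrasting an averaged-distractor predictor against individual-distractor predictors. Data, lag count, and regularization are toy choices, not the paper's pipeline.

```python
# Minimal sketch (not the authors' pipeline): predict an EEG channel from
# lagged speech envelopes with ridge regression, and compare a model using
# the averaged distractor envelope against one with individual envelopes.
import numpy as np

def lagged(x, n_lags):
    """Design matrix of [x(t), x(t-1), ..., x(t-n_lags+1)]."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return X_te @ w

rng = np.random.default_rng(1)
fs, n = 64, 64 * 120                                   # 64 Hz envelopes, 2 minutes
env1, env2 = rng.normal(size=n), rng.normal(size=n)    # toy distractor envelopes
eeg = 0.5 * env1 + 0.2 * env2 + rng.normal(size=n)     # toy EEG channel

half = n // 2
for name, X in {
    "averaged distractors": lagged((env1 + env2) / 2, 13),           # ~0-200 ms at 64 Hz
    "individual distractors": np.hstack([lagged(env1, 13), lagged(env2, 13)]),
}.items():
    pred = ridge_fit_predict(X[:half], eeg[:half], X[half:])
    r = np.corrcoef(pred, eeg[half:])[0, 1]
    print(f"{name}: prediction r = {r:.3f}")
```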


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Signal Processing, Computer-Assisted, Young Adult
3.
Neuroimage Clin; 25: 102166, 2020.
Article in English | MEDLINE | ID: mdl-31958686

ABSTRACT

Tinnitus is a clinical condition defined by hearing a sound in the absence of an objective source. Early experiments in animal models have suggested that tinnitus stems from an alteration of processing in the auditory system. However, translating these results to humans has proven challenging. One limiting factor has been the insufficient spatial resolution of non-invasive measurement techniques to investigate responses in subcortical auditory nuclei, like the inferior colliculus and the medial geniculate body (MGB). Here we employed ultra-high field functional magnetic resonance imaging (UHF-fMRI) at 7 Tesla to investigate frequency-specific processing in subcortical and cortical regions in a cohort of six tinnitus patients and six hearing-loss-matched controls. We used task-based fMRI to perform tonotopic mapping and compared the magnitude and tuning of frequency-specific responses between the two groups. Additionally, we used resting-state fMRI to investigate functional connectivity. Our results indicate frequency-unspecific reductions in the selectivity of frequency tuning that start at the level of the MGB and continue in the auditory cortex, as well as reduced thalamocortical and cortico-cortical connectivity with tinnitus. These findings suggest that tinnitus may be associated with reduced inhibition in the auditory pathway, potentially leading to increased neural noise and reduced functional connectivity. Moreover, these results indicate the relevance of high-spatial-resolution UHF-fMRI for investigating the role of subcortical auditory regions in tinnitus.
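
One simple way to operationalize "selectivity of frequency tuning" (an assumption for illustration; the study's metric may differ) is the ratio of a voxel's best-frequency response to its mean response across mapped frequencies, compared between groups:

```python
# Illustrative sketch: a simple frequency-selectivity index per voxel,
# i.e., response at the best frequency relative to the mean response
# across all presented frequencies, compared between groups via a t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_vox, n_freqs = 500, 8   # voxels per region, tone frequencies in the mapping

def selectivity(resp):
    """resp: (n_vox, n_freqs) response amplitudes; higher = sharper tuning."""
    return resp.max(axis=1) / resp.mean(axis=1)

# Toy data: controls tuned slightly more sharply than the tinnitus group.
controls = selectivity(rng.gamma(shape=3.0, size=(n_vox, n_freqs)) ** 1.2)
tinnitus = selectivity(rng.gamma(shape=3.0, size=(n_vox, n_freqs)))

t, p = ttest_ind(controls, tinnitus)
print(f"selectivity controls={controls.mean():.2f}, tinnitus={tinnitus.mean():.2f}, p={p:.3g}")
```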


Subjects
Auditory Cortex/physiopathology, Auditory Pathways/physiopathology, Cerebral Cortex/physiopathology, Connectome/methods, Nerve Net/physiopathology, Thalamus/physiopathology, Tinnitus/physiopathology, Adult, Auditory Cortex/diagnostic imaging, Auditory Pathways/diagnostic imaging, Cerebral Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Thalamus/diagnostic imaging, Tinnitus/diagnostic imaging
4.
Cereb Cortex; 30(3): 1103-1116, 2020 Mar 14.
Article in English | MEDLINE | ID: mdl-31504283

ABSTRACT

Auditory spatial tasks induce functional activation in the occipital (visual) cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal (auditory) cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general, independent of sound location, was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing were different for sighted and blind participants in planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a two-channel opponent coding model for the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
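
A toy sketch of the phase-encoding analysis: when the stimulus cycles through azimuth, a voxel's preferred position maps onto the phase of its time course at the cycling frequency. The voxel time series below is simulated, and the frequency-bin/phase bookkeeping is the generic approach, not the authors' code.

```python
# Sketch of the phase-encoding idea: estimate a voxel's preferred azimuth
# from the phase of its response at the stimulus sweep frequency.
import numpy as np

rng = np.random.default_rng(3)
n_vols, n_cycles = 256, 8          # fMRI volumes, azimuth sweep cycles per run
t = np.arange(n_vols)

true_phase = 2 * np.pi * 0.3       # toy voxel preferring ~30% through the sweep
ts = np.cos(2 * np.pi * n_cycles * t / n_vols - true_phase) + rng.normal(0, 0.5, n_vols)

spec = np.fft.rfft(ts - ts.mean())
phase = -np.angle(spec[n_cycles])                 # phase at the stimulation frequency
pref_azimuth = (phase % (2 * np.pi)) / (2 * np.pi) * 360 - 180   # map to degrees
print(f"estimated preferred azimuth: {pref_azimuth:.1f} deg")
```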


Subjects
Auditory Cortex/physiopathology, Blindness/physiopathology, Neuronal Plasticity, Sound Localization/physiology, Acoustic Stimulation, Adult, Blindness/congenital, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Occipital Lobe/physiology, Visually Impaired Persons
5.
Nat Hum Behav; 3(9): 974-987, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31285622

ABSTRACT

Speech is the most important signal in our auditory environment, and the processing of speech is highly dependent on context. However, it is unknown how contextual demands influence the neural encoding of speech. Here, we examine the context dependence of auditory cortical mechanisms for speech encoding at the level of the representation of fundamental acoustic features (spectrotemporal modulations) using model-based functional magnetic resonance imaging. We found that the performance of different tasks on identical speech sounds leads to neural enhancement of the acoustic features in the stimuli that are critically relevant to task performance. These task effects were observed at the earliest stages of auditory cortical processing, in line with interactive accounts of speech processing. Our work provides important insights into the mechanisms that underlie the processing of contextually relevant acoustic information within our rich and dynamic auditory environment.
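
For readers unfamiliar with spectrotemporal modulations, here is a brief sketch of how such a feature space can be computed, as the 2D amplitude spectrum of a log-spectrogram (one common construction; the study's model may use a filter-bank variant):

```python
# Hedged sketch: extract the spectrotemporal modulation content of a sound
# as the 2D amplitude spectrum of its log-spectrogram.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(fs) / fs
sound = np.sin(2 * np.pi * (300 + 50 * np.sin(2 * np.pi * 3 * t)) * t)  # 3 Hz FM tone

f, tt, S = spectrogram(sound, fs=fs, nperseg=256, noverlap=192)
logS = np.log(S + 1e-10)

# 2D FFT over (frequency, time): axis 0 -> spectral modulations,
# axis 1 -> temporal modulations (Hz).
M = np.abs(np.fft.fftshift(np.fft.fft2(logS - logS.mean())))
temporal_rates = np.fft.fftshift(np.fft.fftfreq(len(tt), d=tt[1] - tt[0]))
print("temporal modulation axis spans", temporal_rates.min(), "to", temporal_rates.max(), "Hz")
```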


Subjects
Auditory Cortex/physiology, Speech Perception, Acoustic Stimulation, Auditory Cortex/diagnostic imaging, Brain/diagnostic imaging, Brain/physiology, Female, Functional Neuroimaging, Humans, Magnetic Resonance Imaging, Male, Speech Acoustics, Speech Perception/physiology, Young Adult
6.
Cereb Cortex; 29(9): 3636-3650, 2019 Aug 14.
Article in English | MEDLINE | ID: mdl-30395192

ABSTRACT

Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in the auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferentially encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: while decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), human sensitivity peaked at ~3 Hz, a rate relevant for speech analysis. These findings suggest that the characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Animals, Brain Mapping, Female, Humans, Macaca mulatta, Magnetic Resonance Imaging, Male, Species Specificity, Speech Perception/physiology, Vocalization, Animal
7.
J Neurosci; 38(36): 7822-7832, 2018 Sep 5.
Article in English | MEDLINE | ID: mdl-30185539

ABSTRACT

Using ultra-high field fMRI, we explored the cortical depth-dependent stability of acoustic feature preference in human auditory cortex. We collected responses from human auditory cortex (subjects from either sex) to a large number of natural sounds at submillimeter spatial resolution, and observed that these responses were well explained by a model that assumes neuronal population tuning to frequency-specific spectrotemporal modulations. We observed a relatively stable (columnar) tuning to frequency and temporal modulations. However, spectral modulation tuning was variable throughout the cortical depth. This difference in columnar stability between feature maps could not be explained by a difference in map smoothness, as the preference along the cortical sheet varied in a similar manner for the different feature maps. Furthermore, tuning to all three features was more columnar in primary than nonprimary auditory cortex. The observed overall lack of overlapping columnar regions across acoustic feature maps suggests, especially for primary auditory cortex, a coding strategy in which, across cortical depths, tuning to some features is kept stable, whereas tuning to other features systematically varies.

SIGNIFICANCE STATEMENT: In the human auditory cortex, sound aspects are processed in large-scale maps. Invasive animal studies show that an additional processing organization may be implemented orthogonal to the cortical sheet (i.e., in the columnar direction), but it is unknown whether observed organizational principles apply to the human auditory cortex. Combining ultra-high field fMRI with natural sounds, we explore the columnar organization of various sound aspects. Our results suggest that the human auditory cortex contains a modular coding strategy, where, for each module, several sound aspects act as an anchor along which computations are performed while the processing of another sound aspect undergoes a transformation. This strategy may serve to optimally represent the content of our complex acoustic natural environment.
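
A toy illustration of one possible columnarity measure (an assumption, not the paper's exact statistic): correlate the preferred-feature map across pairs of cortical depths; stable (columnar) tuning yields high cross-depth agreement.

```python
# Toy columnarity measure: how well a voxel's preferred feature (e.g., best
# frequency) agrees across cortical depths at the same cortical location.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_locations = 300

# Preferred-feature maps sampled at deep/middle/superficial depths.
base = rng.uniform(0, 1, n_locations)            # underlying columnar preference
depth_maps = {
    "deep":        base + rng.normal(0, 0.05, n_locations),
    "middle":      base + rng.normal(0, 0.05, n_locations),
    "superficial": base + rng.normal(0, 0.30, n_locations),  # less stable here
}

for a, b in [("deep", "middle"), ("middle", "superficial"), ("deep", "superficial")]:
    rho, _ = spearmanr(depth_maps[a], depth_maps[b])
    print(f"columnar stability {a} vs {b}: rho = {rho:.2f}")
```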


Subjects
Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Auditory Cortex/physiology, Brain Mapping/methods, Female, Functional Neuroimaging/methods, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult
8.
J Neurosci; 38(40): 8574-8587, 2018 Oct 3.
Article in English | MEDLINE | ID: mdl-30126968

ABSTRACT

Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements.

SIGNIFICANCE STATEMENT: According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies. Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.


Subjects
Auditory Cortex/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
9.
J Neurosci; 38(21): 4934-4942, 2018 May 23.
Article in English | MEDLINE | ID: mdl-29712781

ABSTRACT

Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was already observed at the earliest levels of the auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels.

SIGNIFICANCE STATEMENT: Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits.
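
A minimal sketch of the pattern-similarity logic behind frequency-specific omission responses: if predictions carry frequency content, the multivoxel pattern during an omitted high tone should correlate more with the pattern evoked by the presented high tone than with that of the low tone. All patterns below are simulated.

```python
# Simulated illustration of frequency-specific omission coding via
# correlations between multivoxel activity patterns.
import numpy as np

rng = np.random.default_rng(5)
n_vox = 200
high_presented = rng.normal(size=n_vox)
low_presented = rng.normal(size=n_vox)
high_omitted = 0.6 * high_presented + rng.normal(0, 1.0, n_vox)  # toy prediction signal

def pattern_r(a, b):
    return np.corrcoef(a, b)[0, 1]

print("omitted-high vs presented-high:", round(pattern_r(high_omitted, high_presented), 2))
print("omitted-high vs presented-low: ", round(pattern_r(high_omitted, low_presented), 2))
```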


Subjects
Auditory Cortex/physiology, Pitch Perception/physiology, Acoustic Stimulation, Adult, Anticipation, Psychological, Auditory Perception/physiology, Brain Mapping, Evoked Potentials, Auditory/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Psychophysics, Young Adult
10.
J Neurosci; 38(21): 4977-4984, 2018 May 23.
Article in English | MEDLINE | ID: mdl-29712782

ABSTRACT

The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene.

SIGNIFICANCE STATEMENT: Often, when we think of auditory spatial information, we think of where sounds are coming from; that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a sound wave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
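
A compact sketch of the decoding step, assuming scikit-learn: a cross-validated linear SVM classifying simulated activity patterns from two conditions, standing in for the spatial-separation contrasts.

```python
# Hedged sketch of the decoding analysis: a linear SVM with cross-validation
# on simulated fMRI patterns (stand-ins for Heschl's gyrus / PT data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_trials, n_vox = 60, 150

# Two conditions whose mean patterns differ slightly (large-separation case).
cond_a = rng.normal(0.0, 1, size=(n_trials, n_vox))
cond_b = rng.normal(0.3, 1, size=(n_trials, n_vox))
X = np.vstack([cond_a, cond_b])
y = np.array([0] * n_trials + [1] * n_trials)

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```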


Subjects
Auditory Cortex/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Brain Mapping, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Support Vector Machine
11.
Neuroimage; 173: 472-483, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29518569

ABSTRACT

Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention supports this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex (as defined by spatial activation patterns) at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category, and the same regions could flexibly represent any attended sound regardless of its category. These results help elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Young Adult
12.
Neuroimage; 174: 274-287, 2018 Jul 1.
Article in English | MEDLINE | ID: mdl-29571712

ABSTRACT

Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in the human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (fMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not the tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in the IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with fMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in the human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing.
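
The gain-versus-tuning distinction can be made concrete with a curve fit (illustrative only, with simulated data): fit g * exp(-(f - mu)^2 / (2 * sigma^2)) to a frequency channel's responses in two attention conditions and compare the fitted gain g and width sigma.

```python
# Sketch of the gain-versus-tuning question: fit a Gaussian tuning curve per
# attention condition and ask whether gain or width changed.
import numpy as np
from scipy.optimize import curve_fit

def gauss(f, g, mu, sigma):
    return g * np.exp(-(f - mu) ** 2 / (2 * sigma ** 2))

freqs = np.linspace(0, 5, 25)        # log-frequency axis (octaves, say)
rng = np.random.default_rng(7)

# Toy data: attention scales gain (x1.5) but leaves width untouched.
base = gauss(freqs, 1.0, 2.5, 0.8)
attended = 1.5 * base + rng.normal(0, 0.05, freqs.size)
unattended = base + rng.normal(0, 0.05, freqs.size)

for name, y in [("attended", attended), ("unattended", unattended)]:
    (g, mu, sigma), _ = curve_fit(gauss, freqs, y, p0=[1, 2, 1])
    print(f"{name}: gain={g:.2f}, center={mu:.2f}, width={abs(sigma):.2f}")
```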


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Inferior Colliculi/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
13.
Neuroimage; 180(Pt A): 291-300, 2018 Oct 15.
Article in English | MEDLINE | ID: mdl-29146377

ABSTRACT

Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods, previously employed to examine the representation and processing of acoustic sound features in the human auditory system, to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept.
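
A heavily reduced sketch of the cited pitch-extraction idea (de Cheveigné and Kawahara's YIN algorithm): compute the cumulative-mean-normalized difference function and read the fundamental period from its minimum. Real YIN adds absolute thresholding and parabolic interpolation, omitted here.

```python
# Very reduced YIN-style pitch estimate: difference function, cumulative-mean
# normalization, minimum -> period -> f0.
import numpy as np

def yin_f0(x, fs, fmin=60.0, fmax=500.0):
    max_lag = int(fs / fmin)
    d = np.array([np.sum((x[:-lag] - x[lag:]) ** 2) for lag in range(1, max_lag)])
    dn = d * np.arange(1, max_lag) / np.cumsum(d)   # cumulative-mean normalization
    min_lag = int(fs / fmax)
    best = min_lag + np.argmin(dn[min_lag:])        # index of the best lag
    return fs / (best + 1)                          # dn index i corresponds to lag i+1

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)  # f0 = 220 Hz
print(f"estimated f0: {yin_f0(tone, fs):.1f} Hz")
```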


Subjects
Auditory Cortex/physiology, Brain Mapping/methods, Models, Neurological, Pitch Perception/physiology, Acoustic Stimulation, Adult, Evoked Potentials, Auditory/physiology, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male
14.
J Acoust Soc Am; 142(4): 1757, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29092572

ABSTRACT

Meaningful sounds represent the majority of sounds that humans hear and process in everyday life. Yet studies of human sound localization mainly use artificial stimuli such as clicks, pure tones, and noise bursts. The present study investigated the influence of behavioral relevance, sound category, and acoustic properties on the localization of complex, meaningful sounds in the horizontal plane. Participants localized vocalizations and traffic sounds with two levels of behavioral relevance (low and high) within each category, as well as amplitude-modulated tones. Results showed a small but significant effect of behavioral relevance: localization acuity was higher for complex sounds with a high level of behavioral relevance at several target locations. The data also showed category-specific effects: localization biases were lower, and localization precision higher, for vocalizations than for traffic sounds in central space. Several acoustic parameters influenced sound localization performance as well. Correcting localization responses for front-back reversals reduced the overall variability across sounds, but behavioral relevance and sound category still had a modulatory effect on sound localization performance in central auditory space. The results thus demonstrate that spatial hearing performance for complex sounds is influenced not only by acoustic characteristics, but also by sound category and behavioral relevance.
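
One common convention for correcting front-back reversals (the paper's exact rule may differ) is sketched below: if mirroring a response about the interaural axis brings it closer to the target, score the mirrored angle instead.

```python
# Sketch of a front-back reversal correction for azimuth responses.
import numpy as np

def correct_front_back(target_az, response_az):
    """Azimuths in degrees: 0 = front, 90 = right, +/-180 = back."""
    mirrored = np.sign(response_az) * (180 - np.abs(response_az))
    keep = np.abs(response_az - target_az) <= np.abs(mirrored - target_az)
    return np.where(keep, response_az, mirrored)

targets = np.array([-60.0, -20.0, 20.0, 60.0])
responses = np.array([-115.0, -25.0, 15.0, 125.0])   # two front-back confusions
print(correct_front_back(targets, responses))        # -> [-65. -25.  15.  55.]
```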


Subjects
Acoustic Stimulation/methods, Cues, Noise, Transportation, Psychoacoustics, Sound Localization, Voice, Adult, Female, Humans, Male, Young Adult
15.
Cereb Cortex; 27(5): 3002-3014, 2017 May 1.
Article in English | MEDLINE | ID: mdl-27230215

ABSTRACT

A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.


Subjects
Attention/physiology, Auditory Perception/physiology, Brain Mapping, Temporal Lobe/diagnostic imaging, Acoustic Stimulation, Adult, Female, Humans, Image Processing, Computer-Assisted, Judgment, Magnetic Resonance Imaging, Male, Oxygen/blood, Psychoacoustics, Young Adult
16.
Hum Brain Mapp; 38(3): 1140-1154, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27790786

ABSTRACT

A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo-continuous arterial spin labeling (pCASL) to map tonotopy and voice-selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF-based tonotopy and found a good agreement with BOLD signal-based tonotopy, despite the lower contrast-to-noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high-low-high frequency gradients, co-located with the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility weighted imaging was employed to investigate the tissue specificity of CBF and BOLD signal and the possible venous bias of BOLD-based tonotopy. For voxels active only in BOLD, we found a higher percentage of vein contamination than for voxels active only in CBF. Taken together, we demonstrated that both baseline and stimulus-induced CBF provide an alternative fMRI approach to the standard BOLD signal for studying auditory processing and delineating the functional organization of the auditory cortex.


Subjects
Auditory Cortex/blood supply, Auditory Cortex/diagnostic imaging, Brain Mapping, Cerebrovascular Circulation/physiology, Acoustic Stimulation, Adult, Arteries, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Spectrum Analysis, Spin Labels, Time Factors
17.
Neuroimage; 132: 32-42, 2016 May 15.
Article in English | MEDLINE | ID: mdl-26899782

ABSTRACT

Multivariate pattern analysis (MVPA) in fMRI has been used to extract information from distributed cortical activation patterns, which may go undetected in conventional univariate analysis. However, little is known about the physical and physiological underpinnings of MVPA in fMRI, as well as about the effect of spatial smoothing on its performance. Several studies have addressed these issues, but their investigation was limited to the visual cortex at 3T, with conflicting results. Here, we used ultra-high field (7T) fMRI to investigate the effect of spatial resolution and smoothing on decoding of speech content (vowels) and speaker identity from auditory cortical responses. To that end, we acquired high-resolution (1.1 mm isotropic) fMRI data and additionally reconstructed them at 2.2 and 3.3 mm in-plane spatial resolutions from the original k-space data. Furthermore, the data at each resolution were spatially smoothed with different 3D Gaussian kernel sizes (i.e., no smoothing or 1.1, 2.2, 3.3, 4.4, or 8.8 mm kernels). For all spatial resolutions and smoothing kernels, we demonstrate the feasibility of decoding speech content (vowels) and speaker identity at 7T using support vector machine (SVM) MVPA. In addition, we found that high spatial frequencies are informative for vowel decoding and that the relative contribution of high and low spatial frequencies differs across the two decoding tasks. Moderate smoothing (up to 2.2 mm) improved the accuracies for both decoding of vowels and speakers, possibly due to reduction of noise (e.g., residual motion artifacts or instrument noise) while still preserving information at high spatial frequencies. In summary, our results show that, even with the same stimuli and within the same brain areas, the optimal spatial resolution for MVPA in fMRI depends on the specific decoding task of interest.
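
The smoothing manipulation is straightforward to reproduce in outline: convert each kernel's FWHM in millimeters to a Gaussian sigma in voxels and filter the volume. The volume below is random; the 1.1 mm voxel size matches the acquisition described above.

```python
# Sketch of the smoothing manipulation: 3D Gaussian kernels of increasing
# FWHM, with FWHM (mm) converted to sigma in voxel units.
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_size = 1.1                                   # mm isotropic, as in the study
volume = np.random.default_rng(8).normal(size=(64, 64, 32))

for fwhm_mm in [1.1, 2.2, 3.3, 4.4, 8.8]:
    sigma_vox = fwhm_mm / (2 * np.sqrt(2 * np.log(2))) / voxel_size  # FWHM = 2.355 sigma
    smoothed = gaussian_filter(volume, sigma=sigma_vox)
    print(f"FWHM {fwhm_mm} mm -> sigma {sigma_vox:.2f} voxels, "
          f"residual std {smoothed.std():.3f}")
```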


Subjects
Brain Mapping/methods, Brain/physiology, Signal Processing, Computer-Assisted, Acoustic Stimulation, Adult, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Multivariate Analysis, Pattern Recognition, Automated, Speech Perception, Support Vector Machine
18.
Neuroimage; 128: 373-384, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26777479

ABSTRACT

Development typically leads to optimized and adaptive neural mechanisms for the processing of voice and speech. In this fMRI study we investigated how this adaptive processing reaches its mature efficiency by examining the effects of task, age and phonological skills on cortical responses to voice and speech in children (8-9 years), adolescents (14-15 years) and adults. Participants listened to vowels (/a/, /i/, /u/) spoken by different speakers (boy, girl, man) and performed delayed-match-to-sample tasks on vowel and speaker identity. Across age groups, similar behavioral accuracy and comparable sound-evoked auditory cortical fMRI responses were observed. Analysis of task-related modulations indicated a developmental enhancement of responses in the (right) superior temporal cortex during the processing of speaker information. This effect was most evident through an analysis based on individually determined voice-sensitive regions. Analysis of age effects indicated that the recruitment of regions in the temporal-parietal cortex and posterior cingulate/cingulate gyrus decreased with development. Beyond age-related changes, the strength of speech-evoked activity in left posterior and right middle superior temporal regions significantly scaled with individual differences in phonological skills. Together, these findings suggest a prolonged development of the cortical functional network for speech and voice processing. This development includes a progressive refinement of the neural mechanisms for the selection and analysis of auditory information relevant to the ongoing behavioral task.


Subjects
Brain/growth & development, Brain/physiology, Child Development/physiology, Speech Perception/physiology, Voice, Acoustic Stimulation, Adolescent, Adult, Brain Mapping, Child, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult
19.
Cereb Cortex; 26(1): 450-464, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26545618

ABSTRACT

Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
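
A toy version of the opponent-channel readout follows: two broadly (hemifield) tuned channels, with azimuth recovered from their activity difference. The sigmoid tuning shape and its slope are illustrative assumptions; the point is that the difference signal is monotonic in azimuth and therefore decodable.

```python
# Toy opponent-channel model: two hemifield channels, azimuth decoded from
# their activity difference by inverting the (assumed) sigmoid tuning.
import numpy as np

def channel(az_deg, side):
    """Broad hemifield tuning: a sigmoid of azimuth; side=+1 right, -1 left."""
    return 1.0 / (1.0 + np.exp(-side * az_deg / 20.0))

azimuths = np.linspace(-90, 90, 13)
right_ch = channel(azimuths, +1)
left_ch = channel(azimuths, -1)

opponent = right_ch - left_ch                             # = 2*sigmoid(az/20) - 1
decoded = 20.0 * np.log((1 + opponent) / (1 - opponent))  # invert the sigmoid

print(np.round(decoded, 1))   # recovers the stimulus azimuths
```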


Subjects
Action Potentials/physiology, Auditory Cortex/physiology, Neurons/physiology, Sound Localization/physiology, Sound, Acoustic Stimulation/methods, Female, Humans, Magnetic Resonance Imaging/methods, Male
20.
Proc Natl Acad Sci U S A; 112(52): 16036-41, 2015 Dec 29.
Article in English | MEDLINE | ID: mdl-26668397

ABSTRACT

Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers, whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.


Subjects
Attention/physiology, Auditory Cortex/physiology, Cerebral Cortex/physiology, Sound, Acoustic Stimulation, Adult, Auditory Cortex/anatomy & histology, Auditory Perception/physiology, Brain Mapping, Cerebral Cortex/anatomy & histology, Female, Humans, Magnetic Resonance Imaging/methods, Sound Localization/physiology