Results 1 - 20 of 143
1.
Nat Rev Neurosci ; 20(10): 609-623, 2019 10.
Article in English | MEDLINE | ID: mdl-31467450

ABSTRACT

Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Pathways/physiology, Sound Localization/physiology, Animals, Auditory Cortex/diagnostic imaging, Auditory Pathways/diagnostic imaging, Hearing/physiology, Humans
2.
MAGMA ; 36(2): 159-173, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37081247

ABSTRACT

The 9.4 T scanner in Maastricht is a whole-body magnet with head gradients and parallel RF transmit capability. At the time of its design, it was conceptualized to be one of the best fMRI scanners in the world, but it has also been used for anatomical and diffusion imaging. 9.4 T offers increases in sensitivity and contrast, but the technical ultra-high field (UHF) challenges, such as field inhomogeneities and constraints set by RF power deposition, are exacerbated compared to 7 T. This article reviews some of the 9.4 T work done in Maastricht. Functional imaging experiments included blood oxygenation level-dependent (BOLD) and blood-volume weighted (VASO) fMRI using different readouts. BOLD benefits from the shorter T2* at 9.4 T, while VASO benefits from the longer T1. We show examples of both ex vivo and in vivo anatomical imaging. For many applications, pTx and optimized coils are essential to harness the full potential of 9.4 T. Our experience shows that, while considerable effort was required compared to our 7 T scanner, we could obtain high-quality anatomical and functional data, which illustrates the potential of MR acquisitions at even higher field strengths. The practical challenges of working with a relatively unique system are also discussed.


Subjects
Magnetic Resonance Imaging, Magnetic Resonance Imaging/methods
3.
MAGMA ; 36(2): 211-225, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37036574

ABSTRACT

OBJECTIVE: We outline our vision for a 14 Tesla MR system. This comprises a novel whole-body magnet design utilizing high temperature superconductor; a console and associated electronic equipment; an optimized radiofrequency coil setup for proton measurement in the brain, which also has a local shim capability; and a high-performance gradient set. RESEARCH FIELDS: The 14 Tesla system can be considered a 'mesocope': a device capable of measuring on biologically relevant scales. In neuroscience the increased spatial resolution will anatomically resolve all layers of the cortex, cerebellum, subcortical structures, and inner nuclei. Spectroscopic imaging will simultaneously measure excitatory and inhibitory activity, characterizing the excitation/inhibition balance of neural circuits. In medical research (including brain disorders) we will visualize fine-grained patterns of structural abnormalities and relate these changes to functional and molecular changes. The significantly increased spectral resolution will make it possible to detect (dynamic changes in) individual metabolites associated with pathological pathways including molecular interactions and dynamic disease processes. CONCLUSIONS: The 14 Tesla system will offer new perspectives in neuroscience and fundamental research. We anticipate that this initiative will usher in a new era of ultra-high-field MR.


Subjects
Brain, Magnetic Resonance Imaging, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Head, Diffusion Magnetic Resonance Imaging, Radio Waves
4.
Neuroimage ; 244: 118575, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34517127

ABSTRACT

Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
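The model-space comparison described above, scoring parametric variants of a generative model by how well they account for measured responses, can be illustrated generically. A minimal sketch using BIC as a stand-in for the Bayesian model comparison (the polynomial "model space", noise level, and sample size are illustrative assumptions, not the study's P-DCM pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
y_obs = 2.0 * x + 0.5 * x**2 + rng.normal(size=n)  # synthetic 'measured response'

def bic(y, y_hat, k):
    """Bayesian information criterion from residual variance (lower is better)."""
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# model space: response models of increasing complexity, scored on the same data
scores = {}
for degree in (1, 2, 3, 5):
    coefs = np.polyfit(x, y_obs, degree)
    scores[degree] = bic(y_obs, np.polyval(coefs, x), degree + 1)

best = min(scores, key=scores.get)
print(best)
```

The penalty term makes richer models win only when their extra parameters buy a genuine reduction in residual variance, which is the same trade-off a Bayesian comparison formalizes through the model evidence.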


Subjects
Auditory Cortex/physiology, Hemodynamics/physiology, Neurons/physiology, Bayes Theorem, Physiological Feedback, Psychological Feedback, Humans, Magnetic Resonance Imaging, Neurological Models, Sensation, Sound, Temporal Lobe/physiology
5.
Neuroimage ; 228: 117670, 2021 03.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thereby decreasing their perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
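The envelope-following analysis above rests on linear models that predict EEG from time-lagged speech-envelope regressors. A minimal single-channel sketch of that general idea (the lag range, regularization strength, and simulated response function are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def lagged_design(env, max_lag):
    """Time-lagged design matrix: column k is the envelope delayed by k samples."""
    n = len(env)
    X = np.zeros((n, max_lag + 1))
    for k in range(max_lag + 1):
        X[k:, k] = env[: n - k]
    return X

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
n, max_lag = 2000, 10
env = rng.normal(size=n + max_lag)            # stand-in speech envelope
true_trf = np.hanning(max_lag + 1)            # stand-in temporal response function
eeg = np.convolve(env, true_trf)[:n] + 0.5 * rng.normal(size=n)  # simulated EEG

X = lagged_design(env[:n], max_lag)
w = ridge_fit(X, eeg, lam=10.0)
r = np.corrcoef(X @ w, eeg)[0, 1]             # prediction accuracy of the model
print(round(r, 3))
```

Comparing the prediction accuracy of different regressor sets (e.g., an averaged-distractor versus individual-distractor envelope model) is what yields the segregation measure described in the abstract.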


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Computer-Assisted Signal Processing, Young Adult
6.
Neuroimage ; 238: 118145, 2021 09.
Article in English | MEDLINE | ID: mdl-33961999

ABSTRACT

Multi-Voxel Pattern Analysis (MVPA) is a well-established tool for disclosing weak, distributed effects in brain activity patterns. Generalization ability is assessed by testing the learning model on new, unseen data. However, when only limited data are available, the decoding success is estimated using cross-validation. There is general consensus on assessing the statistical significance of cross-validated accuracy with non-parametric permutation tests. In this work, we focus on the false-positive control of different permutation strategies and on the statistical power of different cross-validation schemes. With simulations, we show that estimating the entire cross-validation error on each permuted dataset is the only statistically valid permutation strategy. Furthermore, using both simulations and real data from the HCP WU-Minn 3T fMRI dataset, we show that, among the different cross-validation schemes, repeated split-half cross-validation is the most powerful, despite achieving slightly lower classification accuracy than other schemes. Our findings provide additional insights into the optimization of the experimental design for MVPA, highlighting the benefits of having many short runs.
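The permutation strategy identified here as valid, re-estimating the entire cross-validation error on each permuted dataset, can be sketched on toy data. A nearest-centroid classifier stands in for the decoder; the data dimensions, effect size, and repetition counts are illustrative assumptions:

```python
import numpy as np

def split_half_cv_accuracy(X, y, rng, n_repeats=20):
    """Repeated split-half cross-validation with a nearest-centroid classifier."""
    accs = []
    n = len(y)
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        half = n // 2
        for train, test in ((perm[:half], perm[half:]), (perm[half:], perm[:half])):
            # class centroids estimated from the training half only
            mus = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y)}
            pred = [min(mus, key=lambda c: np.linalg.norm(x - mus[c])) for x in X[test]]
            accs.append(np.mean(np.array(pred) == y[test]))
    return float(np.mean(accs))

def permutation_p_value(X, y, n_perm=100, seed=0):
    """Recompute the *entire* cross-validated accuracy on each permuted dataset."""
    rng = np.random.default_rng(seed)
    observed = split_half_cv_accuracy(X, y, rng)
    null = [split_half_cv_accuracy(X, rng.permutation(y), rng) for _ in range(n_perm)]
    p = (1 + sum(a >= observed for a in null)) / (1 + n_perm)
    return observed, p

# synthetic two-class data with a weak, distributed effect across 'voxels'
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 50))
y = np.repeat([0, 1], 20)
X[y == 1] += 0.4          # small mean shift in every feature
acc, p = permutation_p_value(X, y)
print(acc, p)
```

The key point the abstract makes is visible in the structure: labels are permuted once per null sample and the whole cross-validation loop is rerun, rather than permuting within folds.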


Subjects
Brain/diagnostic imaging, Functional Neuroimaging/methods, Computer-Assisted Image Processing/methods, Computer Simulation, Humans, Magnetic Resonance Imaging, Research Design
7.
Cereb Cortex ; 30(3): 1103-1116, 2020 03 14.
Article in English | MEDLINE | ID: mdl-31504283

ABSTRACT

Auditory spatial tasks induce functional activation in the occipital (visual) cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal (auditory) cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general (independent of sound location) was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing were different for sighted and blind participants in planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a 2-channel, opponent-coding model for the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
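The 2-channel opponent-coding model mentioned above represents azimuth with two broadly tuned hemifield channels and reads location out of their relative activity. A minimal sketch (the sigmoid tuning curves and slope value are illustrative assumptions, not the fitted model):

```python
import numpy as np

def hemifield_channels(azimuth_deg, slope=0.1):
    """Two opponent channels, broadly tuned to the left and right hemifields."""
    a = np.asarray(azimuth_deg, dtype=float)
    right = 1.0 / (1.0 + np.exp(-slope * a))  # sigmoid rising toward the right
    left = 1.0 - right                         # mirror-symmetric left channel
    return left, right

def decode_azimuth(left, right, slope=0.1):
    """Invert the opponent code: azimuth from the normalized channel difference."""
    p = right / (left + right)
    return np.log(p / (1 - p)) / slope

az = np.array([-90.0, -30.0, 0.0, 30.0, 90.0])
L, R = hemifield_channels(az)
recovered = decode_azimuth(L, R)
print(np.round(recovered, 6))
```

Because azimuth is carried by the difference between two broad channels rather than by a labeled line per position, degraded or reorganized channel responses reduce the azimuth information a decoder can recover, which is the logic behind the comparison in the abstract.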


Subjects
Auditory Cortex/physiopathology, Blindness/physiopathology, Neuronal Plasticity, Sound Localization/physiology, Acoustic Stimulation, Adult, Blindness/congenital, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Occipital Lobe/physiology, Visually Impaired Persons
8.
J Cogn Neurosci ; 32(11): 2145-2158, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32662723

ABSTRACT

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.


Subjects
Phonetics, Speech Perception, Auditory Perception/physiology, Humans, Learning, Lipreading, Speech Perception/physiology
9.
PLoS Comput Biol ; 15(3): e1006397, 2019 03.
Article in English | MEDLINE | ID: mdl-30849071

ABSTRACT

Computational neuroimaging methods aim to predict brain responses (measured, e.g., with functional magnetic resonance imaging [fMRI]) on the basis of stimulus features obtained through computational models. The accuracy of such predictions is used as an indicator of how well the model describes the computations underlying the brain function under consideration. However, prediction accuracy is bounded by the proportion of the variance of the brain response that is related to measurement noise rather than to the stimuli (or cognitive functions). This bound on the performance of a computational model has been referred to as the noise ceiling. In previous fMRI applications, two methods have been proposed to estimate the noise ceiling, based on either a split-half procedure or Monte Carlo simulations. These methods make different assumptions about the nature of the effects underlying the data and, importantly, their relation has not yet been clarified. Here, we derive an analytical form for the noise ceiling that requires neither computationally expensive simulations nor a splitting procedure that reduces the amount of data. The validity of this analytical definition is demonstrated in simulations: we show that the analytical solution results in the same estimate of the noise ceiling as the Monte Carlo method. Considering different simulated noise structures, we evaluate different estimators of the variance of the responses and their impact on the estimation of the noise ceiling. We furthermore evaluate the interplay between regularization (often used to estimate model fits when the number of computational features in the model is large) and model complexity on performance with respect to the noise ceiling. Our results indicate that, when considering the variance of the responses across runs, computing the noise ceiling analytically results in estimates similar to those of the split-half estimator and approaches the true noise ceiling under a variety of simulated noise scenarios. Finally, the methods are tested on real fMRI data acquired at 7 Tesla.
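The relation between a split-half estimate and an analytical ceiling can be illustrated with a toy simulation: repeat the same stimulus-driven responses across noisy runs, correlate the averages of two halves of the runs, and compare with the ceiling implied by the known signal and noise variances (all parameters here are illustrative, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_runs = 200, 10
signal = rng.normal(size=n_stim)                # true stimulus-driven response
sigma = 1.0                                     # measurement-noise SD per run
runs = signal[None, :] + sigma * rng.normal(size=(n_runs, n_stim))

# split-half estimate: correlate the run-averages of the two halves
half1 = runs[: n_runs // 2].mean(axis=0)
half2 = runs[n_runs // 2 :].mean(axis=0)
r_split = np.corrcoef(half1, half2)[0, 1]

# analytical value: signal variance over (signal variance + noise variance of a half-average)
var_s = signal.var()
var_n = sigma**2 / (n_runs // 2)
r_analytic = var_s / (var_s + var_n)
print(round(r_split, 3), round(r_analytic, 3))
```

In this toy case the two numbers agree up to sampling error, mirroring the abstract's point that the analytical form can replace the data-splitting procedure when the signal and noise variances are estimated from the runs.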


Subjects
Computer Simulation, Magnetic Resonance Imaging/methods, Brain/physiology, Humans, Monte Carlo Method, Reproducibility of Results
10.
Cereb Cortex ; 29(9): 3636-3650, 2019 08 14.
Article in English | MEDLINE | ID: mdl-30395192

ABSTRACT

Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in the auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferably encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: while decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans showed the highest sensitivity at ~3 Hz, a rate relevant for speech analysis. These findings suggest that the characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Animals, Brain Mapping, Female, Humans, Macaca mulatta, Magnetic Resonance Imaging, Male, Species Specificity, Speech Perception/physiology, Animal Vocalization
11.
Proc Natl Acad Sci U S A ; 114(18): 4799-4804, 2017 May 02.
Article in English | MEDLINE | ID: mdl-28420788

ABSTRACT

Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2-4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice).


Subjects
Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Magnetic Resonance Imaging, Pitch Perception/physiology, Speech Perception/physiology, Adult, Female, Humans, Male
12.
J Neurosci ; 38(21): 4934-4942, 2018 05 23.
Article in English | MEDLINE | ID: mdl-29712781

ABSTRACT

Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of the specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations and during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest level of the auditory cortical hierarchy: the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels.

SIGNIFICANCE STATEMENT: Given the abundance of sensory information around us at any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits.


Subjects
Auditory Cortex/physiology, Pitch Perception/physiology, Acoustic Stimulation, Adult, Psychological Anticipation, Auditory Perception/physiology, Brain Mapping, Auditory Evoked Potentials/physiology, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Psychophysics, Young Adult
13.
J Neurosci ; 38(21): 4977-4984, 2018 05 23.
Article in English | MEDLINE | ID: mdl-29712782

ABSTRACT

The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather an independent feature of the auditory scene.

SIGNIFICANCE STATEMENT: Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are farther apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
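The decoding step, a linear support vector machine on fMRI activity patterns, can be sketched with a tiny Pegasos-style linear SVM on synthetic "voxel patterns" (purely illustrative; the data dimensions, effect size, and hyperparameters are assumptions, not the authors' analysis):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style linear SVM: hinge loss, stochastic sub-gradient updates."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violated: move toward sample
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w

# synthetic voxel patterns for two spatial-separation conditions
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 80))
y = np.repeat([-1, 1], 30)
X[y == 1] += 0.3                                # distributed condition difference
train, test = np.arange(0, 60, 2), np.arange(1, 60, 2)
w = train_linear_svm(X[train], y[train])
acc = np.mean(np.sign(X[test] @ w) == y[test])
print(acc)
```

Above-chance test accuracy on held-out patterns is the evidence criterion: if the two conditions could not be linearly separated, accuracy would hover near 0.5.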


Subjects
Auditory Cortex/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Brain Mapping, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Support Vector Machine
14.
J Neurosci ; 38(36): 7822-7832, 2018 09 05.
Article in English | MEDLINE | ID: mdl-30185539

ABSTRACT

Using ultra-high-field fMRI, we explored the cortical depth-dependent stability of acoustic feature preference in the human auditory cortex. We collected responses from the human auditory cortex (subjects of either sex) to a large number of natural sounds at submillimeter spatial resolution and observed that these responses were well explained by a model that assumes neuronal population tuning to frequency-specific spectrotemporal modulations. We observed relatively stable (columnar) tuning to frequency and temporal modulations. However, spectral modulation tuning was variable throughout the cortical depth. This difference in columnar stability between feature maps could not be explained by a difference in map smoothness, as the preference along the cortical sheet varied in a similar manner for the different feature maps. Furthermore, tuning to all three features was more columnar in primary than in nonprimary auditory cortex. The observed overall lack of overlapping columnar regions across acoustic feature maps suggests, especially for the primary auditory cortex, a coding strategy in which, across cortical depths, tuning to some features is kept stable, whereas tuning to other features varies systematically.

SIGNIFICANCE STATEMENT: In the human auditory cortex, sound aspects are processed in large-scale maps. Invasive animal studies show that an additional processing organization may be implemented orthogonal to the cortical sheet (i.e., in the columnar direction), but it is unknown whether the observed organizational principles apply to the human auditory cortex. Combining ultra-high-field fMRI with natural sounds, we explored the columnar organization of various sound aspects. Our results suggest that the human auditory cortex contains a modular coding strategy where, for each module, several sound aspects act as an anchor along which computations are performed, while the processing of another sound aspect undergoes a transformation. This strategy may serve to optimally represent the content of our complex natural acoustic environment.


Subjects
Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Auditory Cortex/physiology, Brain Mapping/methods, Female, Functional Neuroimaging/methods, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Young Adult
15.
J Neurosci ; 38(40): 8574-8587, 2018 10 03.
Article in English | MEDLINE | ID: mdl-30126968

ABSTRACT

Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in the human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects the encoding of sound location (azimuth) in primary auditory cortical areas and the planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes in population activity in human primary auditory areas reflect dynamic, task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements.

SIGNIFICANCE STATEMENT: According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages, from sensory (acoustic) processing in the primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, the planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive-listening studies. Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in the primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to the primary auditory cortex.


Subjects
Auditory Cortex/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
16.
Neuroimage ; 197: 785-791, 2019 08 15.
Article in English | MEDLINE | ID: mdl-28687519

ABSTRACT

The cortex is a massively recurrent network, characterized by feedforward and feedback connections between brain areas as well as lateral connections within an area. Feedforward, horizontal, and feedback responses largely activate separate layers of a cortical unit, meaning they can be dissociated by lamina-resolved neurophysiological techniques. Such techniques are invasive and are therefore rarely used in humans. However, recent developments in high-spatial-resolution fMRI allow for non-invasive, in vivo measurements of brain responses specific to separate cortical layers. This provides an important opportunity to dissociate feedforward from feedback brain responses and to investigate communication between brain areas at a more fine-grained level than previously possible in humans. In this review, we highlight recent studies that successfully used laminar fMRI to isolate layer-specific feedback responses in human sensory cortex. In addition, we review several areas of cognitive neuroscience that stand to benefit from this new technological development, highlighting contemporary hypotheses that yield testable predictions for laminar fMRI. We hope to encourage researchers to embrace this opportunity in fMRI research, as we expect that many future advancements in our current understanding of human brain function will be gained from measuring lamina-specific brain responses.


Subjects
Brain Mapping/methods, Brain/physiology, Cognitive Neuroscience/methods, Magnetic Resonance Imaging/methods, Animals, Cognitive Neuroscience/trends, Humans
17.
Neuroimage; 186: 369-381, 2019 Feb 1.
Article in English | MEDLINE | ID: mdl-30391345

ABSTRACT

Functional magnetic resonance imaging (fMRI) has been used successfully for brain-computer interfacing (BCI) to classify (imagined) movements of different limbs. However, reliable classification of more subtle signals originating from co-localized neural networks in the sensorimotor cortex, e.g. individual movements of fingers of the same hand, has proved more challenging, especially given the requirement for high single-trial reliability in the BCI context. In recent years, multi-voxel pattern analysis (MVPA) has gained momentum as a suitable method to disclose such weak, distributed activation patterns. Much attention has been devoted to developing and validating data-analysis strategies, but relatively little guidance is available on the choice of experimental design, even less so in the context of BCI-MVPA. When applicable, block designs are considered the safest choice, but the expectations, strategies and adaptation induced by blocking similar trials can make them a sub-optimal strategy. Fast event-related designs, in contrast, require a more complicated analysis and depend more strongly on linearity assumptions, but allow for randomly alternating trials. However, they lack the resting intervals that enable the BCI participant to process feedback. In this proof-of-concept paper, a hybrid blocked fast event-related design is introduced that is novel in the context of MVPA and BCI experiments, and that might overcome these issues by combining the rest periods of the block design with the shorter and randomly alternating trial characteristics of a rapid event-related design. A well-established button-press experiment was used to perform a within-subject comparison of the proposed design with a block and a slow event-related design. The proposed hybrid blocked fast event-related design showed a decoding accuracy close to that of the block design, which showed the highest accuracy. It allowed for across-design decoding, i.e. reliable prediction of examples obtained with another design. Finally, it also showed the most stable incremental decoding results, obtaining good performance with relatively few blocks. Our findings suggest that the blocked fast event-related design can be a viable alternative to block designs in the context of BCI-MVPA when expectations, strategies and adaptation make blocking of trials of the same type sub-optimal. Additionally, the blocked fast event-related design is suitable for applications in which fast incremental decoding is desired, and it enables the use of a slow or block design during the test phase.
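The weak, distributed patterns that MVPA targets can be illustrated with a toy decoder. The sketch below (plain NumPy, simulated voxel patterns, a nearest-centroid classifier standing in for the linear classifiers typically used in such studies; all numbers are hypothetical, not taken from the paper) decodes two conditions whose patterns differ only subtly across many voxels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two finger-press conditions evoke weak, distributed
# activation patterns across 50 voxels (no single voxel is informative).
n_voxels, n_trials = 50, 80
pattern_a = rng.normal(0, 1, n_voxels)                 # condition-specific pattern
pattern_b = pattern_a + rng.normal(0, 0.4, n_voxels)   # subtly different pattern

def simulate(pattern, n):
    # Each trial = condition pattern + trial-by-trial noise.
    return pattern + rng.normal(0, 1.0, (n, n_voxels))

X = np.vstack([simulate(pattern_a, n_trials), simulate(pattern_b, n_trials)])
y = np.array([0] * n_trials + [1] * n_trials)

# Split into independent training and test trials.
order = rng.permutation(len(y))
train, test = order[:120], order[120:]

# Nearest-centroid classifier: a minimal stand-in for the MVPA classifiers
# (e.g. linear SVMs) commonly used in BCI decoding studies.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # chance level is 0.5
```

A real BCI-MVPA analysis would of course use measured trial estimates (e.g. GLM betas per trial) and cross-validation across runs, but the principle — classifying patterns across voxels rather than testing single voxels — is the same.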


Subjects
Brain Mapping/methods, Brain-Computer Interfaces, Magnetic Resonance Imaging/methods, Research Design, Sensorimotor Cortex/physiology, Adult, Bayes Theorem, Female, Humans, Male, Psychomotor Performance, Young Adult
18.
Hum Brain Mapp; 40(12): 3488-3507, 2019 Aug 15.
Article in English | MEDLINE | ID: mdl-31037793

ABSTRACT

There is a wealth of tools for fitting linear models at each location in the brain in neuroimaging analysis, and a wealth of genetic tools for estimating heritability for a small number of phenotypes, but there remains a need for computationally efficient neuroimaging genetic tools that can conduct analyses at the brain-wide scale. Here we present a simple method for heritability estimation on twins that replaces a variance component model (which requires iterative optimisation) with a noniterative linear regression model, by transforming the data to squared twin-pair differences. We demonstrate that the method has bias, mean squared error, false positive risk, and power comparable to best-practice maximum-likelihood-based methods, while requiring a small fraction of the computation time. Combined with permutation, we call this approach "Accelerated Permutation Inference for the ACE Model" (APACE), where ACE refers to the additive genetic (A) effects and the common (C) and unique (E) environmental influences on the trait. We show how the use of spatial statistics like cluster size can dramatically improve power, and we illustrate the method on a heritability analysis of an fMRI working memory dataset.
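The core noniterative trick can be illustrated numerically. Under an ACE model, the expected squared twin-pair difference is 2σ²E for monozygotic (MZ) pairs and σ²A + 2σ²E for dizygotic (DZ) pairs, so the additive genetic variance falls out as a simple difference of group means. The sketch below (plain NumPy, simulated data with no C component, group means of squared differences standing in for the paper's full regression; all numbers hypothetical) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate one phenotype under a hypothetical ACE model with no C component:
# additive genetic variance 0.6, unique environmental variance 0.4.
sA2, sE2, n_pairs = 0.6, 0.4, 20000

# MZ twins share all additive genetic effects ...
a_mz = rng.normal(0, np.sqrt(sA2), n_pairs)
mz1 = a_mz + rng.normal(0, np.sqrt(sE2), n_pairs)
mz2 = a_mz + rng.normal(0, np.sqrt(sE2), n_pairs)

# ... while DZ twins share them with correlation 0.5.
a_sh = rng.normal(0, np.sqrt(sA2 / 2), n_pairs)
dz1 = a_sh + rng.normal(0, np.sqrt(sA2 / 2), n_pairs) + rng.normal(0, np.sqrt(sE2), n_pairs)
dz2 = a_sh + rng.normal(0, np.sqrt(sA2 / 2), n_pairs) + rng.normal(0, np.sqrt(sE2), n_pairs)

# Noniterative estimation from squared twin-pair differences:
#   E[(y1 - y2)^2] = 2*sE2        for MZ pairs
#   E[(y1 - y2)^2] = sA2 + 2*sE2  for DZ pairs
msd_mz = np.mean((mz1 - mz2) ** 2)
msd_dz = np.mean((dz1 - dz2) ** 2)
sA2_hat = msd_dz - msd_mz
sE2_hat = msd_mz / 2
h2_hat = sA2_hat / (sA2_hat + sE2_hat)
print(f"estimated heritability: {h2_hat:.2f}")  # should be close to the true 0.6
```

No iterative optimisation is involved, which is why the approach scales to hundreds of thousands of brain locations; estimating the C component as well requires the total phenotypic variance in addition to the two difference terms.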


Subjects
Brain/diagnostic imaging, Brain/physiology, Memory, Short-Term/physiology, Models, Neurological, Twins, Dizygotic/genetics, Twins, Monozygotic/genetics, Adult, Female, Gene-Environment Interaction, Humans, Linear Models, Magnetic Resonance Imaging/methods, Male, Young Adult
19.
Neuroimage; 173: 472-483, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29518569

ABSTRACT

In everyday life, we often encounter auditory scenes comprising multiple simultaneous sounds, yet we succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex (as defined by spatial activation patterns) at locations that depended on the attended category (i.e., voices or instruments). In contrast, in frontal cortex the site of enhancement was independent of the attended category, and the same regions could flexibly represent any attended sound regardless of its category. These results help elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Young Adult
20.
Neuroimage; 181: 617-626, 2018 Nov 1.
Article in English | MEDLINE | ID: mdl-30048749

ABSTRACT

In everyday life, we process mixtures of a variety of sounds. This processing involves the segregation of the auditory input and the attentive selection of the stream that is most relevant to current goals. For natural scenes with multiple irrelevant sounds, however, it is unclear how the human auditory system represents all the unattended sounds. In particular, it remains unclear whether the sensory input to the human auditory cortex biases the cortical integration/segregation of unattended sounds in the same way as for attended sounds. In this study, we tested this by asking participants to selectively listen to one of two speakers or to music in an ongoing 1-min sound mixture while their cortical neural activity was measured with EEG. Using a stimulus reconstruction approach, we find better reconstruction of mixed unattended sounds than of individual unattended sounds at two early cortical stages (70 ms and 150 ms) of the auditory processing hierarchy. Crucially, at the earlier processing stage (70 ms), this cortical bias to represent unattended sounds as integrated rather than segregated increases with increasing similarity of the unattended sounds. Our results reveal an important role of acoustical properties in the cortical segregation of unattended auditory streams in natural listening situations. They further corroborate the notion that selective attention contributes functionally to cortical stream segregation. These findings highlight that a common, acoustics-based grouping principle governs the cortical representation of auditory streams not only inside but also outside the listener's focus of attention.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Electroencephalography/methods, Functional Neuroimaging/methods, Music, Speech Perception/physiology, Adolescent, Adult, Auditory Cortex/physiology, Female, Humans, Male, Young Adult