Results 1 - 20 of 90
1.
J Neurosci ; 44(17)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38508715

ABSTRACT

Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF) type profile for auditory evoked but feedback (FB) type for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, the auditory evoked response showed peaks at 37 and 90 ms, and the visual evoked response peaked at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by just an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Evoked Potentials, Visual , Magnetoencephalography , Photic Stimulation , Humans , Auditory Cortex/physiology , Magnetoencephalography/methods , Female , Male , Adult , Photic Stimulation/methods , Evoked Potentials, Visual/physiology , Acoustic Stimulation/methods , Models, Neurological , Young Adult , Evoked Potentials, Auditory/physiology , Neurons/physiology , Brain Mapping/methods
2.
J Neurosci ; 44(7)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38129133

ABSTRACT

Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.


Subject(s)
Auditory Cortex , Male , Humans , Female , Auditory Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology , Electroencephalography/methods , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation
3.
Cereb Cortex ; 33(24): 11517-11525, 2023 12 09.
Article in English | MEDLINE | ID: mdl-37851854

ABSTRACT

Speech and language processing involve complex interactions between cortical areas necessary for articulatory movements and auditory perception and a range of areas through which these are connected and interact. Despite their fundamental importance, the precise mechanisms underlying these processes are not fully elucidated. We measured BOLD signals from normal hearing participants using high-field 7 Tesla fMRI with 1-mm isotropic voxel resolution. The subjects performed 2 speech perception tasks (discrimination and classification) and a speech production task during the scan. By employing univariate and multivariate pattern analyses, we identified the neural signatures associated with speech production and perception. The left precentral, premotor, and inferior frontal cortex regions showed significant activations that correlated with phoneme category variability during perceptual discrimination tasks. In addition, the perceived sound categories could be decoded from signals in a region of interest defined based on activation related to the production task. The results support the hypothesis that articulatory motor networks in the left hemisphere, typically associated with speech production, may also play a critical role in the perceptual categorization of syllables. The study provides valuable insights into the intricate neural mechanisms that underlie speech processing.
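The category-decoding step described above can be sketched generically with a cross-validated classifier. Everything below (voxel counts, labels, signal strength, classifier choice) is an invented illustration, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical MVPA-style decoding: classify stimulus category from
# synthetic "ROI voxel pattern" data with 5-fold cross-validation.
rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)        # two invented phoneme categories
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :5] += 1.0                 # weak category signal in 5 voxels

acc = cross_val_score(LogisticRegression(max_iter=1000),
                      patterns, labels, cv=5).mean()
print(acc)  # above-chance decoding indicates category information in the pattern
```

Above-chance cross-validated accuracy is the usual evidence that the region carries category information.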


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Auditory Perception/physiology , Speech Perception/physiology
4.
Hum Brain Mapp ; 44(2): 362-372, 2023 02 01.
Article in English | MEDLINE | ID: mdl-35980015

ABSTRACT

Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1-mm isotropic resolution 3D echo-planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent-signal-changes were estimated at 11 equally spaced intracortical depths, within regions-of-interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory-specific feedforward versus cross-sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
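The quadratic-regression comparison of depth profiles can be illustrated with a toy sketch. The 11 equally spaced depths match the abstract, but the profile values below are synthetic placeholders, not measured data:

```python
import numpy as np

# Fit each depth profile with a quadratic; the second-order coefficient
# summarizes profile shape (e.g., mid-depth peak vs. monotonic gradient).
depths = np.linspace(0.0, 1.0, 11)          # 11 depths, pial (0) to white matter (1)

def quadratic_term(bold_pct):
    """Return the quadratic coefficient of a 2nd-order fit to a depth profile."""
    return np.polyfit(depths, bold_pct, deg=2)[0]

# Synthetic normalized profiles: mid-depth peak vs. superficial-weighted decline
auditory = 1.0 - 2.0 * (depths - 0.5) ** 2   # curved profile
visual = 1.0 - 0.8 * depths                  # near-linear profile

a_aud = quadratic_term(auditory)
a_vis = quadratic_term(visual)
print(a_aud, a_vis)  # clearly different quadratic terms for the two shapes
```

Comparing the fitted quadratic terms across conditions is one simple way to test whether two depth-profile shapes differ.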


Subject(s)
Auditory Cortex , Magnetic Resonance Imaging , Humans , Animals , Acoustic Stimulation , Magnetic Resonance Imaging/methods , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Brain/physiology , Brain Mapping , Primates
5.
Neuroimage ; 263: 119633, 2022 11.
Article in English | MEDLINE | ID: mdl-36115589

ABSTRACT

Accumulating multivariate pattern analysis (MVPA) results from fMRI studies suggest that information is represented in fingerprint patterns of activations and deactivations during perception, emotions, and cognition. We postulate that these fingerprint patterns might reflect the neuronal-population-level sparse code documented in two-photon calcium imaging studies in animal models, i.e., information represented in specific and reproducible ensembles of a few percent of active neurons amidst widespread inhibition in neural populations. We suggest that such representations constitute a fundamental organizational principle that interacts across multiple levels of the brain hierarchy, giving rise to perception, emotions, and cognition.


Subject(s)
Brain Mapping , Cognition , Animals , Humans , Brain Mapping/methods , Cognition/physiology , Brain/physiology , Emotions/physiology , Multivariate Analysis , Magnetic Resonance Imaging/methods
6.
Cereb Cortex ; 31(5): 2450-2465, 2021 03 31.
Article in English | MEDLINE | ID: mdl-33350445

ABSTRACT

Accumulating evidence shows that the auditory cortex (AC) of humans, and of other primates, is involved in cognitive processes more complex than feature segregation alone; these processes are shaped by experience-dependent plasticity and thus likely show substantial individual variability. However, thus far, individual variability of ACs has been considered a methodological impediment rather than a phenomenon of theoretical importance. Here, we examined the variability of ACs using intrinsic functional connectivity patterns in humans and macaques. Our results demonstrate that in humans, interindividual variability is greater near the nonprimary than primary ACs, indicating that variability dramatically increases across the processing hierarchy. ACs are also more variable than comparable visual areas and show higher variability in the left than in the right hemisphere, which may be related to the left lateralization of auditory-related functions such as language. Intriguingly, remarkably similar modality differences and lateralization of variability were also observed in macaques. These connectivity-based findings are consistent with a confirmatory task-based functional magnetic resonance imaging analysis. The quantification of variability in auditory function, and the similar findings in both humans and macaques, will have strong implications for understanding the evolution of advanced auditory functions in humans.


Subject(s)
Auditory Cortex/diagnostic imaging , Auditory Pathways/diagnostic imaging , Biological Variation, Individual , Adult , Animals , Auditory Cortex/physiology , Auditory Pathways/physiology , Female , Functional Neuroimaging , Humans , Macaca mulatta , Magnetic Resonance Imaging , Male , Young Adult
7.
Cereb Cortex ; 31(6): 2898-2912, 2021 05 10.
Article in English | MEDLINE | ID: mdl-33497437

ABSTRACT

The cerebellum, a structure historically associated with motor control, has more recently been implicated in several higher-order auditory-cognitive functions. However, the exact functional pathways that mediate cerebellar influences on auditory cortex (AC) remain unclear. Here, we sought to identify auditory cortico-cerebellar pathways based on intrinsic functional connectivity magnetic resonance imaging. In contrast to previous connectivity studies that principally consider the AC as a single functionally homogenous unit, we mapped the cerebellar connectivity across different parts of the AC. Our results reveal that auditory subareas demonstrating different levels of interindividual functional variability are functionally coupled with distinct cerebellar regions. Moreover, auditory and sensorimotor areas show divergent cortico-cerebellar connectivity patterns, although sensorimotor areas proximal to the AC are often functionally grouped with the AC in previous connectivity-based network analyses. Lastly, we found that the AC can be functionally segmented into highly similar subareas based on either cortico-cerebellar or cortico-cortical functional connectivity, suggesting the existence of multiple parallel auditory cortico-cerebellar circuits that involve different subareas of the AC. Overall, the present study revealed multiple auditory cortico-cerebellar pathways and provided a fine-grained map of AC subareas, indicative of the critical role of the cerebellum in auditory processing and multisensory integration.


Subject(s)
Auditory Cortex/diagnostic imaging , Auditory Pathways/diagnostic imaging , Brain Mapping/methods , Cerebellum/diagnostic imaging , Magnetic Resonance Imaging/methods , Nerve Net/diagnostic imaging , Adult , Auditory Cortex/physiology , Auditory Pathways/physiology , Cerebellum/physiology , Databases, Factual , Female , Humans , Male , Nerve Net/physiology , Young Adult
8.
Neuroimage ; 224: 117430, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33038537

ABSTRACT

Low spatial resolution is often cited as the most critical limitation of magneto- and electroencephalography (MEG and EEG), but a unifying framework for quantifying the spatial fidelity of M/EEG source estimates has yet to be established; previous studies have focused on linear estimation methods under ideal scenarios without noise. Here we present an approach that quantifies the spatial fidelity of M/EEG estimates from simulated patch activations over the entire neocortex superposed on measured resting-state data. This approach grants more generalizability in the evaluation process, allowing, e.g., comparison of linear and non-linear estimates in the whole brain for different signal-to-noise ratios (SNR), numbers of active sources, and activation waveforms. Using this framework, we evaluated the MNE, dSPM, sLORETA, eLORETA, and MxNE methods and found that the spatial fidelity varies significantly with SNR, following a largely sigmoidal curve whose shape depends on which aspect of spatial fidelity is being quantified and on the source estimation method. We believe that these methods and results will be useful when interpreting M/EEG source estimates as well as in methods development.
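The sigmoidal fidelity-vs-SNR relationship reported above can be illustrated by fitting a four-parameter logistic curve. All numbers below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(snr_db, lo, hi, mid, slope):
    """Four-parameter logistic: floor, ceiling, midpoint (dB), and steepness."""
    return lo + (hi - lo) / (1.0 + np.exp(-(snr_db - mid) / slope))

# Synthetic fidelity-vs-SNR points generated from known parameters
snr = np.linspace(-10.0, 20.0, 16)
fidelity = sigmoid(snr, 0.1, 0.9, 5.0, 2.0)

# Recover the curve parameters from the data points
params, _ = curve_fit(sigmoid, snr, fidelity, p0=[0.0, 1.0, 0.0, 1.0])
print(params)  # midpoint and steepness characterize the SNR dependence
```

In practice, the fitted midpoint (the SNR at which fidelity is halfway between floor and ceiling) gives a compact way to compare estimation methods.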


Subject(s)
Electroencephalography/methods , Magnetoencephalography/methods , Neocortex/physiology , Signal Processing, Computer-Assisted , Spatial Analysis , Adult , Brain/diagnostic imaging , Brain/physiology , Female , Humans , Linear Models , Magnetic Resonance Imaging , Male , Neocortex/diagnostic imaging , Nonlinear Dynamics , Rest , Signal-To-Noise Ratio , Young Adult
9.
Neuroimage ; 224: 117445, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33059053

ABSTRACT

Using movies and narratives as naturalistic stimuli in human neuroimaging studies has yielded significant advances in understanding of cognitive and emotional functions. The relevant literature was reviewed, with emphasis on how the use of naturalistic stimuli has helped advance scientific understanding of human memory, attention, language, emotions, and social cognition in ways that would have been difficult otherwise. These advances include discovering a cortical hierarchy of temporal receptive windows, which supports processing of dynamic information that accumulates over several time scales, such as immediate reactions vs. slowly emerging patterns in social interactions. Naturalistic stimuli have also helped elucidate how the hippocampus supports segmentation and memorization of events in day-to-day life and have afforded insights into attentional brain mechanisms underlying our ability to adopt specific perspectives during natural viewing. Further, neuroimaging studies with naturalistic stimuli have revealed the role of the default-mode network in narrative-processing and in social cognition. Finally, by robustly eliciting genuine emotions, these stimuli have helped elucidate the brain basis of both basic and social emotions apparently manifested as highly overlapping yet distinguishable patterns of brain activity.


Subject(s)
Attention , Brain/diagnostic imaging , Emotions , Language , Memory , Motion Pictures , Narration , Social Cognition , Brain/physiology , Brain Mapping , Electroencephalography , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Neural Pathways
10.
Neuroimage ; 230: 117746, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33454414

ABSTRACT

Intracranial stereoelectroencephalography (sEEG) provides unsurpassed sensitivity and specificity for human neurophysiology. However, functional mapping has been limited because the implantations have sparse coverage and differ greatly across individuals. Here, we developed a distributed, anatomically realistic sEEG source-modeling approach for within- and between-subject analyses. In addition to intracranial event-related potentials (iERP), we estimated the sources of high broadband gamma activity (HBBG), a putative correlate of local neural firing. Our novel approach accounted for a significant portion of the variance of the sEEG measurements in leave-one-out cross-validation. After logarithmic transformations, the sensitivity and signal-to-noise ratio were linearly inversely related to the minimal distance between the brain location and electrode contacts (slope ≈ -3.6). The signal-to-noise ratio and sensitivity in the thalamus and brain stem were comparable to those at locations in the vicinity of the implanted electrode contacts. The HBBG source estimates were remarkably consistent with analyses of intracranial-contact data. In conclusion, distributed sEEG source modeling provides a powerful neuroimaging tool, which facilitates anatomically normalized functional mapping of the human brain using both iERP and HBBG data.
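The reported log-linear relation between SNR and contact distance implies a power law. This sketch recovers the slope from synthetic data; the scale constant (10.0) is an arbitrary placeholder:

```python
import numpy as np

# A slope of about -3.6 in log-log coordinates corresponds to
# SNR proportional to distance ** -3.6.
dist_mm = np.linspace(2.0, 50.0, 200)   # minimal distance to the nearest contact
snr = 10.0 * dist_mm ** -3.6            # power law implied by the reported slope

# Linear regression after logarithmic transformation recovers the exponent
slope, intercept = np.polyfit(np.log(dist_mm), np.log(snr), 1)
print(round(slope, 2))  # -3.6
```

The steep exponent illustrates why sensitivity falls off quickly away from the implanted contacts, making the source-modeling step valuable for anatomically normalized group analyses.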


Subject(s)
Drug Resistant Epilepsy/diagnostic imaging , Drug Resistant Epilepsy/physiopathology , Electrodes, Implanted/standards , Electroencephalography/methods , Electroencephalography/standards , Stereotaxic Techniques/standards , Acoustic Stimulation/methods , Acoustic Stimulation/standards , Adult , Female , Humans , Male , Middle Aged , Random Allocation
11.
Neuroimage ; 208: 116436, 2020 03.
Article in English | MEDLINE | ID: mdl-31809885

ABSTRACT

Auditory distance perception and its neuronal mechanisms are poorly understood, mainly because 1) it is difficult to separate distance processing from intensity processing, 2) multiple intensity-independent distance cues are often available, and 3) the cues are combined in a context-dependent way. A recent fMRI study identified human auditory cortical area representing intensity-independent distance for sources presented along the interaural axis (Kopco et al. PNAS, 109, 11019-11024). For these sources, two intensity-independent cues are available, interaural level difference (ILD) and direct-to-reverberant energy ratio (DRR). Thus, the observed activations may have been contributed by not only distance-related, but also direction-encoding neuron populations sensitive to ILD. Here, the paradigm from the previous study was used to examine DRR-based distance representation for sounds originating in front of the listener, where ILD is not available. In a virtual environment, we performed behavioral and fMRI experiments, combined with computational analyses to identify the neural representation of distance based on DRR. The stimuli varied in distance (15-100 cm) while their received intensity was varied randomly and independently of distance. Behavioral performance showed that intensity-independent distance discrimination is accurate for frontal stimuli, even though it is worse than for lateral stimuli. fMRI activations for sounds varying in frontal distance, as compared to varying only in intensity, increased bilaterally in the posterior banks of Heschl's gyri, the planum temporale, and posterior superior temporal gyrus regions. Taken together, these results suggest that posterior human auditory cortex areas contain neuron populations that are sensitive to distance independent of intensity and of binaural cues relevant for directional hearing.
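The DRR cue discussed above has a simple operational definition: the energy of the direct sound relative to the reverberant remainder of a room impulse response. The sketch below uses synthetic impulse responses, and the 2.5 ms direct-sound window is a conventional choice, not a value from the study:

```python
import numpy as np

def drr_db(ir, fs, direct_ms=2.5):
    """Direct-to-reverberant ratio in dB: energy within a short window
    around the impulse-response peak vs. all remaining energy."""
    peak = int(np.argmax(np.abs(ir)))
    k = int(direct_ms * 1e-3 * fs)
    direct = np.sum(ir[max(0, peak - k):peak + k + 1] ** 2)
    reverb = np.sum(ir ** 2) - direct
    return 10.0 * np.log10(direct / reverb)

fs = 16000
rng = np.random.default_rng(2)
# Shared reverberant tail (decaying noise) starting 5 ms after the direct sound
tail = 0.03 * rng.standard_normal(fs) * np.exp(-np.arange(fs) / 4000.0)
near, far = np.zeros(fs), np.zeros(fs)
near[0], far[0] = 1.0, 0.3          # direct-path amplitude falls with distance
near[80:] += tail[:fs - 80]
far[80:] += tail[:fs - 80]
print(drr_db(near, fs), drr_db(far, fs))  # nearer source -> higher DRR
```

Because the direct-path level drops with distance while the diffuse reverberant level stays roughly constant, DRR decreases monotonically with source distance, which is what makes it an intensity-independent distance cue.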


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Cues , Distance Perception/physiology , Adult , Auditory Cortex/diagnostic imaging , Discrimination, Psychological/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Models, Theoretical , Neural Pathways/physiology , Psychophysics , Young Adult
12.
Brain Topogr ; 33(4): 477-488, 2020 07.
Article in English | MEDLINE | ID: mdl-32441009

ABSTRACT

Auditory attention allows us to focus on relevant target sounds in the acoustic environment while maintaining the capability to orient to unpredictable (novel) sound changes. An open question is whether orienting to expected vs. unexpected auditory events are governed by anatomically distinct attention pathways, respectively, or by differing communication patterns within a common system. To address this question, we applied a recently developed PeSCAR analysis method to evaluate spectrotemporal functional connectivity patterns across subregions of broader cortical regions of interest (ROIs) to analyze magnetoencephalography data obtained during a cued auditory attention task. Subjects were instructed to detect a predictable harmonic target sound embedded among standard tones in one ear and to ignore the standard tones and occasional unpredictable novel sounds presented in the opposite ear. Phase coherence of estimated source activity was calculated between subregions of superior temporal, frontal, inferior parietal, and superior parietal cortex ROIs. Functional connectivity was stronger in response to target than novel stimuli between left superior temporal and left parietal ROIs and between left frontal and right parietal ROIs, with the largest effects observed in the beta band (15-35 Hz). In contrast, functional connectivity was stronger in response to novel than target stimuli in inter-hemispheric connections between left and right frontal ROIs, observed in early time windows in the alpha band (8-12 Hz). Our findings suggest that auditory processing of expected target vs. unexpected novel sounds involves different spatially, temporally, and spectrally distributed oscillatory connectivity patterns across temporal, parietal, and frontal areas.
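The inter-regional phase coherence analyzed here can be illustrated with a generic phase-locking value (PLV) computation. This is not the PeSCAR method itself, and the signals below are synthetic:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: magnitude of the mean phase-difference vector
    between two band-limited signals, 1 = perfect locking, ~0 = none."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 500
t = np.arange(0, 2, 1 / fs)
beta = np.sin(2 * np.pi * 20 * t)             # 20 Hz "beta band" oscillation
coupled = np.sin(2 * np.pi * 20 * t + 0.3)    # constant phase lag -> high PLV
rng = np.random.default_rng(1)
uncoupled = rng.standard_normal(t.size)       # noise -> low PLV
print(plv(beta, coupled), plv(beta, uncoupled))
```

In practice, PLV-like measures are computed per frequency band (e.g., alpha 8-12 Hz, beta 15-35 Hz) and time window between source estimates in two regions, which is what yields the spectrotemporal connectivity patterns contrasted in the study.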


Subject(s)
Attention , Auditory Cortex , Auditory Perception , Magnetoencephalography , Acoustic Stimulation , Brain Mapping , Female , Humans , Parietal Lobe
13.
Proc Natl Acad Sci U S A ; 114(48): E10465-E10474, 2017 11 28.
Article in English | MEDLINE | ID: mdl-29138310

ABSTRACT

Subcortical structures play a critical role in brain function. However, options for assessing electrophysiological activity in these structures are limited. Electromagnetic fields generated by neuronal activity in subcortical structures can be recorded noninvasively, using magnetoencephalography (MEG) and electroencephalography (EEG). However, these subcortical signals are much weaker than those generated by cortical activity. In addition, we show here that it is difficult to resolve subcortical sources because distributed cortical activity can explain the MEG and EEG patterns generated by deep sources. We then demonstrate that if the cortical activity is spatially sparse, both cortical and subcortical sources can be resolved with M/EEG. Building on this insight, we develop a hierarchical sparse inverse solution for M/EEG. We assess the performance of this algorithm on realistic simulations and auditory evoked response data, and show that thalamic and brainstem sources can be correctly estimated in the presence of cortical activity. Our work provides alternative perspectives and tools for characterizing electrophysiological activity in subcortical structures in the human brain.


Subject(s)
Brain Mapping/methods , Brain/physiology , Evoked Potentials, Auditory/physiology , Models, Neurological , Adult , Algorithms , Brain/diagnostic imaging , Electroencephalography , Feasibility Studies , Healthy Volunteers , Humans , Magnetic Resonance Imaging , Magnetoencephalography
14.
Exp Brain Res ; 237(9): 2137-2143, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31201472

ABSTRACT

Global auditory-spatial orienting cues help the detection of weak visual stimuli, but it is not clear whether crossmodal attention cues also enhance the resolution of visuospatial discrimination. Here, we hypothesized that if anywhere, crossmodal modulations of visual localization should emerge in the periphery where the receptive fields are large. Subjects were presented with trials where a Visual Target, defined by a cluster of low-luminance dots, was shown for 220 ms at 25°-35° eccentricity in either the left or right hemifield. The Visual Target was either Uncued or it was presented 250 ms after a crossmodal Auditory Cue that was simulated either from the same or the opposite hemifield than the Visual Target location. After a whole-screen visual mask displayed for 800 ms, a pair of vertical Reference Bars was presented ipsilateral to the Visual Target. In a two-alternative forced choice task, subjects were asked to determine which of these two bars was closer to the center of the Visual Target. When the Auditory Cue and Visual Target were hemispatially incongruent, the speed and accuracy of visual localization performance was significantly impaired. However, hemispatially congruent Auditory Cues did not improve the localization of Visual Targets when compared to the Uncued condition. Further analyses suggested that the crossmodal Auditory Cues decreased the sensitivity (d') of the Visual Target localization without affecting post-perceptual decision biases. Our results suggest that in the visual periphery, the detrimental effect of hemispatially incongruent Auditory Cues is far greater than the benefit produced by hemispatially congruent cues. Our working hypothesis for future studies is that auditory-spatial attention cues suppress irrelevant visual locations in a global fashion, without modulating the local visual precision at relevant sites.
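The separation of sensitivity (d') from post-perceptual decision bias mentioned above follows standard signal detection theory. A minimal sketch with invented response counts:

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, fas, crs):
    """Return (d', criterion c) from response counts; a 0.5 count is added
    to each cell (log-linear correction) to avoid infinite z-scores."""
    hr = (hits + 0.5) / (hits + misses + 1.0)     # hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)         # false-alarm rate
    z = NormalDist().inv_cdf
    return z(hr) - z(far), -0.5 * (z(hr) + z(far))

# Invented counts for one condition of a two-alternative task
d, c = dprime_and_bias(hits=42, misses=8, fas=12, crs=38)
print(round(d, 2), round(c, 2))
```

A cue that lowers d' without shifting c, as reported for incongruent auditory cues here, is interpreted as a genuine perceptual effect rather than a change in response strategy.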


Subject(s)
Attention/physiology , Auditory Perception/physiology , Space Perception/physiology , Visual Fields/physiology , Visual Perception/physiology , Adult , Cues , Feedback, Sensory/physiology , Female , Humans , Male , Young Adult
15.
Neuroimage ; 161: 1-8, 2017 11 01.
Article in English | MEDLINE | ID: mdl-28818692

ABSTRACT

Auditory working memory (WM) processing in everyday acoustic environments depends on our ability to maintain relevant information online in our minds, and to suppress interference caused by competing incoming stimuli. A challenge in communication settings is that the relevant content and irrelevant inputs may emanate from a common source, such as a talkative conversationalist. An open question is how the WM system deals with such interference. Will the distracters become inadvertently filtered before processing for meaning because the primary WM operations deplete all available processing resources? Or are they suppressed post perceptually, through an active control process? We tested these alternative hypotheses by measuring magnetoencephalography (MEG), EEG, and functional MRI (fMRI) during a phonetic auditory continuous performance task. Contextual WM maintenance load was manipulated by adjusting the number of "filler" letter sounds in-between cue and target letter sounds. Trial-to-trial variability of pre- and post-stimulus activations in fMRI-informed cortical MEG/EEG estimates was analyzed within and across 14 subjects using generalized linear mixed effect (GLME) models. High contextual WM maintenance load suppressed left auditory cortex (AC) activations around 250-300 ms after the onset of irrelevant phonetic sounds. This effect coincided with increased 10-14 Hz alpha-range oscillatory functional connectivity between the left dorsolateral prefrontal cortex (DLPFC) and left AC. Suppression of AC responses to irrelevant sounds during active maintenance of the task context also correlated with increased pre-stimulus 7-15 Hz alpha power. Our results suggest that under high auditory WM load, irrelevant sounds are suppressed through a "late" active suppression mechanism, which prevents short-term consolidation of irrelevant information without affecting the initial screening of potentially meaningful stimuli. The results also suggest that AC alpha oscillations play an inhibitory role during auditory WM processing.


Subject(s)
Alpha Rhythm/physiology , Attention/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Connectome/methods , Magnetoencephalography/methods , Memory, Short-Term/physiology , Prefrontal Cortex/physiology , Adult , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Prefrontal Cortex/diagnostic imaging , Young Adult
16.
Neuroimage ; 124(Pt A): 858-868, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26419388

ABSTRACT

Spatial and non-spatial information of sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform, middle temporal or MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical what and where pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Animals , Cats , Cortical Synchronization , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Photic Stimulation , Visual Cortex/physiology , Vocalization, Animal , Young Adult
17.
Neuroimage ; 143: 116-127, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27608603

ABSTRACT

Despite recent advances in auditory neuroscience, the exact functional organization of human auditory cortex (AC) has been difficult to investigate. Here, using reversals of tonotopic gradients as the test case, we examined whether human ACs can be more precisely mapped by avoiding signals caused by large draining vessels near the pial surface, which bias blood-oxygen level dependent (BOLD) signals away from the actual sites of neuronal activity. Using ultra-high field (7T) fMRI and cortical depth analysis techniques previously applied in visual cortices, we sampled 1mm isotropic voxels from different depths of AC during narrow-band sound stimulation with biologically relevant temporal patterns. At the group level, analyses that considered voxels from all cortical depths, but excluded those intersecting the pial surface, showed (a) the greatest statistical sensitivity in contrasts between activations to high vs. low frequency sounds and (b) the highest inter-subject consistency of phase-encoded continuous tonotopy mapping. Analyses based solely on voxels intersecting the pial surface produced the least consistent group results, even when compared to analyses based solely on voxels intersecting the white-matter surface where both signal strength and within-subject statistical power are weakest. However, no evidence was found for reduced within-subject reliability in analyses considering the pial voxels only. Our group results could, thus, reflect improved inter-subject correspondence of high and low frequency gradients after the signals from voxels near the pial surface are excluded. Using tonotopy analyses as the test case, our results demonstrate that when the major physiological and anatomical biases imparted by the vasculature are controlled, functional mapping of human ACs becomes more consistent from subject to subject than previously thought.


Subject(s)
Auditory Cortex/physiology , Brain Mapping/methods , Cerebral Cortex/diagnostic imaging , Cerebral Veins/diagnostic imaging , Pia Mater/diagnostic imaging , Speech Perception/physiology , Adult , Auditory Cortex/diagnostic imaging , Brain Mapping/standards , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Pia Mater/blood supply , Young Adult
18.
Neuroimage ; 114: 49-56, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-25842290

ABSTRACT

Naturalistic stimuli such as movies are increasingly used to engage cognitive and emotional processes during fMRI of brain hemodynamic activity. However, movies have been little utilized during magnetoencephalography (MEG) and EEG that directly measure population-level neuronal activity at a millisecond resolution. Here, subjects watched a 17-min segment from the movie Crash (Lionsgate Films, 2004) twice during simultaneous MEG/EEG recordings. Physiological noise components, including ocular and cardiac artifacts, were removed using the DRIFTER algorithm. Dynamic estimates of cortical activity were calculated using MRI-informed minimum-norm estimation. To improve the signal-to-noise ratio (SNR), principal component analyses (PCA) were employed to extract the prevailing temporal characteristics within each anatomical parcel of the Freesurfer Desikan-Killiany cortical atlas. A variety of alternative inter-subject correlation (ISC) approaches were then utilized to investigate the reliability of inter-subject synchronization during natural viewing. In the first analysis, the ISCs of the time series of each anatomical region over the full time period across all subject pairs were calculated and averaged. In the second analysis, dynamic ISC (dISC) analysis, the correlation was calculated over a sliding window of 200 ms with 3.3 ms steps. Finally, in a between-run ISC analysis, the between-run correlation was calculated over the dynamic ISCs of the two different runs after the Fisher z-transformation. Overall, the most reliable activations occurred in occipital/inferior temporal visual and superior temporal auditory cortices as well as in the posterior cingulate, precuneus, pre- and post-central gyri, and right inferior and middle frontal gyri. Significant between-run ISCs were observed in superior temporal auditory cortices and inferior temporal visual cortices. 
Taken together, our results show that movies can be utilized as naturalistic stimuli in MEG/EEG much as they are in fMRI studies.
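The sliding-window dynamic ISC and between-run comparison described in this abstract reduce to a short computation: correlate two subjects' parcel time series within each window, then correlate the Fisher z-transformed dynamic ISC traces of the two runs. A minimal sketch, assuming window and step are given in samples (the study used 200 ms windows with 3.3 ms steps) and hypothetical function names:

```python
import numpy as np

def dynamic_isc(ts_a, ts_b, win, step):
    """Sliding-window inter-subject correlation between two parcel time series."""
    starts = range(0, len(ts_a) - win + 1, step)
    return np.array([np.corrcoef(ts_a[s:s + win], ts_b[s:s + win])[0, 1]
                     for s in starts])

def between_run_isc(disc_run1, disc_run2):
    """Correlate Fisher z-transformed dynamic ISC traces from two runs."""
    z1, z2 = np.arctanh(disc_run1), np.arctanh(disc_run2)
    return np.corrcoef(z1, z2)[0, 1]
```

In practice this would be applied per subject pair and per anatomical parcel, with significance assessed against a null distribution (e.g., circularly shifted time series).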


Subject(s)
Cerebral Cortex/physiology , Electroencephalography/methods , Magnetoencephalography/methods , Visual Perception/physiology , Adolescent , Adult , Artifacts , Female , Humans , Male , Motion Pictures , Photic Stimulation , Signal Processing, Computer-Assisted , Young Adult
19.
Proc Natl Acad Sci U S A ; 109(27): 11019-24, 2012 Jul 03.
Article in English | MEDLINE | ID: mdl-22699495

ABSTRACT

Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
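The two intensity-independent distance cues compared in this abstract, direct-to-reverberant ratio (D/R) and interaural level difference (ILD), can both be estimated from simple energy ratios. A minimal sketch, not the authors' acoustic analysis; the 2.5 ms direct-path window and the function names are assumptions:

```python
import numpy as np

def direct_to_reverberant_db(ir, fs, direct_ms=2.5):
    """D/R ratio in dB: energy of the direct path vs. the reverberant tail.

    ir: room impulse response; fs: sampling rate in Hz. The direct path is
    assumed to occupy the first direct_ms milliseconds (an assumption here).
    """
    split = int(fs * direct_ms / 1000)
    direct = np.sum(ir[:split] ** 2)
    reverb = np.sum(ir[split:] ** 2)
    return 10 * np.log10(direct / reverb)

def ild_db(left, right):
    """Interaural level difference in dB between the two ear signals."""
    return 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
```

For a source moving away along the interaural axis in a reverberant room, D/R falls steadily while ILD shrinks toward zero, which is why the abstract treats D/R as the more robust of the two cues.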


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Models, Neurological , Sound Localization/physiology , Acoustic Stimulation/methods , Adaptation, Physiological/physiology , Adult , Auditory Cortex/cytology , Auditory Pathways/cytology , Brain Mapping , Cues , Female , Humans , Magnetic Resonance Imaging , Male , Neurons/physiology , Psychoacoustics , Space Perception/physiology , Young Adult
20.
Neuroimage ; 86: 461-9, 2014 Feb 01.
Article in English | MEDLINE | ID: mdl-24185023

ABSTRACT

Based on the well-known left-lateralized neglect syndrome, one might hypothesize that the dominant right parietal cortex has a bilateral representation of space, whereas the left parietal cortex represents only the contralateral right hemispace. Whether this principle applies to human auditory attention is not yet fully clear. Here, we explicitly tested the differences in cross-hemispheric functional coupling between the intraparietal sulcus (IPS) and auditory cortex (AC) using combined magnetoencephalography (MEG), EEG, and functional MRI (fMRI). Inter-regional pairwise phase consistency (PPC) was analyzed from data obtained during a dichotic auditory selective attention task, in which subjects were cued in 10-s trials to attend to sounds presented to one ear and to ignore sounds presented in the opposite ear. Using MEG/EEG/fMRI source modeling, parietotemporal PPC patterns were (a) mapped between all AC locations vs. IPS seeds and (b) analyzed between four anatomically defined AC regions of interest (ROIs) vs. IPS seeds. Consistent with our hypothesis, stronger cross-hemispheric PPC was observed between the right IPS and left AC for attended right-ear sounds, as compared to PPC between the left IPS and right AC for attended left-ear sounds. In the mapping analyses, these differences emerged at 7-13 Hz, i.e., at the theta to alpha frequency bands, and peaked in Heschl's gyrus and lateral posterior non-primary ACs. The ROI analysis revealed similarly lateralized differences also in the beta and lower theta bands. Taken together, our results support the view that the right parietal cortex dominates auditory spatial attention.
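Pairwise phase consistency, the coupling measure used in this abstract, is the average cosine of the phase difference over all pairs of trials, which makes it unbiased with respect to trial count (unlike phase-locking value). A minimal sketch using the standard closed form; the function name is mine, and this is not the paper's implementation:

```python
import numpy as np

def pairwise_phase_consistency(phase_a, phase_b):
    """PPC between two signals' phases across trials.

    phase_a, phase_b: per-trial instantaneous phases (radians) at one
    frequency for the two regions. Uses the identity
    |sum exp(i*dphi)|^2 = N + 2 * sum_{j<k} cos(dphi_j - dphi_k),
    so PPC is the mean cosine over all trial pairs.
    """
    dphi = np.asarray(phase_a) - np.asarray(phase_b)
    n = dphi.size
    s = np.exp(1j * dphi).sum()
    return (np.abs(s) ** 2 - n) / (n * (n - 1))
```

A PPC of 1 means the phase lag between the two regions is identical on every trial; values near 0 indicate no consistent phase relationship.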


Subject(s)
Attention/physiology , Auditory Perception/physiology , Biological Clocks/physiology , Cortical Synchronization/physiology , Functional Laterality/physiology , Parietal Lobe/physiology , Temporal Lobe/physiology , Acoustic Stimulation/methods , Brain Mapping , Cues , Humans , Male , Young Adult