Results 1 - 20 of 28
1.
Elife ; 13, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39038076

ABSTRACT

To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined these recordings with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.


Subject(s)
Music , Humans , Male , Female , Adult , Nerve Net/physiology , Speech/physiology , Auditory Perception/physiology , Epilepsy/physiopathology , Young Adult , Electroencephalography , Cerebral Cortex/physiology , Electrocorticography , Speech Perception/physiology , Middle Aged , Brain Mapping
2.
J Neurosci Methods ; 403: 110035, 2024 03.
Article in English | MEDLINE | ID: mdl-38128785

ABSTRACT

BACKGROUND: Long and thin shaft electrodes are implanted intracerebrally for stereoelectroencephalography (SEEG) in patients with pharmacoresistant focal epilepsies. Two adjacent contacts of one such electrode can deliver a train of single pulse electrical stimulations (SPES), and evoked potentials (EPs) are recorded on the other contacts. In this study we assess whether stimulating and recording on the same shaft, as opposed to different shafts, has an impact on common EP features. NEW METHOD: We leverage the large volume of SEEG data gathered in the F-TRACT database and analyze data from nearly one thousand SEEG implantations to verify whether stimulation and recording from the same shaft influence the EP pattern. RESULTS: We found that when the stimulated and recording contacts are located on the same shaft, the mean and median amplitudes of an EP are greater, and its mean and median latencies are smaller, than when the contacts are located on different shafts. This effect is small (Cohen's d ∼ 0.1) but robust (p < 10⁻³) across the SEEG database. COMPARISON WITH EXISTING METHOD(S): Our study is the first to address this question. Because we rely on commonly used EP features, our method is comparable with existing studies. CONCLUSIONS: The magnitude of the reported effect does not require standard analyses to correct for it, unless they aim at high precision. The source of the effect is not clear. Manufacturers of SEEG electrodes could examine it and potentially minimize it in their future products.
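The reported effect size (Cohen's d ∼ 0.1) follows the standard pooled-variance definition. A minimal Python sketch on hypothetical EP amplitudes (illustrative random values, not F-TRACT data):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical EP amplitudes (arbitrary units), NOT data from the study:
rng = np.random.default_rng(0)
same_shaft = rng.normal(1.05, 0.5, 500)
diff_shaft = rng.normal(1.00, 0.5, 500)
print(f"Cohen's d = {cohens_d(same_shaft, diff_shaft):.3f}")
```

A d of 0.1 means group means differ by a tenth of the pooled standard deviation, which is why large databases are needed to detect it reliably.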


Subject(s)
Epilepsies, Partial , Stereotaxic Techniques , Humans , Evoked Potentials/physiology , Electrodes , Electric Stimulation , Electroencephalography , Electrodes, Implanted
3.
Neuroimage ; 260: 119438, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35792291

ABSTRACT

Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity, but comes with constraints such as the sparse, individually tailored electrode sampling. Over the years, researchers in neuroscience have developed their practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, as well as addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., the Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.


Subject(s)
Electrocorticography , Electroencephalography , Brain/physiology , Brain Mapping/methods , Electrocorticography/methods , Electrodes , Electroencephalography/methods , Humans
4.
Cortex ; 150: 1-11, 2022 05.
Article in English | MEDLINE | ID: mdl-35305505

ABSTRACT

Statistical learning has been proposed as a mechanism to structure and segment the continuous flow of information in several sensory modalities. Previous studies proposed that the medial temporal lobe, and in particular the hippocampus, may be crucial to parse the stream in the visual modality. However, the involvement of the hippocampus in auditory statistical learning, and specifically in speech segmentation, is less clear. To explore the role of the hippocampus in speech segmentation based on statistical learning, we exposed seven pharmacoresistant temporal lobe epilepsy patients to a continuous stream of trisyllabic pseudowords and recorded intracranial stereotactic electroencephalography (sEEG). We used frequency-tagging analysis to quantify neuronal synchronization of the hippocampus and auditory regions to the temporal structure of words and syllables in the learning stream. We also analyzed the event-related potentials (ERPs) recorded during the test phase to evaluate the role of both regions in the recognition of newly segmented words. Results show that while auditory regions respond strongly to the syllable frequency, the hippocampus responds mostly to the word frequency. Moreover, ERPs collected in the hippocampus show clear sensitivity to the familiarity of the items. These findings provide direct evidence of the involvement of the hippocampus in the speech segmentation process and suggest a hierarchical organization of auditory information during speech processing.
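The frequency-tagging logic can be sketched numerically: a response that follows the word rate shows a spectral peak at that rate. The 4 Hz syllable rate and 4/3 Hz word rate below are assumed values typical of trisyllabic streams, and the "hippocampal" signal is synthetic:

```python
import numpy as np

fs = 250                              # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)          # 60 s of signal
syll_hz, word_hz = 4.0, 4.0 / 3.0     # assumed syllable and word presentation rates

# Toy "hippocampal" response: follows the word rate, plus noise
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * word_hz * t) + 0.3 * rng.standard_normal(t.size)

# Frequency tagging: amplitude spectrum, then read out the tagged frequencies
spec = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(f"amplitude at word rate:     {amp_at(word_hz):.3f}")
print(f"amplitude at syllable rate: {amp_at(syll_hz):.3f}")
```

A signal tracking words rather than syllables yields a much larger amplitude at the word rate, which is the readout the frequency-tagging analysis relies on.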


Subject(s)
Speech Perception , Speech , Hippocampus , Humans , Language , Learning/physiology , Speech/physiology , Speech Perception/physiology
5.
Neuroimage ; 257: 119056, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35283287

ABSTRACT

Good scientific practice (GSP) refers to both explicit and implicit rules, recommendations, and guidelines that help scientists to produce work that is of the highest quality at any given time, and to efficiently share that work with the community for further scrutiny or utilization. For experimental research using magneto- and electroencephalography (MEEG), GSP includes specific standards and guidelines for technical competence, which are periodically updated and adapted to new findings. However, GSP also needs to be regularly revisited in a broader light. At the LiveMEEG 2020 conference, a reflection on GSP was fostered that included explicitly documented guidelines and technical advances, but also emphasized intangible GSP: a general awareness of personal, organizational, and societal realities and how they can influence MEEG research. This article provides an extensive report on most of the LiveMEEG contributions and new literature, with the additional aim of synthesizing ongoing cultural changes in GSP. It first covers GSP with respect to cognitive biases and logical fallacies, pre-registration as a tool to avoid those and other early pitfalls, and a number of resources to enable collaborative and reproducible research as a general approach to minimize misconceptions. Second, it covers GSP with respect to data acquisition, analysis, reporting, and sharing, including new tools and frameworks to support collaborative work. Finally, GSP is considered in light of ethical implications of MEEG research and the resulting responsibility that scientists have to engage with societal challenges. Considering among other things the benefits of peer review and open access at all stages, the need to coordinate larger international projects, the complexity of MEEG subject matter, and today's prioritization of fairness, privacy, and the environment, we find that current GSP tends to favor collective and cooperative work, for both scientific and societal reasons.


Subject(s)
Electroencephalography , Humans
6.
J Neurosci ; 40(44): 8530-8542, 2020 10 28.
Article in English | MEDLINE | ID: mdl-33023923

ABSTRACT

Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT

Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials/physiology , Nonverbal Communication/physiology , Adult , Drug Resistant Epilepsy/surgery , Electrocorticography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Middle Aged , Neurons/physiology , Nonverbal Communication/psychology , Photic Stimulation , Young Adult
7.
PLoS One ; 15(6): e0234026, 2020.
Article in English | MEDLINE | ID: mdl-32525897

ABSTRACT

Social cognition is dependent on the ability to extract information from human stimuli. Of those, patterns of biological motion (BM), and in particular walking patterns of other humans, are prime examples. Although most often tested in isolation, BM outside the laboratory is often associated with multisensory cues (i.e. we often hear and see someone walking), and there is evidence that vision-based judgments of BM stimuli are systematically influenced by motor signals. Furthermore, cross-modal visuo-tactile mechanisms have been shown to influence perception of bodily stimuli. Based on these observations, we here investigated whether somatosensory inputs would affect visual BM perception. In two experiments, we asked healthy participants to perform a speed discrimination task on two point light walkers (PLW) presented one after the other. In the first experiment, we quantified somatosensory-visual interactions by presenting PLW together with tactile stimuli either on the participants' forearms or feet soles. In the second experiment, we assessed the specificity of these interactions by presenting tactile stimuli either synchronously or asynchronously with upright or inverted PLW. Our results confirm that somatosensory input in the form of tactile foot stimulation influences visual BM perception. When presented with a seen walker's footsteps, additional tactile cues enhanced sensitivity on a speed discrimination task, but only if the tactile stimuli were presented on the relevant body part (under the feet) and when the tactile stimuli were presented synchronously with the seen footsteps of the PLW, whether upright or inverted. Based on these findings we discuss potential mechanisms of somatosensory-visual interactions in BM perception.


Subject(s)
Motion Perception/physiology , Photic Stimulation/methods , Physical Stimulation/methods , Touch Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Judgment/physiology , Male , Young Adult
8.
Neuroimage ; 222: 116970, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32454204

ABSTRACT

Facing perceptual uncertainty, the brain combines information from different senses to make optimal perceptual decisions and to guide behavior. However, decision making has been investigated mostly in unimodal contexts. Thus, how the brain integrates multisensory information during decision making is still unclear. Two opposing, but not mutually exclusive, scenarios are plausible: either the brain thoroughly combines the signals from different modalities before starting to build a supramodal decision, or unimodal signals are integrated during decision formation. To answer this question, we devised a paradigm mimicking naturalistic situations where human participants were exposed to continuous cacophonous audiovisual inputs containing an unpredictable signal cue in one or two modalities and had to perform a signal detection task or a cue categorization task. First, model-based analyses of behavioral data indicated that multisensory integration takes place alongside perceptual decision making. Next, using supervised machine learning on concurrently recorded EEG, we identified neural signatures of two processing stages: sensory encoding and decision formation. Generalization analyses across experimental conditions and time revealed that multisensory cues were processed faster during both stages. We further established that acceleration of neural dynamics during sensory encoding and decision formation was directly linked to multisensory integration. Our results were consistent across both signal detection and categorization tasks. Taken together, the results revealed a continuous dynamic interplay between multisensory integration and decision making processes (mixed scenario), with integration of multimodal information taking place both during sensory encoding as well as decision formation.


Subject(s)
Cerebral Cortex/physiology , Concept Formation/physiology , Decision Making/physiology , Electroencephalography , Functional Neuroimaging , Models, Theoretical , Signal Detection, Psychological/physiology , Supervised Machine Learning , Adult , Auditory Perception/physiology , Electroencephalography/methods , Female , Functional Neuroimaging/methods , Humans , Male , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Young Adult
9.
J Cogn Neurosci ; 32(8): 1562-1576, 2020 08.
Article in English | MEDLINE | ID: mdl-32319865

ABSTRACT

Anticipation of an impending stimulus shapes the state of the sensory systems, optimizing neural and behavioral responses. Here, we studied the role of brain oscillations in mediating spatial and temporal anticipations. Because spatial attention and temporal expectation are often associated with visual and auditory processing, respectively, we directly contrasted the visual and auditory modalities and asked whether these anticipatory mechanisms are similar in both domains. We recorded the magnetoencephalogram in healthy human participants performing an auditory and visual target discrimination task, in which cross-modal cues provided both temporal and spatial information with regard to upcoming stimulus presentation. Motivated by prior findings, we were specifically interested in delta (1-3 Hz) and alpha (8-13 Hz) band oscillatory state in anticipation of target presentation and their impact on task performance. Our findings support the view that spatial attention has a stronger effect in the visual domain, whereas temporal expectation effects are more prominent in the auditory domain. For the spatial attention manipulation, we found a typical pattern of alpha lateralization in the visual system, which correlated with response speed. Providing a rhythmic temporal cue led to increased postcue synchronization of low-frequency rhythms, although this effect was more broadband in nature, suggesting a general phase reset rather than frequency-specific neural entrainment. In addition, we observed delta-band synchronization with a frontal topography, which correlated with performance, especially in the auditory task. Combined, these findings suggest that spatial and temporal anticipations operate via a top-down modulation of the power and phase of low-frequency oscillations, respectively.


Subject(s)
Alpha Rhythm , Motivation , Acoustic Stimulation , Attention , Auditory Perception , Humans , Photic Stimulation
10.
Brain Connect ; 7(10): 648-660, 2017 12.
Article in English | MEDLINE | ID: mdl-28978234

ABSTRACT

Brain stimulation is increasingly viewed as an effective approach to treat neuropsychiatric disease. The brain's organization in distributed networks suggests that the activity of a remote brain structure could be modulated by stimulating cortical areas that strongly connect to the target. Most connections between cerebral areas are asymmetric, and a better understanding of the relative direction of information flow along connections could improve the targeting of stimulation to influence deep brain structures. The hippocampus and amygdala, two deep-situated structures that are crucial to memory and emotions, respectively, have been implicated in multiple neurological and psychiatric disorders. We explored the directed connectivity between the hippocampus and amygdala and the cerebral cortex in patients implanted with intracranial electrodes using corticocortical evoked potentials (CCEPs) evoked by single-pulse electrical stimulation. The hippocampus and amygdala were connected with most of the cortical mantle, either directly or indirectly, with the inferior temporal cortex being most directly connected. Because CCEPs assess the directionality of connections, we could determine that incoming connections from cortex to hippocampus were more direct than outgoing connections from hippocampus to cortex. We found a similar, although smaller, tendency for connections between the amygdala and cortex. Our results support the role of the hippocampus and amygdala as integrators of widespread cortical influence. These results can inform the targeting of noninvasive neurostimulation to influence hippocampus and amygdala function.


Subject(s)
Amygdala/physiopathology , Brain Mapping , Epilepsy/pathology , Evoked Potentials/physiology , Hippocampus/physiopathology , Neural Pathways/physiopathology , Adolescent , Adult , Cerebral Cortex , Electric Stimulation , Electrodes, Implanted , Electroencephalography , Female , Functional Laterality , Humans , Male , Middle Aged , Nerve Net/physiopathology , Young Adult
11.
J Neurosci Methods ; 281: 40-48, 2017 Apr 01.
Article in English | MEDLINE | ID: mdl-28192130

ABSTRACT

BACKGROUND: Intracranial electrical recordings (iEEG) and brain stimulation (iEBS) are invaluable human neuroscience methodologies. However, the value of such data is often unrealized as many laboratories lack tools for localizing electrodes relative to anatomy. To remedy this, we have developed a MATLAB toolbox for intracranial electrode localization and visualization, iELVis. NEW METHOD: iELVis uses existing tools (BioImage Suite, FSL, and FreeSurfer) for preimplant magnetic resonance imaging (MRI) segmentation, neuroimaging coregistration, and manual identification of electrodes in postimplant neuroimaging. Subsequently, iELVis implements methods for correcting electrode locations for postimplant brain shift with millimeter-scale accuracy and provides interactive visualization on 3D surfaces or in 2D slices with optional functional neuroimaging overlays. iELVis also localizes electrodes relative to FreeSurfer-based atlases and can combine data across subjects via the FreeSurfer average brain. RESULTS: It takes 30-60 min of user time and 12-24 h of computer time to localize and visualize electrodes from one brain. We demonstrate iELVis's functionality by showing that three methods for mapping primary hand somatosensory cortex (iEEG, iEBS, and functional MRI) provide highly concordant results. COMPARISON WITH EXISTING METHODS: iELVis is the first public software for electrode localization that corrects for brain shift, maps electrodes to an average brain, and supports neuroimaging overlays. Moreover, its interactive visualizations are powerful and its tutorial material is extensive. CONCLUSIONS: iELVis promises to speed the progress and enhance the robustness of intracranial electrode research. The software and extensive tutorial materials are freely available as part of the EpiSurg software project: https://github.com/episurg/episurg.


Subject(s)
Algorithms , Brain/diagnostic imaging , Brain/physiology , Electrocorticography/instrumentation , Electrodes, Implanted , Magnetic Resonance Imaging/methods , Atlases as Topic , Brain/surgery , Electrocorticography/methods , Humans , Imaging, Three-Dimensional , Motion , Neuroimaging/methods , Pattern Recognition, Automated/methods , Postoperative Period , Preoperative Period , Software
12.
Brain Struct Funct ; 222(2): 1093-1107, 2017 03.
Article in English | MEDLINE | ID: mdl-27318997

ABSTRACT

The main model of visual processing in primates proposes an anatomo-functional distinction between the dorsal stream, specialized in spatio-temporal information, and the ventral stream, processing essentially form information. However, these two pathways also communicate to share much visual information. These dorso-ventral interactions have been studied using form-from-motion (FfM) stimuli, revealing that FfM perception first activates dorsal regions (e.g., MT+/V5), followed by successive activations of ventral regions (e.g., LOC). However, relatively little is known about the implications of focal brain damage of visual areas on these dorso-ventral interactions. In the present case report, we investigated the dynamics of dorsal and ventral activations related to FfM perception (using topographical ERP analysis and electrical source imaging) in a patient suffering from a deficit in FfM perception due to right extrastriate brain damage in the ventral stream. Despite the patient's FfM impairment, both successful (observed for the highest level of FfM signal) and absent/failed FfM perception evoked the same temporal sequence of three processing states observed previously in healthy subjects. During the first period, brain source localization revealed cortical activations along the dorsal stream, consistent with preserved elementary motion processing. During the latter two periods, the patterns of activity differed from normal subjects: activations were observed in the ventral stream (as reported for normal subjects), but also in the dorsal pathway, with the strongest and most sustained activity localized in the parieto-occipital regions. On the other hand, absent/failed FfM perception was characterized by weaker brain activity, restricted to the more lateral regions.
This study shows that successful FfM perception, while following the same temporal sequence of processing steps as in normal subjects, evoked different patterns of brain activity. By revealing a brain circuit involving the most rostral part of the dorsal pathway, this study provides further support for neuro-imaging studies and brain lesion investigations that have suggested the existence of different brain circuits associated with different profiles of interaction between the dorsal and the ventral streams.


Subject(s)
Form Perception/physiology , Motion Perception/physiology , Perceptual Disorders/physiopathology , Visual Cortex/physiopathology , Aged , Electroencephalography , Evoked Potentials, Visual , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Perceptual Disorders/pathology , Photic Stimulation , Visual Cortex/pathology , Visual Pathways/pathology , Visual Pathways/physiopathology
13.
Neuroimage ; 147: 219-232, 2017 02 15.
Article in English | MEDLINE | ID: mdl-27554533

ABSTRACT

While there is a strong interest in meso-scale field potential recording using intracranial electroencephalography with penetrating depth electrodes (i.e. stereotactic EEG or S-EEG) in humans, the signal recorded in the white matter remains ignored. White matter is generally considered electrically neutral and often included in the reference montage. Moreover, re-referencing electrophysiological data is a critical preprocessing choice that could drastically impact signal content and consequently the results of any given analysis. In the present stereotactic electroencephalography study, we first illustrate empirically the consequences of commonly used references (subdermal, white matter, global average, local montage) on inter-electrode signal correlation. Since most of these reference montages incorporate white matter signal, we next consider the difference between signals recorded in cortical gray matter and white matter. Our results reveal that electrode contacts located in the white matter record a mixture of activity, with part arising from the volume conduction (zero time delay) of activity from nearby gray matter. Furthermore, our analysis shows that white matter signal may be correlated with distant gray matter signal. While residual passive electrical spread from nearby matter may account for this relationship, our results suggest the possibility that this long distance correlation arises from the white matter fiber tracts themselves (i.e. activity from distant gray matter traveling along axonal fibers with time lag larger than zero); yet definitive conclusions about the origin of the white matter signal would require further experimental substantiation. By characterizing the properties of signals recorded in white matter and in gray matter, this study illustrates the importance of including anatomical prior knowledge when analyzing S-EEG data.
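The re-referencing choices discussed above can be illustrated on toy data. A minimal Python sketch (not the study's pipeline) contrasting a common average reference, which mixes every channel's signal into every other channel, with a bipolar (local) montage between adjacent contacts:

```python
import numpy as np

def common_average_reference(data):
    """Subtract the instantaneous mean across channels (data: channels x samples)."""
    return data - data.mean(axis=0, keepdims=True)

def bipolar_montage(data):
    """Difference between adjacent contacts, as on one sEEG shaft."""
    return np.diff(data, axis=0)

# Toy data: 5 contacts x 1000 samples sharing a common (reference-like) component
rng = np.random.default_rng(2)
shared = rng.standard_normal(1000)
data = rng.standard_normal((5, 1000)) + shared   # every channel carries 'shared'

car = common_average_reference(data)   # shared component removed, but each channel
                                       # now contains a bit of all the others
bip = bipolar_montage(data)            # shared component cancels locally; one fewer channel
print(car.shape, bip.shape)            # (5, 1000) (4, 1000)
```

Both montages cancel the shared component, but only the bipolar montage avoids redistributing distant activity, which is why the reference choice matters when contacts sit in white matter.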


Subject(s)
Electroencephalography/methods , Gray Matter/physiology , White Matter/physiology , Adult , Electrodes, Implanted , Epilepsy/diagnosis , Epilepsy/physiopathology , Epilepsy/surgery , Female , Humans , Male , Stereotaxic Techniques , Young Adult
14.
J Neurosci ; 35(22): 8546-57, 2015 Jun 03.
Article in English | MEDLINE | ID: mdl-26041921

ABSTRACT

Even simple tasks rely on information exchange between functionally distinct and often relatively distant neuronal ensembles. Considerable work indicates oscillatory synchronization through phase alignment is a major agent of inter-regional communication. In the brain, different oscillatory phases correspond to low- and high-excitability states. Optimally aligned phases (or high-excitability states) promote inter-regional communication. Studies have also shown that sensory stimulation can modulate or reset the phase of ongoing cortical oscillations. For example, auditory stimuli can reset the phase of oscillations in visual cortex, influencing processing of a simultaneous visual stimulus. Such cross-regional phase reset represents a candidate mechanism for aligning oscillatory phase for inter-regional communication. Here, we explored the role of local and inter-regional phase alignment in driving a well established behavioral correlate of multisensory integration: the redundant target effect (RTE), which refers to the fact that responses to multisensory inputs are substantially faster than to unisensory stimuli. In a speeded detection task, human epileptic patients (N = 3) responded to unisensory (auditory or visual) and multisensory (audiovisual) stimuli with a button press, while electrocorticography was recorded over auditory and motor regions. Visual stimulation significantly modulated auditory activity via phase reset in the delta and theta bands. During the period between stimulation and subsequent motor response, transient synchronization between auditory and motor regions was observed. Phase synchrony to multisensory inputs was faster than to unisensory stimulation. This sensorimotor phase alignment correlated with behavior such that stronger synchrony was associated with faster responses, linking the commonly observed RTE with phase alignment across a sensorimotor network.
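Phase reset of the kind described above is commonly quantified with inter-trial coherence (ITC): the length of the mean unit phase vector across trials. A minimal sketch on synthetic phase data (hypothetical values, not the study's recordings):

```python
import numpy as np

def itc(phases):
    """Inter-trial coherence per time point: |mean of unit phase vectors|, in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(3)
n_trials, n_times = 100, 200
# Before a stimulus: phases are uniform across trials -> ITC near 0.
# After a phase reset: phases cluster around a common angle -> ITC near 1.
random_phases = rng.uniform(-np.pi, np.pi, (n_trials, n_times))
reset_phases = rng.normal(0.0, 0.3, (n_trials, n_times))   # clustered near 0 rad

print(f"ITC, random phases: {itc(random_phases).mean():.2f}")
print(f"ITC, after reset:   {itc(reset_phases).mean():.2f}")
```

High ITC in a band (e.g., delta or theta) after a cross-modal stimulus, without a corresponding power increase, is the usual signature taken as evidence of phase reset rather than an additive evoked response.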


Subject(s)
Auditory Perception/physiology , Brain Mapping , Cerebral Cortex/physiopathology , Epilepsy/pathology , Evoked Potentials/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Middle Aged , Photic Stimulation , Reaction Time/physiology
15.
Eur J Neurosci ; 41(7): 925-39, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25688539

ABSTRACT

When sensory inputs are presented serially, response amplitudes to stimulus repetitions generally decrease as a function of presentation rate, diminishing rapidly as inter-stimulus intervals (ISIs) fall below 1 s. This 'adaptation' is believed to represent mechanisms by which sensory systems reduce responsivity to consistent environmental inputs, freeing resources to respond to potentially more relevant inputs. While auditory adaptation functions have been relatively well characterized, considerably less is known about visual adaptation in humans. Here, high-density visual-evoked potentials (VEPs) were recorded while two paradigms were used to interrogate visual adaptation. The first presented stimulus pairs with varying ISIs, comparing the VEP amplitude to the second stimulus (S2) with that to the first (S1) (paired-presentation). The second involved blocks of stimulation (N = 100) at various ISIs and comparison of VEP amplitudes between blocks of differing ISIs (block-presentation). Robust VEP modulations were evident as a function of presentation rate in the block-paradigm, with the strongest modulations in the 130-150 ms and 160-180 ms visual processing phases. In paired-presentations, with ISIs of just 200-300 ms, an enhancement of the VEP was evident when comparing S2 with S1, with no significant effect of presentation rate. Importantly, in block-presentations, adaptation effects were statistically robust at the individual participant level. These data suggest that a more taxing block-presentation paradigm is better suited to engage visual adaptation mechanisms than a paired-presentation design. The increased sensitivity of the visual processing metric obtained in the block-paradigm has implications for the examination of visual processing deficits in clinical populations.


Subject(s)
Adaptation, Physiological/physiology , Brain/physiology , Evoked Potentials, Visual/physiology , Visual Pathways/physiology , Visual Perception/physiology , Adult , Brain Mapping , Female , Humans , Male , Photic Stimulation , Time Factors
16.
Neuroimage ; 90: 360-73, 2014 Apr 15.
Article in English | MEDLINE | ID: mdl-24365674

ABSTRACT

The adult human visual system can efficiently fill-in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230 and 400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions resembling those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent.


Subject(s)
Brain Mapping/methods , Evoked Potentials, Visual/physiology , Form Perception/physiology , Occipital Lobe/growth & development , Occipital Lobe/physiology , Adolescent , Adult , Child , Female , Humans , Male , Photic Stimulation , Signal Processing, Computer-Assisted , Young Adult
17.
Brain Struct Funct ; 219(4): 1369-83, 2014 Jul.
Article in English | MEDLINE | ID: mdl-23708059

ABSTRACT

The auditory system is organized such that progressively more complex features are represented across successive cortical hierarchical stages. Just when and where the processing of phonemes, fundamental elements of the speech signal, is achieved in this hierarchy remains a matter of vigorous debate. Non-invasive measures of phonemic representation have been somewhat equivocal. While some studies point to a primary role for middle/anterior regions of the superior temporal gyrus (STG), others implicate the posterior STG. Differences in stimulation, task and inter-individual anatomical/functional variability may account for these discrepant findings. Here, we sought to clarify this issue by mapping phonemic representation across left perisylvian cortex, taking advantage of the excellent sampling density afforded by intracranial recordings in humans. We asked whether one or both major divisions of the STG were sensitive to phonemic transitions. The high signal-to-noise characteristics of direct intracranial recordings allowed for analysis at the individual participant level, circumventing issues of inter-individual anatomic and functional variability that may have obscured previous findings at the group level of analysis. The mismatch negativity (MMN), an electrophysiological response elicited by changes in repetitive streams of stimulation, served as our primary dependent measure. Oddball configurations of pairs of phonemes, spectro-temporally matched non-phonemes, and simple tones were presented. The loci of the MMN clearly differed as a function of stimulus type. Phoneme representation was most robust over middle/anterior STG/STS, but was also observed over posterior STG/SMG. These data point to multiple phonemic processing zones along perisylvian cortex, both anterior and posterior to primary auditory cortex. This finding is considered within the context of a dual stream model of auditory processing in which functionally distinct ventral and dorsal auditory processing pathways may be engaged by speech stimuli.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Speech/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adolescent , Brain Mapping/methods , Electroencephalography , Female , Functional Laterality , Humans , Language , Male , Young Adult
18.
J Neurosci ; 33(48): 18849-54, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285891

ABSTRACT

Neocortical neuronal activity is characterized by complex spatiotemporal dynamics. Although slow oscillations have been shown to travel over space in terms of consistent phase advances, it is unknown how this phenomenon relates to neuronal activity in other frequency bands. Here we present electrocorticographic data from four human subjects (three male and one female) and demonstrate that gamma power is phase locked to traveling alpha waves. Given that alpha activity has been proposed to coordinate neuronal processing reflected in the gamma band, we suggest that alpha waves are involved in coordinating neuronal processing in both space and time.
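The phase locking of gamma power to alpha phase reported here is a form of phase-amplitude coupling. A minimal sketch of one standard way to quantify it (the mean-vector-length modulation index; not the paper's exact pipeline, and all signal parameters below are made up for illustration):

```python
# Sketch: mean-vector-length phase-amplitude coupling on a synthetic trace
# in which gamma (60 Hz) amplitude is modulated by alpha (10 Hz) phase.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)

alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)   # amplitude rides on alpha
x = alpha + 0.5 * gamma + 0.1 * np.random.default_rng(0).standard_normal(t.size)

def bandpass(sig, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

phase = np.angle(hilbert(bandpass(x, 8, 12, fs)))   # alpha phase
amp = np.abs(hilbert(bandpass(x, 50, 70, fs)))      # gamma envelope

# Normalized mean vector length: near 0 when gamma power is unrelated to
# alpha phase, larger when power clusters at a preferred phase.
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(mvl)
```

With the simulated coupling above the index comes out well above zero; shuffling `amp` relative to `phase` would drive it toward zero and is a common surrogate test.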


Subject(s)
Alpha Rhythm/physiology , Electroencephalography , Neocortex/physiology , Adult , Data Interpretation, Statistical , Electroencephalography Phase Synchronization , Female , Humans , Male , Neocortex/cytology , Neurons/physiology
19.
Neuroimage ; 79: 19-29, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-23624493

ABSTRACT

Findings in animal models demonstrate that activity within hierarchically early sensory cortical regions can be modulated by cross-sensory inputs through resetting of the phase of ongoing intrinsic neural oscillations. Here, subdural recordings evaluated whether phase resetting by auditory inputs would impact multisensory integration processes in human visual cortex. Results clearly showed auditory-driven phase reset in visual cortices and, in some cases, frank auditory event-related potentials (ERP) were also observed over these regions. Further, when audiovisual bisensory stimuli were presented, this led to robust multisensory integration effects which were observed in both the ERP and in measures of phase concentration. These results extend findings from animal models to human visual cortices, and highlight the impact of cross-sensory phase resetting by a non-primary stimulus on multisensory integration in ostensibly unisensory cortices.
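The "phase concentration" measure referred to above is commonly computed as inter-trial coherence (ITC): the length of the mean resultant vector of trial-wise phases at a given time and frequency. A toy sketch on simulated phases, assuming a phase-reset condition and a baseline condition (both invented here):

```python
# Sketch: inter-trial coherence (ITC). Phases that reset to a common value
# across trials give ITC near 1; uniformly random phases give ITC near 0.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

reset = rng.normal(0.0, 0.3, n_trials)            # phases cluster near 0 rad
baseline = rng.uniform(-np.pi, np.pi, n_trials)   # phases uniform on circle

def itc(phases):
    return np.abs(np.mean(np.exp(1j * phases)))

itc_reset, itc_base = itc(reset), itc(baseline)
print(itc_reset, itc_base)
```

An auditory-driven phase reset in visual cortex, as reported here, would show up as elevated ITC in the reset condition relative to baseline.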


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Brain Mapping/methods , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Visual Cortex/physiology , Biological Clocks/physiology , Cues , Humans
20.
J Neurosci ; 32(44): 15338-44, 2012 Oct 31.
Article in English | MEDLINE | ID: mdl-23115172

ABSTRACT

The frequency of environmental vibrations is sampled by two of the major sensory systems, audition and touch, notwithstanding that these signals are transduced through very different physical media and entirely separate sensory epithelia. Psychophysical studies have shown that manipulating frequency in audition or touch can have a significant cross-sensory impact on perceived frequency in the other sensory system, pointing to intimate links between these senses during computation of frequency. In this regard, the frequency of a vibratory event can be thought of as a multisensory perceptual construct. In turn, electrophysiological studies point to temporally early multisensory interactions that occur in hierarchically early sensory regions where convergent inputs from the auditory and somatosensory systems are to be found. A key question pertains to the level of processing at which the multisensory integration of featural information, such as frequency, occurs. Do the sensory systems calculate frequency independently before this information is combined, or is this feature calculated in an integrated fashion during preattentive sensory processing? The well characterized mismatch negativity, an electrophysiological response that indexes preattentive detection of a change within the context of a regular pattern of stimulation, served as our dependent measure. High-density electrophysiological recordings were made in humans while they were presented with separate blocks of somatosensory, auditory, and audio-somatosensory "standards" and "deviants," where the deviant differed in frequency. Multisensory effects were identified beginning at ∼200 ms, with the multisensory mismatch negativity (MMN) significantly different from the sum of the unisensory MMNs. This provides compelling evidence for preattentive coupling between the somatosensory and auditory channels in the cortical representation of frequency.
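The comparison at the core of this design, testing whether the multisensory MMN departs from the sum of the unisensory MMNs, can be illustrated with a toy additivity check (the waveforms below are invented, not the study's data; each MMN is the deviant-minus-standard difference wave):

```python
# Sketch: additivity test for multisensory integration on synthetic
# MMN-like difference waves peaking near 200 ms.
import numpy as np

t = np.arange(0, 0.4, 0.001)                # 0-400 ms, 1 ms steps

def mmn_wave(amp):
    # Crude negative deflection centered at 200 ms (amplitude in µV).
    return -amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))

mmn_aud = mmn_wave(2.0)                     # auditory deviant - standard
mmn_som = mmn_wave(1.5)                     # somatosensory
mmn_multi = mmn_wave(4.5)                   # audio-somatosensory

# A nonzero residual means the bisensory response is not the linear sum of
# the unisensory responses, i.e. evidence of integration.
residual = mmn_multi - (mmn_aud + mmn_som)
print(residual.min())
```

In practice the residual would be evaluated statistically across participants and time points rather than read off a single trace.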


Subject(s)
Brain Mapping , Hearing/physiology , Touch/physiology , Adult , Cluster Analysis , Electroencephalography , Electrophysiological Phenomena , Female , Humans , Male , Photic Stimulation , Pitch Perception/physiology , Vibration , Young Adult