Results 1 - 20 of 123
1.
bioRxiv ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38948730

ABSTRACT

Syntax, the abstract structure of language, is a hallmark of human cognition. Despite its importance, its neural underpinnings remain obscured by inherent limitations of non-invasive brain measures and a near total focus on comprehension paradigms. Here, we address these limitations with high-resolution neurosurgical recordings (electrocorticography) and a controlled sentence production experiment. We uncover three syntactic networks that are broadly distributed across traditional language regions, but with focal concentrations in middle and inferior frontal gyri. In contrast to previous findings from comprehension studies, these networks process syntax mostly to the exclusion of words and meaning, supporting a cognitive architecture with a distinct syntactic system. Most strikingly, our data reveal an unexpected property of syntax: it is encoded independent of neural activity levels. We propose that this "low-activity coding" scheme represents a novel mechanism for encoding information, reserved for higher-order cognition more broadly.

2.
PLoS Comput Biol ; 20(5): e1012161, 2024 May.
Article in English | MEDLINE | ID: mdl-38815000

ABSTRACT

Neural responses in visual cortex adapt to prolonged and repeated stimuli. While adaptation occurs across the visual cortex, it is unclear how adaptation patterns and computational mechanisms differ across the visual hierarchy. Here we characterize two signatures of short-term neural adaptation in time-varying intracranial electroencephalography (iEEG) data collected while participants viewed naturalistic image categories varying in duration and repetition interval. Ventral- and lateral-occipitotemporal cortex exhibit slower and prolonged adaptation to single stimuli and slower recovery from adaptation to repeated stimuli compared to V1-V3. For category-selective electrodes, recovery from adaptation is slower for preferred than non-preferred stimuli. To model neural adaptation we augment our delayed divisive normalization (DN) model by scaling the input strength as a function of stimulus category, enabling the model to accurately predict neural responses across multiple image categories. The model fits suggest that differences in adaptation patterns arise from slower normalization dynamics in higher visual areas interacting with differences in input strength resulting from category selectivity. Our results reveal systematic differences in temporal adaptation of neural population responses between lower and higher visual brain areas and show that a single computational model of history-dependent normalization dynamics, fit with area-specific parameters, accounts for these differences.


Subject(s)
Adaptation, Physiological , Models, Neurological , Visual Cortex , Humans , Visual Cortex/physiology , Adaptation, Physiological/physiology , Adult , Male , Female , Photic Stimulation , Computational Biology , Young Adult , Electroencephalography , Visual Perception/physiology , Electrocorticography
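The delayed divisive normalization (DN) computation described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration of the general scheme (linear filtering, rectification and exponentiation, then division by a delayed normalization pool), not the study's fitted model: the filter shapes and all parameter values here are illustrative assumptions.

```python
import numpy as np

def delayed_dn_response(stimulus, dt=0.001, tau=0.05,
                        tau_norm=0.1, sigma=0.1, n=2.0):
    """Sketch of a delayed divisive normalization (DN) response model.

    Numerator: rectified, exponentiated linear response. Denominator:
    a normalization pool that is a low-pass-filtered (i.e., delayed)
    copy of the linear response. All parameters are illustrative.
    """
    t = np.arange(0, 0.5, dt)
    irf = t * np.exp(-t / tau)            # impulse response filter
    irf /= irf.sum()
    linear = np.convolve(stimulus, irf)[:len(stimulus)]
    driven = np.maximum(linear, 0) ** n   # rectify + exponentiate
    # delayed normalization pool: exponential low-pass of the linear response
    pool = np.zeros_like(driven)
    for i in range(1, len(pool)):
        pool[i] = pool[i - 1] + dt / tau_norm * (abs(linear[i]) - pool[i - 1])
    return driven / (sigma ** n + pool ** n)

# sustained stimulus: the model produces a transient onset peak followed
# by a reduced (adapted) sustained response
stim = np.zeros(1000)
stim[100:600] = 1.0
resp = delayed_dn_response(stim)
```

Because the pool lags the driven response, the output peaks shortly after stimulus onset and then decays toward a lower sustained level; slower normalization dynamics (larger `tau_norm`) produce the slower, more prolonged adaptation the abstract attributes to higher visual areas.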
3.
bioRxiv ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38798614

ABSTRACT

The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While various input modalities can lead to identical word retrieval, the exact neural dynamics supporting this convergence, relevant to daily auditory discourse, remain poorly understood. Here, we leveraged neurosurgical electrocorticographic (ECoG) recordings from 48 patients and dissociated two key language networks integral to word retrieval that overlap substantially in time and space. Using unsupervised temporal clustering techniques, we found a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was agnostic to input modality. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. Our findings characterize how humans integrate ongoing auditory semantic information over time, a critical linguistic function spanning passive comprehension to daily discourse.
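The unsupervised temporal clustering this abstract relies on can be sketched as grouping electrodes by the shape of their trial-averaged response time courses. The data below are entirely synthetic (two seeded profiles standing in for distinct temporal dynamics), and the minimal two-cluster k-means is a generic stand-in, not the study's specific clustering pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic time courses: 20 "early-peaking" and 20 "late-peaking" electrodes
t = np.linspace(0, 1, 100)
early = np.exp(-((t - 0.3) ** 2) / 0.01)
late = np.exp(-((t - 0.7) ** 2) / 0.01)
profiles = np.vstack([early] * 20 + [late] * 20)
data = profiles + 0.1 * rng.normal(size=profiles.shape)
# z-score each electrode's time course so clustering reflects shape, not amplitude
data = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)

def kmeans_two(X, iters=20):
    """Minimal two-cluster k-means over time courses (deterministic init)."""
    centers = X[[0, -1]].astype(float)   # simple init: first and last rows
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in (0, 1)])
    return labels

labels = kmeans_two(data)
```

With well-separated temporal profiles, the cluster labels recover the two seeded groups, analogous to separating electrodes with semantic-processing dynamics from those with articulatory-planning dynamics.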

4.
medRxiv ; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38585730

ABSTRACT

In medication-resistant epilepsy, the goal of epilepsy surgery is to make a patient seizure free with a resection/ablation that is as small as possible, to minimize morbidity. The standard of care in planning the margins of epilepsy surgery involves electroclinical delineation of the seizure onset zone (SOZ) and incorporation of neuroimaging findings from MRI, PET, SPECT, and MEG modalities. Resecting cortical tissue that generates high-frequency oscillations (HFOs) has been investigated as a more efficacious alternative to targeting the SOZ. In this study, we used a support vector machine (SVM) with four distinct fast ripple (FR: 350-600 Hz on oscillations, 200-600 Hz on spikes) metrics as factors. These metrics included the FR resection ratio (RR), a spatial FR network measure, and two temporal FR network measures. The SVM was trained on the values of these four factors, with respect to the actual resection boundaries and the actual seizure-freedom labels, in 18 patients with medically refractory focal epilepsy. Leave-one-out cross-validation of the trained SVM in this training set had an accuracy of 0.78. We next used a simulated, iterative virtual resection targeting the FR sites with the highest rates and the greatest temporal autonomy; the trained SVM used the four virtual FR metrics to predict virtual seizure freedom. In all but one of the nine patients who were seizure free after surgery, the virtual resections sufficient for virtual seizure freedom were larger in volume than the actual resections (p<0.05). In the nine patients who were not seizure free, a larger virtual resection made five virtually seizure free. We also examined 10 medically refractory focal epilepsy patients implanted with the responsive neurostimulator system (RNS), virtually targeting the RNS stimulation contacts proximal to the sites generating FR at the highest rates, to determine whether the stimulated-SOZ and stimulated-FR metrics trended with better seizure outcomes.
Our results suggest that: 1) FR measures can accurately predict whether a resection, defined by the standard of care, will result in seizure freedom; 2) using FR alone to plan an efficacious surgery can be associated with larger resections; 3) when FR metrics predict that the standard-of-care resection will fail, amending the boundaries of the planned resection to include certain FR-generating sites may improve outcome; and 4) more work is required to determine whether targeting RNS stimulation contacts proximal to FR-generating sites will improve seizure outcome.
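The training-and-validation scheme this abstract describes, an SVM over four fast-ripple factors evaluated with leave-one-out cross-validation across 18 patients, can be sketched as below. The patient data are not reproduced here: the features and labels are synthetic stand-ins, and the kernel and regularization choices are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# synthetic stand-in for 18 patients: four fast-ripple (FR) factors each
# (resection ratio plus spatial/temporal network measures; values are
# illustrative, not the study's data)
n_patients = 18
X = rng.normal(size=(n_patients, 4))
# toy labels: seizure freedom loosely tied to the first FR factor
y = (X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# leave-one-out: train on 17 patients, test on the held-out one, repeat
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
accuracy = scores.mean()   # analogous to the reported LOO accuracy of 0.78
```

With only 18 samples, leave-one-out is the natural choice: each fold's score is a single 0/1 outcome and the mean over folds is the reported cross-validated accuracy.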

5.
bioRxiv ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38559163

ABSTRACT

Objective: This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior work handles only electrodes on a 2D grid (i.e., an electrocorticographic or ECoG array) and data from a single patient. We aim to design a deep-learning model architecture that can accommodate both surface (ECoG) and depth (stereotactic EEG, or sEEG) electrodes. The architecture should allow training on data from multiple participants with large variability in electrode placements, and the trained model should perform well on participants unseen during training. Approach: We propose a novel transformer-based model architecture named SwinTW that can work with arbitrarily positioned electrodes by leveraging their 3D locations on the cortex rather than their positions on a 2D grid. We train both subject-specific models, using data from a single participant, and multi-patient models that exploit data from multiple participants. Main Results: The subject-specific models using only low-density 8x8 ECoG data achieved a high Pearson correlation coefficient between decoded and ground-truth spectrograms (PCC=0.817) over N=43 participants, outperforming our prior convolutional ResNet model and the 3D Swin transformer model. Incorporating the additional strip, depth, and grid electrodes available in each participant (N=39) led to further improvement (PCC=0.838). For participants with only sEEG electrodes (N=9), subject-specific models still achieved comparable performance, with an average PCC=0.798. The multi-subject models achieved high performance on unseen participants, with an average PCC=0.765 in leave-one-out cross-validation. Significance: The proposed SwinTW decoder enables future speech neuroprostheses to utilize any electrode placement that is clinically optimal or feasible for a particular participant, including using only depth electrodes, which are more routinely implanted in chronic neurosurgical procedures.
Importantly, the generalizability of the multi-patient models suggests the exciting possibility of developing speech neuroprostheses for people with speech disability without relying on their own neural data for training, which is not always feasible.
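The evaluation metric reported throughout this abstract, the Pearson correlation coefficient (PCC) between decoded and ground-truth spectrograms, reduces to a correlation over flattened spectrogram values. A minimal sketch, using synthetic spectrograms rather than any decoded data:

```python
import numpy as np

def spectrogram_pcc(pred, target):
    """Pearson correlation between flattened predicted and ground-truth
    spectrograms (the PCC metric reported for the decoders)."""
    p = pred.ravel() - pred.mean()
    t = target.ravel() - target.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))

# toy check: a mildly noisy copy of a spectrogram correlates highly with it
rng = np.random.default_rng(1)
target = rng.random((128, 200))     # freq bins x time frames (illustrative)
pred = target + 0.1 * rng.normal(size=target.shape)
pcc = spectrogram_pcc(pred, target)
```

A perfect reconstruction gives PCC=1.0; the reported values (e.g., 0.817 for subject-specific models) sit on this same scale.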

6.
Brain Commun ; 6(2): fcae053, 2024.
Article in English | MEDLINE | ID: mdl-38505231

ABSTRACT

Cortical regions supporting speech production are commonly established using neuroimaging techniques in both research and clinical settings. For neurosurgical purposes, however, cortical function is routinely mapped peri-operatively using direct electrocortical stimulation. While this method is the gold standard for identifying eloquent cortical regions to preserve in neurosurgical patients, it lacks specificity about the actual underlying cognitive processes being interrupted. To address this, we propose mapping the temporal dynamics of speech arrest across peri-sylvian cortices by quantifying the latency between stimulation and speech deficits. In doing so, we are able to substantiate hypotheses about distinct region-specific functional roles (e.g. planning versus motor execution). In this retrospective observational study, we analysed 20 patients (12 female; age range 14-43) with refractory epilepsy who underwent continuous extra-operative intracranial EEG monitoring of an automatic speech task during clinical bedside language mapping. Latency to speech arrest was calculated as the time from stimulation onset to speech arrest onset, controlling for individual speech rate. Most instances of motor-based arrest (87.5% of 96 instances) were in sensorimotor cortex, with mid-range latencies to speech arrest and a distributional peak at 0.47 s. Speech arrest occurred in numerous regions, with relatively short latencies in supramarginal gyrus (0.46 s), superior temporal gyrus (0.51 s) and middle temporal gyrus (0.54 s), followed by relatively long latencies in sensorimotor cortex (0.72 s) and especially long latencies in inferior frontal gyrus (0.95 s). Non-parametric testing for speech arrest revealed that region predicted latency; latencies in supramarginal gyrus and in superior temporal gyrus were shorter than in sensorimotor cortex and in inferior frontal gyrus. Sensorimotor cortex is primarily responsible for motor-based arrest.
Latencies to speech arrest in supramarginal gyrus and superior temporal gyrus (and to a lesser extent middle temporal gyrus) align with latencies to motor-based arrest in sensorimotor cortex. This pattern of relatively quick cessation of speech suggests that stimulating these regions interferes with the outgoing motor execution. In contrast, the latencies to speech arrest in inferior frontal gyrus and in ventral regions of sensorimotor cortex were significantly longer than those in temporoparietal regions. Longer latencies in the more frontal areas (including inferior frontal gyrus and ventral areas of precentral gyrus and postcentral gyrus) suggest that stimulating these areas interrupts a higher-level speech production process involved in planning. These results implicate the ventral specialization of sensorimotor cortex (including both precentral and postcentral gyri) for speech planning above and beyond motor execution.

7.
Nat Commun ; 15(1): 2768, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38553456

ABSTRACT

Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. The common geometric patterns allow us to predict the brain embedding in IFG of a given left-out word based solely on its geometrical relationship to other non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.


Subject(s)
Brain , Language , Humans , Prefrontal Cortex , Natural Language Processing
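The zero-shot mapping described in this abstract, predicting the brain embedding of a left-out word purely from its geometric relationship to other words, can be sketched as a hold-one-out linear mapping between embedding spaces. Everything below is a toy stand-in: the data are synthetic, the dimensions are arbitrary, and the ridge-regression mapping is a generic choice, not the study's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy stand-in data: 200 "words" with 50-d contextual embeddings (X) and
# 20-d brain embeddings (Y) generated from a shared linear geometry plus noise
n_words, d_ctx, d_brain = 200, 50, 20
X = rng.normal(size=(n_words, d_ctx))
W_true = rng.normal(size=(d_ctx, d_brain))
Y = X @ W_true + 0.5 * rng.normal(size=(n_words, d_brain))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression mapping X -> Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# zero-shot evaluation: hold out one word, fit the map on the rest, then
# predict the held-out brain embedding from its contextual embedding alone
hits = 0
for i in range(n_words):
    mask = np.arange(n_words) != i
    W = ridge_fit(X[mask], Y[mask])
    pred = X[i] @ W
    # nearest-neighbor test: is the true brain embedding the closest match?
    dists = np.linalg.norm(Y - pred, axis=1)
    hits += int(np.argmin(dists) == i)
top1 = hits / n_words
```

Because the held-out word never enters the fit, above-chance identification can only come from shared geometry between the two embedding spaces, which is the logic of the stringent zero-shot test.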
8.
bioRxiv ; 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38405990

ABSTRACT

Interictal epileptiform discharges (IEDs) are ubiquitously expressed in epileptic networks and disrupt cognitive functions. It is unclear whether addressing IED-induced dysfunction could improve epilepsy outcomes as most therapeutics target seizures. We show in a model of progressive hippocampal epilepsy that IEDs produce pathological oscillatory coupling which is associated with prolonged, hypersynchronous neural spiking in synaptically connected cortex and expands the brain territory capable of generating IEDs. A similar relationship between IED-mediated oscillatory coupling and temporal organization of IEDs across brain regions was identified in human subjects with refractory focal epilepsy. Spatiotemporally targeted closed-loop electrical stimulation triggered on hippocampal IED occurrence eliminated the abnormal cortical activity patterns, preventing spread of the epileptic network and ameliorating long-term spatial memory deficits in rodents. These findings suggest that stimulation-based network interventions that normalize interictal dynamics may be an effective treatment of epilepsy and its comorbidities, with a low barrier to clinical translation. One-Sentence Summary: Targeted closed-loop electrical stimulation prevents spread of the epileptic network and ameliorates long-term spatial memory deficits.

9.
bioRxiv ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38370843

ABSTRACT

Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has previously been confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.

10.
bioRxiv ; 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-37745363

ABSTRACT

Cortical regions supporting speech production are commonly established using neuroimaging techniques in both research and clinical settings. For neurosurgical purposes, however, cortical function is routinely mapped peri-operatively using direct electrocortical stimulation. While this method is the gold standard for identifying eloquent cortical regions to preserve in neurosurgical patients, it lacks specificity about the actual underlying cognitive processes being interrupted. To address this, we propose mapping the temporal dynamics of speech arrest across peri-sylvian cortices by quantifying the latency between stimulation and speech deficits. In doing so, we are able to substantiate hypotheses about distinct region-specific functional roles (e.g., planning versus motor execution). In this retrospective observational study, we analyzed 20 patients (12 female; age range 14-43) with refractory epilepsy who underwent continuous extra-operative intracranial EEG monitoring of an automatic speech task during clinical bedside language mapping. Latency to speech arrest was calculated as the time from stimulation onset to speech arrest onset, controlling for individual speech rate. Most instances of motor-based arrest (87.5% of 96 instances) were in sensorimotor cortex, with mid-range latencies to speech arrest and a distributional peak at 0.47 seconds. Speech arrest occurred in numerous regions, with relatively short latencies in supramarginal gyrus (0.46 seconds), superior temporal gyrus (0.51 seconds), and middle temporal gyrus (0.54 seconds), followed by relatively long latencies in sensorimotor cortex (0.72 seconds) and especially long latencies in inferior frontal gyrus (0.95 seconds). Nonparametric testing for speech arrest revealed that region predicted latency; latencies in supramarginal gyrus and in superior temporal gyrus were shorter than in sensorimotor cortex and in inferior frontal gyrus.
Sensorimotor cortex is primarily responsible for motor-based arrest. Latencies to speech arrest in supramarginal gyrus and superior temporal gyrus (and to a lesser extent middle temporal gyrus) align with latencies to motor-based arrest in sensorimotor cortex. This pattern of relatively quick cessation of speech suggests that stimulating these regions interferes with the outgoing motor execution. In contrast, the latencies to speech arrest in inferior frontal gyrus and in ventral regions of sensorimotor cortex were significantly longer than those in temporoparietal regions. Longer latencies in the more frontal areas (including inferior frontal gyrus and ventral areas of precentral gyrus and postcentral gyrus) suggest that stimulating these areas interrupts a higher-level speech production process involved in planning. These results implicate the ventral specialization of sensorimotor cortex (including both precentral and postcentral gyri) for speech planning above and beyond motor execution.

11.
bioRxiv ; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-37745548

ABSTRACT

Neural responses in visual cortex adapt to prolonged and repeated stimuli. While adaptation occurs across the visual cortex, it is unclear how adaptation patterns and computational mechanisms differ across the visual hierarchy. Here we characterize two signatures of short-term neural adaptation in time-varying intracranial electroencephalography (iEEG) data collected while participants viewed naturalistic image categories varying in duration and repetition interval. Ventral- and lateral-occipitotemporal cortex exhibit slower and prolonged adaptation to single stimuli and slower recovery from adaptation to repeated stimuli compared to V1-V3. For category-selective electrodes, recovery from adaptation is slower for preferred than non-preferred stimuli. To model neural adaptation we augment our delayed divisive normalization (DN) model by scaling the input strength as a function of stimulus category, enabling the model to accurately predict neural responses across multiple image categories. The model fits suggest that differences in adaptation patterns arise from slower normalization dynamics in higher visual areas interacting with differences in input strength resulting from category selectivity. Our results reveal systematic differences in temporal adaptation of neural population responses across the human visual hierarchy and show that a single computational model of history-dependent normalization dynamics, fit with area-specific parameters, accounts for these differences.

12.
Proc Natl Acad Sci U S A ; 120(42): e2300255120, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37819985

ABSTRACT

Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.


Subject(s)
Speech , Temporal Lobe , Humans , Feedback , Acoustic Stimulation
13.
bioRxiv ; 2023 Sep 17.
Article in English | MEDLINE | ID: mdl-37745380

ABSTRACT

Decoding human speech from neural signals is essential for brain-computer interface (BCI) technologies that aim to restore speech function in populations with neurological deficits. However, it remains a highly challenging task, compounded by the scarce availability of neural signals with corresponding speech, the complexity and high dimensionality of the data, and the limited availability of public source code. Here, we present a novel deep learning-based neural speech decoding framework that includes an ECoG Decoder, which translates electrocorticographic (ECoG) signals from the cortex into interpretable speech parameters, and a novel differentiable Speech Synthesizer, which maps speech parameters to spectrograms. We develop a companion audio-to-audio auto-encoder, consisting of a Speech Encoder and the same Speech Synthesizer, to generate reference speech parameters that facilitate ECoG Decoder training. This framework generates natural-sounding speech and is highly reproducible across a cohort of 48 participants. Among three neural network architectures for the ECoG Decoder, the 3D ResNet model has the best decoding performance (PCC=0.804) in predicting the original speech spectrogram, closely followed by the SWIN model (PCC=0.796). Our experimental results show that our models can decode speech with high correlation even when limited to causal operations only, which is necessary for adoption by real-time neural prostheses. We successfully decode speech in participants with either left or right hemisphere coverage, which could lead to speech prostheses for patients with speech deficits resulting from left hemisphere damage. Further, we use an occlusion analysis to identify the cortical regions contributing to speech decoding across our models. Finally, we provide open-source code for our two-stage training pipeline, along with associated preprocessing and visualization tools, to enable reproducible research across the speech science and prostheses communities.

14.
bioRxiv ; 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37425747

ABSTRACT

Effective communication hinges on a mutual understanding of word meaning in different contexts. The embedding space learned by large language models can serve as an explicit model of the shared, context-rich meaning space humans use to communicate their thoughts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We demonstrate that the linguistic embedding space can capture the linguistic content of word-by-word neural alignment between speaker and listener. Linguistic content emerged in the speaker's brain before word articulation, and the same linguistic content rapidly reemerged in the listener's brain after word articulation. These findings establish a computational framework to study how human brains transmit their thoughts to one another in real-world contexts.

15.
Sci Rep ; 13(1): 9620, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37316509

ABSTRACT

We describe the intracortical laminar organization of interictal epileptiform discharges (IEDs) and high-frequency oscillations (HFOs), also known as ripples, and define the frequency limits of slow and fast ripples. We recorded potential gradients with laminar multielectrode arrays (LMEs) for current source density (CSD) and multi-unit activity (MUA) analysis of IEDs and HFOs in the neocortex and mesial temporal lobe of focal epilepsy patients. IEDs were observed in 20/29 patients, while ripples were observed in only 9/29. Ripples were all detected within the seizure onset zone (SOZ). Compared to hippocampal HFOs, neocortical ripples proved to be longer, lower in frequency and amplitude, and presented non-uniform cycles. A subset of ripples (≈ 50%) co-occurred with IEDs, while IEDs were shown to contain variable high-frequency activity, even below the HFO detection threshold. The limit between slow and fast ripples was defined at 150 Hz, while the high-frequency components of IEDs formed clusters separated at 185 Hz. CSD analysis of IEDs and ripples revealed an alternating sink-source pair in the supragranular cortical layers, although fast ripple CSD appeared lower and engaged a wider cortical domain than slow ripples. MUA analysis suggested a possible role of infragranularly located neural populations in ripple and IED generation. The laminar distribution of peak frequencies derived from HFOs and IEDs, respectively, showed that supragranular layers were dominated by slower (< 150 Hz) components. Our findings suggest that cortical slow ripples are generated primarily in upper layers, while fast ripples and the associated MUA arise in deeper layers. The dissociation of macro- and microdomains suggests that microelectrode recordings may be more selective for SOZ-linked ripples. We found a complex interplay between neural activity in the neocortical laminae during ripple and IED formation.
We observed a potential leading role of cortical neurons in deeper layers, suggesting a refined utilization of LMEs in SOZ localization.


Subject(s)
Epilepsies, Partial , Humans
16.
bioRxiv ; 2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37292795

ABSTRACT

High-frequency phase-locked oscillations have been hypothesized to facilitate the integration ('binding') of information encoded across widespread cortical areas. Ripples (~100 ms long, ~90 Hz oscillations) co-occur ('co-ripple') broadly across multiple states and locations, but have only been associated with memory replay. We tested whether cortico-cortical co-ripples subserve a general role in binding by recording intracranial EEG during reading. Co-rippling between visual, wordform, and semantic cortical areas increased for words versus consonant-strings, when letters bind into words and words bind to meaning. Similarly, co-ripples strongly increased before correct responses between executive, response, wordform, and semantic areas, when word meanings bind to instructions and responses. Task-selective co-rippling dissociated from non-oscillatory activation and memory reinstatement. Co-ripples were phase-locked at zero lag, even at long distances (>12 cm), supporting a general role in cognitive binding.

17.
Epilepsia ; 64(7): 1910-1924, 2023 07.
Article in English | MEDLINE | ID: mdl-37150937

ABSTRACT

OBJECTIVE: Effective surgical treatment of drug-resistant epilepsy depends on accurate localization of the epileptogenic zone (EZ). High-frequency oscillations (HFOs) are potential biomarkers of the EZ. Previous research has shown that HFOs often occur within submillimeter areas of brain tissue and that the coarse spatial sampling of clinical intracranial electrode arrays may limit the accurate capture of HFO activity. In this study, we sought to characterize microscale HFO activity captured on thin, flexible microelectrocorticographic (µECoG) arrays, which provide high spatial resolution over large cortical surface areas. METHODS: We used novel liquid crystal polymer thin-film µECoG arrays (0.76-1.72-mm intercontact spacing) to capture HFOs in eight intraoperative recordings from seven patients with epilepsy. We identified ripple (80-250 Hz) and fast ripple (250-600 Hz) HFOs using a common energy-thresholding detection algorithm along with two stages of artifact rejection. We visualized microscale subregions of HFO activity using spatial maps of HFO rate, signal-to-noise ratio, and mean peak frequency. We quantified the spatial extent of HFO events by measuring covariance between detected HFOs and surrounding activity. We also compared HFO detection rates on microcontacts to simulated macrocontacts by spatially averaging data. RESULTS: We found visually delineable subregions of elevated HFO activity within each µECoG recording. Forty-seven percent of HFOs occurred on single 200-µm-diameter recording contacts, with minimal high-frequency activity on surrounding contacts. Other HFO events occurred across multiple contacts simultaneously, with covarying activity most often limited to a 0.95-mm radius. Through spatial averaging, we estimated that macrocontacts with 2-3-mm diameters would capture only 44% of the HFOs detected in our µECoG recordings.
SIGNIFICANCE: These results demonstrate that thin-film microcontact surface arrays with both high resolution and large coverage accurately capture microscale HFO activity, and may improve the utility of HFOs for localizing the EZ in the treatment of drug-resistant epilepsy.


Subject(s)
Brain Waves , Drug Resistant Epilepsy , Epilepsy , Humans , Electroencephalography/methods , Epilepsy/surgery , Epilepsy/diagnosis , Brain , Drug Resistant Epilepsy/diagnosis , Drug Resistant Epilepsy/surgery
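The "common energy-thresholding detection algorithm" this abstract refers to can be sketched as: band-pass the trace, compute a short-window RMS envelope, and flag epochs where the envelope exceeds a channel-wise statistical threshold for a minimum number of oscillation cycles. The thresholds and window lengths below are illustrative assumptions, and the study's two additional artifact-rejection stages are not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_hfos(signal, fs, band=(80, 250), win_s=0.01, n_sd=3, min_cycles=4):
    """Sketch of an energy-thresholding HFO detector (ripple band)."""
    # band-pass filter in the ripple band
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    bp = filtfilt(b, a, signal)
    # short-window RMS envelope
    win = int(win_s * fs)
    rms = np.sqrt(np.convolve(bp ** 2, np.ones(win) / win, mode="same"))
    # channel-wise statistical threshold
    thresh = rms.mean() + n_sd * rms.std()
    above = rms > thresh
    min_len = int(min_cycles / band[0] * fs)   # 4 cycles at the low band edge
    # collect contiguous supra-threshold runs lasting long enough
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    return events

# toy trace: background noise with one 120 Hz burst inserted at 1.0-1.1 s
fs = 2000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
trace = 0.1 * rng.normal(size=t.size)
burst = (t > 1.0) & (t < 1.1)
trace[burst] += np.sin(2 * np.pi * 120 * t[burst])
events = detect_hfos(trace, fs)
```

On the toy trace, the detector returns a single event spanning the inserted burst; on real µECoG data, the per-contact event rate is what the spatial maps in the study visualize.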
18.
bioRxiv ; 2023 Jul 12.
Article in English | MEDLINE | ID: mdl-36865223

ABSTRACT

Neuronal oscillations at about 10 Hz, called alpha oscillations, are often thought to arise from synchronous activity across occipital cortex, reflecting general cognitive states such as arousal and alertness. However, there is also evidence that modulation of alpha oscillations in visual cortex can be spatially specific. Here, we used intracranial electrodes in human patients to measure alpha oscillations in response to visual stimuli whose location varied systematically across the visual field. We separated the alpha oscillatory power from broadband power changes. The variation in alpha oscillatory power with stimulus position was then fit by a population receptive field (pRF) model. We find that the alpha pRFs have similar center locations to pRFs estimated from broadband power (70-180 Hz), but are several times larger. The results demonstrate that alpha suppression in human visual cortex can be precisely tuned. Finally, we show how the pattern of alpha responses can explain several features of exogenous visual attention. Significance Statement: The alpha oscillation is the largest electrical signal generated by the human brain. An important question in systems neuroscience is the degree to which this oscillation reflects system-wide states and behaviors such as arousal, alertness, and attention, versus much more specific functions in the routing and processing of information. We examined alpha oscillations at high spatial precision in human patients with intracranial electrodes implanted over visual cortex. We discovered a surprisingly high spatial specificity of visually driven alpha oscillations, which we quantified with receptive field models. We further use our discoveries about properties of the alpha response to show a link between these oscillations and the spread of visual attention.
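The population receptive field (pRF) fitting used in this abstract can be sketched as modeling each electrode's response to a stimulus aperture as its overlap with a 2D Gaussian, then recovering the Gaussian's center and size. The sketch below uses synthetic bar stimuli, a coarse grid search, and illustrative dimensions; it is a generic pRF fit, not the study's exact estimation pipeline.

```python
import numpy as np

res = 41                                  # 41 x 41 pixel visual field
xs = np.linspace(-10, 10, res)            # degrees of visual angle
X, Y = np.meshgrid(xs, xs)

def gaussian_prf(x0, y0, sigma):
    """Unit-sum 2D Gaussian pRF at (x0, y0) with size sigma."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

# stimuli: vertical and horizontal bars at varying positions
stims = []
for c in xs[::4]:
    stims.append((np.abs(X - c) < 1.5).astype(float))   # vertical bar
    stims.append((np.abs(Y - c) < 1.5).astype(float))   # horizontal bar
stims = np.array(stims).reshape(-1, res * res)

# simulated responses: overlap of each stimulus with a hidden "true" pRF
true_prf = gaussian_prf(3.0, -2.0, 2.5)
responses = stims @ true_prf.ravel()
responses += 0.01 * np.random.default_rng(4).normal(size=responses.shape)

# grid search over center and size, scoring by correlation with the data
best = (None, -np.inf)
for x0 in xs[::2]:
    for y0 in xs[::2]:
        for sigma in (1.0, 2.5, 5.0):
            pred = stims @ gaussian_prf(x0, y0, sigma).ravel()
            r = np.corrcoef(pred, responses)[0, 1]
            if r > best[1]:
                best = ((x0, y0, sigma), r)
(x0_hat, y0_hat, sigma_hat), r_hat = best
```

Fitting the same procedure separately to alpha oscillatory power and to broadband power is what lets the study compare pRF centers (similar) and sizes (several times larger for alpha) within each electrode.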

19.
J Neurosci ; 42(40): 7562-7580, 2022 10 05.
Article in English | MEDLINE | ID: mdl-35999054

ABSTRACT

Neural responses to visual stimuli exhibit complex temporal dynamics, including subadditive temporal summation, response reduction with repeated or sustained stimuli (adaptation), and slower dynamics at low contrast. These phenomena are often studied independently. Here, we demonstrate these phenomena within the same experiment and model the underlying neural computations with a single computational model. We extracted time-varying responses from electrocorticographic recordings from patients presented with stimuli that varied in duration, interstimulus interval (ISI) and contrast. Aggregating data across patients from both sexes yielded 98 electrodes with robust visual responses, covering both earlier (V1-V3) and higher-order (V3a/b, LO, TO, IPS) retinotopic maps. In all regions, the temporal dynamics of neural responses exhibit several nonlinear features. Peak response amplitude saturates with high contrast and longer stimulus durations, the response to a second stimulus is suppressed for short ISIs and recovers for longer ISIs, and response latency decreases with increasing contrast. These features are accurately captured by a computational model composed of a small set of canonical neuronal operations, that is, linear filtering, rectification, exponentiation, and a delayed divisive normalization. We find that an increased normalization term captures both contrast- and adaptation-related response reductions, suggesting potentially shared underlying mechanisms. We additionally demonstrate both changes and invariance in temporal response dynamics between earlier and higher-order visual areas. Together, our results reveal the presence of a wide range of temporal and contrast-dependent neuronal dynamics in the human visual cortex and demonstrate that a simple model captures these dynamics at millisecond resolution.

SIGNIFICANCE STATEMENT: Sensory inputs and neural responses change continuously over time. It is especially challenging to understand a system that has both dynamic inputs and outputs. Here, we use a computational modeling approach that specifies computations to convert a time-varying input stimulus to a neural response time course, and we use this to predict neural activity measured in the human visual cortex. We show that this computational model predicts a wide variety of complex neural response shapes, which we induced experimentally by manipulating the duration, repetition, and contrast of visual stimuli. By comparing data and model predictions, we uncover systematic properties of temporal dynamics of neural signals, allowing us to better understand how the brain processes dynamic sensory information.
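The canonical operations this abstract names (linear filtering, rectification, exponentiation, and a delayed divisive normalization) can be sketched as a short numpy cascade. This is a toy illustration of the model class, not the paper's fitted implementation; the time constants, exponent, and semi-saturation constant below are invented for the demo.

```python
import numpy as np

def dn_model(stimulus, dt=0.001, tau=0.01, tau_norm=0.1, n=2.0, sigma=0.1):
    """Toy delayed divisive normalization (DN) cascade:
    linear filtering -> rectification -> exponentiation -> division by a
    delayed (low-pass filtered) copy of the driven response."""
    t = np.arange(0, 0.5, dt)
    irf = (t / tau) * np.exp(-t / tau)       # fast excitatory impulse response
    irf /= irf.sum()
    irf_pool = np.exp(-t / tau_norm)         # slower filter feeding the pool
    irf_pool /= irf_pool.sum()

    linear = np.convolve(stimulus, irf)[: len(stimulus)]
    driven = np.maximum(linear, 0.0) ** n    # rectification + exponentiation
    pool = np.convolve(driven, irf_pool)[: len(stimulus)]
    return driven / (sigma ** n + pool)      # delayed divisive normalization

# responses to a 100 ms pulse, a 200 ms pulse, and a repeated pulse (50 ms ISI)
stim100, stim200, stim_rep = (np.zeros(1000) for _ in range(3))
stim100[100:200] = 1.0
stim200[100:300] = 1.0
stim_rep[100:200] = 1.0
stim_rep[250:350] = 1.0
r100, r200, r_rep = dn_model(stim100), dn_model(stim200), dn_model(stim_rep)
```

Because the normalization pool builds up slowly and decays slowly, this sketch reproduces two of the abstract's signatures: the summed response to a 200 ms stimulus is less than twice that to a 100 ms stimulus (subadditive temporal summation), and the response to a second pulse at a short ISI is suppressed relative to the first.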


Subject(s)
Brain, Visual Cortex, Male, Female, Humans, Photic Stimulation/methods, Brain/physiology, Brain Mapping/methods, Time Factors, Visual Cortex/physiology
20.
PLoS Comput Biol ; 18(8): e1010401, 2022 08.
Article in English | MEDLINE | ID: mdl-35939509

ABSTRACT

In analyzing the neural correlates of naturalistic and unstructured behaviors, features of neural activity that are ignored in trial-based experimental paradigms can be studied more fully. Here, we analyze neural activity from two patients using electrocorticography (ECoG) and stereo-electroencephalography (sEEG) recordings, and show that multiple neural signal characteristics discriminate between naturalistic, unstructured behavioral states such as "engaging in dialogue" and "using electronics". Using the high gamma amplitude as an estimate of neuronal firing rate, we demonstrate that behavioral states in a naturalistic setting are discriminable based on long-term mean shifts, variance shifts, and differences in the covariance structure of the neural activity. Both rapid and slow changes in high gamma band activity separate these unstructured behavioral states. We also use Gaussian process factor analysis (GPFA) to show the existence of salient spatiotemporal features with variable smoothness in time. Further, we demonstrate that both temporally smooth and stochastic spatiotemporal activity can be used to differentiate unstructured behavioral states. This is the first attempt to elucidate how different neural signal features carry information about behavioral states recorded outside the conventional experimental paradigm.
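The first feature this abstract relies on, the high gamma amplitude as a firing-rate proxy whose long-term mean shifts between behavioral states, can be sketched with an FFT-based analytic-signal envelope. The data below are synthetic and the state names ("dialogue", "idle") are assumptions for the demo; the real analysis would operate on band-passed ECoG/sEEG channels.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

# synthetic 110 Hz "high gamma" carrier whose amplitude depends on state
fs = 1000
t = np.arange(fs) / fs
dialogue = 2.0 * np.sin(2 * np.pi * 110 * t)   # hypothetical "engaged" state
idle = 0.5 * np.sin(2 * np.pi * 110 * t)       # hypothetical "idle" state

mean_dialogue = envelope(dialogue).mean()      # long-term mean of the envelope
mean_idle = envelope(idle).mean()
```

For these pure sinusoids the envelope recovers the carrier amplitude, so the long-term mean of the high gamma envelope cleanly separates the two simulated states; variance shifts and covariance structure would be computed on the same envelopes across channels.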


Subject(s)
Electrocorticography, Electroencephalography, Brain Mapping, Humans, Normal Distribution