ABSTRACT
Our narrative experience is organized by the neural substrate that underlies episodic memory: event boundaries segment the continuous narrative into discrete events, permitting a replay process that consolidates each event into narrative memory. High frequency oscillations (HFOs) are a potential mechanism for synchronizing neural activity during these processes. Here, we use intracranial recordings from participants viewing and freely recalling a naturalistic stimulus. We show that hippocampal HFOs increase following event boundaries and that coincident hippocampal-cortical HFOs (co-HFOs) occur in cortical regions previously shown to underlie event segmentation (inferior parietal, precuneus, lateral occipital, and inferior frontal cortices). We also show that event-specific patterns of co-HFOs that occur during event viewing re-occur following the subsequent three event boundaries (in decaying fashion) and also during recall. This is consistent with models that support replay as a mechanism for memory consolidation. Hence, HFOs may coordinate activity across brain regions serving widespread event segmentation, encode naturalistic memory, and bind representations to assemble memory of a coherent, continuous experience.
ABSTRACT
Sensory stimulation of the brain reverberates in its recurrent neuronal networks. However, current computational models of brain activity do not separate immediate sensory responses from intrinsic recurrent dynamics. We apply a vector-autoregressive model with external input (VARX), combining the concepts of "functional connectivity" and "encoding models", to intracranial recordings in humans. We find that the recurrent connectivity during rest is largely unaltered during movie watching. The intrinsic recurrent dynamics enhance and prolong the neural responses to scene cuts, eye movements, and sounds. Failing to account for these exogenous inputs leads to spurious connections in the intrinsic "connectivity". The model shows that an external stimulus can reduce intrinsic noise. It also shows that sensory areas have mostly outgoing connections, whereas higher-order brain areas have mostly incoming connections. We conclude that the response to an external audiovisual stimulus can largely be attributed to the intrinsic dynamics of the brain, already observed during rest.
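The core idea of a VARX model can be sketched in a few lines: the neural signal at each time step is modeled as a linear function of its own recent past (the intrinsic recurrent dynamics, matrix A) plus an exogenous input (the encoding model, matrix B), and both coefficient matrices can be recovered jointly by least squares. The toy simulation below is an illustrative sketch, not the authors' implementation; the dimensions, lag order, and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2-channel system: y[t] = A @ y[t-1] + B @ x[t] + noise,
# where A is the intrinsic recurrent dynamic and B the stimulus response.
A = np.array([[0.5, 0.2], [0.0, 0.4]])   # recurrent "connectivity"
B = np.array([[1.0], [0.3]])             # response to one external input
T = 5000
x = rng.standard_normal((T, 1))          # exogenous input (e.g., scene cuts)
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + B @ x[t] + 0.1 * rng.standard_normal(2)

# Fit the VARX model by ordinary least squares:
# regress y[t] on the lagged signal y[t-1] and the input x[t].
Z = np.hstack([y[:-1], x[1:]])           # predictors, shape (T-1, 3)
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
A_hat, B_hat = coef[:2].T, coef[2:].T    # recovered intrinsic and input terms

print(np.round(A_hat, 2))
print(np.round(B_hat, 2))
```

With enough samples the estimates closely recover the generating A and B, which is what lets the model attribute variance to intrinsic dynamics versus exogenous input.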
ABSTRACT
Focusing on a specific conversation amidst multiple interfering talkers is challenging, especially for those with hearing loss. Brain-controlled assistive hearing devices aim to alleviate this problem by enhancing the attended speech based on the listener's neural signals using auditory attention decoding (AAD). Departing from conventional AAD studies that relied on oversimplified scenarios with stationary talkers, we present a realistic AAD task that involves multiple talkers taking turns as they continuously move in space in background noise. Invasive electroencephalography (iEEG) data were collected from three neurosurgical patients as they focused on one of two moving conversations. We present an enhanced brain-controlled assistive hearing system that combines AAD with a binaural, speaker-independent speech separation model. The separation model unmixes talkers while preserving their spatial locations and provides talker trajectories to the neural decoder to improve AAD accuracy. Subjective and objective evaluations show that the proposed system enhances speech intelligibility and facilitates conversation tracking while maintaining spatial cues and voice quality in challenging acoustic environments. This research demonstrates the potential of this approach in real-world scenarios and marks a significant step toward developing assistive hearing technologies that adapt to the intricate dynamics of everyday auditory experiences.
ABSTRACT
Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded an attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice sensitive sites and single-feature location sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites. SIGNIFICANCE STATEMENT: Listeners effortlessly extract auditory objects from complex, naturalistic spatial scenes consisting of multiple sound sources. 
Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multidimensional auditory object formation in complex, naturalistic listening scenes. HIGHLIGHTS:
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice sensitive and location sensitive sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
ABSTRACT
We present a method for direct imaging of the electric field networks in the human brain from electroencephalography (EEG) data with much higher temporal and spatial resolution than functional MRI (fMRI), and without the concomitant distortions. The method is validated using simultaneous EEG/fMRI data in healthy subjects, intracranial EEG data in epilepsy patients, and a direct comparison with standard EEG analysis in a well-established attention paradigm. The method is then demonstrated in a very large cohort of subjects performing a standard gambling task designed to activate the brain's 'reward circuit'. The technique uses the output from standard EEG systems and thus has potential for immediate benefit to a broad range of important basic scientific and clinical questions concerning brain electrical activity, while also providing an inexpensive and portable alternative to fMRI.
ABSTRACT
SUMMARY: Current preoperative evaluation of epilepsy can be challenging because of the lack of a comprehensive view of the network's dysfunction. To demonstrate the utility of our multimodal neurophysiology and neuroimaging integration approach in presurgical evaluation, we present a proof-of-concept in a patient with nonlesional frontal lobe epilepsy who underwent two resective surgeries to achieve seizure control. We conducted a post-hoc investigation using four neuroimaging and neurophysiology modalities: diffusion tensor imaging, resting-state functional MRI, and stereoelectroencephalography at rest and during seizures. We computed region-of-interest-based connectivity for each modality and applied betweenness centrality to identify key network hubs across modalities. Our results revealed that, although seizure semiology and stereoelectroencephalography indicated dysfunction in the right orbitofrontal region, the maximum overlap of hubs across modalities extended to right temporal areas. Notably, the right middle temporal lobe region served as an overlap hub across the diffusion tensor imaging, resting-state functional MRI, and resting stereoelectroencephalography networks and was included in the resected area only in the second surgery, which led to long-term seizure control for this patient. Our findings demonstrate that transmodal hubs can help identify key areas of the epileptogenic network. This case therefore presents a promising perspective on using a multimodal approach to improve the presurgical evaluation of patients with epilepsy.
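Betweenness centrality, the hub measure used here, credits each node with the fraction of shortest paths between other node pairs that pass through it; an overlap hub is then a node that ranks highest across all modality-specific graphs. The sketch below uses a brute-force implementation and entirely hypothetical per-modality graphs (region names "A" through "E" and all edges are made up for illustration):

```python
import itertools
from collections import deque

def betweenness(adj):
    """Brute-force betweenness centrality for a small unweighted graph.
    adj maps node -> set of neighbours. Fine for toy graphs only."""
    nodes = sorted(adj)
    bc = {v: 0.0 for v in nodes}
    for s, t in itertools.combinations(nodes, 2):
        paths, best = [], None            # collect all shortest s-t paths
        queue = deque([[s]])
        while queue:
            path = queue.popleft()
            if best is not None and len(path) > best:
                continue                  # longer than shortest: discard
            if path[-1] == t:
                best = len(path)
                paths.append(path)
                continue
            for nxt in adj[path[-1]]:
                if nxt not in path:
                    queue.append(path + [nxt])
        for path in paths:
            for v in path[1:-1]:          # credit interior nodes
                bc[v] += 1.0 / len(paths)
    return bc

# Hypothetical per-modality connectivity graphs; in the study these would
# come from DTI, resting-state fMRI, and SEEG connectivity matrices.
modalities = {
    "dti":  {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}},
    "fmri": {"A": {"C"}, "B": {"C"}, "C": {"A", "B", "D"}, "D": {"C"}},
    "seeg": {"B": {"C"}, "C": {"B", "D", "E"}, "D": {"C"}, "E": {"C"}},
}

top_hubs = {}
for name, graph in modalities.items():
    bc = betweenness(graph)
    top_hubs[name] = max(bc, key=bc.get)  # highest-betweenness node per modality
overlap = set(top_hubs.values())          # hub shared across modalities
print(top_hubs, overlap)
```

In this toy example region "C" is the top hub in every modality, mirroring how the study's overlap hub was identified across DTI, fMRI, and SEEG networks. Real analyses would use an optimized algorithm (e.g., Brandes') on weighted region-of-interest graphs.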
Subject(s)
Diffusion Tensor Imaging, Electroencephalography, Magnetic Resonance Imaging, Multimodal Imaging, Humans, Brain/surgery, Brain/physiopathology, Brain/diagnostic imaging, Electroencephalography/methods, Epilepsy/surgery, Epilepsy/physiopathology, Epilepsy/diagnostic imaging, Frontal Lobe Epilepsy/surgery, Frontal Lobe Epilepsy/physiopathology, Frontal Lobe Epilepsy/diagnostic imaging, Magnetic Resonance Imaging/methods, Neuroimaging/methods
ABSTRACT
Neural representations of perceptual decision formation that are abstracted from specific motor requirements have previously been identified in humans using non-invasive electrophysiology; however, it is currently unclear where these originate in the brain. Here we capitalized on the high spatiotemporal precision of intracranial EEG to localize such abstract decision signals. Participants undergoing invasive electrophysiological monitoring for epilepsy were asked to judge the direction of random-dot stimuli and respond either with a speeded button press (N = 24), or vocally, after a randomized delay (N = 12). We found a widely distributed motor-independent network of regions where high-frequency activity exhibited key characteristics consistent with evidence accumulation, including a gradual buildup that was modulated by the strength of the sensory evidence, and an amplitude that predicted participants' choice accuracy and response time. Our findings offer a new view on the brain networks governing human decision-making.
Subject(s)
Decision Making, Electrocorticography, Humans, Adult, Male, Decision Making/physiology, Female, Electrocorticography/methods, Brain/physiology, Epilepsy/physiopathology, Young Adult, Electroencephalography, Reaction Time/physiology, Brain Mapping/methods, Middle Aged
ABSTRACT
Humans can easily tune in to one talker in a multitalker environment while still picking up bits of background speech; however, it remains unclear how we perceive speech that is masked and to what degree non-target speech is processed. Some models suggest that perception can be achieved through glimpses, which are spectrotemporal regions where a talker has more energy than the background. Other models, however, require the recovery of the masked regions. To clarify this issue, we directly recorded from primary and non-primary auditory cortex (AC) in neurosurgical patients as they attended to one talker in multitalker speech and trained temporal response function models to predict high-gamma neural activity from glimpsed and masked stimulus features. We found that glimpsed speech is encoded at the level of phonetic features for target and non-target talkers, with enhanced encoding of target speech in non-primary AC. In contrast, encoding of masked phonetic features was found only for the target, with a greater response latency and distinct anatomical organization compared to glimpsed phonetic features. These findings suggest separate mechanisms for encoding glimpsed and masked speech and provide neural evidence for the glimpsing model of speech perception.
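The temporal response function models mentioned above are typically estimated by regularized regression of the neural signal on time-lagged copies of the stimulus features. The sketch below recovers a known response kernel from simulated data; the kernel shape, lag count, and ridge parameter are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a neural response as a lagged filter applied to a stimulus
# feature (e.g., a binary phonetic-feature time series), plus noise.
T, n_lags = 3000, 8
stim = (rng.random(T) < 0.1).astype(float)       # sparse feature onsets
true_trf = np.exp(-np.arange(n_lags) / 2.0)      # decaying response kernel
neural = np.convolve(stim, true_trf)[:T] + 0.1 * rng.standard_normal(T)

# Build the time-lagged design matrix: column k holds stim delayed by k samples.
X = np.zeros((T, n_lags))
for k in range(n_lags):
    X[k:, k] = stim[:T - k]

# Ridge regression (regularized least squares) recovers the TRF.
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ neural)
print(np.round(trf_hat, 2))
```

The same design extends to multivariate features (glimpsed vs. masked phonetic features as separate predictor blocks), which is how encoding of each feature set can be compared.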
Subject(s)
Speech Perception, Speech, Humans, Speech/physiology, Acoustic Stimulation, Phonetics, Speech Perception/physiology, Reaction Time
ABSTRACT
Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements, or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. We investigate the neural responses to these sources of novelty during film viewing. We analyzed intracranial recordings in humans across 6328 electrodes from 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high- or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.
Subject(s)
Brain, Semantics, Humans, Brain/physiology, Eye Movements, Saccades, Temporal Lobe/physiology, Photic Stimulation
ABSTRACT
The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic features, prelexical phonotactics, word frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved and not discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.
Subject(s)
Auditory Cortex, Speech Perception, Humans, Auditory Cortex/physiology, Speech Perception/physiology, Auditory Perception/physiology, Speech/physiology, Phonetics
ABSTRACT
In natural "active" vision, humans and other primates use eye movements (saccades) to sample bits of information from visual scenes. In the visual cortex, non-retinal signals linked to saccades shift visual cortical neurons into a high excitability state as each saccade ends. The extent of this saccadic modulation outside of the visual system is unknown. Here, we show that during natural viewing, saccades modulate excitability in numerous auditory cortical areas with a temporal pattern complementary to that seen in visual areas. Control somatosensory cortical recordings indicate that the temporal pattern is unique to auditory areas. Bidirectional functional connectivity patterns suggest that these effects may arise from regions involved in saccade generation. We propose that by using saccadic signals to yoke excitability states in auditory areas to those in visual areas, the brain can improve information processing in complex natural settings.
Subject(s)
Auditory Cortex, Neocortex, Animals, Humans, Saccades, Eye Movements, Ocular Vision, Primates
ABSTRACT
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
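Adaptive gain control of the sort the model is described as performing can be illustrated with a simple divisive-normalization loop: the output is divided by a leaky running estimate of the recent input level, so responses re-stabilize after a sudden change in background level. This is a toy sketch under assumed parameters, not the DNN itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy signal whose background level jumps 5x halfway through (a "noise step").
level = np.concatenate([np.full(500, 1.0), np.full(500, 5.0)])
x = level * rng.standard_normal(1000)

alpha, sigma = 0.02, 0.1                  # adaptation rate, semi-saturation
gain_est, y = 1.0, np.empty_like(x)
for t, xt in enumerate(x):
    gain_est += alpha * (abs(xt) - gain_est)  # leaky estimate of recent level
    y[t] = xt / (sigma + gain_est)            # divisive normalization

# After adaptation, output variance is similar before and after the step,
# even though input variance changed 25-fold.
var_before = y[300:500].var()
var_after = y[800:1000].var()
print(round(var_before, 2), round(var_after, 2))
```

The receptive-field shape changes reported in the paper (altered inhibitory regions) are a separate, stimulus-dependent effect that a static normalization like this cannot capture, which is the motivation for the DNN model.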
Subject(s)
Auditory Cortex, Humans, Acoustic Stimulation/methods, Auditory Perception, Neurons, Neural Networks (Computer)
ABSTRACT
Eye tracking and other behavioral measurements collected from patient-participants in their hospital rooms afford a unique opportunity to study natural behavior for basic and clinical translational research. We describe an immersive social and behavioral paradigm implemented in patients undergoing evaluation for surgical treatment of epilepsy, with electrodes implanted in the brain to determine the source of their seizures. Our studies entail collecting eye tracking with other behavioral and psychophysiological measurements from patient-participants during unscripted behavior, including social interactions with clinical staff, friends, and family in the hospital room. This approach affords a unique opportunity to study the neurobiology of natural social behavior, though it requires carefully addressing distinct logistical, technical, and ethical challenges. Collecting neurophysiological data synchronized to behavioral and psychophysiological measures helps us to study the relationship between behavior and physiology. Combining across these rich data sources while participants eat, read, converse with friends and family, etc., enables clinical-translational research aimed at understanding the participants' disorders and clinician-patient interactions, as well as basic research into natural, real-world behavior. We discuss data acquisition, quality control, annotation, and analysis pipelines that are required for our studies. We also discuss the clinical, logistical, ethical, and privacy considerations critical to working in the hospital setting.
Subject(s)
Brain, Social Behavior, Humans, Privacy
ABSTRACT
How the human auditory cortex represents spatially separated simultaneous talkers and how talkers' locations and voices modulate the neural representations of attended and unattended speech are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice appeared only in the auditory areas with longer latencies, but attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which could be further tuned by top-down attention to the location and voice of the talker.
Subject(s)
Auditory Cortex, Speech Perception, Voice, Auditory Cortex/physiology, Humans, Speech, Speech Perception/physiology, Temporal Lobe
ABSTRACT
Electrophysiological oscillations in the brain have been shown to occur as multicycle events, with onset and offset dependent on behavioral and cognitive state. To provide a baseline for state-related and task-related events, we quantified oscillation features in resting-state recordings. We developed an open-source wavelet-based tool to detect and characterize such oscillation events (OEvents) and exemplify the use of this tool in both simulations and two invasively recorded electrophysiology datasets: one from human, and one from nonhuman primate (NHP) auditory system. After removing incidentally occurring event-related potentials (ERPs), we used OEvents to quantify oscillation features. We identified ~2 million oscillation events, classified within traditional frequency bands: δ, θ, α, β, low γ, γ, and high γ. Oscillation events of 1-44 cycles could be identified in at least one frequency band 90% of the time in human and NHP recordings. Individual oscillation events were characterized by nonconstant frequency and amplitude. This result necessarily contrasts with prior studies which assumed frequency constancy, but is consistent with evidence from event-associated oscillations. We measured oscillation event duration, frequency span, and waveform shape. Oscillations tended to exhibit multiple cycles per event, verifiable by comparing filtered to unfiltered waveforms. In addition to the clear intraevent rhythmicity, there was also evidence of interevent rhythmicity within bands, demonstrated by finding that coefficient of variation of interval distributions and Fano factor (FF) measures differed significantly from a Poisson distribution assumption. Overall, our study provides an easy-to-use tool to study oscillation events at the single-trial level or in ongoing recordings, and demonstrates that rhythmic, multicycle oscillation events dominate auditory cortical dynamics.
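The two interevent rhythmicity statistics mentioned, the coefficient of variation (CV) of inter-event intervals and the Fano factor (FF) of event counts, are both approximately 1 for a Poisson process, so departures from 1 indicate temporal structure. A minimal sketch on simulated event times (rates, jitter, and window size are illustrative, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

def cv_and_fano(event_times, window):
    """CV of inter-event intervals and Fano factor (variance/mean of
    event counts in fixed windows); both ~1 for a Poisson process."""
    intervals = np.diff(np.sort(event_times))
    cv = intervals.std() / intervals.mean()
    edges = np.arange(0.0, event_times.max(), window)
    counts, _ = np.histogram(event_times, bins=edges)
    fano = counts.var() / counts.mean()
    return cv, fano

# Poisson events: exponential intervals, ~10 events/s over ~1000 s.
poisson_times = np.cumsum(rng.exponential(0.1, size=10000))
# Rhythmic events: near-regular 0.1 s intervals with small jitter.
rhythmic_times = np.cumsum(0.1 + 0.01 * rng.standard_normal(10000))

cv_p, ff_p = cv_and_fano(poisson_times, window=1.0)
cv_r, ff_r = cv_and_fano(rhythmic_times, window=1.0)
print(round(cv_p, 2), round(ff_p, 2), round(cv_r, 2), round(ff_r, 2))
```

The rhythmic process yields CV and FF well below 1, which is the signature of interevent rhythmicity the study tested for against the Poisson assumption.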
Subject(s)
Auditory Cortex, Animals, Brain, Evoked Potentials, Humans, Periodicity, Primates
ABSTRACT
Speech perception in noise is a challenging everyday task with which many listeners have difficulty. Here, we report a case in which electrical brain stimulation of implanted intracranial electrodes in the left planum temporale (PT) of a neurosurgical patient significantly and reliably improved both the subjective quality (up to 50%) and the objective intelligibility (up to 97%) of speech-in-noise perception. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. The receptive fields of the PT sites whose stimulation improved speech perception were tuned to spectrally broad and rapidly changing sounds. Corticocortical evoked potential analysis revealed that the PT sites were located between the sites in Heschl's gyrus and the superior temporal gyrus. Moreover, the discriminability of speech from nonspeech sounds increased in population neural responses from Heschl's gyrus to the PT to the superior temporal gyrus sites. These findings causally implicate the PT in background noise suppression and may point to a novel neuroprosthetic solution to assist in the challenging task of speech perception in noise. SIGNIFICANCE STATEMENT: Speech perception in noise remains a challenging task for many individuals. Here, we present a case in which electrical brain stimulation of intracranially implanted electrodes in the planum temporale of a neurosurgical patient significantly improved both the subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech perception in noise. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. Our local and network-level functional analyses placed the planum temporale sites between the sites in the primary auditory areas in Heschl's gyrus and nonprimary auditory areas in the superior temporal gyrus. 
These findings causally implicate planum temporale in acoustic scene analysis and suggest potential neuroprosthetic applications to assist hearing in noise.
Subject(s)
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Cortex/physiology, Brain, Brain Mapping/methods, Hearing, Humans, Magnetic Resonance Imaging/methods, Speech/physiology, Speech Perception/physiology
ABSTRACT
The progress of therapeutic neuromodulation greatly depends on improving stimulation parameters to most efficiently induce neuroplasticity effects. Intermittent θ-burst stimulation (iTBS), a form of electrical stimulation that mimics natural brain activity patterns, has proved to efficiently induce such effects in animal studies and rhythmic transcranial magnetic stimulation studies in humans. However, little is known about the potential neuroplasticity effects of iTBS applied through intracranial electrodes in humans. This study characterizes the physiological effects of intracranial iTBS in humans and compares them with α-frequency stimulation, another frequently used neuromodulatory pattern. We applied these two stimulation patterns to well-defined regions in the sensorimotor cortex, which elicited contralateral hand muscle contractions during clinical mapping, in patients with epilepsy implanted with intracranial electrodes. Treatment effects were evaluated using oscillatory coherence across areas connected to the treatment site, as defined with corticocortical evoked potentials. Our results show that iTBS increases coherence in the β-frequency band within the sensorimotor network, indicating a potential neuroplasticity effect. The effect is specific to the sensorimotor system, the β band, and the stimulation pattern, and outlasted the stimulation period by ~3 min. The effect occurred in four out of seven subjects, depending on the buildup of the effect during iTBS treatment and other patterns of oscillatory activity related to ceiling effects within the β band and to preexistent coherence within the α band. 
By characterizing the neurophysiological effects of iTBS within well-defined cortical networks, we hope to provide an electrophysiological framework that allows clinicians and researchers to optimize brain stimulation protocols, which may have translational value. NEW & NOTEWORTHY: θ-Burst stimulation (TBS) protocols in transcranial magnetic stimulation studies have shown improved treatment efficacy in a variety of neuropsychiatric disorders. The optimal protocol to induce neuroplasticity in invasive direct electrical stimulation approaches is not known. We report that intracranial TBS applied in human sensorimotor cortex increases local coherence of preexistent β rhythms. The effect is specific to the stimulation frequency and the stimulated network and outlasts the stimulation period by ~3 min.
Subject(s)
Beta Rhythm/physiology, Electric Stimulation Therapy, Electric Stimulation, Electrocorticography, Nerve Net/physiology, Neuronal Plasticity/physiology, Sensorimotor Cortex/physiology, Adult, Female, Humans, Male, Young Adult
ABSTRACT
Almost 100 years ago, experiments involving electrically stimulating and recording from the brain and the body launched new discoveries and debates on how electricity, movement, and thoughts are related. Decades later, the development of brain-computer interface technology began, which now targets a wide range of applications. Potential uses include augmentative communication for locked-in patients and restoring sensorimotor function in those who are battling disease or have suffered traumatic injury. Technical and surgical challenges still surround the development of brain-computer technology, however, before it can be widely deployed. In this review we explore these challenges, historical perspectives, and the remarkable achievements of clinical study participants who have bravely forged new paths for future beneficiaries.
ABSTRACT
Millions of people worldwide suffer motor or sensory impairment due to stroke, spinal cord injury, multiple sclerosis, traumatic brain injury, diabetes, and motor neuron diseases such as ALS (amyotrophic lateral sclerosis). A brain-computer interface (BCI), which links the brain directly to a computer, offers a new way to study the brain and potentially restore impairments in patients living with these debilitating conditions. One of the challenges currently facing BCI technology, however, is to minimize surgical risk while maintaining efficacy. Minimally invasive techniques, such as stereoelectroencephalography (SEEG), have become more widely used in clinical applications in epilepsy patients since they can lead to fewer complications. SEEG depth electrodes also give access to sulcal and white matter areas of the brain but have not been widely studied in brain-computer interfaces. Here we show the first demonstration of decoding sulcal and subcortical activity related to both movement and tactile sensation in the human hand. Furthermore, we have compared decoding performance in SEEG-based depth recordings versus those obtained with electrocorticography (ECoG) electrodes placed on gyri. Initial poor decoding performance and the observation that most neural modulation patterns varied in amplitude trial-to-trial and were transient (significantly shorter than the sustained finger movements studied) led to the development of a feature selection method based on a repeatability metric using temporal correlation. This algorithm was used to isolate features that consistently repeated (required for accurate decoding) and possessed information content related to movement or touch-related stimuli. We subsequently used these features, along with deep learning methods, to automatically classify various motor and sensory events for individual fingers with high accuracy. 
Repeating features were found in sulcal, gyral, and white matter areas and were predominantly phasic or phasic-tonic across a wide frequency range for both HD (high density) ECoG and SEEG recordings. These findings motivated the use of long short-term memory (LSTM) recurrent neural networks (RNNs), which are well-suited to handling transient input features. Combining temporal correlation-based feature selection with LSTM yielded decoding accuracies of up to 92.04 ± 1.51% for hand movements, up to 91.69 ± 0.49% for individual finger movements, and up to 83.49 ± 0.72% for focal tactile stimuli to individual finger pads while using a relatively small number of SEEG electrodes. These findings may lead to a new class of minimally invasive brain-computer interface systems in the future, increasing their applicability to a wide variety of conditions.
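The repeatability metric described above, selecting features whose single-trial time courses consistently correlate across trials, can be sketched as the mean pairwise Pearson correlation between trials of each candidate feature. The data, threshold, and feature names below are hypothetical stand-ins for the SEEG/ECoG features, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)

def repeatability(trials):
    """Mean pairwise Pearson correlation of single-trial time courses.
    trials: array of shape (n_trials, n_samples) for one candidate feature."""
    r = np.corrcoef(trials)                  # trial-by-trial correlation matrix
    iu = np.triu_indices_from(r, k=1)        # unique trial pairs
    return r[iu].mean()

# Toy features: one with a transient response that repeats across trials,
# one that is pure noise (hypothetical data standing in for neural features).
n_trials, n_samples = 20, 100
template = np.exp(-0.5 * ((np.arange(n_samples) - 30) / 5.0) ** 2)
repeating = template + 0.3 * rng.standard_normal((n_trials, n_samples))
noise_only = rng.standard_normal((n_trials, n_samples))

scores = {"repeating": repeatability(repeating),
          "noise_only": repeatability(noise_only)}
selected = [name for name, s in scores.items() if s > 0.2]  # assumed threshold
print(scores, selected)
```

Features passing the threshold (here, only the transient-but-repeating one) would then be fed to the downstream LSTM classifier, matching the paper's rationale that consistency across trials, not sustained activity, is what makes a feature decodable.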
ABSTRACT
BACKGROUND: Paralysis and neuropathy, affecting millions of people worldwide, can be accompanied by significant loss of somatosensation. With tactile sensation being central to achieving dexterous movement, brain-computer interface (BCI) researchers have used intracortical and cortical surface electrical stimulation to restore somatotopically-relevant sensation to the hand. However, these approaches are restricted to stimulating the gyral areas of the brain. Since representation of distal regions of the hand extends into the sulcal regions of human primary somatosensory cortex (S1), it has been challenging to evoke sensory percepts localized to the fingertips. OBJECTIVE/HYPOTHESIS: Targeted stimulation of sulcal regions of S1, using stereoelectroencephalography (SEEG) depth electrodes, can evoke focal sensory percepts in the fingertips. METHODS: Two participants with intractable epilepsy received cortical stimulation both at the gyri via high-density electrocorticography (HD-ECoG) grids and in the sulci via SEEG depth electrode leads. We characterized the evoked sensory percepts localized to the hand. RESULTS: We show that highly focal percepts can be evoked in the fingertips of the hand through sulcal stimulation. fMRI, myelin content, and cortical thickness maps from the Human Connectome Project elucidated specific cortical areas and sub-regions within S1 that evoked these focal percepts. Within-participant comparisons showed that percepts evoked by sulcal stimulation via SEEG electrodes were significantly more focal (80% less area; p = 0.02) and localized to the fingertips more often, than by gyral stimulation via HD-ECoG electrodes. Finally, sulcal locations with consistent modulation of high-frequency neural activity during mechanical tactile stimulation of the fingertips showed the same somatotopic correspondence as cortical stimulation. 
CONCLUSIONS: Our findings indicate minimally invasive sulcal stimulation via SEEG electrodes could be a clinically viable approach to restoring sensation.