1 - 20 of 47
1.
bioRxiv ; 2024 May 14.
Article En | MEDLINE | ID: mdl-38798551

Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded the attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice-sensitive sites and single-feature location-sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites. SIGNIFICANCE STATEMENT: Listeners effortlessly extract auditory objects from complex, naturalistic spatial acoustic scenes consisting of multiple sound sources. 
Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes. HIGHLIGHTS:
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice-selective and location-selective sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
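The temporal-coherence measure invoked in this abstract can be illustrated with a minimal sketch. This is not the authors' code; the sampling rate, the simulated 8 Hz shared drive, and the noise levels are illustrative assumptions.

```python
# Hedged sketch: magnitude-squared coherence between two simulated neural
# sites that share a common oscillatory drive (all parameters illustrative).
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 500  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)

# A shared 8 Hz drive stands in for attention-enhanced coupling between
# a voice-sensitive site and a location-sensitive site.
shared = np.sin(2 * np.pi * 8 * t)
site_voice = shared + rng.normal(scale=1.0, size=t.size)
site_location = shared + rng.normal(scale=1.0, size=t.size)

f, cxy = coherence(site_voice, site_location, fs=fs, nperseg=1024)
peak_coherence = cxy[np.argmin(np.abs(f - 8))]  # coherence near the shared frequency
```

Comparing such coherence spectra between attended and unattended conditions is one way to quantify attention-dependent coupling between feature-selective sites.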

2.
Res Sq ; 2024 Apr 12.
Article En | MEDLINE | ID: mdl-38659785

We present a method for direct imaging of the electric field networks in the human brain from electroencephalography (EEG) data with much higher temporal and spatial resolution than functional MRI (fMRI), without the concomitant distortions. The method is validated using simultaneous EEG/fMRI data in healthy subjects, intracranial EEG data in epilepsy patients, and a direct comparison with standard EEG analysis in a well-established attention paradigm. The method is then demonstrated on a very large cohort of subjects performing a standard gambling task designed to activate the brain's 'reward circuit'. The technique uses the output from standard EEG systems and thus has potential for immediate benefit to a broad range of important basic scientific and clinical questions concerning brain electrical activity, while also providing an inexpensive and portable alternative to fMRI.

3.
Nat Hum Behav ; 8(4): 758-770, 2024 Apr.
Article En | MEDLINE | ID: mdl-38366105

Neural representations of perceptual decision formation that are abstracted from specific motor requirements have previously been identified in humans using non-invasive electrophysiology; however, it is currently unclear where these originate in the brain. Here we capitalized on the high spatiotemporal precision of intracranial EEG to localize such abstract decision signals. Participants undergoing invasive electrophysiological monitoring for epilepsy were asked to judge the direction of random-dot stimuli and respond either with a speeded button press (N = 24), or vocally, after a randomized delay (N = 12). We found a widely distributed motor-independent network of regions where high-frequency activity exhibited key characteristics consistent with evidence accumulation, including a gradual buildup that was modulated by the strength of the sensory evidence, and an amplitude that predicted participants' choice accuracy and response time. Our findings offer a new view on the brain networks governing human decision-making.


Decision Making , Electrocorticography , Humans , Adult , Male , Decision Making/physiology , Female , Electrocorticography/methods , Brain/physiology , Epilepsy/physiopathology , Young Adult , Electroencephalography , Reaction Time/physiology , Brain Mapping/methods , Middle Aged
4.
J Clin Neurophysiol ; 41(4): 317-321, 2024 May 01.
Article En | MEDLINE | ID: mdl-38376938

SUMMARY: Current preoperative evaluation of epilepsy can be challenging because of the lack of a comprehensive view of the network's dysfunctions. To demonstrate the utility of our multimodal neurophysiology and neuroimaging integration approach in presurgical evaluation, we present a proof of concept in a patient with nonlesional frontal lobe epilepsy who underwent two resective surgeries to achieve seizure control. We conducted a post hoc investigation using four neuroimaging and neurophysiology modalities: diffusion tensor imaging, resting-state functional MRI, and stereoelectroencephalography at rest and during seizures. We computed region-of-interest-based connectivity for each modality and applied betweenness centrality to identify key network hubs across modalities. Our results revealed that despite seizure semiology and stereoelectroencephalography indicating dysfunction in the right orbitofrontal region, the maximum overlap of hubs across modalities extended to right temporal areas. Notably, the right middle temporal lobe region served as an overlap hub across the diffusion tensor imaging, resting-state functional MRI, and rest stereoelectroencephalography networks and was included in the resected area only in the second surgery, which achieved long-term seizure control for this patient. Our findings demonstrate that transmodal hubs can help identify key areas of the epileptogenic network. This case therefore presents a promising perspective on using a multimodal approach to improve the presurgical evaluation of patients with epilepsy.
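The hub analysis summarized above (betweenness centrality on per-modality connectivity graphs, then overlap of hubs across modalities) can be sketched as follows. The region names, connectivity weights, and hub count are toy assumptions, not the patient's data.

```python
# Illustrative sketch (not the authors' code): find hub regions per modality
# by betweenness centrality, then intersect hubs across modalities.
import networkx as nx

def top_hubs(weighted_edges, k=1):
    """Return the k nodes with highest betweenness centrality."""
    g = nx.Graph()
    g.add_weighted_edges_from(weighted_edges)
    # Treat stronger connectivity as a shorter path for centrality purposes
    for u, v, d in g.edges(data=True):
        d["distance"] = 1.0 / d["weight"]
    bc = nx.betweenness_centrality(g, weight="distance")
    return sorted(bc, key=bc.get, reverse=True)[:k]

# Toy ROI connectivity for two modalities over the same regions
dti_edges  = [("OFC", "MTL", 0.9), ("MTL", "STG", 0.8), ("OFC", "STG", 0.1)]
fmri_edges = [("OFC", "MTL", 0.7), ("MTL", "STG", 0.9), ("OFC", "STG", 0.2)]

overlap = set(top_hubs(dti_edges)) & set(top_hubs(fmri_edges))
```

In this toy graph the weak direct OFC-STG link routes shortest paths through MTL, so MTL emerges as the transmodal hub, mirroring the logic of the case report.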


Diffusion Tensor Imaging , Electroencephalography , Magnetic Resonance Imaging , Multimodal Imaging , Humans , Electroencephalography/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adult , Male , Female , Brain/surgery , Brain/physiopathology , Brain/diagnostic imaging , Epilepsy/surgery , Epilepsy/physiopathology , Epilepsy/diagnostic imaging , Epilepsy, Frontal Lobe/surgery , Epilepsy, Frontal Lobe/physiopathology , Epilepsy, Frontal Lobe/diagnostic imaging
5.
PLoS Biol ; 21(6): e3002128, 2023 06.
Article En | MEDLINE | ID: mdl-37279203

Humans can easily tune in to one talker in a multitalker environment while still picking up bits of background speech; however, it remains unclear how we perceive speech that is masked and to what degree non-target speech is processed. Some models suggest that perception can be achieved through glimpses, which are spectrotemporal regions where a talker has more energy than the background. Other models, however, require the recovery of the masked regions. To clarify this issue, we directly recorded from primary and non-primary auditory cortex (AC) in neurosurgical patients as they attended to one talker in multitalker speech and trained temporal response function models to predict high-gamma neural activity from glimpsed and masked stimulus features. We found that glimpsed speech is encoded at the level of phonetic features for target and non-target talkers, with enhanced encoding of target speech in non-primary AC. In contrast, encoding of masked phonetic features was found only for the target, with a greater response latency and distinct anatomical organization compared to glimpsed phonetic features. These findings suggest separate mechanisms for encoding glimpsed and masked speech and provide neural evidence for the glimpsing model of speech perception.
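A temporal response function (TRF) of the kind trained in this study is, at its core, a regularized linear map from time-lagged stimulus features to neural activity. Below is a minimal single-feature sketch; the lag count, regularization strength, and synthetic data are illustrative assumptions, not the paper's settings.

```python
# Minimal TRF sketch: ridge regression from lagged stimulus to response.
import numpy as np

def fit_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate TRF weights mapping past stimulus samples to the response."""
    T = len(response)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):          # column `lag` holds stimulus delayed by `lag`
        X[lag:, lag] = stimulus[: T - lag]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)
    return w, X @ w

rng = np.random.default_rng(1)
stim = rng.normal(size=2000)
true_kernel = np.array([0.0, 1.0, 0.5, 0.25])   # ground-truth response kernel
resp = np.convolve(stim, true_kernel)[:2000] + 0.1 * rng.normal(size=2000)

weights, predicted = fit_trf(stim, resp, n_lags=4)
r = np.corrcoef(predicted, resp)[0, 1]          # prediction accuracy
```

In the study, the stimulus side would be glimpsed or masked spectrotemporal and phonetic features rather than a single channel, but the fitting principle is the same.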


Speech Perception , Speech , Humans , Speech/physiology , Acoustic Stimulation , Phonetics , Speech Perception/physiology , Reaction Time
6.
Nat Commun ; 14(1): 2910, 2023 05 22.
Article En | MEDLINE | ID: mdl-37217478

Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. Here, we investigated the neural responses to these sources of novelty during film viewing, analyzing intracranial recordings from 6328 electrodes across 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high- or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.


Brain , Semantics , Humans , Brain/physiology , Eye Movements , Saccades , Temporal Lobe/physiology , Photic Stimulation
7.
Nat Hum Behav ; 7(5): 740-753, 2023 05.
Article En | MEDLINE | ID: mdl-36864134

The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic, prelexical phonotactics, word frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved and not discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.


Auditory Cortex , Speech Perception , Humans , Auditory Cortex/physiology , Speech Perception/physiology , Auditory Perception/physiology , Speech/physiology , Phonetics
8.
Curr Biol ; 33(7): 1185-1195.e6, 2023 04 10.
Article En | MEDLINE | ID: mdl-36863343

In natural "active" vision, humans and other primates use eye movements (saccades) to sample bits of information from visual scenes. In the visual cortex, non-retinal signals linked to saccades shift visual cortical neurons into a high excitability state as each saccade ends. The extent of this saccadic modulation outside of the visual system is unknown. Here, we show that during natural viewing, saccades modulate excitability in numerous auditory cortical areas with a temporal pattern complementary to that seen in visual areas. Control somatosensory cortical recordings indicate that the temporal pattern is unique to auditory areas. Bidirectional functional connectivity patterns suggest that these effects may arise from regions involved in saccade generation. We propose that by using saccadic signals to yoke excitability states in auditory areas to those in visual areas, the brain can improve information processing in complex natural settings.
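A standard way to test for saccade-locked excitability changes of the kind reported here is an event-triggered average of neural activity around saccade offsets. The sketch below uses synthetic data; the bump latency (50-150 ms post-saccade) and amplitude are assumptions for illustration only.

```python
# Sketch: saccade-triggered average reveals a post-saccadic excitability bump.
import numpy as np

def event_triggered_average(signal, event_idx, pre, post):
    """Average signal epochs aligned to event sample indices."""
    epochs = [signal[i - pre : i + post] for i in event_idx
              if i - pre >= 0 and i + post <= len(signal)]
    return np.mean(epochs, axis=0)

rng = np.random.default_rng(4)
fs = 1000                                   # Hz, assumed sampling rate
signal = rng.normal(size=60 * fs)           # 60 s of synthetic activity
saccade_ends = rng.integers(1000, 59 * fs, size=200)

# Inject an excitability increase 50-150 ms after each saccade end
for i in saccade_ends:
    signal[i + 50 : i + 150] += 0.5

eta = event_triggered_average(signal, saccade_ends, pre=200, post=400)
baseline = eta[:200].mean()                 # pre-saccade level
bump = eta[250:350].mean()                  # 50-150 ms post-saccade
```

Comparing such averages across auditory, visual, and somatosensory sites is one way to establish the area-specific temporal patterns the abstract describes.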


Auditory Cortex , Neocortex , Animals , Humans , Saccades , Eye Movements , Vision, Ocular , Primates
9.
Neuroimage ; 266: 119819, 2023 02 01.
Article En | MEDLINE | ID: mdl-36529203

The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
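One way to inspect a nonlinear model's "STRF-like computations over time", as done here, is to linearize the model locally around a given stimulus context; the gradient then acts as a context-dependent receptive field whose magnitude tracks gain. The toy divisive-normalization neuron below is an assumption for illustration, not the paper's DNN.

```python
# Sketch: estimate a locally linear "dynamic receptive field" of a nonlinear
# model by finite differences, and compare its gain across contexts.
import numpy as np

def dynamic_rf(model, x, eps=1e-4):
    """Central finite-difference gradient of a scalar model output w.r.t. x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        grad[i] = (model(x + dx) - model(x - dx)) / (2 * eps)
    return grad

# Toy nonlinear "neuron": linear filter followed by divisive gain control
w = np.array([0.5, 1.0, -0.3])
def neuron(x, sigma=1.0):
    return (w @ x) / (sigma + np.abs(x).mean())

quiet = np.array([0.1, 0.2, 0.1])   # low-level (quiet) stimulus context
noisy = np.array([2.0, 2.5, 2.2])   # high-level (noisy) stimulus context

gain_quiet = np.linalg.norm(dynamic_rf(neuron, quiet))
gain_noisy = np.linalg.norm(dynamic_rf(neuron, noisy))
```

Because the divisive term grows with input level, the local receptive field shrinks in the noisy context, the signature of adaptive gain control described in the abstract.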


Auditory Cortex , Humans , Acoustic Stimulation/methods , Auditory Perception , Neurons , Neural Networks, Computer
10.
Behav Res Methods ; 55(5): 2333-2352, 2023 08.
Article En | MEDLINE | ID: mdl-35877024

Eye tracking and other behavioral measurements collected from patient-participants in their hospital rooms afford a unique opportunity to study natural behavior for basic and clinical translational research. We describe an immersive social and behavioral paradigm implemented in patients undergoing evaluation for surgical treatment of epilepsy, with electrodes implanted in the brain to determine the source of their seizures. Our studies entail collecting eye tracking with other behavioral and psychophysiological measurements from patient-participants during unscripted behavior, including social interactions with clinical staff, friends, and family in the hospital room. This approach affords a unique opportunity to study the neurobiology of natural social behavior, though it requires carefully addressing distinct logistical, technical, and ethical challenges. Collecting neurophysiological data synchronized to behavioral and psychophysiological measures helps us to study the relationship between behavior and physiology. Combining across these rich data sources while participants eat, read, converse with friends and family, etc., enables clinical-translational research aimed at understanding the participants' disorders and clinician-patient interactions, as well as basic research into natural, real-world behavior. We discuss data acquisition, quality control, annotation, and analysis pipelines that are required for our studies. We also discuss the clinical, logistical, and ethical and privacy considerations critical to working in the hospital setting.


Brain , Social Behavior , Humans , Privacy
11.
Curr Biol ; 32(18): 3971-3986.e4, 2022 09 26.
Article En | MEDLINE | ID: mdl-35973430

How the human auditory cortex represents spatially separated simultaneous talkers and how talkers' locations and voices modulate the neural representations of attended and unattended speech are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice only appeared in the auditory areas with longer latencies, but attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which could be further tuned by top-down attention to the location and voice of the talker.
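The reported dissociation, with location shifting the mean response level and spectrotemporal features driving variation around that baseline, can be illustrated with synthetic responses; the gains, offsets, and noise level below are arbitrary assumptions.

```python
# Toy sketch: separable encoding of location (mean level) and
# spectrotemporal content (fluctuation around baseline).
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 500)
spectrotemporal = np.sin(2 * np.pi * 5 * t)  # stand-in for stimulus-driven modulation

def response(mod_gain, baseline_shift):
    """Synthetic site response: baseline set by location, modulation by content."""
    return baseline_shift + mod_gain * spectrotemporal + 0.1 * rng.normal(size=t.size)

contra = response(1.0, 0.8)   # contralateral talker: elevated mean level
ipsi = response(1.0, 0.2)     # ipsilateral talker: lower mean level

mean_diff = contra.mean() - ipsi.mean()   # carries location information
mod_contra = contra.std()                 # carries spectrotemporal information
```

Reading out the mean and the residual fluctuation separately mirrors how the two feature dimensions can be decoded from the same response.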


Auditory Cortex , Speech Perception , Voice , Auditory Cortex/physiology , Humans , Speech , Speech Perception/physiology , Temporal Lobe
12.
eNeuro ; 9(4)2022.
Article En | MEDLINE | ID: mdl-35906065

Electrophysiological oscillations in the brain have been shown to occur as multicycle events, with onset and offset dependent on behavioral and cognitive state. To provide a baseline for state-related and task-related events, we quantified oscillation features in resting-state recordings. We developed an open-source wavelet-based tool to detect and characterize such oscillation events (OEvents) and exemplify the use of this tool in both simulations and two invasively-recorded electrophysiology datasets: one from human, and one from nonhuman primate (NHP) auditory system. After removing incidentally occurring event-related potentials (ERPs), we used OEvents to quantify oscillation features. We identified ∼2 million oscillation events, classified within traditional frequency bands: δ, θ, α, β, low γ, γ, and high γ. Oscillation events of 1-44 cycles could be identified in at least one frequency band 90% of the time in human and NHP recordings. Individual oscillation events were characterized by nonconstant frequency and amplitude. This result necessarily contrasts with prior studies which assumed frequency constancy, but is consistent with evidence from event-associated oscillations. We measured oscillation event duration, frequency span, and waveform shape. Oscillations tended to exhibit multiple cycles per event, verifiable by comparing filtered to unfiltered waveforms. In addition to the clear intraevent rhythmicity, there was also evidence of interevent rhythmicity within bands, demonstrated by finding that coefficient of variation of interval distributions and Fano factor (FF) measures differed significantly from a Poisson distribution assumption. Overall, our study provides an easy-to-use tool to study oscillation events at the single-trial level or in ongoing recordings, and demonstrates that rhythmic, multicycle oscillation events dominate auditory cortical dynamics.
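The interevent-rhythmicity test described in this abstract compares interval statistics against a Poisson baseline, for which both the coefficient of variation (CV) of interevent intervals and the Fano factor of event counts equal 1. A sketch with synthetic event times (not output of the OEvents tool); rates and jitter are illustrative:

```python
# Sketch: CV of interevent intervals and Fano factor vs. Poisson expectations.
import numpy as np

def interval_cv(event_times):
    """Coefficient of variation of interevent intervals (1 for Poisson)."""
    intervals = np.diff(np.sort(event_times))
    return intervals.std() / intervals.mean()

def fano_factor(event_times, window, duration):
    """Variance/mean of event counts in fixed windows (1 for Poisson)."""
    edges = np.arange(0.0, duration + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var() / counts.mean()

rng = np.random.default_rng(2)
# Rhythmic events: roughly periodic with small jitter
rhythmic = np.cumsum(1.0 + 0.05 * rng.normal(size=100))
# Poisson control: exponential intervals with the same mean rate
poisson = np.cumsum(rng.exponential(1.0, size=100))

cv_rhythmic = interval_cv(rhythmic)   # well below 1: rhythmic
cv_poisson = interval_cv(poisson)     # near 1: Poisson-like
ff_poisson = fano_factor(poisson, 5.0, poisson[-1])
```

A CV well below 1 flags interevent rhythmicity of the kind the study reports within frequency bands.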


Auditory Cortex , Animals , Brain , Evoked Potentials , Humans , Periodicity , Primates
13.
J Neurosci ; 42(17): 3648-3658, 2022 04 27.
Article En | MEDLINE | ID: mdl-35347046

Speech perception in noise is a challenging everyday task with which many listeners have difficulty. Here, we report a case in which electrical brain stimulation of implanted intracranial electrodes in the left planum temporale (PT) of a neurosurgical patient significantly and reliably improved subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech in noise perception. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. The receptive fields of the PT sites whose stimulation improved speech perception were tuned to spectrally broad and rapidly changing sounds. Corticocortical evoked potential analysis revealed that the PT sites were located between the sites in Heschl's gyrus and the superior temporal gyrus. Moreover, the discriminability of speech from nonspeech sounds increased in population neural responses from Heschl's gyrus to the PT to the superior temporal gyrus sites. These findings causally implicate the PT in background noise suppression and may point to a novel potential neuroprosthetic solution to assist in the challenging task of speech perception in noise. SIGNIFICANCE STATEMENT: Speech perception in noise remains a challenging task for many individuals. Here, we present a case in which the electrical brain stimulation of intracranially implanted electrodes in the planum temporale of a neurosurgical patient significantly improved both the subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech perception in noise. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. Our local and network-level functional analyses placed the planum temporale sites in between the sites in the primary auditory areas in Heschl's gyrus and nonprimary auditory areas in the superior temporal gyrus. 
These findings causally implicate planum temporale in acoustic scene analysis and suggest potential neuroprosthetic applications to assist hearing in noise.


Auditory Cortex , Speech Perception , Acoustic Stimulation , Auditory Cortex/physiology , Brain , Brain Mapping/methods , Hearing , Humans , Magnetic Resonance Imaging/methods , Speech/physiology , Speech Perception/physiology
14.
J Neurophysiol ; 126(5): 1723-1739, 2021 11 01.
Article En | MEDLINE | ID: mdl-34644179

The progress of therapeutic neuromodulation greatly depends on improving stimulation parameters to most efficiently induce neuroplasticity effects. Intermittent θ-burst stimulation (iTBS), a form of electrical stimulation that mimics natural brain activity patterns, has proved to efficiently induce such effects in animal studies and in rhythmic transcranial magnetic stimulation studies in humans. However, little is known about the potential neuroplasticity effects of iTBS applied through intracranial electrodes in humans. This study characterizes the physiological effects of intracranial iTBS in humans and compares them with α-frequency stimulation, another frequently used neuromodulatory pattern. We applied these two stimulation patterns to well-defined regions in the sensorimotor cortex, which elicited contralateral hand muscle contractions during clinical mapping, in patients with epilepsy implanted with intracranial electrodes. Treatment effects were evaluated using oscillatory coherence across areas connected to the treatment site, as defined with corticocortical evoked potentials. Our results show that iTBS increases coherence in the β-frequency band within the sensorimotor network, indicating a potential neuroplasticity effect. The effect is specific to the sensorimotor system, the β band, and the stimulation pattern, and outlasted the stimulation period by ∼3 min. The effect occurred in four out of seven subjects, depending on the buildup of the effect during iTBS treatment and on other patterns of oscillatory activity related to ceiling effects within the β band and to preexistent coherence within the α band. 
By characterizing the neurophysiological effects of iTBS within well-defined cortical networks, we hope to provide an electrophysiological framework that allows clinicians and researchers to optimize brain stimulation protocols, which may have translational value. NEW & NOTEWORTHY: θ-Burst stimulation (TBS) protocols in transcranial magnetic stimulation studies have shown improved treatment efficacy in a variety of neuropsychiatric disorders. The optimal protocol to induce neuroplasticity in invasive direct electrical stimulation approaches is not known. We report that intracranial TBS applied in human sensorimotor cortex increases local coherence of preexistent β rhythms. The effect is specific to the stimulation frequency and the stimulated network and outlasts the stimulation period by ∼3 min.


Beta Rhythm/physiology , Electric Stimulation Therapy , Electric Stimulation , Electrocorticography , Nerve Net/physiology , Neuronal Plasticity/physiology , Sensorimotor Cortex/physiology , Adult , Female , Humans , Male , Young Adult
15.
Front Neurosci ; 15: 699631, 2021.
Article En | MEDLINE | ID: mdl-34483823

Millions of people worldwide suffer motor or sensory impairment due to stroke, spinal cord injury, multiple sclerosis, traumatic brain injury, diabetes, and motor neuron diseases such as ALS (amyotrophic lateral sclerosis). A brain-computer interface (BCI), which links the brain directly to a computer, offers a new way to study the brain and potentially restore impairments in patients living with these debilitating conditions. One of the challenges currently facing BCI technology, however, is to minimize surgical risk while maintaining efficacy. Minimally invasive techniques, such as stereoelectroencephalography (SEEG), have become more widely used in clinical applications in epilepsy patients since they can lead to fewer complications. SEEG depth electrodes also give access to sulcal and white matter areas of the brain but have not been widely studied in brain-computer interfaces. Here we show the first demonstration of decoding sulcal and subcortical activity related to both movement and tactile sensation in the human hand. Furthermore, we compared decoding performance in SEEG-based depth recordings versus that obtained with electrocorticography (ECoG) electrodes placed on gyri. Initial decoding performance was poor, and most neural modulation patterns varied in amplitude from trial to trial and were transient (significantly shorter than the sustained finger movements studied). These observations led us to develop a feature selection method based on a repeatability metric using temporal correlation, which isolates features that consistently repeat (required for accurate decoding) and carry information about movement or touch-related stimuli. We subsequently used these features, along with deep learning methods, to automatically classify various motor and sensory events for individual fingers with high accuracy. 
Repeating features were found in sulcal, gyral, and white matter areas and were predominantly phasic or phasic-tonic across a wide frequency range for both HD (high-density) ECoG and SEEG recordings. These findings motivated the use of long short-term memory (LSTM) recurrent neural networks (RNNs), which are well suited to handling transient input features. Combining temporal correlation-based feature selection with LSTMs yielded decoding accuracies of up to 92.04 ± 1.51% for hand movements, up to 91.69 ± 0.49% for individual finger movements, and up to 83.49 ± 0.72% for focal tactile stimuli to individual finger pads, while using a relatively small number of SEEG electrodes. These findings may lead to a new class of minimally invasive brain-computer interface systems, broadening their applicability to a wide variety of conditions.
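The repeatability metric described above can be sketched as the mean pairwise temporal correlation of a feature's response across repeated trials, with a selection threshold. The response shapes, noise levels, and the 0.3 threshold below are illustrative assumptions, not the study's parameters.

```python
# Sketch: select features whose trial-to-trial responses repeat consistently.
import numpy as np
from itertools import combinations

def repeatability(trials):
    """Mean pairwise correlation across trials; trials: (n_trials, n_timepoints)."""
    corrs = [np.corrcoef(trials[i], trials[j])[0, 1]
             for i, j in combinations(range(len(trials)), 2)]
    return float(np.mean(corrs))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.3) ** 2) / 0.01)  # transient, phasic response shape

# Feature A repeats the template with amplitude jitter; feature B is noise
feat_a = np.stack([(0.8 + 0.4 * rng.random()) * template
                   + 0.2 * rng.normal(size=t.size) for _ in range(10)])
feat_b = rng.normal(size=(10, t.size))

scores = {"feat_a": repeatability(feat_a), "feat_b": repeatability(feat_b)}
selected = [name for name, s in scores.items() if s > 0.3]  # assumed threshold
```

Note that amplitude jitter does not hurt the score, since correlation is amplitude-invariant; this matches the observation that repeatable-but-variable-amplitude features still support accurate decoding.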

16.
Bioelectron Med ; 7(1): 14, 2021 Sep 22.
Article En | MEDLINE | ID: mdl-34548098

Almost 100 years ago experiments involving electrically stimulating and recording from the brain and the body launched new discoveries and debates on how electricity, movement, and thoughts are related. Decades later the development of brain-computer interface technology began, which now targets a wide range of applications. Potential uses include augmentative communication for locked-in patients and restoring sensorimotor function in those who are battling disease or have suffered traumatic injury. Technical and surgical challenges still surround the development of brain-computer technology, however, before it can be widely deployed. In this review we explore these challenges, historical perspectives, and the remarkable achievements of clinical study participants who have bravely forged new paths for future beneficiaries.

17.
Brain Stimul ; 14(5): 1184-1196, 2021.
Article En | MEDLINE | ID: mdl-34358704

BACKGROUND: Paralysis and neuropathy, affecting millions of people worldwide, can be accompanied by significant loss of somatosensation. With tactile sensation being central to achieving dexterous movement, brain-computer interface (BCI) researchers have used intracortical and cortical surface electrical stimulation to restore somatotopically-relevant sensation to the hand. However, these approaches are restricted to stimulating the gyral areas of the brain. Since representation of distal regions of the hand extends into the sulcal regions of human primary somatosensory cortex (S1), it has been challenging to evoke sensory percepts localized to the fingertips. OBJECTIVE/HYPOTHESIS: Targeted stimulation of sulcal regions of S1, using stereoelectroencephalography (SEEG) depth electrodes, can evoke focal sensory percepts in the fingertips. METHODS: Two participants with intractable epilepsy received cortical stimulation both at the gyri via high-density electrocorticography (HD-ECoG) grids and in the sulci via SEEG depth electrode leads. We characterized the evoked sensory percepts localized to the hand. RESULTS: We show that highly focal percepts can be evoked in the fingertips of the hand through sulcal stimulation. fMRI, myelin content, and cortical thickness maps from the Human Connectome Project elucidated specific cortical areas and sub-regions within S1 that evoked these focal percepts. Within-participant comparisons showed that percepts evoked by sulcal stimulation via SEEG electrodes were significantly more focal (80% less area; p = 0.02) and localized to the fingertips more often, than by gyral stimulation via HD-ECoG electrodes. Finally, sulcal locations with consistent modulation of high-frequency neural activity during mechanical tactile stimulation of the fingertips showed the same somatotopic correspondence as cortical stimulation. 
CONCLUSIONS: Our findings indicate minimally invasive sulcal stimulation via SEEG electrodes could be a clinically viable approach to restoring sensation.


Hand , Somatosensory Cortex , Electric Stimulation , Electrocorticography , Electrodes, Implanted , Humans , Touch
18.
Bioelectron Med ; 7(1): 7, 2021 May 24.
Article En | MEDLINE | ID: mdl-34024277

There is a broad and growing interest in Bioelectronic Medicine, a dynamic field that continues to generate new approaches in disease treatment. The fourth bioelectronic medicine summit "Technology targeting molecular mechanisms" took place on September 23 and 24, 2020. This virtual meeting was hosted by the Feinstein Institutes for Medical Research, Northwell Health. The summit called international attention to Bioelectronic Medicine as a platform for new developments in science, technology, and healthcare. The meeting was an arena for exchanging new ideas and seeding potential collaborations involving teams in academia and industry. The summit provided a forum for leaders in the field to discuss current progress, challenges, and future developments in Bioelectronic Medicine. The main topics discussed at the summit are outlined here.

19.
Neuroimage ; 235: 118003, 2021 07 15.
Article En | MEDLINE | ID: mdl-33789135

Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG: along this axis, frequency and temporal modulation tuning decreased, while phonemic representation, speaker normalization, speech sensitivity, and response latency increased. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.


Auditory Cortex/physiology , Brain Mapping , Speech Perception/physiology , Adult , Electrocorticography , Epilepsy/diagnosis , Epilepsy/surgery , Female , Humans , Male , Middle Aged , Neurosurgical Procedures
20.
Cereb Cortex Commun ; 2(1): tgaa091, 2021.
Article En | MEDLINE | ID: mdl-33506209

Action and perception are closely linked in many behaviors, necessitating close coordination between sensory and motor neural processes to achieve well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher-auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.

...