Results 1 - 20 of 83
1.
Sci Rep; 14(1): 13784, 2024 06 14.
Article in English | MEDLINE | ID: mdl-38877093

ABSTRACT

Cortico-cortical evoked potentials (CCEPs) elicited by single-pulse electric stimulation (SPES) are widely used to assess effective connectivity between cortical areas and are also implemented in the presurgical evaluation of epileptic patients. Nevertheless, the cortical generators underlying the various components of CCEPs in humans have not yet been elucidated. Our aim was to describe the laminar pattern of the SPES-evoked CCEP components (P1, N1, P2, N2, P3) and to evaluate the similarities between N2 and the downstate of sleep slow waves. We used intra-cortical laminar microelectrodes (LMEs) to record CCEPs evoked by 10 mA bipolar 0.5 Hz electric pulses in seven patients with medically intractable epilepsy implanted with subdural grids. Based on the laminar profile of CCEPs, the latency of components is not layer-dependent; however, their rate of appearance varies across cortical depth and stimulation distance, while the seizure onset zone does not seem to affect the emergence of components. Early neural excitation primarily engages middle and deep layers and propagates to the superficial layers, followed by mainly superficial inhibition, culminating in an inhibition-excitation sequence resembling sleep slow waves.


Subject(s)
Electric Stimulation; Evoked Potentials; Humans; Male; Female; Adult; Electric Stimulation/methods; Cerebral Cortex/physiology; Cerebral Cortex/physiopathology; Drug Resistant Epilepsy/therapy; Drug Resistant Epilepsy/physiopathology; Electroencephalography; Young Adult; Middle Aged; Epilepsy/physiopathology; Epilepsy/therapy
2.
bioRxiv; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38798551

ABSTRACT

Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded an attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice sensitive sites and single-feature location sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites. SIGNIFICANCE STATEMENT: Listeners effortlessly extract auditory objects from naturalistic, spatial acoustic scenes containing multiple sound sources.
Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes. HIGHLIGHTS:
Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
Neural sites which selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
Attention selectively enhances temporal coherence between voice and location selective sites over time.
Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.

3.
Nat Hum Behav; 8(4): 758-770, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38366105

ABSTRACT

Neural representations of perceptual decision formation that are abstracted from specific motor requirements have previously been identified in humans using non-invasive electrophysiology; however, it is currently unclear where these originate in the brain. Here we capitalized on the high spatiotemporal precision of intracranial EEG to localize such abstract decision signals. Participants undergoing invasive electrophysiological monitoring for epilepsy were asked to judge the direction of random-dot stimuli and respond either with a speeded button press (N = 24), or vocally, after a randomized delay (N = 12). We found a widely distributed motor-independent network of regions where high-frequency activity exhibited key characteristics consistent with evidence accumulation, including a gradual buildup that was modulated by the strength of the sensory evidence, and an amplitude that predicted participants' choice accuracy and response time. Our findings offer a new view on the brain networks governing human decision-making.


Subject(s)
Decision Making; Electrocorticography; Humans; Adult; Male; Decision Making/physiology; Female; Electrocorticography/methods; Brain/physiology; Epilepsy/physiopathology; Young Adult; Electroencephalography; Reaction Time/physiology; Brain Mapping/methods; Middle Aged
4.
J Neurol Neurosurg Psychiatry; 94(11): 879-886, 2023 11.
Article in English | MEDLINE | ID: mdl-37336643

ABSTRACT

BACKGROUND: Magnetic resonance-guided laser interstitial thermal therapy (MRgLITT) is a minimally invasive alternative to surgical resection for drug-resistant mesial temporal lobe epilepsy (mTLE). Reported rates of seizure freedom are variable and long-term durability is largely unproven. Anterior temporal lobectomy (ATL) remains an option for patients with MRgLITT treatment failure. However, the safety and efficacy of this staged strategy are unknown. METHODS: This multicentre, retrospective cohort study included 268 patients consecutively treated with mesial temporal MRgLITT at 11 centres between 2012 and 2018. Seizure outcomes and complications of MRgLITT and any subsequent surgery are reported. The predictive value of preoperative variables for seizure outcome was assessed. RESULTS: Engel I seizure freedom was achieved in 55.8% (149/267) at 1 year, 52.5% (126/240) at 2 years and 49.3% (132/268) at the last follow-up ≥1 year (median 47 months). Engel I or II outcomes were achieved in 74.2% (198/267) at 1 year, 75.0% (180/240) at 2 years and 66.0% (177/268) at the last follow-up. Preoperative focal to bilateral tonic-clonic seizures were independently associated with seizure recurrence. Among patients with seizure recurrence, 14/21 (66.7%) became seizure-free after subsequent ATL and 5/10 (50%) after repeat MRgLITT at last follow-up ≥1 year. CONCLUSIONS: MRgLITT is a viable treatment with durable outcomes for patients with drug-resistant mTLE evaluated at a comprehensive epilepsy centre. Although seizure freedom rates were lower than reported with ATL, this series represents the early experience of each centre and a heterogeneous cohort. ATL remains a safe and effective treatment for well-selected patients who fail MRgLITT.


Subject(s)
Drug Resistant Epilepsy; Epilepsy, Temporal Lobe; Epilepsy; Laser Therapy; Humans; Epilepsy, Temporal Lobe/surgery; Retrospective Studies; Seizures/surgery; Drug Resistant Epilepsy/surgery; Epilepsy/surgery; Treatment Outcome; Magnetic Resonance Imaging; Lasers
5.
Cell Rep; 42(6): 112614, 2023 06 27.
Article in English | MEDLINE | ID: mdl-37285270

ABSTRACT

The magnitude of neuronal activation is commonly considered a critical factor for conscious perception of visual content. However, this dogma contrasts with the phenomenon of rapid adaptation, in which the magnitude of neuronal activation drops rapidly and dramatically while the visual stimulus and the conscious experience it elicits remain stable. Here, we report that the profiles of multi-site activation patterns and their relational geometry (i.e., the similarity distances between activation patterns), as revealed using intracranial electroencephalographic (iEEG) recordings, are sustained during extended visual stimulation despite the major decrease in magnitude. These results are compatible with the hypothesis that conscious perceptual content is associated with the neuronal pattern profiles and their similarity distances, rather than the overall activation magnitude, in human visual cortex.


Subject(s)
Visual Cortex; Visual Perception; Humans; Visual Perception/physiology; Visual Cortex/physiology; Consciousness/physiology; Electrocorticography; Photic Stimulation/methods
6.
PLoS Biol; 21(6): e3002128, 2023 06.
Article in English | MEDLINE | ID: mdl-37279203

ABSTRACT

Humans can easily tune in to one talker in a multitalker environment while still picking up bits of background speech; however, it remains unclear how we perceive speech that is masked and to what degree non-target speech is processed. Some models suggest that perception can be achieved through glimpses, which are spectrotemporal regions where a talker has more energy than the background. Other models, however, require the recovery of the masked regions. To clarify this issue, we directly recorded from primary and non-primary auditory cortex (AC) in neurosurgical patients as they attended to one talker in multitalker speech and trained temporal response function models to predict high-gamma neural activity from glimpsed and masked stimulus features. We found that glimpsed speech is encoded at the level of phonetic features for target and non-target talkers, with enhanced encoding of target speech in non-primary AC. In contrast, encoding of masked phonetic features was found only for the target, with a greater response latency and distinct anatomical organization compared to glimpsed phonetic features. These findings suggest separate mechanisms for encoding glimpsed and masked speech and provide neural evidence for the glimpsing model of speech perception.


Subject(s)
Speech Perception; Speech; Humans; Speech/physiology; Acoustic Stimulation; Phonetics; Speech Perception/physiology; Reaction Time
7.
Nat Commun; 14(1): 2910, 2023 05 22.
Article in English | MEDLINE | ID: mdl-37217478

ABSTRACT

Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. We investigated the neural responses to these sources of novelty during film viewing, analyzing intracranial recordings from 6328 electrodes across 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.


Subject(s)
Brain; Semantics; Humans; Brain/physiology; Eye Movements; Saccades; Temporal Lobe/physiology; Photic Stimulation
8.
Nat Hum Behav; 7(5): 740-753, 2023 05.
Article in English | MEDLINE | ID: mdl-36864134

ABSTRACT

The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic, prelexical phonotactic, word-frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved and not discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.


Subject(s)
Auditory Cortex; Speech Perception; Humans; Auditory Cortex/physiology; Speech Perception/physiology; Auditory Perception/physiology; Speech/physiology; Phonetics
9.
Neuroimage; 266: 119819, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36529203

ABSTRACT

The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.


Subject(s)
Auditory Cortex; Humans; Acoustic Stimulation/methods; Auditory Perception; Neurons; Neural Networks, Computer
10.
Curr Biol; 32(18): 3971-3986.e4, 2022 09 26.
Article in English | MEDLINE | ID: mdl-35973430

ABSTRACT

How the human auditory cortex represents spatially separated simultaneous talkers and how talkers' locations and voices modulate the neural representations of attended and unattended speech are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice only appeared in the auditory areas with longer latencies, but attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which could be further tuned by top-down attention to the location and voice of the talker.


Subject(s)
Auditory Cortex; Speech Perception; Voice; Auditory Cortex/physiology; Humans; Speech; Speech Perception/physiology; Temporal Lobe
11.
J Neurosci; 42(17): 3648-3658, 2022 04 27.
Article in English | MEDLINE | ID: mdl-35347046

ABSTRACT

Speech perception in noise is a challenging everyday task with which many listeners have difficulty. Here, we report a case in which electrical brain stimulation of implanted intracranial electrodes in the left planum temporale (PT) of a neurosurgical patient significantly and reliably improved subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech in noise perception. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. The receptive fields of the PT sites whose stimulation improved speech perception were tuned to spectrally broad and rapidly changing sounds. Corticocortical evoked potential analysis revealed that the PT sites were located between the sites in Heschl's gyrus and the superior temporal gyrus. Moreover, the discriminability of speech from nonspeech sounds increased in population neural responses from Heschl's gyrus to the PT to the superior temporal gyrus sites. These findings causally implicate the PT in background noise suppression and may point to a potential neuroprosthetic solution to assist in the challenging task of speech perception in noise. SIGNIFICANCE STATEMENT: Speech perception in noise remains a challenging task for many individuals. Here, we present a case in which the electrical brain stimulation of intracranially implanted electrodes in the planum temporale of a neurosurgical patient significantly improved both the subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech perception in noise. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. Our local and network-level functional analyses placed the planum temporale sites in between the sites in the primary auditory areas in Heschl's gyrus and nonprimary auditory areas in the superior temporal gyrus.
These findings causally implicate planum temporale in acoustic scene analysis and suggest potential neuroprosthetic applications to assist hearing in noise.


Subject(s)
Auditory Cortex; Speech Perception; Acoustic Stimulation; Auditory Cortex/physiology; Brain; Brain Mapping/methods; Hearing; Humans; Magnetic Resonance Imaging/methods; Speech/physiology; Speech Perception/physiology
12.
Front Neurol; 12: 696492, 2021.
Article in English | MEDLINE | ID: mdl-34690909

ABSTRACT

Objective: It has been asserted that high-frequency analysis of intracranial EEG (iEEG) data may yield information useful in localizing epileptogenic foci. Methods: We tested whether proposed biomarkers could predict lateralization based on iEEG data collected prior to corpus callosotomy (CC) in three patients with bisynchronous epilepsy, whose seizures lateralized definitively post-CC. Lateralization data derived from algorithmically computed ictal phase-locked high gamma (PLHG), high gamma amplitude (HGA), and low-frequency (filtered) line length (LFLL), as well as interictal high-frequency oscillation (HFO) and interictal epileptiform discharge (IED) rate metrics, were compared against ground-truth lateralization from post-CC ictal iEEG. Results: Pre-CC unilateral IEDs were more frequent on the more-pathologic side in all subjects. HFO rate predicted lateralization in one subject, but was sensitive to detection threshold. On pre-CC data, no ictal metric showed better predictive power than any other. All post-CC seizures lateralized to the pathological hemisphere using PLHG, HGA, and LFLL metrics. Conclusions: While quantitative metrics of IED rate and ictal HGA, PLHG, and LFLL all accurately lateralize based on post-CC iEEG, only IED rate consistently did so based on pre-CC data. Significance: Quantitative analysis of IEDs may be useful in lateralizing seizure pathology. More work is needed to develop reliable techniques for high-frequency iEEG analysis.

13.
J Neurophysiol; 126(5): 1723-1739, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34644179

ABSTRACT

The progress of therapeutic neuromodulation greatly depends on improving stimulation parameters to most efficiently induce neuroplasticity effects. Intermittent θ-burst stimulation (iTBS), a form of electrical stimulation that mimics natural brain activity patterns, has proved to efficiently induce such effects in animal studies and rhythmic transcranial magnetic stimulation studies in humans. However, little is known about the potential neuroplasticity effects of iTBS applied through intracranial electrodes in humans. This study characterizes the physiological effects of intracranial iTBS in humans and compares them with α-frequency stimulation, another frequently used neuromodulatory pattern. In patients with epilepsy implanted with intracranial electrodes, we applied these two stimulation patterns to well-defined regions in the sensorimotor cortex that had elicited contralateral hand muscle contractions during clinical mapping. Treatment effects were evaluated using oscillatory coherence across areas connected to the treatment site, as defined with corticocortical evoked potentials. Our results show that iTBS increases coherence in the ß-frequency band within the sensorimotor network, indicating a potential neuroplasticity effect. The effect is specific to the sensorimotor system, the ß band, and the stimulation pattern, and outlasted the stimulation period by ∼3 min. The effect occurred in four out of seven subjects, depending on the buildup of the effect during iTBS treatment and other patterns of oscillatory activity related to ceiling effects within the ß band and to preexistent coherence within the α band.
By characterizing the neurophysiological effects of iTBS within well-defined cortical networks, we hope to provide an electrophysiological framework that allows clinicians and researchers to optimize brain stimulation protocols, which may have translational value. NEW & NOTEWORTHY: θ-Burst stimulation (TBS) protocols in transcranial magnetic stimulation studies have shown improved treatment efficacy in a variety of neuropsychiatric disorders. The optimal protocol to induce neuroplasticity in invasive direct electrical stimulation approaches is not known. We report that intracranial TBS applied in human sensorimotor cortex increases local coherence of preexistent ß rhythms. The effect is specific to the stimulation frequency and the stimulated network and outlasts the stimulation period by ∼3 min.


Subject(s)
Beta Rhythm/physiology; Electric Stimulation Therapy; Electric Stimulation; Electrocorticography; Nerve Net/physiology; Neuronal Plasticity/physiology; Sensorimotor Cortex/physiology; Adult; Female; Humans; Male; Young Adult
14.
Bioelectron Med; 7(1): 14, 2021 Sep 22.
Article in English | MEDLINE | ID: mdl-34548098

ABSTRACT

Almost 100 years ago, experiments involving electrically stimulating and recording from the brain and the body launched new discoveries and debates on how electricity, movement, and thoughts are related. Decades later the development of brain-computer interface technology began, which now targets a wide range of applications. Potential uses include augmentative communication for locked-in patients and restoring sensorimotor function in those who are battling disease or have suffered traumatic injury. However, technical and surgical challenges still surround the development of brain-computer interface technology and must be addressed before it can be widely deployed. In this review we explore these challenges, historical perspectives, and the remarkable achievements of clinical study participants who have bravely forged new paths for future beneficiaries.

15.
Brain Stimul; 14(5): 1184-1196, 2021.
Article in English | MEDLINE | ID: mdl-34358704

ABSTRACT

BACKGROUND: Paralysis and neuropathy, affecting millions of people worldwide, can be accompanied by significant loss of somatosensation. With tactile sensation being central to achieving dexterous movement, brain-computer interface (BCI) researchers have used intracortical and cortical surface electrical stimulation to restore somatotopically-relevant sensation to the hand. However, these approaches are restricted to stimulating the gyral areas of the brain. Since representation of distal regions of the hand extends into the sulcal regions of human primary somatosensory cortex (S1), it has been challenging to evoke sensory percepts localized to the fingertips. OBJECTIVE/HYPOTHESIS: Targeted stimulation of sulcal regions of S1, using stereoelectroencephalography (SEEG) depth electrodes, can evoke focal sensory percepts in the fingertips. METHODS: Two participants with intractable epilepsy received cortical stimulation both at the gyri via high-density electrocorticography (HD-ECoG) grids and in the sulci via SEEG depth electrode leads. We characterized the evoked sensory percepts localized to the hand. RESULTS: We show that highly focal percepts can be evoked in the fingertips of the hand through sulcal stimulation. fMRI, myelin content, and cortical thickness maps from the Human Connectome Project elucidated specific cortical areas and sub-regions within S1 that evoked these focal percepts. Within-participant comparisons showed that percepts evoked by sulcal stimulation via SEEG electrodes were significantly more focal (80% less area; p = 0.02) and more often localized to the fingertips than those evoked by gyral stimulation via HD-ECoG electrodes. Finally, sulcal locations with consistent modulation of high-frequency neural activity during mechanical tactile stimulation of the fingertips showed the same somatotopic correspondence as cortical stimulation.
CONCLUSIONS: Our findings indicate minimally invasive sulcal stimulation via SEEG electrodes could be a clinically viable approach to restoring sensation.


Subject(s)
Hand; Somatosensory Cortex; Electric Stimulation; Electrocorticography; Electrodes, Implanted; Humans; Touch
16.
Bioelectron Med; 7(1): 7, 2021 May 24.
Article in English | MEDLINE | ID: mdl-34024277

ABSTRACT

There is a broad and growing interest in Bioelectronic Medicine, a dynamic field that continues to generate new approaches in disease treatment. The fourth bioelectronic medicine summit "Technology targeting molecular mechanisms" took place on September 23 and 24, 2020. This virtual meeting was hosted by the Feinstein Institutes for Medical Research, Northwell Health. The summit called international attention to Bioelectronic Medicine as a platform for new developments in science, technology, and healthcare. The meeting was an arena for exchanging new ideas and seeding potential collaborations involving teams in academia and industry. The summit provided a forum for leaders in the field to discuss current progress, challenges, and future developments in Bioelectronic Medicine. The main topics discussed at the summit are outlined here.

17.
Neuroimage; 235: 118003, 2021 07 15.
Article in English | MEDLINE | ID: mdl-33789135

ABSTRACT

Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG, along which we observed a decrease in frequency and temporal modulation tuning and an increase in phonemic representation, speaker normalization, speech sensitivity, and response latency. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.


Subject(s)
Auditory Cortex/physiology; Brain Mapping; Speech Perception/physiology; Adult; Electrocorticography; Epilepsy/diagnosis; Epilepsy/surgery; Female; Humans; Male; Middle Aged; Neurosurgical Procedures
18.
J Neurosci; 41(15): 3386-3399, 2021 04 14.
Article in English | MEDLINE | ID: mdl-33431634

ABSTRACT

Research in functional neuroimaging has suggested that category-selective regions of visual cortex, including the ventral temporal cortex (VTC), can be reactivated endogenously through imagery and recall. Face representation in the monkey face-patch system has been well studied and is an attractive domain in which to explore these processes in humans. The VTCs of 8 human subjects (4 female) undergoing invasive monitoring for epilepsy surgery were implanted with microelectrodes. Most (26 of 33) category-selective units showed specificity for face stimuli. Different face exemplars evoked consistent and discriminable responses in the population of units sampled. During free recall, face-selective units preferentially reactivated in the absence of visual stimulation during a 2 s window preceding face recall events. Furthermore, we show that in at least 1 subject, the identity of the recalled face could be predicted by comparing activity preceding recall events to activity evoked by visual stimulation. We show that face-selective units in the human VTC are reactivated endogenously, and present initial evidence that consistent representations of individual face exemplars are specifically reactivated in this manner. SIGNIFICANCE STATEMENT: The role of "top-down" endogenous reactivation of native representations in higher sensory areas is poorly understood in humans. We conducted the first detailed single-unit survey of ventral temporal cortex (VTC) in human subjects, showing that, similarly to nonhuman primates, humans encode different faces using different rate codes. Then, we demonstrated that, when subjects recalled and imagined a given face, VTC neurons reactivated with the same rate codes as when subjects initially viewed that face. This suggests that the VTC units not only carry durable representations of faces, but that those representations can be endogenously reactivated via "top-down" mechanisms.


Subject(s)
Facial Recognition , Temporal Lobe/physiology , Adult , Evoked Potentials, Visual , Female , Humans , Male , Mental Recall , Middle Aged , Neurons/physiology , Temporal Lobe/cytology
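The identity-prediction analysis described in the abstract above (comparing activity preceding recall events to visually evoked activity) is in essence nearest-template decoding of rate codes. A minimal sketch with entirely synthetic firing-rate data; all unit counts, noise levels, and the decoding rule are illustrative assumptions, not details from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_faces, n_trials = 26, 5, 20

# Synthetic "rate codes": each face evokes a characteristic firing-rate
# pattern across the recorded units (toy data, not study data).
face_patterns = rng.gamma(2.0, 5.0, size=(n_faces, n_units))
viewing = face_patterns[:, None, :] + rng.normal(0, 2.0, (n_faces, n_trials, n_units))

# One template per face: the mean response across viewing trials.
templates = viewing.mean(axis=1)

def predict_face(recall_activity, templates):
    """Nearest-template decoding: choose the face whose viewing-evoked
    rate pattern correlates best with the pre-recall activity vector."""
    corrs = [np.corrcoef(recall_activity, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

# Simulated pre-recall activity resembling face 3's rate code.
recall = face_patterns[3] + rng.normal(0, 2.0, n_units)
print(predict_face(recall, templates))
```

Because the recall-period pattern shares its structure with one viewing template and is uncorrelated with the others, the correlation rule recovers the correct exemplar despite trial-to-trial noise.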
19.
J Neurosci ; 40(44): 8530-8542, 2020 10 28.
Article in English | MEDLINE | ID: mdl-33023923

ABSTRACT

Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials/physiology , Nonverbal Communication/physiology , Adult , Drug Resistant Epilepsy/surgery , Electrocorticography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Middle Aged , Neurons/physiology , Nonverbal Communication/psychology , Photic Stimulation , Young Adult
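Phase tracking of the kind described in the abstract above is typically quantified by extracting the instantaneous phase of a slow oscillation (bandpass filter + Hilbert transform) and measuring phase consistency across trials. A toy sketch on synthetic traces; the band limits, sampling rate, and simulated phase reset are illustrative assumptions, not parameters from the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                      # Hz, illustrative sampling rate
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "auditory cortex" traces: a 4 Hz slow oscillation whose phase
# is reset identically at stimulus onset on every trial, plus noise.
rng = np.random.default_rng(1)
trials = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.normal(size=(30, t.size))

def band_phase(x, lo, hi, fs):
    """Instantaneous phase in a frequency band via bandpass + Hilbert."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

phase = band_phase(trials, 2, 8, fs)             # shape: (trials, time)
itpc = np.abs(np.exp(1j * phase).mean(axis=0))   # inter-trial phase coherence

print(itpc[t.size // 2])   # high value => consistent phase across trials
```

A visually driven phase reset would show up as inter-trial phase coherence rising above chance after visual-speech onset, even without any change in response amplitude.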
20.
Neuroimage ; 223: 117282, 2020 12.
Article in English | MEDLINE | ID: mdl-32828921

ABSTRACT

Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of "neuro-steered" hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a perfect candidate for neuro-steered hearing-assistive devices.


Subject(s)
Brain/physiology , Electroencephalography , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Perception/physiology , Acoustic Stimulation , Adult , Algorithms , Deep Learning , Hearing Loss/physiopathology , Humans , Middle Aged
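The auditory-attention-decoding step that systems like the one in the abstract above build on is often reduced to a simple rule: correlate a neural trace (or an envelope reconstructed from it) with each candidate talker's speech envelope and pick the best match. A minimal sketch with synthetic envelopes; the tracking model, noise level, and correlation decoder are illustrative assumptions, not the study's deep-learning pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 64, 30           # envelope sampling rate (Hz) and duration (s)
n = fs * dur

def smooth(x, k=8):
    """Crude low-pass smoothing to mimic slow speech-envelope fluctuations."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# Toy speech envelopes for two competing talkers (illustrative data).
env_a = smooth(rng.normal(size=n))
env_b = smooth(rng.normal(size=n))

# Neural trace that linearly tracks the attended talker (talker A) + noise.
neural = 0.8 * env_a + 0.3 * rng.normal(size=n)

def decode_attention(neural, envelopes):
    """Pick the talker whose envelope correlates best with the neural trace."""
    corrs = [np.corrcoef(neural, e)[0, 1] for e in envelopes]
    return int(np.argmax(corrs))

print(decode_attention(neural, [env_a, env_b]))   # index of attended talker
```

In a neuro-steered device, this decoded index (or, as in BISS, the reconstructed envelope itself) would steer the separation front-end toward the attended voice.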