ABSTRACT
The neural substrate that underlies episodic memory organizes our continuous narrative experience into discrete events, with event boundaries segmenting the stream into units. This segmentation permits a replay process that consolidates each event into a narrative memory. High-frequency oscillations (HFOs) are a potential mechanism for synchronizing neural activity during these processes. Here, we use intracranial recordings from participants viewing and freely recalling a naturalistic stimulus. We show that hippocampal HFOs increase following event boundaries and that coincident hippocampal-cortical HFOs (co-HFOs) occur in cortical regions previously shown to underlie event segmentation (inferior parietal, precuneus, lateral occipital, and inferior frontal cortices). We also show that event-specific patterns of co-HFOs that occur during event viewing recur following the subsequent three event boundaries (in decaying fashion) and also during recall. This is consistent with models that support replay as a mechanism for memory consolidation. Hence, HFOs may coordinate activity across brain regions serving widespread event segmentation, encode naturalistic memory, and bind representations to assemble memory of a coherent, continuous experience.
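As an illustration of the kind of analysis implied above, the following Python sketch (not the authors' pipeline; sampling rate, band limits, threshold, and boundary times are all placeholder assumptions) detects candidate HFOs by band-pass filtering and amplitude thresholding, then compares HFO rates immediately before and after event boundaries.

```python
# Minimal sketch: detect putative hippocampal HFOs by band-pass filtering and
# z-score thresholding, then compare HFO rates before vs after event boundaries.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                      # sampling rate (Hz), assumed
lfp = np.random.randn(fs * 600)                # placeholder hippocampal trace (10 min)
boundaries_s = np.array([30.0, 95.5, 170.2])   # hypothetical event-boundary times (s)

# Band-pass in a generic HFO range (80-150 Hz) and take the analytic amplitude.
b, a = butter(4, [80, 150], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, lfp)))

# Mark samples exceeding a z-score threshold as putative HFO events.
z = (envelope - envelope.mean()) / envelope.std()
above = z > 3
onsets = np.flatnonzero(np.diff(above.astype(int)) == 1) / fs  # onset times (s)

def rate_in_window(onsets, start_s, dur_s):
    """HFO rate (events/s) inside [start_s, start_s + dur_s)."""
    n = np.sum((onsets >= start_s) & (onsets < start_s + dur_s))
    return n / dur_s

post = [rate_in_window(onsets, t, 2.0) for t in boundaries_s]        # 2 s after boundary
pre = [rate_in_window(onsets, t - 2.0, 2.0) for t in boundaries_s]   # 2 s before boundary
print("mean HFO rate pre vs post boundary:", np.mean(pre), np.mean(post))
```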
ABSTRACT
Focusing on a specific conversation amidst multiple interfering talkers is challenging, especially for those with hearing loss. Brain-controlled assistive hearing devices aim to alleviate this problem by enhancing the attended speech based on the listener's neural signals using auditory attention decoding (AAD). Departing from conventional AAD studies that relied on oversimplified scenarios with stationary talkers, a realistic AAD task that involves multiple talkers taking turns as they continuously move in space in background noise is presented. Invasive electroencephalography (iEEG) data are collected from three neurosurgical patients as they focused on one of the two moving conversations. An enhanced brain-controlled assistive hearing system that combines AAD and a binaural speaker-independent speech separation model is presented. The separation model unmixes talkers while preserving their spatial location and provides talker trajectories to the neural decoder to improve AAD accuracy. Subjective and objective evaluations show that the proposed system enhances speech intelligibility and facilitates conversation tracking while maintaining spatial cues and voice quality in challenging acoustic environments. This research demonstrates the potential of this approach in real-world scenarios and marks a significant step toward developing assistive hearing technologies that adapt to the intricate dynamics of everyday auditory experiences.
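A hedged sketch of the attention-decoding step alone (not the full binaural system described above): a linear stimulus-reconstruction decoder maps simulated neural activity to an estimated speech envelope, which is then correlated with the envelopes of two separated conversations; the larger correlation marks the attended one. Array sizes and the regularization value are assumptions.

```python
# Illustrative AAD step: ridge-regression stimulus reconstruction followed by
# correlation with each separated talker's envelope (all data simulated).
import numpy as np

rng = np.random.default_rng(0)
T, n_elec = 5000, 60                     # time samples, iEEG electrodes (assumed)
neural = rng.standard_normal((T, n_elec))
env_a = rng.standard_normal(T)           # envelope of separated conversation A
env_b = rng.standard_normal(T)           # envelope of separated conversation B
attended_env = env_a                     # training labels (attended talker known)

# Ridge-regression decoder: neural @ w approximates the attended envelope.
lam = 1e2
w = np.linalg.solve(neural.T @ neural + lam * np.eye(n_elec), neural.T @ attended_env)

def decode(neural_seg, env1, env2, w):
    """Return 0 if env1 is decoded as attended, else 1."""
    recon = neural_seg @ w
    r1 = np.corrcoef(recon, env1)[0, 1]
    r2 = np.corrcoef(recon, env2)[0, 1]
    return 0 if r1 >= r2 else 1

print("decoded talker index:", decode(neural, env_a, env_b, w))
```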
Subject(s)
Electroencephalography, Speech Perception, Humans, Electroencephalography/methods, Speech Perception/physiology, Male, Hearing Aids, Female, Hearing Loss/physiopathology, Middle Aged, Brain-Computer Interfaces, Speech Intelligibility/physiology, Adult, Attention/physiology
ABSTRACT
Cortico-cortical evoked potentials (CCEPs) elicited by single-pulse electric stimulation (SPES) are widely used to assess effective connectivity between cortical areas and are also implemented in the presurgical evaluation of epilepsy patients. Nevertheless, the cortical generators underlying the various components of CCEPs in humans have not yet been elucidated. Our aim was to describe the laminar pattern underlying SPES-evoked CCEP components (P1, N1, P2, N2, P3) and to evaluate the similarities between N2 and the downstate of sleep slow waves. We used intracortical laminar microelectrodes (LMEs) to record CCEPs evoked by 10 mA bipolar 0.5 Hz electric pulses in seven patients with medically intractable epilepsy implanted with subdural grids. Based on the laminar profile of CCEPs, the latency of components is not layer-dependent; however, their rate of appearance varies with cortical depth and stimulation distance, while the seizure onset zone does not appear to affect the emergence of components. Early neural excitation primarily engages the middle and deep layers and propagates to the superficial layers, followed by mainly superficial inhibition, concluding in a sleep slow wave-like sequence of inhibition and excitation.
Subject(s)
Electric Stimulation, Evoked Potentials, Humans, Male, Female, Adult, Electric Stimulation/methods, Cerebral Cortex/physiology, Cerebral Cortex/physiopathology, Drug-Resistant Epilepsy/therapy, Drug-Resistant Epilepsy/physiopathology, Electroencephalography, Young Adult, Middle Aged, Epilepsy/physiopathology, Epilepsy/therapy
ABSTRACT
Listeners readily extract multi-dimensional auditory objects, such as a 'localized talker', from complex acoustic scenes with multiple talkers. Yet the neural mechanisms underlying simultaneous encoding and linking of different sound features (for example, a talker's voice and location) are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded the attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice-sensitive sites and single-feature location-sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites. SIGNIFICANCE STATEMENT: Listeners effortlessly extract auditory objects from complex, naturalistic acoustic scenes containing multiple spatially distributed sound sources. Yet how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes.
HIGHLIGHTS:
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice-sensitive and location-sensitive sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
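The temporal-coherence measure referred to above can be illustrated with a toy computation (simulated signals; sampling rate, band, and window length are assumptions): coherence between the response time courses of a voice-sensitive and a location-sensitive site, compared between attend and ignore conditions.

```python
# Toy illustration of attention-dependent temporal coherence between two sites.
import numpy as np
from scipy.signal import coherence

fs = 100                                   # response sampling rate (Hz), assumed
rng = np.random.default_rng(1)
shared = rng.standard_normal(fs * 120)     # shared drive during attention (toy)

voice_site_attend = shared + 0.5 * rng.standard_normal(shared.size)
loc_site_attend = shared + 0.5 * rng.standard_normal(shared.size)
voice_site_ignore = rng.standard_normal(shared.size)
loc_site_ignore = rng.standard_normal(shared.size)

f, c_attend = coherence(voice_site_attend, loc_site_attend, fs=fs, nperseg=256)
_, c_ignore = coherence(voice_site_ignore, loc_site_ignore, fs=fs, nperseg=256)
band = (f >= 1) & (f <= 8)                 # low-frequency band of interest, assumed
print("mean coherence attend vs ignore:", c_attend[band].mean(), c_ignore[band].mean())
```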
ABSTRACT
Neural representations of perceptual decision formation that are abstracted from specific motor requirements have previously been identified in humans using non-invasive electrophysiology; however, it is currently unclear where these originate in the brain. Here we capitalized on the high spatiotemporal precision of intracranial EEG to localize such abstract decision signals. Participants undergoing invasive electrophysiological monitoring for epilepsy were asked to judge the direction of random-dot stimuli and respond either with a speeded button press (N = 24), or vocally, after a randomized delay (N = 12). We found a widely distributed motor-independent network of regions where high-frequency activity exhibited key characteristics consistent with evidence accumulation, including a gradual buildup that was modulated by the strength of the sensory evidence, and an amplitude that predicted participants' choice accuracy and response time. Our findings offer a new view on the brain networks governing human decision-making.
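A schematic simulation, not the study's analysis, of the evidence-accumulation signature described above: high-frequency activity is modeled as the running integral of noisy momentary evidence, so its buildup rate scales with motion coherence. All parameters are assumed for illustration.

```python
# Toy accumulator: buildup slope of an integrated-evidence signal grows with
# stimulus coherence, mimicking the gradual, evidence-dependent buildup above.
import numpy as np

rng = np.random.default_rng(8)
fs = 100                                  # samples per second (assumed)
dur = 1.0                                 # seconds of accumulation
coherences = [0.1, 0.3, 0.6]              # motion-coherence levels (assumed)

for coh in coherences:
    # Momentary evidence: mean proportional to coherence, plus noise.
    evidence = coh + 0.5 * rng.standard_normal(int(fs * dur))
    buildup = np.cumsum(evidence) / fs    # accumulated signal over time
    slope = np.polyfit(np.arange(buildup.size) / fs, buildup, 1)[0]
    print(f"coherence {coh:.1f}: buildup slope ~ {slope:.2f}")
```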
Subject(s)
Decision Making, Electrocorticography, Humans, Adult, Male, Decision Making/physiology, Female, Electrocorticography/methods, Brain/physiology, Epilepsy/physiopathology, Young Adult, Electroencephalography, Reaction Time/physiology, Brain Mapping/methods, Middle Aged
ABSTRACT
BACKGROUND: Magnetic resonance-guided laser interstitial thermal therapy (MRgLITT) is a minimally invasive alternative to surgical resection for drug-resistant mesial temporal lobe epilepsy (mTLE). Reported rates of seizure freedom are variable and long-term durability is largely unproven. Anterior temporal lobectomy (ATL) remains an option for patients with MRgLITT treatment failure. However, the safety and efficacy of this staged strategy are unknown. METHODS: This multicentre, retrospective cohort study included 268 patients consecutively treated with mesial temporal MRgLITT at 11 centres between 2012 and 2018. Seizure outcomes and complications of MRgLITT and any subsequent surgery are reported. The predictive value of preoperative variables for seizure outcome was assessed. RESULTS: Engel I seizure freedom was achieved in 55.8% (149/267) at 1 year, 52.5% (126/240) at 2 years and 49.3% (132/268) at the last follow-up ≥1 year (median 47 months). Engel I or II outcomes were achieved in 74.2% (198/267) at 1 year, 75.0% (180/240) at 2 years and 66.0% (177/268) at the last follow-up. Preoperative focal to bilateral tonic-clonic seizures were independently associated with seizure recurrence. Among patients with seizure recurrence, 14/21 (66.7%) became seizure-free after subsequent ATL and 5/10 (50%) after repeat MRgLITT at the last follow-up ≥1 year. CONCLUSIONS: MRgLITT is a viable treatment with durable outcomes for patients with drug-resistant mTLE evaluated at a comprehensive epilepsy centre. Although seizure freedom rates were lower than reported with ATL, this series represents the early experience of each centre and a heterogeneous cohort. ATL remains a safe and effective treatment for well-selected patients who fail MRgLITT.
Subject(s)
Drug-Resistant Epilepsy, Temporal Lobe Epilepsy, Epilepsy, Laser Therapy, Humans, Temporal Lobe Epilepsy/surgery, Retrospective Studies, Seizures/surgery, Drug-Resistant Epilepsy/surgery, Epilepsy/surgery, Treatment Outcome, Magnetic Resonance Imaging, Lasers
ABSTRACT
The magnitude of neuronal activation is commonly considered a critical factor for conscious perception of visual content. However, this dogma contrasts with the phenomenon of rapid adaptation, in which the magnitude of neuronal activation drops dramatically and rapidly while the visual stimulus and the conscious experience it elicits remain stable. Here, we report that the profiles of multi-site activation patterns and their relational geometry (i.e., the similarity distances between activation patterns), as revealed by intracranial electroencephalographic (iEEG) recordings, are sustained during extended visual stimulation despite the major decrease in magnitude. These results are compatible with the hypothesis that conscious perceptual content is associated with the neuronal pattern profiles and their similarity distances, rather than with the overall activation magnitude, in human visual cortex.
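The logic of the pattern-geometry analysis can be sketched with simulated data: compare the pairwise similarity structure of multi-site activation patterns early versus late in stimulation, separately from the overall response magnitude. The data and scaling factors below are illustrative assumptions.

```python
# Sketch: relational geometry (pairwise pattern similarity) can stay stable
# even when the overall activation magnitude drops during adaptation.
import numpy as np

rng = np.random.default_rng(2)
n_stimuli, n_sites = 20, 80
patterns = rng.standard_normal((n_stimuli, n_sites))      # stimulus-specific patterns

early = 1.0 * patterns + 0.1 * rng.standard_normal(patterns.shape)   # strong response
late = 0.3 * patterns + 0.1 * rng.standard_normal(patterns.shape)    # adapted response

def similarity_matrix(x):
    """Pairwise correlation between stimulus activation patterns."""
    return np.corrcoef(x)

mag_early, mag_late = np.abs(early).mean(), np.abs(late).mean()
geom_early, geom_late = similarity_matrix(early), similarity_matrix(late)
iu = np.triu_indices(n_stimuli, k=1)
geometry_stability = np.corrcoef(geom_early[iu], geom_late[iu])[0, 1]

print(f"magnitude drop: {mag_early:.2f} -> {mag_late:.2f}")
print(f"similarity-structure correlation (early vs late): {geometry_stability:.2f}")
```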
Subject(s)
Visual Cortex, Visual Perception, Humans, Visual Perception/physiology, Visual Cortex/physiology, Consciousness/physiology, Electrocorticography, Photic Stimulation/methods
ABSTRACT
Humans can easily tune in to one talker in a multitalker environment while still picking up bits of background speech; however, it remains unclear how we perceive speech that is masked and to what degree non-target speech is processed. Some models suggest that perception can be achieved through glimpses, which are spectrotemporal regions where a talker has more energy than the background. Other models, however, require the recovery of the masked regions. To clarify this issue, we directly recorded from primary and non-primary auditory cortex (AC) in neurosurgical patients as they attended to one talker in multitalker speech and trained temporal response function models to predict high-gamma neural activity from glimpsed and masked stimulus features. We found that glimpsed speech is encoded at the level of phonetic features for target and non-target talkers, with enhanced encoding of target speech in non-primary AC. In contrast, encoding of masked phonetic features was found only for the target, with a greater response latency and distinct anatomical organization compared to glimpsed phonetic features. These findings suggest separate mechanisms for encoding glimpsed and masked speech and provide neural evidence for the glimpsing model of speech perception.
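For readers unfamiliar with temporal response function (TRF) modeling, the following minimal sketch on simulated data shows the core step: build a time-lagged design matrix from stimulus features (standing in for glimpsed or masked phonetic features) and fit a ridge regression that predicts high-gamma activity. Dimensions, lags, and regularization are assumptions, not the study's parameters.

```python
# Minimal TRF sketch: lagged stimulus features -> ridge regression -> predicted
# high-gamma activity, evaluated by held-out correlation.
import numpy as np

rng = np.random.default_rng(3)
T, n_feat, n_lags = 4000, 10, 30          # samples, stimulus features, lags (assumed)
features = rng.standard_normal((T, n_feat))

def lagged_design(x, n_lags):
    """Stack time-lagged copies of the feature matrix: shape (T, n_feat * n_lags)."""
    X = np.concatenate([np.roll(x, lag, axis=0) for lag in range(n_lags)], axis=1)
    X[:n_lags] = 0                         # zero samples contaminated by wrap-around
    return X

X = lagged_design(features, n_lags)
true_trf = rng.standard_normal(n_feat * n_lags)
neural = X @ true_trf + rng.standard_normal(T)     # simulated high-gamma response

train, test = slice(0, 3000), slice(3000, T)
lam = 1e3
w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                    X[train].T @ neural[train])
pred = X[test] @ w
print("held-out prediction accuracy (r):", np.corrcoef(pred, neural[test])[0, 1])
```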
Subject(s)
Speech Perception, Speech, Humans, Speech/physiology, Acoustic Stimulation, Phonetics, Speech Perception/physiology, Reaction Time
ABSTRACT
Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not on their combined impact across the brain or their interactions with semantic novelty. Here, we investigated the neural responses to these sources of novelty during film viewing. We analyzed intracranial recordings in humans across 6328 electrodes from 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high- or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.
Subject(s)
Brain, Semantics, Humans, Brain/physiology, Eye Movements, Saccades, Temporal Lobe/physiology, Photic Stimulation
ABSTRACT
The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic features, prelexical phonotactics, word frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved rather than discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.
Subject(s)
Auditory Cortex, Speech Perception, Humans, Auditory Cortex/physiology, Speech Perception/physiology, Auditory Perception/physiology, Speech/physiology, Phonetics
ABSTRACT
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing continuous speech comprehension despite changing background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
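A simplified, numpy-only illustration of the gain-versus-shape comparison described above: fit a linear STRF separately to a quiet and a noisy segment and compare the filters' overall gain and normalized shape. In the study this comparison is applied to the DNN's STRF-like computations; here simulated data and a linear fit stand in for that step.

```python
# Sketch: compare STRF gain (filter norm) and shape (normalized-filter
# correlation) between quiet and noisy conditions on simulated data.
import numpy as np

rng = np.random.default_rng(4)
T, n_freq, n_lags = 3000, 16, 20
spec = rng.standard_normal((T, n_freq))            # toy spectrogram input

def lagged(x, n_lags):
    X = np.concatenate([np.roll(x, l, axis=0) for l in range(n_lags)], axis=1)
    X[:n_lags] = 0
    return X

def fit_strf(X, y, lam=1e2):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

X = lagged(spec, n_lags)
strf_quiet = rng.standard_normal(n_freq * n_lags)              # "true" filter in quiet
strf_in_noise = 0.5 * strf_quiet + 0.3 * rng.standard_normal(strf_quiet.size)

resp_quiet = X @ strf_quiet + 0.5 * rng.standard_normal(T)
resp_noise = X @ strf_in_noise + 0.5 * rng.standard_normal(T)  # adapted response (toy)

fit_q, fit_n = fit_strf(X, resp_quiet), fit_strf(X, resp_noise)
gain_q, gain_n = np.linalg.norm(fit_q), np.linalg.norm(fit_n)
shape_r = np.corrcoef(fit_q / gain_q, fit_n / gain_n)[0, 1]
print(f"gain quiet vs noise: {gain_q:.2f} vs {gain_n:.2f}; shape correlation r = {shape_r:.2f}")
```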
Subject(s)
Auditory Cortex, Humans, Acoustic Stimulation/methods, Auditory Perception, Neurons, Neural Networks (Computer)
ABSTRACT
How the human auditory cortex represents spatially separated simultaneous talkers, and how talkers' locations and voices modulate the neural representations of attended and unattended speech, are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice appeared only in auditory areas with longer latencies, but attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which could be further tuned by top-down attention to the location and voice of the talker.
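A toy decomposition (simulated responses; condition labels and effect sizes are assumptions) of the two response components described above: the talker's location shifting the mean response level, and the spectrotemporal features driving the variation around that baseline.

```python
# Sketch: separate a site's mean response level (location-related) from its
# stimulus-tracking modulation around baseline (spectrotemporal-feature-related).
import numpy as np

rng = np.random.default_rng(7)
T = 2000
envelope = rng.standard_normal(T)                    # spectrotemporal drive (toy)

def simulate_site(location_offset, feature_gain):
    return location_offset + feature_gain * envelope + 0.2 * rng.standard_normal(T)

resp_contra = simulate_site(location_offset=1.0, feature_gain=0.8)  # contralateral talker
resp_ipsi = simulate_site(location_offset=0.2, feature_gain=0.8)    # ipsilateral talker

for name, resp in [("contra", resp_contra), ("ipsi", resp_ipsi)]:
    mean_level = resp.mean()                                    # location-related component
    feature_r = np.corrcoef(resp - mean_level, envelope)[0, 1]  # feature-related variation
    print(f"{name}: mean level = {mean_level:.2f}, envelope tracking r = {feature_r:.2f}")
```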
Subject(s)
Auditory Cortex, Speech Perception, Voice, Auditory Cortex/physiology, Humans, Speech, Speech Perception/physiology, Temporal Lobe
ABSTRACT
Speech perception in noise is a challenging everyday task with which many listeners have difficulty. Here, we report a case in which electrical brain stimulation of implanted intracranial electrodes in the left planum temporale (PT) of a neurosurgical patient significantly and reliably improved subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech in noise perception. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. The receptive fields of the PT sites whose stimulation improved speech perception were tuned to spectrally broad and rapidly changing sounds. Corticocortical evoked potential analysis revealed that the PT sites were located between the sites in Heschl's gyrus and the superior temporal gyrus. Moreover, the discriminability of speech from nonspeech sounds increased in population neural responses from Heschl's gyrus to the PT to the superior temporal gyrus sites. These findings causally implicate the PT in background noise suppression and may point to a novel potential neuroprosthetic solution to assist in the challenging task of speech perception in noise. SIGNIFICANCE STATEMENT: Speech perception in noise remains a challenging task for many individuals. Here, we present a case in which the electrical brain stimulation of intracranially implanted electrodes in the planum temporale of a neurosurgical patient significantly improved both the subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech perception in noise. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. Our local and network-level functional analyses placed the planum temporale sites in between the sites in the primary auditory areas in Heschl's gyrus and nonprimary auditory areas in the superior temporal gyrus. These findings causally implicate planum temporale in acoustic scene analysis and suggest potential neuroprosthetic applications to assist hearing in noise.
Subject(s)
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Cortex/physiology, Brain, Brain Mapping/methods, Hearing, Humans, Magnetic Resonance Imaging/methods, Speech/physiology, Speech Perception/physiology
ABSTRACT
The progress of therapeutic neuromodulation greatly depends on improving stimulation parameters to most efficiently induce neuroplasticity effects. Intermittent θ-burst stimulation (iTBS), a form of electrical stimulation that mimics natural brain activity patterns, has proved to efficiently induce such effects in animal studies and in rhythmic transcranial magnetic stimulation studies in humans. However, little is known about the potential neuroplasticity effects of iTBS applied through intracranial electrodes in humans. This study characterizes the physiological effects of intracranial iTBS in humans and compares them with α-frequency stimulation, another frequently used neuromodulatory pattern. We applied these two stimulation patterns to well-defined regions in the sensorimotor cortex, which elicited contralateral hand muscle contractions during clinical mapping, in patients with epilepsy implanted with intracranial electrodes. Treatment effects were evaluated using oscillatory coherence across areas connected to the treatment site, as defined with cortico-cortical evoked potentials. Our results show that iTBS increases coherence in the β-frequency band within the sensorimotor network, indicating a potential neuroplasticity effect. The effect is specific to the sensorimotor system, the β band, and the stimulation pattern, and outlasted the stimulation period by ~3 min. The effect occurred in four out of seven subjects, depending on the buildup of the effect during iTBS treatment and on other patterns of oscillatory activity related to ceiling effects within the β band and to preexistent coherence within the α band. By characterizing the neurophysiological effects of iTBS within well-defined cortical networks, we hope to provide an electrophysiological framework that allows clinicians and researchers to optimize brain stimulation protocols, which may have translational value. NEW & NOTEWORTHY: θ-Burst stimulation (TBS) protocols in transcranial magnetic stimulation studies have shown improved treatment efficacy in a variety of neuropsychiatric disorders. The optimal protocol to induce neuroplasticity in invasive direct electrical stimulation approaches is not known. We report that intracranial TBS applied to human sensorimotor cortex increases local coherence of preexistent β rhythms. The effect is specific to the stimulation frequency and the stimulated network and outlasts the stimulation period by ~3 min.
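A hedged sketch of the outcome measure described above, using simulated signals: magnitude-squared β-band coherence between the stimulation site and a connected site, compared before and after an iTBS block. Sampling rate, band limits, and effect sizes are assumptions.

```python
# Sketch: beta-band coherence between two channels, pre vs post stimulation.
import numpy as np
from scipy.signal import coherence

fs = 500                                      # iEEG sampling rate (Hz), assumed
rng = np.random.default_rng(5)
t = np.arange(fs * 60) / fs                   # one minute per condition

def site_pair(shared_beta_amp):
    """Two channels sharing a 20 Hz beta rhythm of the given amplitude."""
    beta = shared_beta_amp * np.sin(2 * np.pi * 20 * t)
    x = beta + rng.standard_normal(t.size)
    y = beta + rng.standard_normal(t.size)
    return x, y

pre_x, pre_y = site_pair(shared_beta_amp=0.5)     # weaker shared beta pre-iTBS (toy)
post_x, post_y = site_pair(shared_beta_amp=1.0)   # stronger shared beta post-iTBS (toy)

f, c_pre = coherence(pre_x, pre_y, fs=fs, nperseg=1024)
_, c_post = coherence(post_x, post_y, fs=fs, nperseg=1024)
beta_band = (f >= 13) & (f <= 30)
print("beta coherence pre vs post:", c_pre[beta_band].mean(), c_post[beta_band].mean())
```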
Subject(s)
Beta Rhythm/physiology, Electric Stimulation Therapy, Electric Stimulation, Electrocorticography, Nerve Net/physiology, Neuronal Plasticity/physiology, Sensorimotor Cortex/physiology, Adult, Female, Humans, Male, Young Adult
ABSTRACT
Objective: It has been asserted that high-frequency analysis of intracranial EEG (iEEG) data may yield information useful in localizing epileptogenic foci. Methods: We tested whether proposed biomarkers could predict lateralization based on iEEG data collected prior to corpus callosotomy (CC) in three patients with bisynchronous epilepsy, whose seizures lateralized definitively post-CC. Lateralization data derived from algorithmically computed ictal phase-locked high gamma (PLHG), high gamma amplitude (HGA), and low-frequency (filtered) line length (LFLL), as well as interictal high-frequency oscillation (HFO) and interictal epileptiform discharge (IED) rate metrics, were compared against ground-truth lateralization from post-CC ictal iEEG. Results: Pre-CC unilateral IEDs were more frequent on the more-pathologic side in all subjects. HFO rate predicted lateralization in one subject but was sensitive to detection threshold. On pre-CC data, no ictal metric showed better predictive power than any other. All post-corpus callosotomy seizures lateralized to the pathological hemisphere using PLHG, HGA, and LFLL metrics. Conclusions: While quantitative metrics of IED rate and ictal HGA, PLHG, and LFLL all accurately lateralize based on post-CC iEEG, only IED rate consistently did so based on pre-CC data. Significance: Quantitative analysis of IEDs may be useful in lateralizing seizure pathology. More work is needed to develop reliable techniques for high-frequency iEEG analysis.
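Two of the ictal metrics named above, low-frequency line length and high-gamma amplitude, can be sketched as follows on simulated data, summarized as a simple left/right lateralization index. Bands, channel counts, and the index definition are illustrative assumptions, not the study's implementation.

```python
# Sketch: per-channel low-frequency line length (LFLL) and high-gamma amplitude
# (HGA), collapsed into a left/right lateralization index.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500
rng = np.random.default_rng(6)
n_left, n_right, T = 8, 8, fs * 20
data = rng.standard_normal((n_left + n_right, T))   # placeholder ictal iEEG

def bandpass(x, lo, hi):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def line_length(x):
    """Sum of absolute first differences, per channel."""
    return np.abs(np.diff(x, axis=-1)).sum(axis=-1)

lfll = line_length(bandpass(data, 1, 25))                 # low-frequency line length
hga = np.abs(hilbert(bandpass(data, 70, 150))).mean(-1)   # high-gamma amplitude

def lateralization_index(metric):
    left, right = metric[:n_left].sum(), metric[n_left:].sum()
    return (left - right) / (left + right)                # >0 suggests left-sided

print("LI (LFLL):", lateralization_index(lfll), "LI (HGA):", lateralization_index(hga))
```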
ABSTRACT
Almost 100 years ago, experiments involving electrical stimulation of, and recording from, the brain and the body launched new discoveries and debates on how electricity, movement, and thoughts are related. Decades later, the development of brain-computer interface technology began; it now targets a wide range of applications, including augmentative communication for locked-in patients and restoration of sensorimotor function in those who are battling disease or have suffered traumatic injury. However, technical and surgical challenges must still be addressed before brain-computer interface technology can be widely deployed. In this review we explore these challenges, historical perspectives, and the remarkable achievements of clinical study participants who have bravely forged new paths for future beneficiaries.
ABSTRACT
BACKGROUND: Paralysis and neuropathy, affecting millions of people worldwide, can be accompanied by a significant loss of somatosensation. With tactile sensation being central to achieving dexterous movement, brain-computer interface (BCI) researchers have used intracortical and cortical surface electrical stimulation to restore somatotopically relevant sensation to the hand. However, these approaches are restricted to stimulating the gyral areas of the brain. Because the representation of distal regions of the hand extends into the sulcal regions of human primary somatosensory cortex (S1), it has been challenging to evoke sensory percepts localized to the fingertips. OBJECTIVE/HYPOTHESIS: Targeted stimulation of sulcal regions of S1, using stereoelectroencephalography (SEEG) depth electrodes, can evoke focal sensory percepts in the fingertips. METHODS: Two participants with intractable epilepsy received cortical stimulation both at the gyri via high-density electrocorticography (HD-ECoG) grids and in the sulci via SEEG depth electrode leads. We characterized the evoked sensory percepts localized to the hand. RESULTS: We show that highly focal percepts can be evoked in the fingertips of the hand through sulcal stimulation. fMRI, myelin content, and cortical thickness maps from the Human Connectome Project elucidated specific cortical areas and sub-regions within S1 that evoked these focal percepts. Within-participant comparisons showed that percepts evoked by sulcal stimulation via SEEG electrodes were significantly more focal (80% less area; p = 0.02) and localized to the fingertips more often than those evoked by gyral stimulation via HD-ECoG electrodes. Finally, sulcal locations showing consistent modulation of high-frequency neural activity during mechanical tactile stimulation of the fingertips exhibited the same somatotopic correspondence as cortical stimulation. CONCLUSIONS: Our findings indicate that minimally invasive sulcal stimulation via SEEG electrodes could be a clinically viable approach to restoring sensation.
Subject(s)
Hand, Somatosensory Cortex, Electric Stimulation, Electrocorticography, Implanted Electrodes, Humans, Touch
ABSTRACT
There is a broad and growing interest in Bioelectronic Medicine, a dynamic field that continues to generate new approaches in disease treatment. The fourth bioelectronic medicine summit "Technology targeting molecular mechanisms" took place on September 23 and 24, 2020. This virtual meeting was hosted by the Feinstein Institutes for Medical Research, Northwell Health. The summit called international attention to Bioelectronic Medicine as a platform for new developments in science, technology, and healthcare. The meeting was an arena for exchanging new ideas and seeding potential collaborations involving teams in academia and industry. The summit provided a forum for leaders in the field to discuss current progress, challenges, and future developments in Bioelectronic Medicine. The main topics discussed at the summit are outlined here.
ABSTRACT
Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG, along which we observed a decrease in frequency and temporal modulation tuning and an increase in phonemic representation, speaker normalization, speech sensitivity, and response latency. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.