Results 1 - 20 of 54
1.
bioRxiv ; 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38077093

ABSTRACT

Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept known as the McGurk effect. This illusion has been widely used to study audiovisual speech integration, illustrating that auditory and visual cues are combined in the brain to generate a single coherent percept. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect reflect largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily impair processing while subjects were presented with either incongruent (McGurk) or congruent audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS significantly reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation did not affect the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.

2.
Nature ; 617(7961): 599-607, 2023 May.
Article in English | MEDLINE | ID: mdl-37138086

ABSTRACT

Gliomas synaptically integrate into neural circuits [1, 2]. Previous research has demonstrated bidirectional interactions between neurons and glioma cells, with neuronal activity driving glioma growth [1-4] and gliomas increasing neuronal excitability [2, 5-8]. Here we sought to determine how glioma-induced neuronal changes influence neural circuits underlying cognition and whether these interactions influence patient survival. Using intracranial brain recordings during lexical retrieval language tasks in awake humans together with site-specific tumour tissue biopsies and cell biology experiments, we find that gliomas remodel functional neural circuitry such that task-relevant neural responses activate tumour-infiltrated cortex well beyond the cortical regions that are normally recruited in the healthy brain. Site-directed biopsies from regions within the tumour that exhibit high functional connectivity between the tumour and the rest of the brain are enriched for a glioblastoma subpopulation that exhibits a distinct synaptogenic and neuronotrophic phenotype. Tumour cells from functionally connected regions secrete the synaptogenic factor thrombospondin-1, which contributes to the differential neuron-glioma interactions observed in functionally connected tumour regions compared with tumour regions with less functional connectivity. Pharmacological inhibition of thrombospondin-1 using the FDA-approved drug gabapentin decreases glioblastoma proliferation. The degree of functional connectivity between glioblastoma and the normal brain negatively affects both patient survival and performance in language tasks. These data demonstrate that high-grade gliomas functionally remodel neural circuits in the human brain, which both promotes tumour progression and impairs cognition.


Subject(s)
Brain Neoplasms , Glioblastoma , Neural Pathways , Humans , Brain/drug effects , Brain/metabolism , Brain/pathology , Brain Neoplasms/drug therapy , Brain Neoplasms/metabolism , Brain Neoplasms/pathology , Glioblastoma/drug therapy , Glioblastoma/metabolism , Glioblastoma/pathology , Thrombospondin 1/antagonists & inhibitors , Gabapentin/pharmacology , Gabapentin/therapeutic use , Disease Progression , Cognition , Survival Rate , Wakefulness , Biopsy , Cell Proliferation/drug effects
3.
Proc Natl Acad Sci U S A ; 119(44): e2123430119, 2022 11.
Article in English | MEDLINE | ID: mdl-36279460

ABSTRACT

Human accomplishments depend on learning, and effective learning depends on consolidation. Consolidation is the process whereby new memories are gradually stored in an enduring way in the brain so that they can be available when needed. For factual or event knowledge, consolidation is thought to progress during sleep as well as during waking states and to be mediated by interactions between hippocampal and neocortical networks. However, consolidation is difficult to observe directly; instead, it is inferred from behavioral observations. Here, we investigated overnight memory change by measuring electrical activity in and near the hippocampus. Electroencephalographic (EEG) recordings were made in five patients from electrodes implanted to determine whether a surgical treatment could relieve their seizure disorders. One night, while each patient slept in a hospital monitoring room, we recorded electrophysiological responses to 10 to 20 specific sounds that were presented very quietly, to avoid arousal. Half of the sounds had been associated with objects and their precise spatial locations that patients learned before sleep. After sleep, we found systematic improvements in spatial recall, replicating prior results. We assume that when the sounds were presented during sleep, they reactivated and strengthened corresponding spatial memories. Notably, the sounds also elicited oscillatory intracranial EEG activity, including increases in theta, sigma, and gamma EEG bands. Gamma responses, in particular, were consistently associated with the degree of improvement in spatial memory exhibited after sleep. We thus conclude that this electrophysiological activity in the hippocampus and adjacent medial temporal cortex reflects sleep-based enhancement of memory storage.


Subject(s)
Memory Consolidation , Humans , Sleep/physiology , Mental Recall/physiology , Brain , Hippocampus/physiology , Spatial Memory
4.
J Neurophysiol ; 127(6): 1547-1563, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35507478

ABSTRACT

Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and if sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. In addition, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. 
This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.

NEW & NOTEWORTHY Using intracranial electroencephalography (iEEG) in humans during a passive listening task, we demonstrate that sounds modulate activity in visual cortex at both the onset and offset of sounds, which likely supports visual timing and duration processing. However, more complex auditory rate information did not affect visual activity. These findings are based on one of the largest multisensory iEEG studies to date and reveal the type of information transmitted between auditory and visual regions.


Subject(s)
Auditory Cortex , Visual Cortex , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Humans , Sound , Visual Cortex/physiology , Visual Perception/physiology
5.
Cancers (Basel) ; 14(3)2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35158959

ABSTRACT

Language, cognition, and behavioral testing have become a fundamental component of standard clinical care for brain cancer patients. Many existing publications have identified and addressed potential ethical issues in the biomedical setting, mostly centering on the enrollment of vulnerable populations in therapeutic clinical trials. Well-established guides and publications have served as useful tools for clinicians; however, little has been published for researchers who share the same stage but administer tests and collect valuable data solely for non-therapeutic investigational purposes through voluntary patient participation. Obtaining informed consent and administering language, cognition, and behavioral tasks purely for research involving cancer patients who exhibit motor speech difficulties and cognitive impairments presents its own hardships. Researchers may encounter patients who experience emotional responses during tasks that challenge their existing impairments. Patients may have difficulty differentiating between clinical testing and research testing because of the similarity of task design and their physician's dual role as a principal investigator in the study. Practicing the methods proposed in this article helps researchers maintain the overall well-being of patients while simultaneously fulfilling the purpose of the study in a research setting.

6.
Proc Natl Acad Sci U S A ; 118(46)2021 11 16.
Article in English | MEDLINE | ID: mdl-34753819

ABSTRACT

Recent developments in the biology of malignant gliomas have demonstrated that glioma cells interact with neurons through both paracrine signaling and electrochemical synapses. Glioma-neuron interactions consequently modulate the excitability of local neuronal circuits, but the extent to which glioma-infiltrated cortex can meaningfully participate in neural computations remains unclear. For example, gliomas may result in a local disorganization of activity that impedes the transient synchronization of neural oscillations. Alternatively, glioma-infiltrated cortex may retain the ability to engage in synchronized activity in a manner similar to normal-appearing cortex but exhibit other altered spatiotemporal patterns of activity with subsequent impact on cognitive processing. Here, we use subdural electrocorticography to sample both normal-appearing and glioma-infiltrated cortex during speech. We find that glioma-infiltrated cortex engages in synchronous activity during task performance in a manner similar to normal-appearing cortex but recruits a diffuse spatial network. On a temporal scale, we show that signals from glioma-infiltrated cortex have decreased entropy, which may affect its ability to encode information during nuanced tasks such as production of monosyllabic versus polysyllabic words. Furthermore, we show that temporal decoding strategies for distinguishing monosyllabic from polysyllabic words were feasible for signals arising from normal-appearing cortex but not from glioma-infiltrated cortex. These findings inform our understanding of cognitive processing in chronic disease states and have implications for neuromodulation and prosthetics in patients with malignant gliomas.


Subject(s)
Brain Neoplasms/physiopathology , Glioma/physiopathology , Speech/physiology , Adult , Cerebral Cortex/physiopathology , Electrocorticography/methods , Humans , Neurons/physiology , Temporal Lobe/physiopathology
7.
Sci Rep ; 11(1): 23052, 2021 11 29.
Article in English | MEDLINE | ID: mdl-34845325

ABSTRACT

Multisensory stimuli speed behavioral responses, but the mechanisms subserving these effects remain disputed. Historically, the observation that multisensory reaction times (RTs) outpace models assuming independent sensory channels has been taken as evidence for multisensory integration (the "redundant target effect"; RTE). However, this interpretation has been challenged by alternative explanations based on stimulus sequence effects, RT variability, and/or negative correlations in unisensory processing. To clarify the mechanisms subserving the RTE, we collected RTs from 78 undergraduates in a multisensory simple RT task. Based on previous neurophysiological findings, we hypothesized that the RTE was unlikely to reflect these alternative mechanisms, and more likely reflected pre-potentiation of sensory responses through crossmodal phase-resetting. Contrary to accounts based on stimulus sequence effects, we found that preceding stimuli explained only 3-9% of the variance in apparent RTEs. Comparing three plausible evidence accumulator models, we found that multisensory RT distributions were best explained by increased sensory evidence at stimulus onset. Because crossmodal phase-resetting increases cortical excitability before sensory input arrives, these results are consistent with a mechanism based on pre-potentiation through phase-resetting. Mathematically, this model entails increasing the prior log-odds of stimulus presence, providing a potential link between neurophysiological, behavioral, and computational accounts of multisensory interactions.
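The pre-potentiation account described above lends itself to a toy simulation: in a generic drift-diffusion accumulator, raising the starting evidence (the prior log-odds of stimulus presence) shortens first-passage times without changing the drift rate. The sketch below is only an illustration of that general idea, not the authors' fitted model; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rts(start_evidence, n_trials=2000, threshold=1.0,
                 drift=0.005, noise=0.05, max_steps=2000):
    """First-passage times of a noisy evidence accumulator.

    start_evidence stands in for the prior log-odds of stimulus
    presence: a higher starting point means less evidence must
    accumulate before the decision threshold is reached.
    """
    # Accumulate noisy evidence for all trials in parallel.
    steps = drift + noise * rng.standard_normal((n_trials, max_steps))
    paths = start_evidence + np.cumsum(steps, axis=1)
    crossed = paths >= threshold
    # First step index at which each trial crosses the threshold.
    rts = np.argmax(crossed, axis=1).astype(float)
    return rts[crossed.any(axis=1)]

uni = simulate_rts(start_evidence=0.0)    # unisensory baseline
multi = simulate_rts(start_evidence=0.2)  # pre-potentiated start

print(f"mean unisensory RT:   {uni.mean():.1f} steps")
print(f"mean multisensory RT: {multi.mean():.1f} steps")
```

With identical drift and noise, the elevated starting point alone produces the faster simulated responses, mirroring the claim that multisensory speedups need not reflect faster evidence accumulation.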


Subject(s)
Acoustic Stimulation , Auditory Perception , Behavior , Photic Stimulation , Reaction Time/physiology , Visual Perception , Adolescent , Adult , Computer Simulation , Humans , Models, Neurological , Probability , Reproducibility of Results , Time Factors , Young Adult
8.
Eur J Neurosci ; 54(9): 7301-7317, 2021 11.
Article in English | MEDLINE | ID: mdl-34587350

ABSTRACT

Speech perception is a central component of social communication. Although principally an auditory process, accurate speech perception in everyday settings is supported by meaningful information extracted from visual cues. Visual speech modulates activity in cortical areas subserving auditory speech perception including the superior temporal gyrus (STG). However, it is unknown whether visual modulation of auditory processing is a unitary phenomenon or, rather, consists of multiple functionally distinct processes. To explore this question, we examined neural responses to audiovisual speech measured from intracranially implanted electrodes in 21 patients with epilepsy. We found that visual speech modulated auditory processes in the STG in multiple ways, eliciting temporally and spatially distinct patterns of activity that differed across frequency bands. In the theta band, visual speech suppressed the auditory response from before auditory speech onset to after auditory speech onset (-93 to 500 ms) most strongly in the posterior STG. In the beta band, suppression was seen in the anterior STG from -311 to -195 ms before auditory speech onset and in the middle STG from -195 to 235 ms after speech onset. In high gamma, visual speech enhanced the auditory response from -45 to 24 ms only in the posterior STG. We interpret the visual-induced changes prior to speech onset as reflecting crossmodal prediction of speech signals. In contrast, modulations after sound onset may reflect a decrease in sustained feedforward auditory activity. These results are consistent with models that posit multiple distinct mechanisms supporting audiovisual speech perception.


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Auditory Perception , Humans , Speech , Visual Perception
9.
Article in English | MEDLINE | ID: mdl-34423178

ABSTRACT

Movies, audio stories, and virtual reality are increasingly used as stimuli for functional brain imaging. Such naturalistic paradigms are in sharp contrast to the tradition of experimental reductionism in neuroscience research. Being complex, dynamic, and diverse, naturalistic stimuli set up a more ecologically relevant condition and induce highly reproducible brain responses across a wide range of spatiotemporal scales. Here, we review recent technical advances and scientific findings on imaging the brain under naturalistic stimuli. Then we elaborate on the premise of using naturalistic paradigms for multi-scale, multi-modal, and high-throughput functional characterization of the human brain. We further highlight the growing potential of using deep learning models to infer neural information processing from brain responses to naturalistic stimuli. Lastly, we advocate large-scale collaborations to combine brain imaging and recording data across experiments, subjects, and labs that use the same set of naturalistic stimuli.

10.
J Neurosurg ; 135(6): 1817-1824, 2021 May 28.
Article in English | MEDLINE | ID: mdl-34049284

ABSTRACT

OBJECTIVE: Intraoperative tasks for awake language mapping are typically selected based on the language tracts that will likely be encountered during tumor resection. However, diminished attention and arousal secondary to perioperative sedatives may reduce a task's usefulness for identifying eloquent cortex. For instance, accuracy in performing select language tasks may be high preoperatively but decline in the operating room. In the present study, the authors sought to identify language tasks that can be performed with high accuracy in both situational contexts so the neurosurgical team can be confident that speech errors committed during awake language mapping result from direct cortical stimulation to eloquent cortex, rather than from poor performance in general. METHODS: We administered five language tasks to 44 patients: picture naming (PN), text reading (TR), auditory object naming (AN), repetition of 4-syllable words (4SYL), and production of syntactically intact sentences (SYNTAX). Performance was assessed using the 4-point scale of the quick aphasia battery 24 hours preoperatively and intraoperatively. We next determined whether or not accuracy on each task was higher preoperatively than intraoperatively. We also determined whether 1) intraoperative accuracy on a given task predicted intraoperative performance on the other tasks and 2) low preoperative accuracy on a task predicted a decrease in accuracy intraoperatively. RESULTS: Relative to preoperative accuracy, intraoperative accuracy declined on PN (3.90 vs 3.82, p = 0.0001), 4SYL (3.96 vs 3.91, p = 0.0006), and SYNTAX (3.85 vs 3.67, p = 0.0001) but not on TR (3.96 vs 3.94, p = 0.13) or AN (3.70 vs 3.58, p = 0.058). Intraoperative accuracy on PN and AN independently predicted intraoperative accuracy on the remaining language tasks (p < 0.001 and p < 0.01, respectively). Finally, low preoperative accuracy on SYNTAX predicted a decrease in accuracy on this task intraoperatively (R2 = 0.36, p = 0.00002). 
CONCLUSIONS: While TR lacks sensitivity in identifying language deficits at baseline, accuracy on TR is stable across testing settings. Baseline accuracy on the other four of our five language tasks was not predictive of intraoperative performance, signifying the need to repeat language tests prior to stimulation mapping to confirm reliability.

11.
Sci Rep ; 11(1): 6305, 2021 03 18.
Article in English | MEDLINE | ID: mdl-33737672

ABSTRACT

Lexical retrieval requires selecting and retrieving the most appropriate word from the lexicon to express a desired concept. Few studies have probed lexical retrieval with tasks other than picture naming, and when non-picture naming lexical retrieval tasks have been applied, both convergent and divergent results have emerged. The presence of a single construct for auditory and visual processes of lexical retrieval would influence cognitive rehabilitation strategies for patients with aphasia. In this study, we perform support vector regression lesion-symptom mapping using a brain tumor model to test the hypothesis that brain regions specifically involved in lexical retrieval from visual and auditory stimuli represent overlapping neural systems. Principal components analysis of language tasks revealed multicollinearity between picture naming, auditory naming, and a validated measure of word finding, implying redundant cognitive constructs. Nonparametric, multivariate lesion-symptom mapping across participants was used to model accuracies on each of the four language tasks. Lesions within overlapping clusters of 8,333 voxels and 21,512 voxels in the left lateral prefrontal cortex (PFC) were predictive of impaired picture naming and auditory naming, respectively. These data indicate a convergence of heteromodal lexical retrieval within the PFC.


Subject(s)
Brain Neoplasms/psychology , Comprehension , Glioma/psychology , Prefrontal Cortex/physiopathology , Reading , Speech , Adult , Aged , Aphasia/rehabilitation , Brain Mapping , Brain Neoplasms/diagnostic imaging , Case-Control Studies , Female , Glioma/diagnostic imaging , Humans , Language Tests , Longitudinal Studies , Magnetic Resonance Imaging/methods , Male , Middle Aged , Prefrontal Cortex/diagnostic imaging , Prospective Studies , Semantics
12.
Epilepsia ; 62(5): 1268-1279, 2021 05.
Article in English | MEDLINE | ID: mdl-33735460

ABSTRACT

OBJECTIVES: Focal cortical dysplasia type II (FCDII) is one of the most common underlying pathologies in patients with drug-resistant epilepsy. However, mechanistic understanding of FCDII fails to keep pace with genetic discoveries, primarily due to the significant challenge in developing a clinically relevant animal model. Conceptually and clinically important questions, such as the latent period of epileptogenesis and the location of the epileptogenic zone, remain unanswered in all experimental FCDII animal models, making it even more challenging to investigate the underlying epileptogenic mechanisms. METHODS: In this study, we used continuous video-electroencephalography (EEG) monitoring to detect the earliest interictal and ictal events in a clustered regularly interspaced short palindromic repeats (CRISPR)-in utero electroporation (IUE) FCDII rat model that shares genetic, pathological, and electroclinical signatures with those observed in humans. We then took advantage of in vivo local field potential (LFP) recordings to localize the epileptogenic zone in these animals. RESULTS: To the best of our knowledge, we showed for the first time that epileptiform discharges emerged during the third postnatal week, and that the first seizure occurred as early as during the fourth postnatal week. We also showed that both interictal and ictal discharges are localized within the dysplastic cortex, concordant with human clinical data. SIGNIFICANCE: Together, our work identified the temporal and spatial frame of epileptogenesis in a highly clinically relevant FCDII animal model, paving the way for mechanistic studies at molecular, cellular, and circuitry levels.


Subject(s)
Brain/physiopathology , Disease Models, Animal , Epilepsy/physiopathology , Malformations of Cortical Development, Group I/physiopathology , Animals , Humans , Rats
13.
Neurosurgery ; 89(4): 539-548, 2021 09 15.
Article in English | MEDLINE | ID: mdl-33476391

ABSTRACT

Gliomas exist within the framework of complex neuronal circuitry in which network dynamics influence both tumor biology and cognition. The generalized impairment of cognition or loss of language function is a common occurrence for glioma patients. The interface between intrinsic brain tumors such as gliomas and functional cognitive networks is poorly understood. The ability to communicate effectively is critically important for receiving oncological therapies and maintaining a high quality of life. Although the propensity of gliomas to infiltrate cortical and subcortical structures and disrupt key anatomic language pathways is well documented, there is new evidence offering insight into the network and cellular mechanisms underpinning glioma-related aphasia and aphasia recovery. In this review, we will outline the current understanding of the mechanisms of cognitive dysfunction and recovery, using aphasia as an illustrative model.


Subject(s)
Brain Neoplasms , Glioma , Adult , Brain Mapping , Brain Neoplasms/complications , Central Nervous System , Cognition , Glioma/complications , Humans , Language , Quality of Life
14.
Clin Neurophysiol ; 132(1): 80-93, 2021 01.
Article in English | MEDLINE | ID: mdl-33360179

ABSTRACT

OBJECTIVE: To describe the spatio-temporal dynamics and interactions during linguistic and memory tasks. METHODS: Event-related electrocorticographic (ECoG) spectral patterns obtained during cognitive tasks from 26 epilepsy patients (aged: 9-60 y) were analyzed in order to examine the spatio-temporal patterns of activation of cortical language areas. ECoGs (1024 Hz/channel) were recorded from 1567 subdural electrodes and 510 depth electrodes chronically implanted over or within the frontal, parietal, occipital and/or temporal lobes as part of their surgical work-up for intractable seizures. Six language/memory tasks were performed, which required responding verbally to auditory or visual word stimuli. Detailed analysis of electrode locations allowed combining results across patients. RESULTS: Transient increases in induced ECoG gamma power (70-100 Hz) were observed in response to hearing words (central superior temporal gyrus), reading text and naming pictures (occipital and fusiform cortex) and speaking (pre-central, post-central and sub-central cortex). CONCLUSIONS: Between these activations there was widespread spatial divergence followed by convergence of gamma activity that reliably identified cortical areas associated with task-specific processes. SIGNIFICANCE: The combined dataset supports the concept of functionally-specific locally parallel language networks that are widely distributed, partially interacting in succession to serve the cognitive and behavioral demands of the tasks.
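Induced gamma power of the kind reported above is commonly computed by band-pass filtering the recorded signal and taking the analytic (Hilbert) amplitude envelope. The abstract does not specify the authors' exact pipeline, so the following is a generic sketch on synthetic data, borrowing only the 70-100 Hz band and 1024 Hz sampling rate quoted above; the burst timing and amplitudes are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1024  # Hz, matching the recording rate quoted above
t = np.arange(0, 2.0, 1 / fs)

# Synthetic channel: background noise plus an 85 Hz gamma burst
# between 0.8 and 1.2 s standing in for a task-evoked response.
rng = np.random.default_rng(1)
signal = 0.5 * rng.standard_normal(t.size)
burst = (t > 0.8) & (t < 1.2)
signal[burst] += 2.0 * np.sin(2 * np.pi * 85 * t[burst])

# Band-pass 70-100 Hz, then square the analytic amplitude envelope.
b, a = butter(4, [70, 100], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, signal)
power = np.abs(hilbert(gamma)) ** 2

baseline = power[t < 0.5].mean()
event = power[burst].mean()
print(f"gamma power increase: {event / baseline:.1f}x baseline")
```

Averaging such single-channel envelopes around stimulus onset, relative to a pre-stimulus baseline, yields the transient induced-power increases the abstract describes.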


Subject(s)
Cerebral Cortex/physiology , Language , Nerve Net/physiology , Adolescent , Adult , Brain Mapping , Cerebral Cortex/diagnostic imaging , Child , Electrocorticography , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Young Adult
15.
Neurosurgery ; 89(1): 1-10, 2021 06 15.
Article in English | MEDLINE | ID: mdl-33289504

ABSTRACT

Cognitive decline is common among patients with low- and high-grade glioma and can significantly impact quality of life. Although cognitive outcomes have been studied after therapeutic interventions such as surgery and radiation, it is important to understand the impact of the disease process itself prior to any interventions. Neurocognitive domains of interest in this disease context include intellectual function and premorbid ability, executive function, learning and memory, attention, language function, processing speed, visuospatial function, motor function, and emotional function. Here, we review oncologic factors associated with more neurocognitive impairment, key neurocognitive tasks relevant to glioma patient assessment, as well as the relevance of the human neural connectome in understanding cognitive dysfunction in glioma patients. A contextual understanding of glioma-functional network disruption and its impact on cognition is critical in the surgical management of eloquent area tumors.


Subject(s)
Brain Neoplasms , Cognitive Dysfunction , Glioma , Adult , Brain Neoplasms/complications , Brain Neoplasms/surgery , Cognition , Cognitive Dysfunction/etiology , Glioma/complications , Glioma/surgery , Humans , Neuropsychological Tests , Neurosurgeons , Quality of Life
16.
Proc Natl Acad Sci U S A ; 117(29): 16920-16927, 2020 07 21.
Article in English | MEDLINE | ID: mdl-32632010

ABSTRACT

Visual speech facilitates auditory speech perception, but the visual cues responsible for these benefits and the information they provide remain unclear. Low-level models emphasize basic temporal cues provided by mouth movements, but these impoverished signals may not fully account for the richness of auditory information provided by visual speech. High-level models posit interactions among abstract categorical (i.e., phonemes/visemes) or amodal (e.g., articulatory) speech representations, but require lossy remapping of speech signals onto abstracted representations. Because visible articulators shape the spectral content of speech, we hypothesized that the perceptual system might exploit natural correlations between midlevel visual (oral deformations) and auditory speech features (frequency modulations) to extract detailed spectrotemporal information from visual speech without employing high-level abstractions. Consistent with this hypothesis, we found that the time-frequency dynamics of oral resonances (formants) could be predicted with unexpectedly high precision from the changing shape of the mouth during speech. When isolated from other speech cues, speech-based shape deformations improved perceptual sensitivity for corresponding frequency modulations, suggesting that listeners could exploit this cross-modal correspondence to facilitate perception. To test whether this type of correspondence could improve speech comprehension, we selectively degraded the spectral or temporal dimensions of auditory sentence spectrograms to assess how well visual speech facilitated comprehension under each degradation condition. Visual speech produced drastically larger enhancements during spectral degradation, suggesting a condition-specific facilitation effect driven by cross-modal recovery of auditory speech spectra. The perceptual system may therefore use audiovisual correlations rooted in oral acoustics to extract detailed spectrotemporal information from visual speech.


Subject(s)
Speech Acoustics , Speech Perception , Visual Perception , Adult , Cues , Female , Humans , Lip/physiology , Male , Phonetics
17.
Philos Trans R Soc Lond B Biol Sci ; 374(1787): 20190029, 2019 12 09.
Article in English | MEDLINE | ID: mdl-31630652

ABSTRACT

In synaesthesia, stimulation of one sensory modality evokes additional experiences in another modality (e.g. sounds evoking colours). Along with these cross-sensory experiences, there are several cognitive and perceptual differences between synaesthetes and non-synaesthetes. For example, synaesthetes demonstrate enhanced imagery, increased cortical excitability and greater perceptual sensitivity in the concurrent modality. Previous models suggest that synaesthesia results from increased connectivity between corresponding sensory regions or disinhibited feedback from higher cortical areas. While these models explain how one sense can evoke qualitative experiences in another, they fail to predict the broader phenotype of differences observed in synaesthetes. Here, we propose a novel model of synaesthesia based on the principles of stochastic resonance. Specifically, we hypothesize that synaesthetes have greater neural noise in sensory regions, which allows pre-existing multisensory pathways to elicit supra-threshold activation (i.e. synaesthetic experiences). The strengths of this model are (a) it predicts the broader cognitive and perceptual differences in synaesthetes, (b) it provides a unified framework linking developmental and induced synaesthesias, and (c) it explains why synaesthetic associations are inconsistent at onset but stabilize over time. We review research consistent with this model and propose future studies to test its limits. This article is part of a discussion meeting issue 'Bridging senses: novel insights from synaesthesia'.
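The stochastic resonance principle invoked above can be illustrated with a toy threshold detector: a subthreshold signal is transmitted best at an intermediate noise level, because too little noise yields no threshold crossings while too much noise swamps the signal. This is a textbook-style demonstration of the general principle, not the authors' model; the threshold, signal, and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)  # peak 0.8: below threshold
threshold = 1.0

def transmission(noise_sd):
    """Correlation between the hidden signal and the thresholded
    (spiking) output, as a crude information-transmission proxy."""
    noisy = signal + noise_sd * rng.standard_normal(signal.size)
    spikes = (noisy > threshold).astype(float)
    if spikes.std() == 0:  # no crossings at all
        return 0.0
    return float(np.corrcoef(signal, spikes)[0, 1])

levels = [0.01, 0.3, 5.0]
scores = [transmission(sd) for sd in levels]
for sd, s in zip(levels, scores):
    print(f"noise sd {sd:>4}: transmission {s:.2f}")
```

Transmission is near zero at the lowest noise level, peaks at the intermediate level, and falls again at the highest, which is the inverted-U signature that motivates attributing synaesthetes' supra-threshold crossmodal activation to elevated sensory noise.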


Subject(s)
Synesthesia/psychology, Cognition, Color Perception, Humans, Models, Neurological, Models, Psychological
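The stochastic-resonance model proposed in the abstract above has a simple computational intuition: a signal too weak to cross a detection threshold on its own can become detectable once noise is added, because noise occasionally pushes the signal's peaks over the threshold. A minimal toy sketch in Python illustrates the principle (illustrative only, not the authors' model; the signal amplitude, threshold, and noise levels here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_crossings(noise_sd, threshold=1.0, amp=0.8, n=10_000):
    """Count samples where a subthreshold sine wave plus Gaussian
    noise exceeds a fixed detection threshold."""
    t = np.linspace(0, 20 * np.pi, n)
    signal = amp * np.sin(t)              # subthreshold: amp < threshold
    noise = rng.normal(0.0, noise_sd, n)  # "neural noise"
    return int(np.sum(signal + noise > threshold))

# With no noise, the subthreshold signal never crosses threshold;
# moderate noise makes it produce supra-threshold events.
silent = threshold_crossings(0.0)
moderate = threshold_crossings(0.3)
```

Sweeping `noise_sd` far higher would eventually swamp the signal and degrade detection again; that inverted-U profile is the hallmark of stochastic resonance, and it maps onto the model's claim that elevated sensory noise lets pre-existing cross-modal pathways produce supra-threshold (synaesthetic) activation.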
18.
Nature ; 573(7775): 539-545, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31534222

ABSTRACT

High-grade gliomas are lethal brain cancers whose progression is robustly regulated by neuronal activity. Activity-regulated release of growth factors promotes glioma growth, but this alone is insufficient to explain the effect that neuronal activity exerts on glioma progression. Here we show that neuron and glioma interactions include electrochemical communication through bona fide AMPA receptor-dependent neuron-glioma synapses. Neuronal activity also evokes non-synaptic activity-dependent potassium currents that are amplified by gap junction-mediated tumour interconnections, forming an electrically coupled network. Depolarization of glioma membranes assessed by in vivo optogenetics promotes proliferation, whereas pharmacologically or genetically blocking electrochemical signalling inhibits the growth of glioma xenografts and extends mouse survival. Emphasizing the positive feedback mechanisms by which gliomas increase neuronal excitability and thus activity-regulated glioma growth, human intraoperative electrocorticography demonstrates increased cortical excitability in the glioma-infiltrated brain. Together, these findings indicate that synaptic and electrical integration into neural circuits promotes glioma progression.


Subject(s)
Brain/physiopathology, Electrical Synapses/pathology, Electrophysiological Phenomena, Glioma/physiopathology, Animals, Brain/cytology, Cell Membrane/pathology, Cell Proliferation, Gap Junctions/pathology, Gene Expression Profiling, Gene Expression Regulation, Neoplastic, Heterografts, Humans, Mice, Mice, Inbred NOD, Neurons/pathology, Optogenetics, Potassium/metabolism, Synaptic Transmission, Tumor Cells, Cultured
20.
J Neurosurg ; 132(6): 1930-1937, 2019 May 31.
Article in English | MEDLINE | ID: mdl-31151102

ABSTRACT

OBJECTIVE: Maximal safe tumor resection in language areas of the brain relies on a patient's ability to perform intraoperative language tasks. Assessing the performance of these tasks during awake craniotomies allows the neurosurgeon to identify and preserve brain regions that are critical for language processing. However, receiving sedation and analgesia just prior to an awake craniotomy may reduce a patient's wakefulness, leading to transient language and/or cognitive impairments that do not completely subside before language testing begins. At present, the degree to which wakefulness influences intraoperative language task performance is unclear. Therefore, the authors sought to determine whether any of 5 brief measures of wakefulness predicts such performance during awake craniotomies for glioma resection. METHODS: The authors recruited 21 patients with dominant hemisphere low- and high-grade gliomas. Each patient performed baseline wakefulness measures in addition to picture-naming and text-reading language tasks 24 hours before undergoing an awake craniotomy. The patients performed these same tasks again in the operating room following the cessation of anesthesia medications. The authors then conducted statistical analyses to investigate potential relationships between wakefulness measures and language task performance. RESULTS: Relative to baseline, performance on 3 of the 4 objective wakefulness measures (rapid counting, button pressing, and vigilance) declined in the operating room. Moreover, these declines appeared in the complete absence of self-reported changes in arousal. Performance on language tasks similarly declined in the intraoperative setting, with patients experiencing greater declines in picture naming than in text reading. Finally, performance declines on the rapid counting and vigilance wakefulness tasks predicted performance declines on the picture-naming task.
CONCLUSIONS: Current subjective methods for assessing wakefulness during awake craniotomies may be insufficient. The administration of objective measures of wakefulness just prior to language task administration may help to ensure that patients are ready for testing. It may also allow neurosurgeons to identify patients who are at risk for poor intraoperative performance.
