1.
Clin Neurophysiol; 147: 31-44, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36634533

ABSTRACT

OBJECTIVE: To investigate the feasibility of passive functional mapping in the receptive language cortex during general anesthesia using electrocorticographic (ECoG) signals. METHODS: We used subdurally placed ECoG grids to record cortical responses to speech stimuli during awake and anesthesia conditions. We identified the cortical areas with significant responses to the stimuli using the spectro-temporal consistency of the brain signal in the broadband gamma (BBG) frequency band (70-170 Hz). RESULTS: We found that ECoG BBG responses during general anesthesia effectively identify cortical regions associated with receptive language function. Our analyses demonstrated that the ability to identify receptive language cortex varies across different states and depths of anesthesia. We confirmed these results by comparing them to receptive language areas identified during the awake condition. Quantification of these results demonstrated an average sensitivity and specificity of passive language mapping during general anesthesia to be 49±7.7% and 100%, respectively. CONCLUSION: Our results demonstrate that mapping receptive language cortex in patients during general anesthesia is feasible. SIGNIFICANCE: Our proposed protocol could greatly expand the population of patients that can benefit from passive language mapping techniques, and could eliminate the risks associated with electrocortical stimulation during an awake craniotomy.
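The broadband gamma (BBG, 70-170 Hz) feature this study relies on is, at its simplest, band-limited signal power. A minimal NumPy sketch on synthetic data (FFT band power only; the study's actual spectro-temporal consistency analysis is time-resolved, and `broadband_gamma_power` is an illustrative name, not the authors' code):

```python
import numpy as np

def broadband_gamma_power(ecog, fs, band=(70.0, 170.0)):
    """Mean spectral power of each channel inside the broadband
    gamma band, estimated from the FFT magnitude spectrum.
    ecog: array of shape (channels, samples); fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(ecog.shape[-1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(ecog, axis=-1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[..., in_band].mean(axis=-1)

# toy recording: channel 0 carries a 100 Hz oscillation, channel 1 is quiet noise
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
ecog = np.vstack([np.sin(2 * np.pi * 100 * t),
                  0.01 * rng.standard_normal(t.size)])
bbg = broadband_gamma_power(ecog, fs)
print(bbg[0] > bbg[1])  # True: the oscillating channel dominates the band
```

Channels whose BBG power rises reliably during speech stimulation would then be candidates for the receptive-language map compared against the awake condition.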


Subject(s)
Brain Mapping; Electrocorticography; Humans; Electrocorticography/methods; Brain Mapping/methods; Brain/surgery; Language; Anesthesia, General; Cerebral Cortex/physiology
2.
Epilepsy Behav Case Rep; 5: 46-51, 2016.
Article in English | MEDLINE | ID: mdl-27408802

ABSTRACT

In this case report, we investigated the utility and practicality of passive intraoperative functional mapping of expressive language cortex using high-resolution electrocorticography (ECoG). The patient presented here experienced new-onset seizures caused by a medium-grade tumor in very close proximity to expressive language regions. In preparation for tumor resection, the patient underwent multiple functional language mapping procedures. We examined the relationship among results obtained with intraoperative high-resolution ECoG, extraoperative ECoG utilizing a conventional subdural grid, extraoperative electrical cortical stimulation (ECS) mapping, and functional magnetic resonance imaging (fMRI). Our results demonstrate that intraoperative mapping using high-resolution ECoG is feasible and, within minutes, produces results that are qualitatively concordant with those achieved by extraoperative mapping modalities. They also suggest that functional language mapping of expressive language areas with ECoG may prove useful in many intraoperative conditions given its time efficiency and safety. Finally, they demonstrate that integration of results from multiple functional mapping techniques, both intraoperative and extraoperative, may serve to improve the confidence in or precision of functional localization when pathology encroaches upon eloquent language cortex.
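The report describes qualitative concordance between mapping modalities; one common way such agreement could be quantified is a Dice-style overlap between the sets of electrodes each modality flags. A small sketch (the electrode labels and function name below are hypothetical, not from the study):

```python
def map_concordance(sites_a, sites_b):
    """Dice overlap between two sets of electrode labels identified
    as language-related by two different mapping modalities."""
    a, b = set(sites_a), set(sites_b)
    if not a and not b:
        return 1.0  # two empty maps agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

# hypothetical site labels from an ECoG run and an ECS run
ecog_sites = {"G12", "G13", "G20"}
ecs_sites = {"G13", "G20", "G21"}
print(map_concordance(ecog_sites, ecs_sites))  # 2*|A∩B|/(|A|+|B|) = 4/6
```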

3.
Epilepsy Behav Case Rep; 6: 13-18, 2016.
Article in English | MEDLINE | ID: mdl-27408803

ABSTRACT

OBJECTIVE: Patients requiring resective brain surgery often undergo functional brain mapping during perioperative planning to localize expressive language areas. Currently, all established protocols to perform such mapping require substantial time and patient participation during verb generation or similar tasks. These issues can make language mapping impractical in certain clinical circumstances (e.g., during awake craniotomies) or with certain populations (e.g., pediatric patients). Thus, it is important to develop new techniques that reduce mapping time and the requirement for active patient participation. Several neuroscientific studies reported that the mere auditory presentation of speech stimuli can engage not only receptive but also expressive language areas. Here, we tested the hypothesis that submission of electrocorticographic (ECoG) recordings during a short speech listening task to an appropriate analysis procedure can identify eloquent expressive language cortex without requiring the patient to speak. METHODS: Three patients undergoing temporary placement of subdural electrode grids passively listened to stories while we recorded their ECoG activity. We identified those sites whose activity in the broadband gamma range (70-170 Hz) changed immediately after presentation of the speech stimuli with respect to a prestimulus baseline. RESULTS: Our analyses revealed increased broadband gamma activity at distinct locations in the inferior frontal cortex, superior temporal gyrus, and/or perisylvian areas in all three patients and premotor and/or supplementary motor areas in two patients. The sites in the inferior frontal cortex that we identified with our procedure were either on or immediately adjacent to locations identified using electrical cortical stimulation (ECS) mapping. 
CONCLUSIONS: The results of this study provide encouraging preliminary evidence that a brief, practical protocol may identify expressive language areas without requiring the patient to speak. This protocol could provide the clinician with a map of expressive language cortex within a few minutes. This may be useful as an adjunct to ECS interrogation or as an alternative to mapping using functional magnetic resonance imaging (fMRI). In conclusion, with further development and validation in more subjects, the approach presented here could help in identifying expressive language areas in situations where patients cannot speak in response to task instructions.
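The channel-selection step in METHODS, comparing post-stimulus broadband gamma activity against a prestimulus baseline, can be sketched as a per-channel z-score test. This is an illustration on synthetic data with an assumed threshold, not the authors' exact statistic:

```python
import numpy as np

def responsive_channels(baseline, post, z_thresh=3.0):
    """Flag channels whose mean post-stimulus broadband gamma power
    exceeds the prestimulus baseline mean by more than z_thresh
    baseline standard deviations. Inputs: (channels, samples) arrays."""
    mu = baseline.mean(axis=1)
    sd = baseline.std(axis=1) + 1e-12        # guard against zero variance
    z = (post.mean(axis=1) - mu) / sd
    return np.flatnonzero(z > z_thresh)

rng = np.random.default_rng(0)
base = rng.normal(1.0, 0.1, size=(4, 500))   # 4 channels of baseline power
post = base.copy()
post[2] += 1.0                               # only channel 2 "responds"
print(responsive_channels(base, post))       # [2]
```

In practice the threshold would be set with proper multiple-comparison control across electrodes, which this toy version omits.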

4.
Front Neurosci; 9: 217, 2015.
Article in English | MEDLINE | ID: mdl-26124702

ABSTRACT

It has long been speculated whether communication between humans and machines based on natural-speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
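Word error rate, the headline metric here, is the standard ASR measure: the Levenshtein edit distance between the reference and decoded word sequences, normalized by reference length. A minimal self-contained sketch (the example sentences are invented, not from the study's corpus):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    via the standard Levenshtein dynamic program over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                               # delete all ref words
    for j in range(len(hyp) + 1):
        d[0][j] = j                               # insert all hyp words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the brain decodes speech",
                      "the brain decodes a speech"))
# one insertion over four reference words -> 0.25
```

A phone error rate is computed identically, just over phone tokens instead of words.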
