Results 1 - 9 of 9
1.
Sci Rep ; 14(1): 9617, 2024 04 26.
Article in English | MEDLINE | ID: mdl-38671062

ABSTRACT

Brain-computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired articulation due to ALS, participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant's voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.
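The 3-stage pipeline described in this abstract begins by identifying speech segments in the ECoG stream before decoding and synthesis. A minimal sketch of that first stage is shown below, using a toy Elman-style recurrent network over feature frames; the feature and hidden sizes are hypothetical and the weights are random placeholders, not trained values from the study.

```python
import math
import random

random.seed(0)

N_FEAT, N_HID = 4, 8  # hypothetical feature/hidden sizes

# Random placeholder weights (a real decoder would be trained on ECoG data).
Wxh = [[random.gauss(0, 0.3) for _ in range(N_FEAT)] for _ in range(N_HID)]
Whh = [[random.gauss(0, 0.3) for _ in range(N_HID)] for _ in range(N_HID)]
Who = [random.gauss(0, 0.3) for _ in range(N_HID)]

def rnn_speech_probs(frames):
    """Scan feature frames with an Elman RNN; return per-frame speech probabilities."""
    h = [0.0] * N_HID
    probs = []
    for x in frames:
        # Recurrent update: new hidden state from current input and previous state.
        h = [math.tanh(sum(Wxh[i][j] * x[j] for j in range(N_FEAT)) +
                       sum(Whh[i][j] * h[j] for j in range(N_HID)))
             for i in range(N_HID)]
        # Sigmoid readout: probability that this frame contains speech.
        logit = sum(Who[i] * h[i] for i in range(N_HID))
        probs.append(1.0 / (1.0 + math.exp(-logit)))
    return probs

# Synthetic stand-in for 20 frames of ECoG-derived features.
frames = [[random.gauss(0, 1) for _ in range(N_FEAT)] for _ in range(20)]
probs = rnn_speech_probs(frames)
```

In the full system, frames flagged as speech would then be passed to the decoding and synthesis stages.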


Subject(s)
Amyotrophic Lateral Sclerosis , Brain-Computer Interfaces , Speech , Humans , Amyotrophic Lateral Sclerosis/physiopathology , Amyotrophic Lateral Sclerosis/therapy , Male , Speech/physiology , Middle Aged , Implanted Electrodes , Electrocorticography
2.
Res Sq ; 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37841873

ABSTRACT

Background: Brain-computer interfaces (BCIs) can restore communication in movement- and/or speech-impaired individuals by enabling neural control of computer typing applications. Single-command "click" decoders provide a basic yet highly functional capability. Methods: We sought to test the performance and long-term stability of click decoding using a chronically implanted high-density electrocorticographic (ECoG) BCI with coverage of the sensorimotor cortex in a human clinical trial participant (ClinicalTrials.gov, NCT03567213) with amyotrophic lateral sclerosis (ALS). We trained the participant's click decoder using a small amount of training data (< 44 minutes across four days) collected up to 21 days prior to BCI use, and then tested it over a period of 90 days without any retraining or updating. Results: Using this click decoder to navigate a switch-scanning spelling interface, the study participant was able to maintain a median spelling rate of 10.2 characters per minute. Though a transient reduction in signal power modulation interrupted testing with this fixed model, a new click decoder achieved comparable performance despite being trained with even less data (< 15 min, within one day). Conclusion: These results demonstrate that a click decoder can be trained with a small ECoG dataset while retaining robust performance for extended periods, providing functional text-based communication to BCI users.
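The single-command "click" decoding described above reduces, at its core, to detecting deliberate neural events in a continuous power trace. A minimal sketch of that idea, assuming a smoothed high-gamma power signal with a simple threshold and refractory period (illustrative values, not the study's trained decoder):

```python
def detect_clicks(power, threshold=1.0, refractory=5):
    """Return sample indices where `power` crosses `threshold` upward,
    ignoring re-crossings within `refractory` samples of the last click."""
    clicks, last = [], -refractory
    for i in range(1, len(power)):
        if power[i - 1] < threshold <= power[i] and i - last >= refractory:
            clicks.append(i)
            last = i
    return clicks

# Synthetic power trace: baseline noise with two attempted "clicks".
trace = [0.1] * 10 + [1.5] * 3 + [0.1] * 10 + [1.8] * 3 + [0.1] * 5
clicks = detect_clicks(trace)  # -> [10, 23]
```

Each detected click would drive one selection step of a switch-scanning spelling interface such as the one the participant used.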

3.
Adv Sci (Weinh) ; 10(35): e2304853, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37875404

ABSTRACT

Brain-computer interfaces (BCIs) can be used to control assistive devices by patients with neurological disorders like amyotrophic lateral sclerosis (ALS) that limit speech and movement. For assistive control, it is desirable for BCI systems to be accurate and reliable, preferably with minimal setup time. In this study, a participant with severe dysarthria due to ALS operates computer applications with six intuitive speech commands via a chronic electrocorticographic (ECoG) implant over the ventral sensorimotor cortex. Speech commands are accurately detected and decoded (median accuracy: 90.59%) throughout a 3-month study period without model retraining or recalibration. Use of the BCI does not require exogenous timing cues, enabling the participant to issue self-paced commands at will. These results demonstrate that a chronically implanted ECoG-based speech BCI can reliably control assistive devices over long time periods with only initial model training and calibration, supporting the feasibility of unassisted home use.


Subject(s)
Amyotrophic Lateral Sclerosis , Brain-Computer Interfaces , Humans , Speech , Amyotrophic Lateral Sclerosis/complications , Electrocorticography
4.
medRxiv ; 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37425721

ABSTRACT

Recent studies have shown that speech can be reconstructed and synthesized using only brain activity recorded with intracranial electrodes, but until now this has only been done using retrospective analyses of recordings from able-bodied patients temporarily implanted with electrodes for epilepsy surgery. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a clinical trial participant (ClinicalTrials.gov, NCT03567213) with dysarthria due to amyotrophic lateral sclerosis (ALS). We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the user from a vocabulary of 6 keywords originally designed to allow intuitive selection of items on a communication board. Our results show for the first time that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words that are intelligible to human listeners while preserving the participant's voice profile.

5.
bioRxiv ; 2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37066306

ABSTRACT

Neurosurgical procedures that enable direct brain recordings in awake patients offer unique opportunities to explore the neurophysiology of human speech. The scarcity of these opportunities and the altruism of participating patients compel us to apply the highest rigor to signal analysis. Intracranial electroencephalography (iEEG) signals recorded during overt speech can contain a speech artifact that tracks the fundamental frequency (F0) of the participant's voice, involving the same high-gamma frequencies that are modulated during speech production and perception. To address this artifact, we developed a spatial-filtering approach to identify and remove acoustic-induced contaminations of the recorded signal. We found that traditional reference schemes jeopardized signal quality, whereas our data-driven method denoised the recordings while preserving underlying neural activity.
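The data-driven denoising described above removes acoustic-induced contamination while preserving neural activity. A minimal single-channel sketch of the underlying idea: estimate by least squares how strongly a reference artifact waveform leaks into a recorded channel, then subtract that component. The real method is a spatial filter operating across many channels at once; the signals below are illustrative stand-ins.

```python
def project_out(channel, artifact):
    """Subtract the least-squares projection of `artifact` from `channel`,
    leaving a residual orthogonal to the artifact waveform."""
    num = sum(c * a for c, a in zip(channel, artifact))
    den = sum(a * a for a in artifact)
    gain = num / den if den else 0.0
    return [c - gain * a for c, a in zip(channel, artifact)]

artifact = [1.0, -1.0, 1.0, -1.0]        # e.g., an F0-locked acoustic waveform
neural = [0.5, 0.4, 0.6, 0.5]            # underlying activity (unknown in practice)
recorded = [n + 0.8 * a for n, a in zip(neural, artifact)]  # contaminated channel
cleaned = project_out(recorded, artifact)
```

After projection, the cleaned channel carries no component correlated with the artifact waveform, which is the property the spatial-filtering approach exploits.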

6.
Neurotherapeutics ; 19(1): 263-273, 2022 01.
Article in English | MEDLINE | ID: mdl-35099768

ABSTRACT

Damage or degeneration of motor pathways necessary for speech and other movements, as in brainstem strokes or amyotrophic lateral sclerosis (ALS), can interfere with efficient communication without affecting brain structures responsible for language or cognition. In the worst-case scenario, this can result in locked-in syndrome (LIS), a condition in which individuals cannot initiate communication and can only express themselves by answering yes/no questions with eye blinks or other rudimentary movements. Existing augmentative and alternative communication (AAC) devices that rely on eye tracking can improve the quality of life for people with this condition, but brain-computer interfaces (BCIs) are also increasingly being investigated as AAC devices, particularly when eye tracking is too slow or unreliable. Moreover, with recent and ongoing advances in machine learning and neural recording technologies, BCIs may offer the only means to go beyond cursor control and text generation on a computer, to allow real-time synthesis of speech, which would arguably offer the most efficient and expressive channel for communication. The potential for BCI speech synthesis has only recently been realized because of seminal studies of the neuroanatomical and neurophysiological underpinnings of speech production using intracranial electrocorticographic (ECoG) recordings in patients undergoing epilepsy surgery. These studies have shown that cortical areas responsible for vocalization and articulation are distributed over a large area of ventral sensorimotor cortex, and that it is possible to decode speech and reconstruct its acoustics from ECoG if these areas are recorded with sufficiently dense and comprehensive electrode arrays. In this article, we review these advances, including the latest neural decoding strategies that range from deep learning models to the direct concatenation of speech units. We also discuss state-of-the-art vocoders that are integral in constructing natural-sounding audio waveforms for speech BCIs. Finally, this review outlines some of the challenges ahead in directly synthesizing speech for patients with LIS.


Subject(s)
Brain-Computer Interfaces , Communication , Electrocorticography , Humans , Quality of Life , Speech/physiology
7.
Ann Clin Transl Neurol ; 6(7): 1142-1150, 2019 07.
Article in English | MEDLINE | ID: mdl-31353863

ABSTRACT

BACKGROUND: The selection of optimal deep brain stimulation (DBS) parameters is time-consuming, experience-dependent, and best suited when acute effects of stimulation can be observed (e.g., tremor reduction). OBJECTIVES: To test the hypothesis that optimal stimulation location can be estimated based on the cortical connections of DBS contacts. METHODS: We analyzed a cohort of 38 patients with Parkinson's disease (24 training, and 14 test cohort). Using whole-brain probabilistic tractography, we first mapped the cortical regions associated with stimulation-induced efficacy (rigidity, bradykinesia, and tremor improvement) and side effects (paresthesia, motor contractions, and visual disturbances). We then trained a support vector machine classifier to categorize DBS contacts into efficacious, defined by a therapeutic window ≥2 V (threshold for side effect minus threshold for efficacy), based on their connections with cortical regions associated with efficacy versus side effects. The connectivity-based classifications were then compared with actual stimulation contacts using receiver-operating characteristics (ROC) curves. RESULTS: Unique cortical clusters were associated with stimulation-induced efficacy and side effects. In the training dataset, 42 of the 47 stimulation contacts were accurately classified as efficacious, with a therapeutic window of ≥3 V in 31 (66%) and between 2 and 2.9 V in 11 (24%) electrodes. This connectivity-based estimation was successfully replicated in the test cohort with similar accuracy (area under ROC = 0.83). CONCLUSIONS: Cortical connections can predict the efficacy of DBS contacts and potentially facilitate DBS programming. The clinical utility of this paradigm in optimizing DBS outcomes should be prospectively tested, especially for directional electrodes.
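The study above validates connectivity-based classification of DBS contacts with ROC curves. A minimal sketch of that evaluation logic follows; the study trained a support vector machine on connectivity features, but here a single hypothetical "efficacy connectivity" score stands in for that model, and the area under the ROC curve is computed by the rank (Mann-Whitney) method. Scores and labels are illustrative, not study data.

```python
def roc_auc(scores, labels):
    """AUC = probability that a randomly chosen positive outscores a
    randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Label 1 = efficacious contact (therapeutic window >= 2 V), 0 = not.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]  # hypothetical connectivity scores
labels = [1,   1,   0,   1,   0,   1,   0]
auc = roc_auc(scores, labels)  # -> 0.75
```

An AUC of 0.83, as reported for the test cohort, would mean the connectivity score ranks a random efficacious contact above a random non-efficacious one 83% of the time.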


Subject(s)
Deep Brain Stimulation/methods , Parkinson Disease/therapy , Aged , Brain/diagnostic imaging , Deep Brain Stimulation/adverse effects , Feasibility Studies , Humans , Hypokinesia/diagnostic imaging , Hypokinesia/therapy , Middle Aged , Parkinson Disease/diagnostic imaging , Tremor/diagnostic imaging , Tremor/therapy
8.
Front Neurosci ; 13: 60, 2019.
Article in English | MEDLINE | ID: mdl-30837823

ABSTRACT

Neural keyword spotting could form the basis of a speech brain-computer interface for menu navigation if it can be done with low latency and high specificity comparable to the "wake-word" functionality of modern voice-activated AI assistant technologies. This study investigated neural keyword spotting using motor representations of speech via invasively recorded electrocorticographic signals as a proof of concept. Neural matched filters were created from monosyllabic consonant-vowel utterances: one keyword utterance, and 11 similar non-keyword utterances. These filters were used in an analog to the acoustic keyword spotting problem, applied for the first time to neural data. The filter templates were cross-correlated with the neural signal, capturing temporal dynamics of neural activation across cortical sites. Neural voice activity detection (VAD) was used to identify utterance times, and a discriminative classifier was used to determine whether these utterances were the keyword or non-keyword speech. Model performance appeared to be highly related to electrode placement and spatial density. Vowel height (/a/ vs /i/) was poorly discriminated in recordings from sensorimotor cortex, but was highly discriminable using neural features from superior temporal gyrus during self-monitoring. The best-performing neural keyword detection (5 keyword detections with 2 false positives across 60 utterances) and neural VAD (100% sensitivity, ~1 false detection per 10 utterances) came from high-density (2 mm electrode diameter and 5 mm pitch) recordings from ventral sensorimotor cortex, suggesting the spatial fidelity and extent of high-density ECoG arrays may be sufficient for the purpose of speech brain-computer interfaces.
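The matched-filter approach above can be sketched in its simplest form: a keyword template is cross-correlated with the ongoing signal, and a peak in the correlation marks a candidate keyword utterance. The single-channel toy signals below stand in for multi-electrode high-gamma activity; a real system correlates templates across many cortical sites at once.

```python
def cross_correlate(signal, template):
    """Sliding dot product of `template` against `signal` (valid lags only)."""
    m = len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(len(signal) - m + 1)]

template = [0.0, 1.0, 2.0, 1.0, 0.0]        # toy neural "keyword" template
signal = [0.0] * 6 + template + [0.0] * 6   # keyword embedded at offset 6
corr = cross_correlate(signal, template)
best_lag = max(range(len(corr)), key=corr.__getitem__)  # -> 6
```

A detection threshold on the correlation peak, combined with a classifier over the per-site correlation profile, then separates keyword from non-keyword utterances.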

9.
Neurotherapeutics ; 16(1): 144-165, 2019 01.
Article in English | MEDLINE | ID: mdl-30617653

ABSTRACT

A brain-computer interface (BCI) is a technology that uses neural features to restore or augment the capabilities of its user. A BCI for speech would enable communication in real time via neural correlates of attempted or imagined speech. Such a technology would potentially restore communication and improve quality of life for locked-in patients and other patients with severe communication disorders. There have been many recent developments in neural decoders, neural feature extraction, and brain recording modalities facilitating BCI for the control of prosthetics and in automatic speech recognition (ASR). Indeed, ASR and related fields have advanced significantly in recent years, and lend many insights into the requirements, goals, and strategies for speech BCI. Neural speech decoding is a comparatively new field but has shown much promise, with recent studies demonstrating semantic, auditory, and articulatory decoding using electrocorticography (ECoG) and other neural recording modalities. Because the neural representations for speech and language are widely distributed over cortical regions spanning the frontal, parietal, and temporal lobes, the mesoscopic scale of population activity captured by ECoG surface electrode arrays may have distinct advantages for speech BCI, in contrast to the advantages of microelectrode arrays for upper-limb BCI. Nevertheless, there remain many challenges for the translation of speech BCIs to clinical populations. This review discusses and outlines the current state of the art for speech BCI and explores what a speech BCI using chronic ECoG might entail.


Subject(s)
Brain-Computer Interfaces , Electrocorticography , Speech , Humans