Results 1 - 16 of 16
1.
bioRxiv ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38712189

ABSTRACT

Keyboard typing with finger movements is a versatile digital interface for users with diverse skills, needs, and preferences. Currently, such an interface does not exist for people with paralysis. We developed an intracortical brain-computer interface (BCI) for typing with attempted flexion/extension movements of three finger groups on the right hand, or both hands, and demonstrated its flexibility in two dominant typing paradigms. The first paradigm is "point-and-click" typing, where a BCI user selects one key at a time using continuous real-time control, allowing selection of arbitrary sequences of symbols. During cued character selection with this paradigm, a human research participant with paralysis achieved 30-40 selections per minute with nearly 90% accuracy. The second paradigm is "keystroke" typing, where the BCI user selects each character by a discrete movement without real-time feedback, often giving a faster speed for natural language sentences. With 90 cued characters per minute, decoding attempted finger movements and correcting errors using a language model resulted in more than 90% accuracy. Notably, both paradigms matched the state-of-the-art for BCI performance and enabled further flexibility by the simultaneous selection of multiple characters as well as efficient decoder estimation across paradigms. Overall, the high-performance interface is a step towards the wider accessibility of BCI technology by addressing unmet user needs for flexibility.

2.
bioRxiv ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38370697

ABSTRACT

People with paralysis express unmet needs for peer support, leisure activities, and sporting activities. Many within the general population rely on social media and massively multiplayer video games to address these needs. We developed a high-performance finger brain-computer interface system allowing continuous control of 3 independent finger groups, with the thumb moving in 2D. The system was tested in a human research participant over sequential trials requiring fingers to reach and hold on targets, with an average acquisition rate of 76 targets/minute and completion time of 1.58 ± 0.06 seconds. Performance compared favorably to previous animal studies, despite a 2-fold increase in the decoded degrees-of-freedom (DOF). Finger positions were then used for 4-DOF velocity control of a virtual quadcopter, demonstrating functionality over both fixed and random obstacle courses. This approach shows promise for controlling multiple-DOF end-effectors, such as robotic fingers or digital interfaces for work, entertainment, and socialization.
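The 4-DOF quadcopter control described above can be pictured as a simple axis assignment from decoded finger state to a velocity command. The function below is an illustrative sketch: the particular mapping of finger groups to flight axes and the `gain` parameter are assumptions, not the study's actual mapping.

```python
import numpy as np

def fingers_to_quadcopter_vel(thumb_xy, index_mid_flex, ring_small_flex, gain=1.0):
    """Map decoded finger state (2D thumb + two finger-group flexions)
    to a 4-DOF quadcopter velocity command. Axis assignment here is an
    illustrative assumption."""
    vx, vy = gain * np.asarray(thumb_xy, dtype=float)  # thumb -> planar velocity
    vz = gain * index_mid_flex                         # one group -> climb rate
    yaw_rate = gain * ring_small_flex                  # other group -> yaw rate
    return np.array([vx, vy, vz, yaw_rate])
```

A continuous decoder would call this every control tick, turning three decoded finger positions into a four-dimensional velocity vector for the virtual vehicle.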

3.
Sci Rep ; 14(1): 1598, 2024 01 18.
Article in English | MEDLINE | ID: mdl-38238386

ABSTRACT

Brain-computer interfaces have so far focused largely on enabling the control of a single effector, for example a single computer cursor or robotic arm. Restoring multi-effector motion could unlock greater functionality for people with paralysis (e.g., bimanual movement). However, it may prove challenging to decode the simultaneous motion of multiple effectors, as we recently found that a compositional neural code links movements across all limbs and that neural tuning changes nonlinearly during dual-effector motion. Here, we demonstrate the feasibility of high-quality bimanual control of two cursors via neural network (NN) decoders. Through simulations, we show that NNs leverage a neural 'laterality' dimension to distinguish between left- and right-hand movements as neural tuning to both hands becomes increasingly correlated. In training recurrent neural networks (RNNs) for two-cursor control, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously. Our results suggest that neural network decoders may be advantageous for multi-effector decoding, provided they are designed to transfer to the online setting.


Subjects
Brain-Computer Interfaces , Neural Networks, Computer , Humans , Movement , Functional Laterality , Hand , Paralysis , Brain
4.
ArXiv ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37986728

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) have shown promise for restoring rapid communication to people with neurological disorders such as amyotrophic lateral sclerosis (ALS). However, to maintain high performance over time, iBCIs typically need frequent recalibration to combat changes in the neural recordings that accrue over days. This requires iBCI users to stop using the iBCI and engage in supervised data collection, making the iBCI system hard to use. In this paper, we propose a method that enables self-recalibration of communication iBCIs without interrupting the user. Our method leverages large language models (LMs) to automatically correct errors in iBCI outputs. The self-recalibration process uses these corrected outputs ("pseudo-labels") to continually update the iBCI decoder online. Over a period of more than one year (403 days), we evaluated our Continual Online Recalibration with Pseudo-labels (CORP) framework with one clinical trial participant. CORP achieved a stable decoding accuracy of 93.84% in an online handwriting iBCI task, significantly outperforming other baseline methods. Notably, this is the longest-running iBCI stability demonstration involving a human participant. Our results provide the first evidence for long-term stabilization of a plug-and-play, high-performance communication iBCI, addressing a major barrier for the clinical translation of iBCIs.
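The CORP loop described above reduces to three steps per batch: decode, correct the output with a language model, and update the decoder on the corrected "pseudo-labels". The sketch below is a toy stand-in under stated assumptions: `ToyDecoder` is a lookup table rather than the actual RNN decoder, and `lm_correct` snaps words to a vocabulary by edit distance rather than invoking a large language model.

```python
import difflib

class ToyDecoder:
    """Minimal lookup decoder used only to illustrate the loop;
    the real iBCI decoder is a neural network over neural features."""
    def __init__(self):
        self.table = {}
    def predict(self, feats):
        return [self.table.get(f, f) for f in feats]
    def update(self, feats, labels):
        self.table.update(zip(feats, labels))

def lm_correct(words, vocabulary):
    # Edit-distance stand-in for the paper's large language model:
    # snap each decoded word to its closest vocabulary entry.
    return [difflib.get_close_matches(w, vocabulary, n=1, cutoff=0.0)[0]
            for w in words]

def corp_step(decoder, feats, vocabulary):
    decoded = decoder.predict(feats)           # raw iBCI output
    pseudo = lm_correct(decoded, vocabulary)   # LM error correction
    decoder.update(feats, pseudo)              # continual online update
    return pseudo
```

The key property the test exercises is that the decoder improves from its own corrected outputs, with no supervised labels involved.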

5.
bioRxiv ; 2023 Oct 12.
Article in English | MEDLINE | ID: mdl-37873182

ABSTRACT

How does the motor cortex combine simple movements (such as single finger flexion/extension) into complex movements (such as hand gestures or playing piano)? Motor cortical activity was recorded using intracortical multi-electrode arrays in two people with tetraplegia as they attempted single, pairwise, and higher-order finger movements. Neural activity for simultaneous movements was largely aligned with linear summation of the corresponding single finger movement activities, with two violations. First, the neural activity was normalized, preventing its magnitude from growing with the number of moving fingers. Second, the neural tuning direction of weakly represented fingers (e.g. middle) changed significantly as a result of the movement of other fingers. These deviations from linearity resulted in non-linear methods outperforming linear methods for neural decoding. Overall, simultaneous finger movements are thus represented by a pseudo-linear summation of individual finger movements.
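The "pseudo-linear summation" above can be sketched as a sum of single-finger activity patterns followed by renormalization, so that magnitude does not grow with the number of moving fingers. The particular normalization rule (rescaling to a fixed norm) is an assumption for illustration; the study characterizes the normalization empirically.

```python
import numpy as np

def predict_combined(single_patterns, moving_fingers, target_norm=1.0):
    """Pseudo-linear summation sketch: predicted activity for a
    simultaneous movement is the sum of single-finger patterns,
    renormalized to a fixed magnitude (normalization rule assumed)."""
    total = sum(single_patterns[f] for f in moving_fingers)
    norm = np.linalg.norm(total)
    return total if norm == 0 else total * (target_norm / norm)
```

Note this captures only the first violation (normalization); the second (tuning changes of weakly represented fingers) is what motivates non-linear decoders.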

6.
Nature ; 620(7976): 1031-1036, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612500

ABSTRACT

Speech brain-computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speech into text1,2 or sound3,4. Early demonstrations, although promising, have not yet achieved accuracies sufficiently high for communication of unconstrained sentences from a large vocabulary1-7. Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant, who can no longer speak intelligibly owing to amyotrophic lateral sclerosis, achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI2) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant's attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record8 and begins to approach the speed of natural conversation (160 words per minute9). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.


Subjects
Brain-Computer Interfaces , Neural Prostheses , Paralysis , Speech , Humans , Amyotrophic Lateral Sclerosis/physiopathology , Amyotrophic Lateral Sclerosis/rehabilitation , Cerebral Cortex/physiology , Microelectrodes , Paralysis/physiopathology , Paralysis/rehabilitation , Vocabulary
7.
Article in English | MEDLINE | ID: mdl-37465143

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) decode neural activity from the cortex and enable motor and communication prostheses, such as cursor control, handwriting, and speech, for people with paralysis. This paper introduces a new iBCI communication prosthesis using a 3D keyboard interface for typing using continuous, closed-loop movement of multiple fingers. A participant-specific BCI keyboard prototype was developed for a BrainGate2 clinical trial participant (T5) using neural recordings from the hand-knob area of the left premotor cortex. We assessed the relative decoding accuracy of flexion/extension movements of individual single fingers (5 degrees of freedom (DOF)) vs. three groups of fingers (thumb, index-middle, and ring-small fingers, 3 DOF). Neural decoding using 3 independent DOF was more accurate (95%) than that using 5 DOF (76%). A virtual keyboard was then developed where each finger group moved along a flexion-extension arc to acquire targets that corresponded to English letters and symbols. The locations of these letters/symbols were optimized using natural language statistics, resulting in approximately a 2× reduction in average distance traveled by the fingers compared to a random keyboard layout. This keyboard was tested using a simple real-time closed-loop decoder enabling T5 to type with 31 symbols at 90% accuracy and approximately 2.3 sec/symbol (excluding a 2 second hold time) on average.
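The layout optimization above amounts to placing high-frequency symbols at positions that cost little finger travel. A minimal greedy sketch: the objective (expected travel from a resting point, weighted by symbol frequency) and the greedy assignment are assumptions for illustration; the paper's optimizer may differ.

```python
from collections import Counter

def optimize_layout(corpus, positions, home=0.0):
    """Greedy sketch: assign the most frequent symbols to the positions
    closest to a resting point, reducing expected finger travel."""
    freq = Counter(corpus)
    symbols = [s for s, _ in freq.most_common()]
    ordered = sorted(positions, key=lambda p: abs(p - home))
    return dict(zip(symbols, ordered))

def expected_travel(layout, corpus, home=0.0):
    # Mean distance from the resting point per typed symbol.
    return sum(abs(layout[c] - home) for c in corpus) / len(corpus)
```

On a toy two-key corpus, the frequency-aware layout beats the swapped (frequency-blind) layout, which is the effect the abstract quantifies as a roughly 2× travel reduction.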

8.
bioRxiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131830

ABSTRACT

Advances in deep learning have given rise to neural network models of the relationship between movement and brain activity that appear to far outperform prior approaches. Brain-computer interfaces (BCIs) that enable people with paralysis to control external devices, such as robotic arms or computer cursors, might stand to benefit greatly from these advances. We tested recurrent neural networks (RNNs) on a challenging nonlinear BCI problem: decoding continuous bimanual movement of two computer cursors. Surprisingly, we found that although RNNs appeared to perform well in offline settings, they did so by overfitting to the temporal structure of the training data and failed to generalize to real-time neuroprosthetic control. In response, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously, far outperforming standard linear methods. Our results provide evidence that preventing models from overfitting to temporal structure in training data may, in principle, aid in translating deep learning advances to the BCI setting, unlocking improved performance for challenging applications.
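The augmentation described above, dilating/compressing trials in time and re-ordering them, can be sketched with per-channel linear interpolation. The scale range and resampling scheme below are assumed hyperparameters, not the paper's exact values.

```python
import numpy as np

def augment_temporal(trials, rng, min_scale=0.7, max_scale=1.3):
    """Dilate/compress each trial in time by a random factor (linear
    interpolation per channel) and shuffle trial order, so an RNN
    cannot overfit the session's temporal structure."""
    out = []
    for trial in trials:                     # trial: (time, channels)
        scale = rng.uniform(min_scale, max_scale)
        T = trial.shape[0]
        new_T = max(2, int(round(T * scale)))
        old_t = np.linspace(0.0, 1.0, T)
        new_t = np.linspace(0.0, 1.0, new_T)
        out.append(np.stack([np.interp(new_t, old_t, trial[:, c])
                             for c in range(trial.shape[1])], axis=1))
    order = rng.permutation(len(out))        # re-order the trials
    return [out[i] for i in order]
```

Applied before each training epoch, this prevents the RNN from memorizing when events happen in the session rather than what the neural features encode.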

9.
bioRxiv ; 2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36778458

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) require frequent recalibration to maintain robust performance due to changes in neural activity that accumulate over time. Compensating for this nonstationarity would enable consistently high performance without the need for supervised recalibration periods, where users cannot engage in free use of their device. Here we introduce a hidden Markov model (HMM) to infer what targets users are moving toward during iBCI use. We then retrain the system using these inferred targets, enabling unsupervised adaptation to changing neural activity. Our approach outperforms the state of the art in large-scale, closed-loop simulations over two months and in closed-loop with a human iBCI user over one month. Leveraging an offline dataset spanning five years of iBCI recordings, we further show how recently proposed data distribution-matching approaches to recalibration fail over long time scales; only target-inference methods appear capable of enabling long-term unsupervised recalibration. Our results demonstrate how task structure can be used to bootstrap a noisy decoder into a highly-performant one, thereby overcoming one of the major barriers to clinically translating BCIs.
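The target-inference idea above can be sketched as a forward pass of an HMM whose hidden state is the intended target: the emission likelihood rewards targets whose direction from the cursor aligns with the decoded velocity, and a sticky transition matrix encodes that intent changes rarely. The concentration parameter and stay probability below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def infer_targets(cursor_pos, decoded_vel, targets, stay_prob=0.95):
    """Forward-pass HMM sketch: hidden state = intended target."""
    n_t, n_k = len(cursor_pos), len(targets)
    # Sticky transitions: the user usually keeps aiming at one target.
    trans = np.full((n_k, n_k), (1 - stay_prob) / (n_k - 1))
    np.fill_diagonal(trans, stay_prob)
    alpha = np.full(n_k, 1.0 / n_k)          # uniform prior over targets
    states = []
    for t in range(n_t):
        d = targets - cursor_pos[t]          # directions to each target
        d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        v = decoded_vel[t] / (np.linalg.norm(decoded_vel[t]) + 1e-9)
        emit = np.exp(3.0 * d @ v)           # von Mises-like likelihood
        alpha = emit * (trans.T @ alpha)
        alpha /= alpha.sum()
        states.append(int(np.argmax(alpha)))
    return states
```

The inferred targets then serve as labels for retraining the decoder, which is what makes the recalibration unsupervised.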

10.
bioRxiv ; 2023 Apr 25.
Article in English | MEDLINE | ID: mdl-36711591

ABSTRACT

Speech brain-computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speaking movements into text or sound. Early demonstrations, while promising, have not yet achieved accuracies high enough for communication of unconstrained sentences from a large vocabulary. Here, we demonstrate the first speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant, who can no longer speak intelligibly due to amyotrophic lateral sclerosis (ALS), achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the prior state-of-the-art speech BCI) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration of large-vocabulary decoding). Our BCI decoded speech at 62 words per minute, which is 3.4 times faster than the prior record for any kind of BCI and begins to approach the speed of natural conversation (160 words per minute). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for using intracortical speech BCIs to restore rapid communication to people with paralysis who can no longer speak.

11.
Adv Neural Inf Process Syst ; 36: 42258-42270, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38738213

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) have shown promise for restoring rapid communication to people with neurological disorders such as amyotrophic lateral sclerosis (ALS). However, to maintain high performance over time, iBCIs typically need frequent recalibration to combat changes in the neural recordings that accrue over days. This requires iBCI users to stop using the iBCI and engage in supervised data collection, making the iBCI system hard to use. In this paper, we propose a method that enables self-recalibration of communication iBCIs without interrupting the user. Our method leverages large language models (LMs) to automatically correct errors in iBCI outputs. The self-recalibration process uses these corrected outputs ("pseudo-labels") to continually update the iBCI decoder online. Over a period of more than one year (403 days), we evaluated our Continual Online Recalibration with Pseudo-labels (CORP) framework with one clinical trial participant. CORP achieved a stable decoding accuracy of 93.84% in an online handwriting iBCI task, significantly outperforming other baseline methods. Notably, this is the longest-running iBCI stability demonstration involving a human participant. Our results provide the first evidence for long-term stabilization of a plug-and-play, high-performance communication iBCI, addressing a major barrier for the clinical translation of iBCIs.

12.
Nature ; 593(7858): 249-254, 2021 05.
Article in English | MEDLINE | ID: mdl-33981047

ABSTRACT

Brain-computer interfaces (BCIs) can restore communication to people who have lost the ability to move or speak. So far, a major focus of BCI research has been on restoring gross motor skills, such as reaching and grasping1-5 or point-and-click typing with a computer cursor6,7. However, rapid sequences of highly dexterous behaviours, such as handwriting or touch typing, might enable faster rates of communication. Here we developed an intracortical BCI that decodes attempted handwriting movements from neural activity in the motor cortex and translates it to text in real time, using a recurrent neural network decoding approach. With this BCI, our study participant, whose hand was paralysed from spinal cord injury, achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. To our knowledge, these typing speeds exceed those reported for any other BCI, and are comparable to typical smartphone typing speeds of individuals in the age group of our participant (115 characters per minute)8. Finally, theoretical considerations explain why temporally complex movements, such as handwriting, may be fundamentally easier to decode than point-to-point movements. Our results open a new approach for BCIs and demonstrate the feasibility of accurately decoding rapid, dexterous movements years after paralysis.


Subjects
Brain-Computer Interfaces , Brain/physiology , Communication , Handwriting , Humans , Neural Networks, Computer , Spinal Cord Injuries , Time Factors
13.
J Neural Eng ; 17(6): 066007, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33236720

ABSTRACT

OBJECTIVE: To evaluate the potential of intracortical electrode array signals for brain-computer interfaces (BCIs) to restore lost speech, we measured the performance of decoders trained to discriminate a comprehensive basis set of 39 English phonemes and to synthesize speech sounds via a neural pattern matching method. We decoded neural correlates of spoken-out-loud words in the 'hand knob' area of precentral gyrus, a step toward the eventual goal of decoding attempted speech from ventral speech areas in patients who are unable to speak. APPROACH: Neural and audio data were recorded while two BrainGate2 pilot clinical trial participants, each with two chronically-implanted 96-electrode arrays, spoke 420 different words that broadly sampled English phonemes. Phoneme onsets were identified from audio recordings, and their identities were then classified from neural features consisting of each electrode's binned action potential counts or high-frequency local field potential power. Speech synthesis was performed using the 'Brain-to-Speech' pattern matching method. We also examined two potential confounds specific to decoding overt speech: acoustic contamination of neural signals and systematic differences in labeling different phonemes' onset times. MAIN RESULTS: A linear decoder achieved up to 29.3% classification accuracy (chance = 6%) across 39 phonemes, while a recurrent neural network (RNN) classifier achieved 33.9% accuracy. Parameter sweeps indicated that performance did not saturate when adding more electrodes or more training data, and that accuracy improved when utilizing time-varying structure in the data. Microphonic contamination and phoneme onset differences modestly increased decoding accuracy, but could be mitigated by acoustic artifact subtraction and using a neural speech onset marker, respectively. Speech synthesis achieved r = 0.523 correlation between true and reconstructed audio.
SIGNIFICANCE: The ability to decode speech using intracortical electrode array signals from a nontraditional speech area suggests that placing electrode arrays in ventral speech areas is a promising direction for speech BCIs.
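The linear phoneme classifier above operates on binned spike-count feature vectors. A minimal stand-in is a nearest-class-mean decoder; this is an illustrative assumption, not the paper's actual classifier, and the feature dimensions are toy-sized.

```python
import numpy as np

def fit_linear_phoneme_decoder(X, y, n_classes):
    """Nearest-class-mean sketch of a linear phoneme decoder over
    binned spike-count features (stand-in; not the paper's model)."""
    return np.stack([X[y == k].mean(axis=0) for k in range(n_classes)])

def classify(means, X):
    # Predict the class whose mean feature vector is closest.
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

In the real task, each row of `X` would hold one electrode-by-time-bin window around a phoneme onset, and `n_classes` would be 39.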


Subjects
Brain-Computer Interfaces , Speech , Electrodes , Hand , Humans , Language
14.
Cell ; 181(2): 396-409.e26, 2020 04 16.
Article in English | MEDLINE | ID: mdl-32220308

ABSTRACT

Decades after the motor homunculus was first proposed, it is still unknown how different body parts are intermixed and interrelated in human motor cortical areas at single-neuron resolution. Using multi-unit recordings, we studied how face, head, arm, and leg movements are represented in the hand knob area of premotor cortex (precentral gyrus) in people with tetraplegia. Contrary to traditional expectations, we found strong representation of all movements and a partially "compositional" neural code that linked together all four limbs. The code consisted of (1) a limb-coding component representing the limb to be moved and (2) a movement-coding component where analogous movements from each limb (e.g., hand grasp and toe curl) were represented similarly. Compositional coding might facilitate skill transfer across limbs, and it provides a useful framework for thinking about how the motor system constructs movement. Finally, we leveraged these results to create a whole-body intracortical brain-computer interface that spreads targets across all limbs.


Subjects
Frontal Lobe/physiology , Motor Cortex/anatomy & histology , Motor Cortex/physiology , Adult , Brain Mapping , Frontal Lobe/anatomy & histology , Human Body , Humans , Motor Cortex/metabolism , Movement/physiology
15.
J Neural Eng ; 17(1): 016049, 2020 02 05.
Article in English | MEDLINE | ID: mdl-32023225

ABSTRACT

OBJECTIVE: Speech-related neural modulation was recently reported in 'arm/hand' area of human dorsal motor cortex that is used as a signal source for intracortical brain-computer interfaces (iBCIs). This raises the concern that speech-related modulation might deleteriously affect the decoding of arm movement intentions, for instance by affecting velocity command outputs. This study sought to clarify whether or not speaking would interfere with ongoing iBCI use. APPROACH: A participant in the BrainGate2 iBCI clinical trial used an iBCI to control a computer cursor; spoke short words in a stand-alone speech task; and spoke short words during ongoing iBCI use. We examined neural activity in all three behaviors and compared iBCI performance with and without concurrent speech. MAIN RESULTS: Dorsal motor cortex firing rates modulated strongly during stand-alone speech, but this activity was largely attenuated when speaking occurred during iBCI cursor control using attempted arm movements. 'Decoder-potent' projections of the attenuated speech-related neural activity were small, explaining why cursor task performance was similar between iBCI use with and without concurrent speaking. SIGNIFICANCE: These findings indicate that speaking does not directly interfere with iBCIs that decode attempted arm movements. This suggests that patients who are able to speak will be able to use motor cortical-driven computer interfaces or prostheses without needing to forgo speaking while using these devices.


Subjects
Brain-Computer Interfaces , Motor Cortex/physiology , Psychomotor Performance/physiology , Speech/physiology , Spinal Cord Injuries/rehabilitation , Aged , Brain-Computer Interfaces/trends , Cervical Vertebrae/injuries , Humans , Male , Movement/physiology , Pilot Projects , Spinal Cord Injuries/physiopathology
16.
Elife ; 82019 12 10.
Article in English | MEDLINE | ID: mdl-31820736

ABSTRACT

Speaking is a sensorimotor behavior whose neural basis is difficult to study with single neuron resolution due to the scarcity of human intracortical measurements. We used electrode arrays to record from the motor cortex 'hand knob' in two people with tetraplegia, an area not previously implicated in speech. Neurons modulated during speaking and during non-speaking movements of the tongue, lips, and jaw. This challenges whether the conventional model of a 'motor homunculus' division by major body regions extends to the single-neuron scale. Spoken words and syllables could be decoded from single trials, demonstrating the potential of intracortical recordings for brain-computer interfaces to restore speech. Two neural population dynamics features previously reported for arm movements were also present during speaking: a component that was mostly invariant across initiating different words, followed by rotatory dynamics during speaking. This suggests that common neural dynamical motifs may underlie movement of arm and speech articulators.


Subjects
Motor Cortex/physiopathology , Nerve Net/physiopathology , Quadriplegia/physiopathology , Speech/physiology , Algorithms , Arm/physiopathology , Brain-Computer Interfaces , Electrocorticography , Hand/physiopathology , Humans , Lip/physiopathology , Models, Neurological , Movement/physiology , Sensorimotor Cortex/physiopathology , Tongue/physiopathology