1.
bioRxiv ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38712189

ABSTRACT

Keyboard typing with finger movements is a versatile digital interface for users with diverse skills, needs, and preferences. Currently, no such interface exists for people with paralysis. We developed an intracortical brain-computer interface (BCI) for typing with attempted flexion/extension movements of three finger groups on the right hand, or both hands, and demonstrated its flexibility in two dominant typing paradigms. The first paradigm is "point-and-click" typing, where a BCI user selects one key at a time using continuous real-time control, allowing selection of arbitrary sequences of symbols. During cued character selection with this paradigm, a human research participant with paralysis achieved 30-40 selections per minute with nearly 90% accuracy. The second paradigm is "keystroke" typing, where the BCI user selects each character by a discrete movement without real-time feedback, often yielding faster typing of natural-language sentences. With 90 cued characters per minute, decoding attempted finger movements and correcting errors with a language model resulted in more than 90% accuracy. Notably, both paradigms matched state-of-the-art BCI performance and enabled further flexibility through simultaneous selection of multiple characters as well as efficient decoder estimation across paradigms. Overall, this high-performance interface is a step toward wider accessibility of BCI technology, addressing unmet user needs for flexibility.
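The keystroke paradigm above corrects raw decoder output with a language model. A minimal noisy-channel sketch of the idea (not the paper's actual model): rescore a noisy decoded word against a small vocabulary by combining a unigram log-probability with a per-edit error penalty. The vocabulary, probabilities, and penalty below are all hypothetical.

```python
# Toy unigram "language model": word log-probabilities (hypothetical values).
VOCAB = {"hello": -2.0, "help": -3.0, "world": -2.5, "word": -3.5}

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (one rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = cur
    return prev[-1]

def correct(decoded: str, err_logp: float = -1.0) -> str:
    """Noisy-channel correction: maximize LM score plus per-edit penalty."""
    return max(VOCAB, key=lambda w: VOCAB[w] + err_logp * edit_distance(decoded, w))
```

In a real system the penalty would reflect the decoder's confusion statistics and the language model would score whole sentences, but the argmax-over-candidates structure is the same.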

2.
bioRxiv ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38370697

ABSTRACT

People with paralysis express unmet needs for peer support, leisure activities, and sporting activities. Many within the general population rely on social media and massively multiplayer video games to address these needs. We developed a high-performance finger brain-computer interface system allowing continuous control of three independent finger groups, in which the thumb moves in two dimensions. The system was tested in a human research participant over sequential trials requiring fingers to reach and hold on targets, with an average acquisition rate of 76 targets/minute and a completion time of 1.58 ± 0.06 seconds. Performance compared favorably to previous animal studies, despite a two-fold increase in the decoded degrees of freedom (DOF). Finger positions were then used for 4-DOF velocity control of a virtual quadcopter, demonstrating functionality over both fixed and random obstacle courses. This approach shows promise for controlling multiple-DOF end-effectors, such as robotic fingers or digital interfaces for work, entertainment, and socialization.
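The quadcopter demonstration maps decoded finger positions to 4-DOF velocity commands. One plausible position-to-velocity mapping is sketched below; the normalization, gain, and dead zone are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fingers_to_velocity(pos, gain=1.0, dead_zone=0.1):
    """Map 4 normalized finger-position dims (rest = 0, range [-1, 1]) to
    4-DOF velocity commands. A dead zone keeps a resting hand hovering, and
    the output is rescaled so it rises continuously from the dead-zone edge.
    """
    pos = np.asarray(pos, dtype=float)
    excess = np.clip((np.abs(pos) - dead_zone) / (1.0 - dead_zone), 0.0, 1.0)
    return gain * np.sign(pos) * excess
```

Velocity (rather than position) control means the vehicle holds its state when the fingers return to rest, which suits a hover-capable end-effector.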

3.
Sci Rep ; 14(1): 1598, 2024 01 18.
Article in English | MEDLINE | ID: mdl-38238386

ABSTRACT

Brain-computer interfaces have so far focused largely on enabling the control of a single effector, for example a single computer cursor or robotic arm. Restoring multi-effector motion could unlock greater functionality for people with paralysis (e.g., bimanual movement). However, it may prove challenging to decode the simultaneous motion of multiple effectors, as we recently found that a compositional neural code links movements across all limbs and that neural tuning changes nonlinearly during dual-effector motion. Here, we demonstrate the feasibility of high-quality bimanual control of two cursors via neural network (NN) decoders. Through simulations, we show that NNs leverage a neural 'laterality' dimension to distinguish between left- and right-hand movements as neural tuning to both hands becomes increasingly correlated. In training recurrent neural networks (RNNs) for two-cursor control, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously. Our results suggest that neural network decoders may be advantageous for multi-effector decoding, provided they are designed to transfer to the online setting.


Subject(s)
Brain-Computer Interfaces , Neural Networks, Computer , Humans , Movement , Functional Laterality , Hand , Paralysis , Brain
4.
Nature ; 620(7976): 1031-1036, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612500

ABSTRACT

Speech brain-computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speech into text1,2 or sound3,4. Early demonstrations, although promising, have not yet achieved accuracies sufficiently high for communication of unconstrained sentences from a large vocabulary1-7. Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant, who can no longer speak intelligibly owing to amyotrophic lateral sclerosis, achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI2) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant's attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record8 and begins to approach the speed of natural conversation (160 words per minute9). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.


Subject(s)
Brain-Computer Interfaces , Neural Prostheses , Paralysis , Speech , Humans , Amyotrophic Lateral Sclerosis/physiopathology , Amyotrophic Lateral Sclerosis/rehabilitation , Cerebral Cortex/physiology , Microelectrodes , Paralysis/physiopathology , Paralysis/rehabilitation , Vocabulary
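The word error rates reported above are conventionally computed as the word-level edit distance between the reference sentence and the decoded sentence, normalized by the reference length. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))
    for i, rw in enumerate(ref, 1):
        cur = [i]
        for j, hw in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (rw != hw)))  # substitution
        prev = cur
    return prev[-1] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is an error rate rather than an accuracy.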
5.
Article in English | MEDLINE | ID: mdl-37465143

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) decode neural activity from the cortex and enable motor and communication prostheses, such as cursor control, handwriting, and speech, for people with paralysis. This paper introduces a new iBCI communication prosthesis with a 3D keyboard interface for typing via continuous, closed-loop movement of multiple fingers. A participant-specific BCI keyboard prototype was developed for a BrainGate2 clinical trial participant (T5) using neural recordings from the hand-knob area of the left premotor cortex. We assessed the relative decoding accuracy of flexion/extension movements of individual fingers (5 degrees of freedom (DOF)) vs. three groups of fingers (thumb, index-middle, and ring-small fingers; 3 DOF). Neural decoding using 3 independent DOF was more accurate (95%) than decoding using 5 DOF (76%). A virtual keyboard was then developed in which each finger group moved along a flexion-extension arc to acquire targets corresponding to English letters and symbols. The locations of these letters/symbols were optimized using natural-language statistics, resulting in approximately a 2× reduction in average distance traveled by the fingers compared to a random keyboard layout. This keyboard was tested using a simple real-time closed-loop decoder, enabling T5 to type across 31 symbols at 90% accuracy and approximately 2.3 sec/symbol (excluding a 2-second hold time) on average.
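The layout optimization described above can be illustrated with a greedy sketch: assign the most frequent symbols to the key slots requiring the least finger travel, then compare the frequency-weighted expected travel against a bad assignment. The slot costs and letter frequencies below are hypothetical, not the paper's geometry or statistics.

```python
# Hypothetical finger-travel cost per key slot and letter frequencies;
# the paper's actual layout geometry and language statistics are not shown.
slot_travel = [1, 2, 3, 4, 5, 6]
symbol_freq = {"e": 0.30, "t": 0.25, "a": 0.20, "o": 0.15, "i": 0.07, "n": 0.03}

def expected_travel(assignment):
    """Frequency-weighted mean finger travel for a symbol -> slot assignment."""
    return sum(symbol_freq[s] * slot_travel[k] for s, k in assignment.items())

def greedy_layout():
    """Place the most frequent symbols on the slots with the least travel."""
    symbols = sorted(symbol_freq, key=symbol_freq.get, reverse=True)
    slots = sorted(range(len(slot_travel)), key=slot_travel.__getitem__)
    return dict(zip(symbols, slots))
```

For a 1-D cost like this, sorting both lists is optimal (a rearrangement-inequality argument); richer layout geometries would need a combinatorial optimizer.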

6.
bioRxiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131830

ABSTRACT

Advances in deep learning have given rise to neural network models of the relationship between movement and brain activity that appear to far outperform prior approaches. Brain-computer interfaces (BCIs) that enable people with paralysis to control external devices, such as robotic arms or computer cursors, might stand to benefit greatly from these advances. We tested recurrent neural networks (RNNs) on a challenging nonlinear BCI problem: decoding continuous bimanual movement of two computer cursors. Surprisingly, we found that although RNNs appeared to perform well in offline settings, they did so by overfitting to the temporal structure of the training data and failed to generalize to real-time neuroprosthetic control. In response, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously, far outperforming standard linear methods. Our results provide evidence that preventing models from overfitting to temporal structure in training data may, in principle, aid in translating deep learning advances to the BCI setting, unlocking improved performance for challenging applications.
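A minimal sketch of the augmentation described above, assuming each trial is stored as a (time × channels) array: resample every trial to a randomly dilated/compressed length and shuffle the trial order. The dilation range is an illustrative choice, not the paper's value.

```python
import numpy as np

def dilate_and_reorder(trials, rng, dilation_range=(0.7, 1.3)):
    """Augment training trials by linearly resampling each (time x channels)
    array to a random new length (time dilation/compression), then shuffling
    trial order so the RNN cannot memorize the temporal sequence of trials.
    """
    out = []
    for x in trials:
        t_old = x.shape[0]
        t_new = max(2, int(round(t_old * rng.uniform(*dilation_range))))
        old_t = np.linspace(0.0, 1.0, t_old)
        new_t = np.linspace(0.0, 1.0, t_new)
        # Interpolate each channel onto the new time grid.
        out.append(np.stack([np.interp(new_t, old_t, x[:, c])
                             for c in range(x.shape[1])], axis=1))
    rng.shuffle(out)
    return out
```

Targets/labels aligned to each time step would need the identical resampling, which is omitted here for brevity.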

7.
bioRxiv ; 2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36778458

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) require frequent recalibration to maintain robust performance due to changes in neural activity that accumulate over time. Compensating for this nonstationarity would enable consistently high performance without the need for supervised recalibration periods, where users cannot engage in free use of their device. Here we introduce a hidden Markov model (HMM) to infer what targets users are moving toward during iBCI use. We then retrain the system using these inferred targets, enabling unsupervised adaptation to changing neural activity. Our approach outperforms the state of the art in large-scale, closed-loop simulations over two months and in closed-loop with a human iBCI user over one month. Leveraging an offline dataset spanning five years of iBCI recordings, we further show how recently proposed data distribution-matching approaches to recalibration fail over long time scales; only target-inference methods appear capable of enabling long-term unsupervised recalibration. Our results demonstrate how task structure can be used to bootstrap a noisy decoder into a highly-performant one, thereby overcoming one of the major barriers to clinically translating BCIs.
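A toy sketch of HMM-based target inference: the hidden state is the intended target, the emission likelihood rewards targets that the decoded velocity points toward, and a forward pass accumulates evidence over time. This greatly simplifies the paper's method, and every parameter below is illustrative.

```python
import numpy as np

def infer_target(positions, velocities, targets, stay_prob=0.95):
    """Forward pass of a tiny HMM whose hidden state is the intended target.
    Emission log-likelihood = cosine similarity between the decoded velocity
    and the direction from the cursor to each target (an illustrative choice).
    Returns the index of the most likely target at the final time step.
    """
    targets = np.asarray(targets, dtype=float)
    n = len(targets)
    # Sticky transitions: users rarely switch targets mid-movement.
    log_T = np.full((n, n), np.log((1.0 - stay_prob) / (n - 1)))
    np.fill_diagonal(log_T, np.log(stay_prob))
    log_alpha = np.full(n, -np.log(n))  # uniform prior over targets
    for p, v in zip(positions, velocities):
        to_tgt = targets - p
        to_tgt /= np.linalg.norm(to_tgt, axis=1, keepdims=True) + 1e-9
        v_hat = v / (np.linalg.norm(v) + 1e-9)
        log_emit = to_tgt @ v_hat       # cosine similarity as log-likelihood
        log_alpha = log_emit + np.array(
            [np.logaddexp.reduce(log_alpha + log_T[:, j]) for j in range(n)])
    return int(np.argmax(log_alpha))
```

The inferred targets would then serve as surrogate labels for retraining the decoder, closing the unsupervised recalibration loop.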

8.
Nature ; 593(7858): 249-254, 2021 05.
Article in English | MEDLINE | ID: mdl-33981047

ABSTRACT

Brain-computer interfaces (BCIs) can restore communication to people who have lost the ability to move or speak. So far, a major focus of BCI research has been on restoring gross motor skills, such as reaching and grasping1-5 or point-and-click typing with a computer cursor6,7. However, rapid sequences of highly dexterous behaviours, such as handwriting or touch typing, might enable faster rates of communication. Here we developed an intracortical BCI that decodes attempted handwriting movements from neural activity in the motor cortex and translates them to text in real time, using a recurrent neural network decoding approach. With this BCI, our study participant, whose hand was paralysed from spinal cord injury, achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. To our knowledge, these typing speeds exceed those reported for any other BCI, and are comparable to typical smartphone typing speeds of individuals in the age group of our participant (115 characters per minute)8. Finally, theoretical considerations explain why temporally complex movements, such as handwriting, may be fundamentally easier to decode than point-to-point movements. Our results open a new approach for BCIs and demonstrate the feasibility of accurately decoding rapid, dexterous movements years after paralysis.


Subject(s)
Brain-Computer Interfaces , Brain/physiology , Communication , Handwriting , Humans , Neural Networks, Computer , Spinal Cord Injuries , Time Factors
9.
J Neural Eng ; 17(6): 066007, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33236720

ABSTRACT

OBJECTIVE: To evaluate the potential of intracortical electrode array signals for brain-computer interfaces (BCIs) to restore lost speech, we measured the performance of decoders trained to discriminate a comprehensive basis set of 39 English phonemes and to synthesize speech sounds via a neural pattern matching method. We decoded neural correlates of spoken-out-loud words in the 'hand knob' area of precentral gyrus, a step toward the eventual goal of decoding attempted speech from ventral speech areas in patients who are unable to speak. APPROACH: Neural and audio data were recorded while two BrainGate2 pilot clinical trial participants, each with two chronically-implanted 96-electrode arrays, spoke 420 different words that broadly sampled English phonemes. Phoneme onsets were identified from audio recordings, and their identities were then classified from neural features consisting of each electrode's binned action potential counts or high-frequency local field potential power. Speech synthesis was performed using the 'Brain-to-Speech' pattern matching method. We also examined two potential confounds specific to decoding overt speech: acoustic contamination of neural signals and systematic differences in labeling different phonemes' onset times. MAIN RESULTS: A linear decoder achieved up to 29.3% classification accuracy (chance = 6%) across 39 phonemes, while a recurrent neural network (RNN) classifier achieved 33.9% accuracy. Parameter sweeps indicated that performance did not saturate when adding more electrodes or more training data, and that accuracy improved when utilizing time-varying structure in the data. Microphonic contamination and phoneme onset differences modestly increased decoding accuracy, but could be mitigated by acoustic artifact subtraction and using a neural speech onset marker, respectively. Speech synthesis achieved r = 0.523 correlation between true and reconstructed audio.
SIGNIFICANCE: The ability to decode speech using intracortical electrode array signals from a nontraditional speech area suggests that placing electrode arrays in ventral speech areas is a promising direction for speech BCIs.


Subject(s)
Brain-Computer Interfaces , Speech , Electrodes , Hand , Humans , Language
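The feature pipeline described above (each electrode's binned action-potential counts) can be sketched directly; the classifier here is a nearest-class-mean stand-in for illustration, not the paper's linear decoder or RNN.

```python
import numpy as np

def bin_spikes(spike_times, window, bin_size):
    """Bin spike times (seconds; one sequence per electrode) within an
    analysis window into a flat feature vector of counts per (electrode, bin).
    """
    t0, t1 = window
    edges = np.arange(t0, t1 + 1e-9, bin_size)  # epsilon keeps the last edge
    feats = [np.histogram(st, bins=edges)[0] for st in spike_times]
    return np.concatenate(feats).astype(float)

class NearestMeanClassifier:
    """Minimal stand-in decoder: classify by the nearest class mean."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.means_ = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                       for c in self.classes_}
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.means_[c]))
```

Stacking such feature vectors across phoneme-aligned windows yields the trial matrix a linear or recurrent classifier would be trained on.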
10.
Cell ; 181(2): 396-409.e26, 2020 04 16.
Article in English | MEDLINE | ID: mdl-32220308

ABSTRACT

Decades after the motor homunculus was first proposed, it is still unknown how different body parts are intermixed and interrelated in human motor cortical areas at single-neuron resolution. Using multi-unit recordings, we studied how face, head, arm, and leg movements are represented in the hand knob area of premotor cortex (precentral gyrus) in people with tetraplegia. Contrary to traditional expectations, we found strong representation of all movements and a partially "compositional" neural code that linked together all four limbs. The code consisted of (1) a limb-coding component representing the limb to be moved and (2) a movement-coding component where analogous movements from each limb (e.g., hand grasp and toe curl) were represented similarly. Compositional coding might facilitate skill transfer across limbs, and it provides a useful framework for thinking about how the motor system constructs movement. Finally, we leveraged these results to create a whole-body intracortical brain-computer interface that spreads targets across all limbs.


Subject(s)
Frontal Lobe/physiology , Motor Cortex/anatomy & histology , Motor Cortex/physiology , Adult , Brain Mapping , Frontal Lobe/anatomy & histology , Human Body , Humans , Motor Cortex/metabolism , Movement/physiology
11.
J Neural Eng ; 17(1): 016049, 2020 02 05.
Article in English | MEDLINE | ID: mdl-32023225

ABSTRACT

OBJECTIVE: Speech-related neural modulation was recently reported in 'arm/hand' area of human dorsal motor cortex that is used as a signal source for intracortical brain-computer interfaces (iBCIs). This raises the concern that speech-related modulation might deleteriously affect the decoding of arm movement intentions, for instance by affecting velocity command outputs. This study sought to clarify whether or not speaking would interfere with ongoing iBCI use. APPROACH: A participant in the BrainGate2 iBCI clinical trial used an iBCI to control a computer cursor; spoke short words in a stand-alone speech task; and spoke short words during ongoing iBCI use. We examined neural activity in all three behaviors and compared iBCI performance with and without concurrent speech. MAIN RESULTS: Dorsal motor cortex firing rates modulated strongly during stand-alone speech, but this activity was largely attenuated when speaking occurred during iBCI cursor control using attempted arm movements. 'Decoder-potent' projections of the attenuated speech-related neural activity were small, explaining why cursor task performance was similar between iBCI use with and without concurrent speaking. SIGNIFICANCE: These findings indicate that speaking does not directly interfere with iBCIs that decode attempted arm movements. This suggests that patients who are able to speak will be able to use motor cortical-driven computer interfaces or prostheses without needing to forgo speaking while using these devices.


Subject(s)
Brain-Computer Interfaces , Motor Cortex/physiology , Psychomotor Performance/physiology , Speech/physiology , Spinal Cord Injuries/rehabilitation , Aged , Brain-Computer Interfaces/trends , Cervical Vertebrae/injuries , Humans , Male , Movement/physiology , Pilot Projects , Spinal Cord Injuries/physiopathology
12.
Elife ; 8, 2019 12 10.
Article in English | MEDLINE | ID: mdl-31820736

ABSTRACT

Speaking is a sensorimotor behavior whose neural basis is difficult to study with single neuron resolution due to the scarcity of human intracortical measurements. We used electrode arrays to record from the motor cortex 'hand knob' in two people with tetraplegia, an area not previously implicated in speech. Neurons modulated during speaking and during non-speaking movements of the tongue, lips, and jaw. This challenges whether the conventional model of a 'motor homunculus' division by major body regions extends to the single-neuron scale. Spoken words and syllables could be decoded from single trials, demonstrating the potential of intracortical recordings for brain-computer interfaces to restore speech. Two neural population dynamics features previously reported for arm movements were also present during speaking: a component that was mostly invariant across initiating different words, followed by rotatory dynamics during speaking. This suggests that common neural dynamical motifs may underlie movement of arm and speech articulators.


Subject(s)
Motor Cortex/physiopathology , Nerve Net/physiopathology , Quadriplegia/physiopathology , Speech/physiology , Algorithms , Arm/physiopathology , Brain-Computer Interfaces , Electrocorticography , Hand/physiopathology , Humans , Lip/physiopathology , Models, Neurological , Movement/physiology , Sensorimotor Cortex/physiopathology , Tongue/physiopathology