Results 1 - 20 of 33
1.
bioRxiv ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38712189

ABSTRACT

Keyboard typing with finger movements is a versatile digital interface for users with diverse skills, needs, and preferences. Currently, such an interface does not exist for people with paralysis. We developed an intracortical brain-computer interface (BCI) for typing with attempted flexion/extension movements of three finger groups on the right hand, or both hands, and demonstrated its flexibility in two dominant typing paradigms. The first paradigm is "point-and-click" typing, where a BCI user selects one key at a time using continuous real-time control, allowing selection of arbitrary sequences of symbols. During cued character selection with this paradigm, a human research participant with paralysis achieved 30-40 selections per minute with nearly 90% accuracy. The second paradigm is "keystroke" typing, where the BCI user selects each character by a discrete movement without real-time feedback, often giving a faster speed for natural language sentences. With 90 cued characters per minute, decoding attempted finger movements and correcting errors using a language model resulted in more than 90% accuracy. Notably, both paradigms matched the state-of-the-art for BCI performance and enabled further flexibility by the simultaneous selection of multiple characters as well as efficient decoder estimation across paradigms. Overall, the high-performance interface is a step towards the wider accessibility of BCI technology by addressing unmet user needs for flexibility.

2.
medRxiv ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38645254

ABSTRACT

Brain-computer interfaces can enable rapid, intuitive communication for people with paralysis by transforming the cortical activity associated with attempted speech into text on a computer screen. Despite recent advances, communication with brain-computer interfaces has been restricted by extensive training data requirements and inaccurate word output. A man in his 40s with ALS, tetraparesis, and severe dysarthria (ALSFRS-R = 23) was enrolled in the BrainGate2 clinical trial. He underwent surgical implantation of four microelectrode arrays into his left precentral gyrus, which recorded neural activity from 256 intracortical electrodes. We report a speech neuroprosthesis that decoded his neural activity as he attempted to speak in both prompted and unstructured conversational settings. Decoded words were displayed on a screen, then vocalized using text-to-speech software designed to sound like his pre-ALS voice. On the first day of system use, following 30 minutes of attempted speech training data, the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. On the second day, the size of the possible output vocabulary increased to 125,000 words, and, after 1.4 additional hours of training data, the neuroprosthesis achieved 90.2% accuracy. With further training data, the neuroprosthesis sustained 97.5% accuracy beyond eight months after surgical implantation. The participant has used the neuroprosthesis to communicate in self-paced conversations for over 248 hours. In an individual with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore naturalistic communication after a brief training period.

3.
bioRxiv ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38370697

ABSTRACT

People with paralysis express unmet needs for peer support, leisure activities, and sporting activities. Many within the general population rely on social media and massively multiplayer video games to address these needs. We developed a high-performance finger brain-computer-interface system allowing continuous control of 3 independent finger groups with 2D thumb movements. The system was tested in a human research participant over sequential trials requiring fingers to reach and hold on targets, with an average acquisition rate of 76 targets/minute and completion time of 1.58 ± 0.06 seconds. Performance compared favorably to previous animal studies, despite a 2-fold increase in the decoded degrees-of-freedom (DOF). Finger positions were then used for 4-DOF velocity control of a virtual quadcopter, demonstrating functionality over both fixed and random obstacle courses. This approach shows promise for controlling multiple-DOF end-effectors, such as robotic fingers or digital interfaces for work, entertainment, and socialization.

4.
Sci Rep ; 14(1): 1598, 2024 01 18.
Article in English | MEDLINE | ID: mdl-38238386

ABSTRACT

Brain-computer interfaces have so far focused largely on enabling the control of a single effector, for example a single computer cursor or robotic arm. Restoring multi-effector motion could unlock greater functionality for people with paralysis (e.g., bimanual movement). However, it may prove challenging to decode the simultaneous motion of multiple effectors, as we recently found that a compositional neural code links movements across all limbs and that neural tuning changes nonlinearly during dual-effector motion. Here, we demonstrate the feasibility of high-quality bimanual control of two cursors via neural network (NN) decoders. Through simulations, we show that NNs leverage a neural 'laterality' dimension to distinguish between left and right-hand movements as neural tuning to both hands become increasingly correlated. In training recurrent neural networks (RNNs) for two-cursor control, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously. Our results suggest that neural network decoders may be advantageous for multi-effector decoding, provided they are designed to transfer to the online setting.
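The training-data manipulation this abstract describes, dilating/compressing trials in time and re-ordering them, can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the scale range, linear interpolation, and function name are all assumptions.

```python
import numpy as np

def dilate_and_reorder(trials, rng, min_scale=0.7, max_scale=1.3):
    """Resample each trial to a randomly dilated/compressed length and
    shuffle trial order, so a sequence decoder cannot memorize the
    session's temporal structure. trials: list of (T_i, n_channels) arrays."""
    augmented = []
    for trial in trials:
        scale = rng.uniform(min_scale, max_scale)
        t_old = np.arange(trial.shape[0])
        n_new = max(2, int(round(trial.shape[0] * scale)))
        t_new = np.linspace(0, trial.shape[0] - 1, n_new)
        # Linear interpolation per channel stretches/compresses the trial.
        resampled = np.stack(
            [np.interp(t_new, t_old, trial[:, c]) for c in range(trial.shape[1])],
            axis=1)
        augmented.append(resampled)
    rng.shuffle(augmented)  # re-order the trials
    return augmented
```

Randomizing trial length and order removes the consistent offline timing cues that, per the abstract, RNNs otherwise overfit to.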


Subject(s)
Brain-Computer Interfaces; Neural Networks, Computer; Humans; Movement; Functional Laterality; Hand; Paralysis; Brain
5.
ArXiv ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37986728

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) have shown promise for restoring rapid communication to people with neurological disorders such as amyotrophic lateral sclerosis (ALS). However, to maintain high performance over time, iBCIs typically need frequent recalibration to combat changes in the neural recordings that accrue over days. This requires iBCI users to stop using the iBCI and engage in supervised data collection, making the iBCI system hard to use. In this paper, we propose a method that enables self-recalibration of communication iBCIs without interrupting the user. Our method leverages large language models (LMs) to automatically correct errors in iBCI outputs. The self-recalibration process uses these corrected outputs ("pseudo-labels") to continually update the iBCI decoder online. Over a period of more than one year (403 days), we evaluated our Continual Online Recalibration with Pseudo-labels (CORP) framework with one clinical trial participant. CORP achieved a stable decoding accuracy of 93.84% in an online handwriting iBCI task, significantly outperforming other baseline methods. Notably, this is the longest-running iBCI stability demonstration involving a human participant. Our results provide the first evidence for long-term stabilization of a plug-and-play, high-performance communication iBCI, addressing a major barrier for the clinical translation of iBCIs.
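The CORP loop described above (decode, correct the output with a language model, retrain on the corrected labels) can be illustrated with a minimal pseudo-label update for a linear softmax decoder. Here `lm_correct` is a hypothetical stand-in for the language model, and the linear decoder is a toy substitute for the paper's handwriting decoder; nothing below is the published implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def corp_update(W, X, lm_correct, lr=0.1):
    """One self-recalibration step: decode characters from neural features X
    with a linear softmax decoder W, obtain language-model-corrected labels
    (pseudo-labels), and take a cross-entropy gradient step toward them."""
    probs = softmax(X @ W)                 # (n_samples, n_classes)
    decoded = probs.argmax(axis=1)
    pseudo = lm_correct(decoded)           # corrected outputs become labels
    onehot = np.eye(W.shape[1])[pseudo]
    grad = X.T @ (probs - onehot) / len(X)
    return W - lr * grad, pseudo
```

Repeating this step during ordinary use is what lets the decoder track drifting neural recordings without a supervised recalibration block.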

6.
bioRxiv ; 2023 Oct 12.
Article in English | MEDLINE | ID: mdl-37873182

ABSTRACT

How does the motor cortex combine simple movements (such as single finger flexion/extension) into complex movements (such as hand gestures or playing piano)? Motor cortical activity was recorded using intracortical multi-electrode arrays in two people with tetraplegia as they attempted single, pairwise and higher order finger movements. Neural activity for simultaneous movements was largely aligned with linear summation of corresponding single finger movement activities, with two violations. First, the neural activity was normalized, preventing a large magnitude with an increasing number of moving fingers. Second, the neural tuning direction of weakly represented fingers (e.g. middle) changed significantly as a result of the movement of other fingers. These deviations from linearity resulted in non-linear methods outperforming linear methods for neural decoding. Overall, simultaneous finger movements are thus represented by the combination of individual finger movements by pseudo-linear summation.
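The pseudo-linear summation rule described above can be sketched as summing single-finger activity patterns and then normalizing the magnitude so it does not grow with the number of moving fingers. This is only an illustration: the normalization rule and names are assumptions, and it deliberately omits the reported tuning-direction changes for weakly represented fingers.

```python
import numpy as np

def combined_activity(single_finger_patterns, moving):
    """Approximate population activity for a multi-finger movement as the
    sum of single-finger patterns, rescaled to the mean single-finger
    magnitude (the 'normalization' violation of pure linearity).

    single_finger_patterns: dict finger -> (n_units,) array.
    moving: list of finger names."""
    total = sum(single_finger_patterns[f] for f in moving)
    if len(moving) > 0 and np.linalg.norm(total) > 0:
        mean_mag = np.mean(
            [np.linalg.norm(single_finger_patterns[f]) for f in moving])
        total = total / np.linalg.norm(total) * mean_mag
    return total
```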

7.
Nature ; 620(7976): 1031-1036, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612500

ABSTRACT

Speech brain-computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speech into text1,2 or sound3,4. Early demonstrations, although promising, have not yet achieved accuracies sufficiently high for communication of unconstrained sentences from a large vocabulary1-7. Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant, who can no longer speak intelligibly owing to amyotrophic lateral sclerosis, achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI2) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant's attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record8 and begins to approach the speed of natural conversation (160 words per minute9). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.
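For reference, the word error rate quoted here is the word-level edit distance (substitutions, insertions, deletions) between decoded and reference sentences, divided by the number of reference words. A minimal sketch, not the study's evaluation code:

```python
def word_error_rate(ref: str, hyp: str) -> float:
    """Word-level Levenshtein distance divided by reference word count."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))  # DP row: distance from r[:i] to h[:j]
    for i in range(1, len(r) + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, len(h) + 1):
            cur = d[j]
            sub = prev_diag + (r[i - 1] != h[j - 1])  # substitution or match
            d[j] = min(sub, d[j] + 1, d[j - 1] + 1)   # deletion, insertion
            prev_diag = cur
    return d[-1] / max(1, len(r))
```

For example, `word_error_rate("the quick fox", "the quack fox")` gives 1/3: one substitution over three reference words.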


Subject(s)
Brain-Computer Interfaces; Neural Prostheses; Paralysis; Speech; Humans; Amyotrophic Lateral Sclerosis/physiopathology; Amyotrophic Lateral Sclerosis/rehabilitation; Cerebral Cortex/physiology; Microelectrodes; Paralysis/physiopathology; Paralysis/rehabilitation; Vocabulary
8.
bioRxiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131830

ABSTRACT

Advances in deep learning have given rise to neural network models of the relationship between movement and brain activity that appear to far outperform prior approaches. Brain-computer interfaces (BCIs) that enable people with paralysis to control external devices, such as robotic arms or computer cursors, might stand to benefit greatly from these advances. We tested recurrent neural networks (RNNs) on a challenging nonlinear BCI problem: decoding continuous bimanual movement of two computer cursors. Surprisingly, we found that although RNNs appeared to perform well in offline settings, they did so by overfitting to the temporal structure of the training data and failed to generalize to real-time neuroprosthetic control. In response, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously, far outperforming standard linear methods. Our results provide evidence that preventing models from overfitting to temporal structure in training data may, in principle, aid in translating deep learning advances to the BCI setting, unlocking improved performance for challenging applications.

9.
bioRxiv ; 2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36778458

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) require frequent recalibration to maintain robust performance due to changes in neural activity that accumulate over time. Compensating for this nonstationarity would enable consistently high performance without the need for supervised recalibration periods, where users cannot engage in free use of their device. Here we introduce a hidden Markov model (HMM) to infer what targets users are moving toward during iBCI use. We then retrain the system using these inferred targets, enabling unsupervised adaptation to changing neural activity. Our approach outperforms the state of the art in large-scale, closed-loop simulations over two months and in closed-loop with a human iBCI user over one month. Leveraging an offline dataset spanning five years of iBCI recordings, we further show how recently proposed data distribution-matching approaches to recalibration fail over long time scales; only target-inference methods appear capable of enabling long-term unsupervised recalibration. Our results demonstrate how task structure can be used to bootstrap a noisy decoder into a highly-performant one, thereby overcoming one of the major barriers to clinically translating BCIs.
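The target-inference idea above can be illustrated with a toy Viterbi decode: given per-timestep log-likelihoods of each candidate target (derived from the decoded cursor movement) and a target-to-target transition matrix, recover the most likely target sequence to use as labels for retraining. The inputs and the Viterbi stand-in are illustrative; the paper's HMM details are not reproduced here.

```python
import numpy as np

def infer_targets(log_likelihoods, transition):
    """Most likely target sequence under an HMM (Viterbi algorithm).

    log_likelihoods: (T, K) per-timestep log-likelihood of each target.
    transition: (K, K) target-to-target transition probabilities."""
    T, K = log_likelihoods.shape
    log_trans = np.log(transition)
    score = log_likelihoods[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans       # (K_prev, K_next)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_likelihoods[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):               # trace back the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

The inferred targets then serve as (noisy) supervision for refitting the decoder, which is the bootstrapping step the abstract describes.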

10.
bioRxiv ; 2023 Apr 25.
Article in English | MEDLINE | ID: mdl-36711591

ABSTRACT

Speech brain-computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speaking movements into text or sound. Early demonstrations, while promising, have not yet achieved accuracies high enough for communication of unconstrained sentences from a large vocabulary. Here, we demonstrate the first speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant, who can no longer speak intelligibly due to amyotrophic lateral sclerosis (ALS), achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the prior state-of-the-art speech BCI2) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration of large-vocabulary decoding). Our BCI decoded speech at 62 words per minute, which is 3.4 times faster than the prior record for any kind of BCI and begins to approach the speed of natural conversation (160 words per minute). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for using intracortical speech BCIs to restore rapid communication to people with paralysis who can no longer speak.

11.
Adv Neural Inf Process Syst ; 36: 42258-42270, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38738213

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) have shown promise for restoring rapid communication to people with neurological disorders such as amyotrophic lateral sclerosis (ALS). However, to maintain high performance over time, iBCIs typically need frequent recalibration to combat changes in the neural recordings that accrue over days. This requires iBCI users to stop using the iBCI and engage in supervised data collection, making the iBCI system hard to use. In this paper, we propose a method that enables self-recalibration of communication iBCIs without interrupting the user. Our method leverages large language models (LMs) to automatically correct errors in iBCI outputs. The self-recalibration process uses these corrected outputs ("pseudo-labels") to continually update the iBCI decoder online. Over a period of more than one year (403 days), we evaluated our Continual Online Recalibration with Pseudo-labels (CORP) framework with one clinical trial participant. CORP achieved a stable decoding accuracy of 93.84% in an online handwriting iBCI task, significantly outperforming other baseline methods. Notably, this is the longest-running iBCI stability demonstration involving a human participant. Our results provide the first evidence for long-term stabilization of a plug-and-play, high-performance communication iBCI, addressing a major barrier for the clinical translation of iBCIs.

12.
J Neurosci ; 42(25): 5007-5020, 2022 06 22.
Article in English | MEDLINE | ID: mdl-35589391

ABSTRACT

Consolidation of memory is believed to involve offline replay of neural activity. While amply demonstrated in rodents, evidence for replay in humans, particularly regarding motor memory, is less compelling. To determine whether replay occurs after motor learning, we sought to record from motor cortex during a novel motor task and subsequent overnight sleep. A 36-year-old man with tetraplegia secondary to cervical spinal cord injury enrolled in the ongoing BrainGate brain-computer interface pilot clinical trial had two 96-channel intracortical microelectrode arrays placed chronically into left precentral gyrus. Single- and multi-unit activity was recorded while he played a color/sound sequence matching memory game. Intended movements were decoded from motor cortical neuronal activity by a real-time steady-state Kalman filter that allowed the participant to control a neurally driven cursor on the screen. Intracortical neural activity from precentral gyrus and 2-lead scalp EEG were recorded overnight as he slept. When decoded using the same steady-state Kalman filter parameters, intracortical neural signals recorded overnight replayed the target sequence from the memory game at intervals throughout the night at a frequency significantly greater than expected by chance. Replay events occurred at speeds ranging from 1 to 4 times as fast as initial task execution and were most frequently observed during slow-wave sleep. These results demonstrate that recent visuomotor skill acquisition in humans may be accompanied by replay of the corresponding motor cortex neural activity during sleep.

SIGNIFICANCE STATEMENT

Within cortex, the acquisition of information is often followed by the offline recapitulation of specific sequences of neural firing. Replay of recent activity is enriched during sleep and may support the consolidation of learning and memory. Using an intracortical brain-computer interface, we recorded and decoded activity from motor cortex as a human research participant performed a novel motor task. By decoding neural activity throughout subsequent sleep, we find that neural sequences underlying the recently practiced motor task are repeated throughout the night, providing direct evidence of replay in human motor cortex during sleep. This approach, using an optimized brain-computer interface decoder to characterize neural activity during sleep, provides a framework for future studies exploring replay, learning, and memory.
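Once its gain has converged, the steady-state Kalman filter mentioned above reduces to a fixed linear recursion, x_t = M1 @ x_{t-1} + M2 @ y_t, which is why the same frozen parameters could be re-applied to overnight recordings. A minimal sketch with illustrative matrices (M1, M2 are assumptions, not the study's fitted values):

```python
import numpy as np

def steady_state_kalman_decode(neural, M1, M2):
    """Apply a converged (steady-state) Kalman decoder as a fixed linear
    recursion over binned neural features.

    neural: (T, n_channels) array; returns decoded states (T, n_states)."""
    x = np.zeros(M1.shape[0])
    out = []
    for y in neural:
        x = M1 @ x + M2 @ y   # fixed recursion: no per-step gain update
        out.append(x.copy())
    return np.array(out)
```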


Subject(s)
Learning/physiology; Motor Cortex/physiology; Sleep/physiology; Adult; Brain-Computer Interfaces; Cervical Vertebrae; Electroencephalography/methods; Humans; Male; Pilot Projects; Quadriplegia/etiology; Quadriplegia/physiopathology; Spinal Cord Injuries/complications; Spinal Cord Injuries/physiopathology
13.
Nature ; 593(7858): 249-254, 2021 05.
Article in English | MEDLINE | ID: mdl-33981047

ABSTRACT

Brain-computer interfaces (BCIs) can restore communication to people who have lost the ability to move or speak. So far, a major focus of BCI research has been on restoring gross motor skills, such as reaching and grasping1-5 or point-and-click typing with a computer cursor6,7. However, rapid sequences of highly dexterous behaviours, such as handwriting or touch typing, might enable faster rates of communication. Here we developed an intracortical BCI that decodes attempted handwriting movements from neural activity in the motor cortex and translates it to text in real time, using a recurrent neural network decoding approach. With this BCI, our study participant, whose hand was paralysed from spinal cord injury, achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. To our knowledge, these typing speeds exceed those reported for any other BCI, and are comparable to typical smartphone typing speeds of individuals in the age group of our participant (115 characters per minute)8. Finally, theoretical considerations explain why temporally complex movements, such as handwriting, may be fundamentally easier to decode than point-to-point movements. Our results open a new approach for BCIs and demonstrate the feasibility of accurately decoding rapid, dexterous movements years after paralysis.


Subject(s)
Brain-Computer Interfaces; Brain/physiology; Communication; Handwriting; Humans; Neural Networks, Computer; Spinal Cord Injuries; Time Factors
14.
eNeuro ; 8(1), 2021.
Article in English | MEDLINE | ID: mdl-33495242

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) have the potential to restore hand grasping and object interaction to individuals with tetraplegia. Optimal grasping and object interaction require simultaneous production of both force and grasp outputs. However, since overlapping neural populations are modulated by both parameters, grasp type could affect how well forces are decoded from motor cortex in a closed-loop force iBCI. Therefore, this work quantified the neural representation and offline decoding performance of discrete hand grasps and force levels in two human participants with tetraplegia. Participants attempted to produce three discrete forces (light, medium, hard) using up to five hand grasp configurations. A two-way Welch ANOVA was implemented on multiunit neural features to assess their modulation to force and grasp. Demixed principal component analysis (dPCA) was used to assess for population-level tuning to force and grasp and to predict these parameters from neural activity. Three major findings emerged from this work: (1) force information was neurally represented and could be decoded across multiple hand grasps (and, in one participant, across attempted elbow extension as well); (2) grasp type affected force representation within multiunit neural features and offline force classification accuracy; and (3) grasp was classified more accurately and had greater population-level representation than force. These findings suggest that force and grasp have both independent and interacting representations within cortex, and that incorporating force control into real-time iBCI systems is feasible across multiple hand grasps if the decoder also accounts for grasp type.


Subject(s)
Motor Cortex; Hand; Hand Strength; Humans; Quadriplegia
15.
J Neural Eng ; 17(6): 066007, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33236720

ABSTRACT

OBJECTIVE: To evaluate the potential of intracortical electrode array signals for brain-computer interfaces (BCIs) to restore lost speech, we measured the performance of decoders trained to discriminate a comprehensive basis set of 39 English phonemes and to synthesize speech sounds via a neural pattern matching method. We decoded neural correlates of spoken-out-loud words in the 'hand knob' area of precentral gyrus, a step toward the eventual goal of decoding attempted speech from ventral speech areas in patients who are unable to speak. APPROACH: Neural and audio data were recorded while two BrainGate2 pilot clinical trial participants, each with two chronically-implanted 96-electrode arrays, spoke 420 different words that broadly sampled English phonemes. Phoneme onsets were identified from audio recordings, and their identities were then classified from neural features consisting of each electrode's binned action potential counts or high-frequency local field potential power. Speech synthesis was performed using the 'Brain-to-Speech' pattern matching method. We also examined two potential confounds specific to decoding overt speech: acoustic contamination of neural signals and systematic differences in labeling different phonemes' onset times. MAIN RESULTS: A linear decoder achieved up to 29.3% classification accuracy (chance = 6%) across 39 phonemes, while an RNN classifier achieved 33.9% accuracy. Parameter sweeps indicated that performance did not saturate when adding more electrodes or more training data, and that accuracy improved when utilizing time-varying structure in the data. Microphonic contamination and phoneme onset differences modestly increased decoding accuracy, but could be mitigated by acoustic artifact subtraction and using a neural speech onset marker, respectively. Speech synthesis achieved r = 0.523 correlation between true and reconstructed audio. 
SIGNIFICANCE: The ability to decode speech using intracortical electrode array signals from a nontraditional speech area suggests that placing electrode arrays in ventral speech areas is a promising direction for speech BCIs.


Subject(s)
Brain-Computer Interfaces; Speech; Electrodes; Hand; Humans; Language
16.
Cell ; 181(2): 396-409.e26, 2020 04 16.
Article in English | MEDLINE | ID: mdl-32220308

ABSTRACT

Decades after the motor homunculus was first proposed, it is still unknown how different body parts are intermixed and interrelated in human motor cortical areas at single-neuron resolution. Using multi-unit recordings, we studied how face, head, arm, and leg movements are represented in the hand knob area of premotor cortex (precentral gyrus) in people with tetraplegia. Contrary to traditional expectations, we found strong representation of all movements and a partially "compositional" neural code that linked together all four limbs. The code consisted of (1) a limb-coding component representing the limb to be moved and (2) a movement-coding component where analogous movements from each limb (e.g., hand grasp and toe curl) were represented similarly. Compositional coding might facilitate skill transfer across limbs, and it provides a useful framework for thinking about how the motor system constructs movement. Finally, we leveraged these results to create a whole-body intracortical brain-computer interface that spreads targets across all limbs.


Subject(s)
Frontal Lobe/physiology; Motor Cortex/anatomy & histology; Motor Cortex/physiology; Adult; Brain Mapping; Frontal Lobe/anatomy & histology; Human Body; Humans; Motor Cortex/metabolism; Movement/physiology
17.
J Neural Eng ; 17(1): 016049, 2020 02 05.
Article in English | MEDLINE | ID: mdl-32023225

ABSTRACT

OBJECTIVE: Speech-related neural modulation was recently reported in 'arm/hand' area of human dorsal motor cortex that is used as a signal source for intracortical brain-computer interfaces (iBCIs). This raises the concern that speech-related modulation might deleteriously affect the decoding of arm movement intentions, for instance by affecting velocity command outputs. This study sought to clarify whether or not speaking would interfere with ongoing iBCI use. APPROACH: A participant in the BrainGate2 iBCI clinical trial used an iBCI to control a computer cursor; spoke short words in a stand-alone speech task; and spoke short words during ongoing iBCI use. We examined neural activity in all three behaviors and compared iBCI performance with and without concurrent speech. MAIN RESULTS: Dorsal motor cortex firing rates modulated strongly during stand-alone speech, but this activity was largely attenuated when speaking occurred during iBCI cursor control using attempted arm movements. 'Decoder-potent' projections of the attenuated speech-related neural activity were small, explaining why cursor task performance was similar between iBCI use with and without concurrent speaking. SIGNIFICANCE: These findings indicate that speaking does not directly interfere with iBCIs that decode attempted arm movements. This suggests that patients who are able to speak will be able to use motor cortical-driven computer interfaces or prostheses without needing to forgo speaking while using these devices.


Subject(s)
Brain-Computer Interfaces; Motor Cortex/physiology; Psychomotor Performance/physiology; Speech/physiology; Spinal Cord Injuries/rehabilitation; Aged; Brain-Computer Interfaces/trends; Cervical Vertebrae/injuries; Humans; Male; Movement/physiology; Pilot Projects; Spinal Cord Injuries/physiopathology
18.
Sci Rep ; 10(1): 1429, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31996696

ABSTRACT

Hybrid kinetic and kinematic intracortical brain-computer interfaces (iBCIs) have the potential to restore functional grasping and object interaction capabilities in individuals with tetraplegia. This requires an understanding of how kinetic information is represented in neural activity, and how this representation is affected by non-motor parameters such as volitional state (VoS), namely, whether one observes, imagines, or attempts an action. To this end, this work investigates how motor cortical neural activity changes when three human participants with tetraplegia observe, imagine, and attempt to produce three discrete hand grasping forces with the dominant hand. We show that force representation follows the same VoS-related trends as previously shown for directional arm movements; namely, that attempted force production recruits more neural activity compared to observed or imagined force production. Additionally, VoS modulated neural activity to a greater extent than grasping force. Neural representation of forces was lower than expected, possibly due to compromised somatosensory pathways in individuals with tetraplegia, which have been shown to influence motor cortical activity. Nevertheless, attempted forces (but not always observed or imagined forces) could be decoded significantly above chance, thereby potentially providing relevant information towards the development of a hybrid kinetic and kinematic iBCI.


Subject(s)
Motor Cortex/physiology; Neural Prostheses; Quadriplegia/therapy; Volition/physiology; Biomechanical Phenomena; Biomedical Engineering; Brain-Computer Interfaces; Chronic Disease; Hand Strength; Humans; Imagination; Male; Microelectrodes; Middle Aged; Motor Cortex/surgery; Recovery of Function; Synaptic Transmission
19.
Elife ; 8, 2019 12 10.
Article in English | MEDLINE | ID: mdl-31820736

ABSTRACT

Speaking is a sensorimotor behavior whose neural basis is difficult to study with single neuron resolution due to the scarcity of human intracortical measurements. We used electrode arrays to record from the motor cortex 'hand knob' in two people with tetraplegia, an area not previously implicated in speech. Neurons modulated during speaking and during non-speaking movements of the tongue, lips, and jaw. This challenges whether the conventional model of a 'motor homunculus' division by major body regions extends to the single-neuron scale. Spoken words and syllables could be decoded from single trials, demonstrating the potential of intracortical recordings for brain-computer interfaces to restore speech. Two neural population dynamics features previously reported for arm movements were also present during speaking: a component that was mostly invariant across initiating different words, followed by rotatory dynamics during speaking. This suggests that common neural dynamical motifs may underlie movement of arm and speech articulators.


Subject(s)
Motor Cortex/physiopathology; Nerve Net/physiopathology; Quadriplegia/physiopathology; Speech/physiology; Algorithms; Arm/physiopathology; Brain-Computer Interfaces; Electrocorticography; Hand/physiopathology; Humans; Lip/physiopathology; Models, Neurological; Movement/physiology; Sensorimotor Cortex/physiopathology; Tongue/physiopathology
20.
Sci Rep ; 9(1): 8881, 2019 06 20.
Article in English | MEDLINE | ID: mdl-31222030

ABSTRACT

Decoders optimized offline to reconstruct intended movements from neural recordings sometimes fail to achieve optimal performance online when they are used in closed-loop as part of an intracortical brain-computer interface (iBCI). This is because typical decoder calibration routines do not model the emergent interactions between the decoder, the user, and the task parameters (e.g. target size). Here, we investigated the feasibility of simulating online performance to better guide decoder parameter selection and design. Three participants in the BrainGate2 pilot clinical trial controlled a computer cursor using a linear velocity decoder under different gain (speed scaling) and temporal smoothing parameters and acquired targets with different radii and distances. We show that a user-specific iBCI feedback control model can predict how performance changes under these different decoder and task parameters in held-out data. We also used the model to optimize a nonlinear speed scaling function for the decoder. When used online with two participants, it increased the dynamic range of decoded speeds and decreased the time taken to acquire targets (compared to an optimized standard decoder). These results suggest that it is feasible to simulate iBCI performance accurately enough to be useful for quantitative decoder optimization and design.
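The decoder parameters under study (gain, temporal smoothing, and a nonlinear speed scaling) can be sketched as a single velocity-update step. Parameter values and names below are illustrative, not the study's fitted settings.

```python
import numpy as np

def cursor_velocity(raw_vel, prev_vel, gain=1.0, smooth=0.8, exponent=1.5):
    """One decoded-velocity update: exponential temporal smoothing, a gain,
    and a nonlinear speed scaling that expands the dynamic range of decoded
    speeds (speeds below 1 shrink, speeds above 1 grow)."""
    v = smooth * prev_vel + (1 - smooth) * raw_vel  # temporal smoothing
    speed = np.linalg.norm(v)
    if speed > 0:
        v = v / speed * gain * speed ** exponent    # nonlinear speed scaling
    return v
```

With exponent > 1, slow decoded movements are damped (steadier target holds) while fast movements are amplified, which is the kind of trade-off a feedback control model can tune offline before online testing.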


Subject(s)
Biofeedback, Psychology; Brain-Computer Interfaces; Models, Neurological; Algorithms; Calibration; Humans; Psychomotor Performance