Results 1 - 5 of 5
1.
Cell ; 181(2): 396-409.e26, 2020 Apr 16.
Article in English | MEDLINE | ID: mdl-32220308

ABSTRACT

Decades after the motor homunculus was first proposed, it is still unknown how different body parts are intermixed and interrelated in human motor cortical areas at single-neuron resolution. Using multi-unit recordings, we studied how face, head, arm, and leg movements are represented in the hand knob area of premotor cortex (precentral gyrus) in people with tetraplegia. Contrary to traditional expectations, we found strong representation of all movements and a partially "compositional" neural code that linked together all four limbs. The code consisted of (1) a limb-coding component representing the limb to be moved and (2) a movement-coding component where analogous movements from each limb (e.g., hand grasp and toe curl) were represented similarly. Compositional coding might facilitate skill transfer across limbs, and it provides a useful framework for thinking about how the motor system constructs movement. Finally, we leveraged these results to create a whole-body intracortical brain-computer interface that spreads targets across all limbs.
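The "compositional" code described in this abstract pairs a limb-coding component with a movement-coding component that is shared across limbs, so analogous movements (e.g., hand grasp and toe curl) look alike once the limb component is removed. The Python sketch below is a minimal illustration of that idea under an assumed additive model, recovering the two components by marginal averaging; the simulated data and names are illustrative, not the study's analysis.

```python
# Minimal sketch of a "compositional" code: observed activity for a
# (limb, movement) pair is modeled as limb component + movement component.
# Everything here is simulated/assumed, not the authors' data or pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_units = 50
limbs = ["face", "head", "arm", "leg"]
movements = ["flex", "extend", "grasp"]

# Ground-truth components (unknown in real recordings; simulated here).
limb_part = {l: rng.normal(size=n_units) for l in limbs}
move_part = {m: rng.normal(size=n_units) for m in movements}

# Observed activity = limb component + shared movement component + noise.
activity = {(l, m): limb_part[l] + move_part[m] + 0.1 * rng.normal(size=n_units)
            for l in limbs for m in movements}

# Recover estimates of each component by marginal averaging.
grand_mean = np.mean(list(activity.values()), axis=0)
limb_est = {l: np.mean([activity[(l, m)] for m in movements], axis=0) - grand_mean
            for l in limbs}
move_est = {m: np.mean([activity[(l, m)] for l in limbs], axis=0) - grand_mean
            for m in movements}

# Analogous movements from different limbs share the recovered movement component.
r = np.corrcoef(move_est["grasp"], move_part["grasp"])[0, 1]
print(f"correlation of recovered vs. true 'grasp' component: {r:.2f}")
```

In such a model, a decoder that learns the shared movement component from one limb has a head start on the others, which is one way compositional coding could support the skill transfer across limbs suggested in the abstract.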


Subject(s)
Frontal Lobe/physiology, Motor Cortex/anatomy & histology, Motor Cortex/physiology, Adult, Brain Mapping, Frontal Lobe/anatomy & histology, Human Body, Humans, Motor Cortex/metabolism, Movement/physiology
2.
Sci Rep ; 14(1): 1598, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38238386

ABSTRACT

Brain-computer interfaces have so far focused largely on enabling the control of a single effector, for example a single computer cursor or robotic arm. Restoring multi-effector motion could unlock greater functionality for people with paralysis (e.g., bimanual movement). However, it may prove challenging to decode the simultaneous motion of multiple effectors, as we recently found that a compositional neural code links movements across all limbs and that neural tuning changes nonlinearly during dual-effector motion. Here, we demonstrate the feasibility of high-quality bimanual control of two cursors via neural network (NN) decoders. Through simulations, we show that NNs leverage a neural 'laterality' dimension to distinguish between left- and right-hand movements as neural tuning to the two hands becomes increasingly correlated. In training recurrent neural networks (RNNs) for two-cursor control, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously. Our results suggest that neural network decoders may be advantageous for multi-effector decoding, provided they are designed to transfer to the online setting.
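One way to picture the neural 'laterality' dimension mentioned above is as an axis in firing-rate space that separates left-hand from right-hand movement even when tuning to the two hands is otherwise highly correlated. The sketch below, using simulated rates and an axis taken as a simple difference of class means, is an assumption-laden illustration of that idea rather than the paper's NN analysis.

```python
# Hypothetical sketch of a "laterality" axis: a direction in firing-rate space
# that separates left- from right-hand movement. Data are simulated, and the
# axis is estimated as a difference of class means (not the paper's method).
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials = 100, 200

# Simulated single-trial firing-rate vectors; a shared tuning pattern makes the
# two hands correlated, plus a small hand-specific offset (the laterality signal).
shared = rng.normal(size=n_units)
offset = rng.normal(size=n_units)
left = shared + 0.5 * offset + rng.normal(size=(n_trials, n_units))
right = shared - 0.5 * offset + rng.normal(size=(n_trials, n_units))

# Laterality axis = difference of the mean responses to the two hands.
axis = left.mean(axis=0) - right.mean(axis=0)
axis /= np.linalg.norm(axis)

# Projections onto this axis separate left- from right-hand trials.
proj_left, proj_right = left @ axis, right @ axis
threshold = 0.5 * (proj_left.mean() + proj_right.mean())
accuracy = ((proj_left > threshold).mean() + (proj_right < threshold).mean()) / 2
print(f"left/right classification accuracy along laterality axis: {accuracy:.2f}")
```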


Subject(s)
Brain-Computer Interfaces, Neural Networks, Computer, Humans, Movement, Functional Laterality, Hand, Paralysis, Brain
3.
medRxiv ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38645254

ABSTRACT

Brain-computer interfaces can enable rapid, intuitive communication for people with paralysis by transforming the cortical activity associated with attempted speech into text on a computer screen. Despite recent advances, communication with brain-computer interfaces has been restricted by extensive training data requirements and inaccurate word output. A man in his 40s with amyotrophic lateral sclerosis (ALS), tetraparesis, and severe dysarthria (ALSFRS-R = 23) was enrolled in the BrainGate2 clinical trial. He underwent surgical implantation of four microelectrode arrays into his left precentral gyrus, which recorded neural activity from 256 intracortical electrodes. We report a speech neuroprosthesis that decoded his neural activity as he attempted to speak in both prompted and unstructured conversational settings. Decoded words were displayed on a screen, then vocalized using text-to-speech software designed to sound like his pre-ALS voice. On the first day of system use, following 30 minutes of attempted speech training data, the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. On the second day, the size of the possible output vocabulary increased to 125,000 words, and, after 1.4 additional hours of training data, the neuroprosthesis achieved 90.2% accuracy. With further training data, the neuroprosthesis sustained 97.5% accuracy beyond eight months after surgical implantation. The participant has used the neuroprosthesis to communicate in self-paced conversations for over 248 hours. In an individual with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore naturalistic communication after a brief training period.
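The accuracy figures quoted here are most naturally read as word-level accuracy, i.e., 1 minus the word error rate computed from the word-level edit distance between decoded and reference sentences; that is a standard speech-BCI metric, though whether this study used exactly this formulation is an assumption. The sketch below shows the calculation on invented example sentences.

```python
# Hedged sketch of word accuracy = 1 - word error rate, computed via
# Levenshtein edit distance over words. Example sentences are invented.
def word_edit_distance(ref_words, hyp_words):
    """Minimum number of word substitutions, insertions, and deletions."""
    m, n = len(ref_words), len(hyp_words)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

def word_accuracy(references, hypotheses):
    errors = sum(word_edit_distance(r.split(), h.split())
                 for r, h in zip(references, hypotheses))
    total = sum(len(r.split()) for r in references)
    return 1.0 - errors / total

refs = ["i would like some water please", "turn the lights off"]
hyps = ["i would like some water please", "turn the light off"]
print(f"word accuracy: {word_accuracy(refs, hyps):.1%}")  # 90.0%
```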

4.
bioRxiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131830

ABSTRACT

Advances in deep learning have given rise to neural network models of the relationship between movement and brain activity that appear to far outperform prior approaches. Brain-computer interfaces (BCIs) that enable people with paralysis to control external devices, such as robotic arms or computer cursors, might stand to benefit greatly from these advances. We tested recurrent neural networks (RNNs) on a challenging nonlinear BCI problem: decoding continuous bimanual movement of two computer cursors. Surprisingly, we found that although RNNs appeared to perform well in offline settings, they did so by overfitting to the temporal structure of the training data and failed to generalize to real-time neuroprosthetic control. In response, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously, far outperforming standard linear methods. Our results provide evidence that preventing models from overfitting to temporal structure in training data may, in principle, aid in translating deep learning advances to the BCI setting, unlocking improved performance for challenging applications.
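The fix described above alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it. The sketch below shows one plausible form such an augmentation could take: linearly time-warping each trial by a random factor, cutting the result into fixed-length snippets, and shuffling snippet order before RNN training. The function names, snippet length, and warp range are assumptions, not the authors' implementation.

```python
# Hedged sketch of a temporal-structure augmentation: random time
# dilation/compression of each trial, plus re-ordering of fixed-length
# snippets, so an RNN cannot overfit to trial-level temporal structure.
import numpy as np

def time_warp(trial, factor):
    """Dilate (factor > 1) or compress (factor < 1) a (time, channels) array
    by linear interpolation along the time axis."""
    t_old = np.arange(trial.shape[0])
    t_new = np.linspace(0, trial.shape[0] - 1, int(round(trial.shape[0] * factor)))
    return np.stack([np.interp(t_new, t_old, trial[:, c])
                     for c in range(trial.shape[1])], axis=1)

def augment(trials, snippet_len=50, warp_range=(0.7, 1.3), seed=0):
    """Warp each trial by a random factor, cut into snippets, shuffle order."""
    rng = np.random.default_rng(seed)
    snippets = []
    for trial in trials:
        warped = time_warp(trial, rng.uniform(*warp_range))
        snippets += [warped[i:i + snippet_len]
                     for i in range(0, len(warped) - snippet_len + 1, snippet_len)]
    rng.shuffle(snippets)
    return snippets

# Example: 20 fake trials of 300 time steps x 192 channels.
trials = [np.random.randn(300, 192) for _ in range(20)]
batch = augment(trials)
print(len(batch), batch[0].shape)  # shuffled fixed-length snippets for RNN training
```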

5.
IEEE Trans Haptics ; 14(4): 762-775, 2021.
Article in English | MEDLINE | ID: mdl-33844633

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) provide people with paralysis a means to control devices with signals decoded from brain activity. Despite recent impressive advances, these devices still cannot approach able-bodied levels of control. To achieve naturalistic control and improved performance of neural prostheses, iBCIs will likely need to include proprioceptive feedback. With the goal of providing proprioceptive feedback via mechanical haptic stimulation, we aim to understand how haptic stimulation affects motor cortical neurons and ultimately, iBCI control. We provided skin shear haptic stimulation as a substitute for proprioception to the back of the neck of a person with tetraplegia. The neck location was determined via assessment of touch sensitivity using a monofilament test kit. The participant was able to correctly report skin shear at the back of the neck in 8 unique directions with 65% accuracy. We found motor cortical units that exhibited sensory responses to shear stimuli, some of which were strongly tuned to the stimuli and well modeled by cosine-shaped functions. In this article, we also demonstrated online iBCI cursor control with continuous skin-shear feedback driven by decoded command signals. Cursor control performance increased slightly but significantly when the participant was given haptic feedback, compared to the purely visual feedback condition.
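The "cosine-shaped functions" mentioned above refer to the classic cosine tuning model, in which a unit's firing rate varies with stimulus direction θ as baseline + depth · cos(θ − preferred direction). The sketch below fits that model to simulated responses at eight shear directions by linear least squares; the data and parameter values are invented for illustration.

```python
# Hedged sketch of cosine tuning: rate = b0 + b1*cos(theta) + b2*sin(theta),
# equivalent to baseline + depth*cos(theta - preferred direction).
# All data and parameters are simulated, not the study's recordings.
import numpy as np

rng = np.random.default_rng(2)
directions = np.deg2rad(np.arange(0, 360, 45))         # 8 shear directions
true_pd, baseline, depth = np.deg2rad(120), 20.0, 8.0  # invented unit parameters
rates = (baseline + depth * np.cos(directions - true_pd)
         + rng.normal(scale=1.0, size=directions.size))  # noisy firing rates (Hz)

# Linear least-squares fit of the cosine model.
X = np.column_stack([np.ones_like(directions), np.cos(directions), np.sin(directions)])
b0, b1, b2 = np.linalg.lstsq(X, rates, rcond=None)[0]

pd_est = np.rad2deg(np.arctan2(b2, b1)) % 360   # preferred direction (deg)
depth_est = np.hypot(b1, b2)                    # modulation depth (Hz)
print(f"baseline ~= {b0:.1f} Hz, preferred direction ~= {pd_est:.0f} deg, "
      f"modulation depth ~= {depth_est:.1f} Hz")
```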


Subject(s)
Brain-Computer Interfaces, Motor Cortex, Feedback, Feedback, Sensory, Haptic Technology, Humans, Quadriplegia