Results 1 - 20 of 37

1.
bioRxiv ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39345372

ABSTRACT

Understanding how the body is represented in motor cortex is key to understanding how the brain controls movement. The precentral gyrus (PCG) has long been thought to contain largely distinct regions for the arm, leg and face (represented by the "motor homunculus"). However, mounting evidence has begun to reveal a more intermixed, interrelated and broadly tuned motor map. Here, we revisit the motor homunculus using microelectrode array recordings from 20 arrays that broadly sample PCG across 8 individuals, creating a comprehensive map of human motor cortex at single neuron resolution. We found whole-body representations throughout all sampled points of PCG, contradicting traditional leg/arm/face boundaries. We also found two speech-preferential areas with a broadly tuned, orofacial-dominant area in between them, previously unaccounted for by the homunculus. Throughout PCG, movement representations of the four limbs were interlinked, with homologous movements of different limbs (e.g., toe curl and hand close) having correlated representations. Our findings indicate that, while the classic homunculus aligns with each area's preferred body region at a coarse level, at a finer scale, PCG may be better described as a mosaic of functional zones, each with its own whole-body representation.
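
The correlation analysis described above (homologous movements of different limbs having correlated representations) can be illustrated with a short sketch. The code below is not from the study; the array sizes, movement names, and random data are placeholders, and it only shows how pairwise correlations between population tuning vectors for different movements might be computed.

```python
# Illustrative sketch (not the authors' code): do homologous movements of
# different limbs (e.g., hand close vs. toe curl) have correlated population
# tuning? Assumes condition-averaged firing-rate changes are already computed.
import numpy as np

rng = np.random.default_rng(0)

# tuning[:, k] = mean firing-rate change of every recorded unit for movement k
movements = ["hand_close", "toe_curl", "wrist_extend", "ankle_extend"]
n_units = 192  # placeholder value, e.g., two 96-channel arrays
tuning = rng.normal(size=(n_units, len(movements)))

def representation_correlation(a, b):
    """Pearson correlation between two movements' population tuning vectors."""
    return np.corrcoef(tuning[:, movements.index(a)],
                       tuning[:, movements.index(b)])[0, 1]

# Homologous upper/lower-limb pair vs. a non-homologous control pair
print("hand close vs. toe curl:", representation_correlation("hand_close", "toe_curl"))
print("hand close vs. ankle extend:", representation_correlation("hand_close", "ankle_extend"))
```
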

2.
bioRxiv ; 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39229047

ABSTRACT

Brain-computer interfaces (BCIs) have the potential to restore communication to people who have lost the ability to speak due to neurological disease or injury. BCIs have been used to translate the neural correlates of attempted speech into text [1-3]. However, text communication fails to capture the nuances of human speech such as prosody, intonation and immediately hearing one's own voice. Here, we demonstrate a "brain-to-voice" neuroprosthesis that instantaneously synthesizes voice with closed-loop audio feedback by decoding neural activity from 256 microelectrodes implanted into the ventral precentral gyrus of a man with amyotrophic lateral sclerosis and severe dysarthria. We overcame the challenge of lacking ground-truth speech for training the neural decoder and were able to accurately synthesize his voice. Along with phonemic content, we were also able to decode paralinguistic features from intracortical activity, enabling the participant to modulate his BCI-synthesized voice in real-time to change intonation, emphasize words, and sing short melodies. These results demonstrate the feasibility of enabling people with paralysis to speak intelligibly and expressively through a BCI.

3.
N Engl J Med ; 391(7): 609-618, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39141853

ABSTRACT

BACKGROUND: Brain-computer interfaces can enable communication for people with paralysis by transforming cortical activity associated with attempted speech into text on a computer screen. Communication with brain-computer interfaces has been restricted by extensive training requirements and limited accuracy. METHODS: A 45-year-old man with amyotrophic lateral sclerosis (ALS) with tetraparesis and severe dysarthria underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus 5 years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes. We report the results of decoding his cortical neural activity as he attempted to speak in both prompted and unstructured conversational contexts. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice. RESULTS: On the first day of use (25 days after surgery), the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by subsequent processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy using a 125,000-word vocabulary. With further training data, the neuroprosthesis sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours. CONCLUSIONS: In a person with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore conversational communication after brief training. (Funded by the Office of the Assistant Secretary of Defense for Health Affairs and others; BrainGate2 ClinicalTrials.gov number, NCT00912041.).


Subject(s)
Amyotrophic Lateral Sclerosis; Brain-Computer Interfaces; Dysarthria; Speech; Humans; Male; Middle Aged; Amyotrophic Lateral Sclerosis/complications; Amyotrophic Lateral Sclerosis/rehabilitation; Calibration; Communication Aids for Disabled; Dysarthria/rehabilitation; Dysarthria/etiology; Electrodes, Implanted; Microelectrodes; Quadriplegia/etiology; Quadriplegia/rehabilitation
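
For context on the accuracy figures reported in this entry, word-level accuracy of this kind is conventionally derived from an edit-distance word error rate (accuracy = 1 - WER). The sketch below illustrates that standard computation; it is not the trial's evaluation code, and the example sentences are invented.

```python
# Minimal sketch of word-level scoring of the kind used to report decoding
# accuracy. This is an illustration, not the study's code.
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words (substitutions + insertions + deletions),
    normalized by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

wer = word_error_rate("i would like some water please", "i would like some water peace")
print(f"WER = {wer:.3f}, word accuracy = {100 * (1 - wer):.1f}%")
```
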
4.
medRxiv ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38645254

ABSTRACT

Brain-computer interfaces can enable rapid, intuitive communication for people with paralysis by transforming the cortical activity associated with attempted speech into text on a computer screen. Despite recent advances, communication with brain-computer interfaces has been restricted by extensive training data requirements and inaccurate word output. A man in his 40s with ALS with tetraparesis and severe dysarthria (ALSFRS-R = 23) was enrolled in the BrainGate2 clinical trial. He underwent surgical implantation of four microelectrode arrays into his left precentral gyrus, which recorded neural activity from 256 intracortical electrodes. We report a speech neuroprosthesis that decoded his neural activity as he attempted to speak in both prompted and unstructured conversational settings. Decoded words were displayed on a screen, then vocalized using text-to-speech software designed to sound like his pre-ALS voice. On the first day of system use, following 30 minutes of attempted speech training data, the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. On the second day, the size of the possible output vocabulary increased to 125,000 words, and, after 1.4 additional hours of training data, the neuroprosthesis achieved 90.2% accuracy. With further training data, the neuroprosthesis sustained 97.5% accuracy beyond eight months after surgical implantation. The participant has used the neuroprosthesis to communicate in self-paced conversations for over 248 hours. In an individual with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore naturalistic communication after a brief training period.

5.
J Neural Eng ; 21(2), 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38579696

ABSTRACT

Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov Identifier: NCT00912041) performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.


Subject(s)
Brain-Computer Interfaces; Neurosciences; Humans; Neural Networks, Computer
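
The BRAND abstract above describes nodes exchanging data through Redis streams. The following sketch shows that general pattern with redis-py; the stream name, field names, and chunk sizes are illustrative assumptions rather than BRAND's actual schema, and a local Redis server is required to run it.

```python
# Minimal sketch of the Redis-stream pattern that BRAND-style nodes use to
# exchange data. Stream and field names are illustrative, not BRAND's schema.
import time
import numpy as np
import redis

r = redis.Redis(host="localhost", port=6379)

def producer_node(n_chunks=5, n_channels=1024, samples_per_chunk=30):
    """Write 1 ms chunks of 30 kHz neural data to a stream."""
    for _ in range(n_chunks):
        chunk = np.random.randn(n_channels, samples_per_chunk).astype(np.float32)
        r.xadd("neural_stream", {"ts": time.monotonic_ns(), "data": chunk.tobytes()})

def consumer_node(last_id="0-0"):
    """Block until new entries arrive, then unpack them for downstream nodes."""
    entries = r.xread({"neural_stream": last_id}, block=1000, count=10)
    for _, messages in entries:
        for msg_id, fields in messages:
            chunk = np.frombuffer(fields[b"data"], dtype=np.float32).reshape(1024, 30)
            last_id = msg_id
            # ... hand `chunk` to a decoder node here ...
    return last_id

producer_node()
consumer_node()
```
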
6.
Nat Protoc ; 18(10): 2927-2953, 2023 10.
Article in English | MEDLINE | ID: mdl-37697108

ABSTRACT

Neuropixels are silicon-based electrophysiology-recording probes with high channel count and recording-site density. These probes offer a turnkey platform for measuring neural activity with single-cell resolution and at a scale that is beyond the capabilities of current clinically approved devices. Our team demonstrated the first-in-human use of these probes during resection surgery for epilepsy or tumors and deep brain stimulation electrode placement in patients with Parkinson's disease. Here, we provide a better understanding of the capabilities and challenges of using Neuropixels as a research tool to study human neurophysiology, with the hope that this information may inform future efforts toward regulatory approval of Neuropixels probes as research devices. In perioperative procedures, the major concerns are the initial sterility of the device, maintaining a sterile field during surgery, having multiple referencing and grounding schemes available to de-noise recordings (if necessary), protecting the silicon probe from accidental contact before insertion and obtaining high-quality action potential and local field potential recordings. The research team ensures that the device is fully operational while coordinating with the surgical team to remove sources of electrical noise that could otherwise substantially affect the signals recorded by the sensitive hardware. Prior preparation using the equipment and training in human clinical research and working in operating rooms maximize effective communication within and between the teams, ensuring high recording quality and minimizing the time added to the surgery. The perioperative procedure requires ~4 h, and the entire protocol requires multiple weeks.


Subject(s)
Operating Rooms; Silicon; Humans; Electrodes; Neurophysiology; Action Potentials/physiology; Electrodes, Implanted
7.
bioRxiv ; 2023 Aug 12.
Article in English | MEDLINE | ID: mdl-37609167

ABSTRACT

Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g., Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g., C and C++). To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1-millisecond chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 milliseconds of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.

8.
Article in English | MEDLINE | ID: mdl-39323876

ABSTRACT

Brain-computer interfaces (BCIs) can potentially restore lost function in patients with neurological injury. A promising new application of BCI technology has focused on speech restoration. One approach is to synthesize speech from the neural correlates of a person who cannot speak, as they attempt to do so. However, there is no established gold standard for quantifying the quality of BCI-synthesized speech. Quantitative metrics, such as applying correlation coefficients between true and decoded speech, are not applicable to anarthric users and fail to capture intelligibility by actual human listeners; by contrast, methods involving people completing forced-choice multiple-choice questionnaires are imprecise, not practical at scale, and cannot be used as cost functions for improving speech decoding algorithms. Here, we present a deep learning-based "AI Listener" that can be used to evaluate BCI speech intelligibility objectively, rapidly, and automatically. We begin by adapting several leading Automatic Speech Recognition (ASR) deep learning models - DeepSpeech, Wav2vec 2.0, and Kaldi - to suit our application. We then evaluate the performance of these ASRs on multiple speech datasets with varying levels of intelligibility, including: healthy speech, speech from people with dysarthria, and synthesized BCI speech. Our results demonstrate that the multilingual ASR model XLSR-Wav2vec 2.0, trained to output phonemes, yields superior performance in terms of speech transcription accuracy. Notably, the AI Listener reports that several previously published BCI output datasets are not intelligible, which is consistent with human listeners.
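
As a rough illustration of the "AI Listener" idea above (scoring BCI-synthesized audio with an ASR model), the sketch below transcribes an audio clip with a pretrained wav2vec 2.0 CTC model from the Hugging Face transformers library. The checkpoint name and audio file path are placeholders, not necessarily the models or data the paper used.

```python
# Sketch: transcribe an audio clip with a pretrained wav2vec 2.0 CTC model and
# compare the transcript with the intended text. Checkpoint and file path are
# placeholders for illustration only.
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

checkpoint = "facebook/wav2vec2-base-960h"  # assumption: any CTC ASR checkpoint works here
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

audio, sr = sf.read("bci_synthesized_utterance.wav")  # hypothetical 16 kHz mono file
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
transcript = processor.batch_decode(pred_ids)[0]
print("AI Listener transcript:", transcript)
# Intelligibility could then be summarized as a word or phoneme error rate
# between `transcript` and the cued sentence.
```
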

9.
Nat Neurosci ; 25(2): 252-263, 2022 02.
Article in English | MEDLINE | ID: mdl-35102333

ABSTRACT

Recent advances in multi-electrode array technology have made it possible to monitor large neuronal ensembles at cellular resolution in animal models. In humans, however, current approaches restrict recordings to a few neurons per penetrating electrode or combine the signals of thousands of neurons in local field potential (LFP) recordings. Here we describe a new probe variant and set of techniques that enable simultaneous recording from over 200 well-isolated cortical single units in human participants during intraoperative neurosurgical procedures using silicon Neuropixels probes. We characterized a diversity of extracellular waveforms with eight separable single-unit classes, with differing firing rates, locations along the length of the electrode array, waveform spatial spread and modulation by LFP events such as inter-ictal discharges and burst suppression. Although some challenges remain in creating a turnkey recording system, high-density silicon arrays provide a path for studying human-specific cognitive processes and their dysfunction at unprecedented spatiotemporal resolution.


Subject(s)
Cerebral Cortex; Neurons; Animals; Electrodes; Humans; Neurons/physiology; Silicon
10.
eNeuro ; 8(1), 2021.
Article in English | MEDLINE | ID: mdl-33495242

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) have the potential to restore hand grasping and object interaction to individuals with tetraplegia. Optimal grasping and object interaction require simultaneous production of both force and grasp outputs. However, since overlapping neural populations are modulated by both parameters, grasp type could affect how well forces are decoded from motor cortex in a closed-loop force iBCI. Therefore, this work quantified the neural representation and offline decoding performance of discrete hand grasps and force levels in two human participants with tetraplegia. Participants attempted to produce three discrete forces (light, medium, hard) using up to five hand grasp configurations. A two-way Welch ANOVA was implemented on multiunit neural features to assess their modulation to force and grasp. Demixed principal component analysis (dPCA) was used to assess for population-level tuning to force and grasp and to predict these parameters from neural activity. Three major findings emerged from this work: (1) force information was neurally represented and could be decoded across multiple hand grasps (and, in one participant, across attempted elbow extension as well); (2) grasp type affected force representation within multiunit neural features and offline force classification accuracy; and (3) grasp was classified more accurately and had greater population-level representation than force. These findings suggest that force and grasp have both independent and interacting representations within cortex, and that incorporating force control into real-time iBCI systems is feasible across multiple hand grasps if the decoder also accounts for grasp type.


Subject(s)
Motor Cortex; Hand; Hand Strength; Humans; Quadriplegia
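
A minimal way to reproduce the flavor of the offline comparison above (grasp classified more accurately than force) is cross-validated classification of both labels from the same multiunit features. The sketch below uses synthetic data and linear discriminant analysis as a stand-in for the paper's dPCA-based prediction, so the printed numbers are meaningless; it only shows the shape of the comparison.

```python
# Illustrative sketch with synthetic data: classify force level and grasp type
# from trial-wise multiunit features and compare cross-validated accuracies.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_features = 150, 192                 # placeholder sizes
X = rng.normal(size=(n_trials, n_features))     # binned threshold-crossing rates per trial
force = rng.integers(0, 3, n_trials)            # light / medium / hard
grasp = rng.integers(0, 5, n_trials)            # up to five grasp configurations

force_acc = cross_val_score(LinearDiscriminantAnalysis(), X, force, cv=5).mean()
grasp_acc = cross_val_score(LinearDiscriminantAnalysis(), X, grasp, cv=5).mean()
print(f"force accuracy {force_acc:.2f} (chance 0.33), grasp accuracy {grasp_acc:.2f} (chance 0.20)")
```
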
11.
J Neural Eng ; 17(6): 066007, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33236720

ABSTRACT

OBJECTIVE: To evaluate the potential of intracortical electrode array signals for brain-computer interfaces (BCIs) to restore lost speech, we measured the performance of decoders trained to discriminate a comprehensive basis set of 39 English phonemes and to synthesize speech sounds via a neural pattern matching method. We decoded neural correlates of spoken-out-loud words in the 'hand knob' area of precentral gyrus, a step toward the eventual goal of decoding attempted speech from ventral speech areas in patients who are unable to speak. APPROACH: Neural and audio data were recorded while two BrainGate2 pilot clinical trial participants, each with two chronically-implanted 96-electrode arrays, spoke 420 different words that broadly sampled English phonemes. Phoneme onsets were identified from audio recordings, and their identities were then classified from neural features consisting of each electrode's binned action potential counts or high-frequency local field potential power. Speech synthesis was performed using the 'Brain-to-Speech' pattern matching method. We also examined two potential confounds specific to decoding overt speech: acoustic contamination of neural signals and systematic differences in labeling different phonemes' onset times. MAIN RESULTS: A linear decoder achieved up to 29.3% classification accuracy (chance = 6%) across 39 phonemes, while a recurrent neural network (RNN) classifier achieved 33.9% accuracy. Parameter sweeps indicated that performance did not saturate when adding more electrodes or more training data, and that accuracy improved when utilizing time-varying structure in the data. Microphonic contamination and phoneme onset differences modestly increased decoding accuracy, but could be mitigated by acoustic artifact subtraction and using a neural speech onset marker, respectively. Speech synthesis achieved r = 0.523 correlation between true and reconstructed audio. SIGNIFICANCE: The ability to decode speech using intracortical electrode array signals from a nontraditional speech area suggests that placing electrode arrays in ventral speech areas is a promising direction for speech BCIs.


Subject(s)
Brain-Computer Interfaces; Speech; Electrodes; Hand; Humans; Language
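
The 'Brain-to-Speech' pattern matching mentioned above can be sketched, in highly simplified form, as nearest-neighbor matching: each test-time neural snippet is matched to its most similar training snippet, and that snippet's paired audio is reused. The code below uses synthetic data and is only a schematic of that idea, not the study's implementation.

```python
# Schematic of nearest-neighbor "pattern matching" speech synthesis with
# synthetic data. Sizes and data are placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n_train, n_test, n_features, audio_len = 400, 5, 192, 800   # placeholder sizes

train_neural = rng.normal(size=(n_train, n_features))   # e.g., binned spike counts per snippet
train_audio = rng.normal(size=(n_train, audio_len))     # time-aligned audio for each snippet
test_neural = rng.normal(size=(n_test, n_features))

matcher = NearestNeighbors(n_neighbors=1).fit(train_neural)
_, idx = matcher.kneighbors(test_neural)
reconstructed_audio = train_audio[idx[:, 0]]             # audio of the best-matching snippets

# Reconstruction quality can then be summarized as the correlation between true
# and reconstructed audio, as in the r = 0.523 figure reported above.
```
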
12.
Nat Biomed Eng ; 4(10): 984-996, 2020 10.
Article in English | MEDLINE | ID: mdl-32747834

ABSTRACT

The efficacy of wireless intracortical brain-computer interfaces (iBCIs) is limited in part by the number of recording channels, which is constrained by the power budget of the implantable system. Designing wireless iBCIs that provide the high-quality recordings of today's wired neural interfaces may lead to inadvertent over-design at the expense of power consumption and scalability. Here, we report analyses of neural signals collected from experimental iBCI measurements in rhesus macaques and from a clinical-trial participant with implanted 96-channel Utah multielectrode arrays to understand the trade-offs between signal quality and decoder performance. Moreover, we propose an efficient hardware design for clinically viable iBCIs, and suggest that the circuit design parameters of current recording iBCIs can be relaxed considerably without loss of performance. The proposed design may allow for an order-of-magnitude power savings and lead to clinically viable iBCIs with a higher channel count.


Subject(s)
Brain-Computer Interfaces; Wireless Technology/instrumentation; Animals; Electric Power Supplies; Electrodes, Implanted; Equipment Design; Hand; Humans; Macaca mulatta; Male; Middle Aged
13.
J Neural Eng ; 17(1): 016049, 2020 02 05.
Article in English | MEDLINE | ID: mdl-32023225

ABSTRACT

OBJECTIVE: Speech-related neural modulation was recently reported in 'arm/hand' area of human dorsal motor cortex that is used as a signal source for intracortical brain-computer interfaces (iBCIs). This raises the concern that speech-related modulation might deleteriously affect the decoding of arm movement intentions, for instance by affecting velocity command outputs. This study sought to clarify whether or not speaking would interfere with ongoing iBCI use. APPROACH: A participant in the BrainGate2 iBCI clinical trial used an iBCI to control a computer cursor; spoke short words in a stand-alone speech task; and spoke short words during ongoing iBCI use. We examined neural activity in all three behaviors and compared iBCI performance with and without concurrent speech. MAIN RESULTS: Dorsal motor cortex firing rates modulated strongly during stand-alone speech, but this activity was largely attenuated when speaking occurred during iBCI cursor control using attempted arm movements. 'Decoder-potent' projections of the attenuated speech-related neural activity were small, explaining why cursor task performance was similar between iBCI use with and without concurrent speaking. SIGNIFICANCE: These findings indicate that speaking does not directly interfere with iBCIs that decode attempted arm movements. This suggests that patients who are able to speak will be able to use motor cortical-driven computer interfaces or prostheses without needing to forgo speaking while using these devices.


Subject(s)
Brain-Computer Interfaces; Motor Cortex/physiology; Psychomotor Performance/physiology; Speech/physiology; Spinal Cord Injuries/rehabilitation; Aged; Brain-Computer Interfaces/trends; Cervical Vertebrae/injuries; Humans; Male; Movement/physiology; Pilot Projects; Spinal Cord Injuries/physiopathology
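
The 'decoder-potent' projection used in the abstract above can be illustrated with simple linear algebra: activity is split into the component inside the row space of a linear decoder's weight matrix (which moves the cursor) and the orthogonal component (which the decoder ignores). Sizes and values below are placeholders, not quantities from the study.

```python
# Sketch of the decoder-potent / null-space split for a linear velocity decoder.
import numpy as np

rng = np.random.default_rng(3)
n_units = 192
W = rng.normal(size=(2, n_units))        # linear velocity decoder: 2 outputs x n_units
x = rng.normal(size=n_units)             # speech-related firing-rate perturbation

# Orthonormal basis for the decoder's row space via SVD
basis = np.linalg.svd(W, full_matrices=False)[2]     # shape (2, n_units)
x_potent = basis.T @ (basis @ x)                     # part that can move the cursor
x_null = x - x_potent                                # part the decoder ignores

print("potent fraction of variance:", np.sum(x_potent ** 2) / np.sum(x ** 2))
```
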
14.
Sci Rep ; 10(1): 1429, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31996696

ABSTRACT

Hybrid kinetic and kinematic intracortical brain-computer interfaces (iBCIs) have the potential to restore functional grasping and object interaction capabilities in individuals with tetraplegia. This requires an understanding of how kinetic information is represented in neural activity, and how this representation is affected by non-motor parameters such as volitional state (VoS), namely, whether one observes, imagines, or attempts an action. To this end, this work investigates how motor cortical neural activity changes when three human participants with tetraplegia observe, imagine, and attempt to produce three discrete hand grasping forces with the dominant hand. We show that force representation follows the same VoS-related trends as previously shown for directional arm movements; namely, that attempted force production recruits more neural activity compared to observed or imagined force production. Additionally, VoS modulated neural activity to a greater extent than grasping force. Neural representation of forces was lower than expected, possibly due to compromised somatosensory pathways in individuals with tetraplegia, which have been shown to influence motor cortical activity. Nevertheless, attempted forces (but not always observed or imagined forces) could be decoded significantly above chance, thereby potentially providing relevant information towards the development of a hybrid kinetic and kinematic iBCI.


Subject(s)
Motor Cortex/physiology; Neural Prostheses; Quadriplegia/therapy; Volition/physiology; Biomechanical Phenomena; Biomedical Engineering; Brain-Computer Interfaces; Chronic Disease; Hand Strength; Humans; Imagination; Male; Microelectrodes; Middle Aged; Motor Cortex/surgery; Recovery of Function; Synaptic Transmission
15.
Elife ; 8, 2019 12 10.
Article in English | MEDLINE | ID: mdl-31820736

ABSTRACT

Speaking is a sensorimotor behavior whose neural basis is difficult to study with single neuron resolution due to the scarcity of human intracortical measurements. We used electrode arrays to record from the motor cortex 'hand knob' in two people with tetraplegia, an area not previously implicated in speech. Neurons modulated during speaking and during non-speaking movements of the tongue, lips, and jaw. This challenges whether the conventional model of a 'motor homunculus' division by major body regions extends to the single-neuron scale. Spoken words and syllables could be decoded from single trials, demonstrating the potential of intracortical recordings for brain-computer interfaces to restore speech. Two neural population dynamics features previously reported for arm movements were also present during speaking: a component that was mostly invariant across initiating different words, followed by rotatory dynamics during speaking. This suggests that common neural dynamical motifs may underlie movement of arm and speech articulators.


Subject(s)
Motor Cortex/physiopathology; Nerve Net/physiopathology; Quadriplegia/physiopathology; Speech/physiology; Algorithms; Arm/physiopathology; Brain-Computer Interfaces; Electrocorticography; Hand/physiopathology; Humans; Lip/physiopathology; Models, Neurological; Movement/physiology; Sensorimotor Cortex/physiopathology; Tongue/physiopathology
16.
Sci Rep ; 9(1): 8881, 2019 06 20.
Article in English | MEDLINE | ID: mdl-31222030

ABSTRACT

Decoders optimized offline to reconstruct intended movements from neural recordings sometimes fail to achieve optimal performance online when they are used in closed-loop as part of an intracortical brain-computer interface (iBCI). This is because typical decoder calibration routines do not model the emergent interactions between the decoder, the user, and the task parameters (e.g. target size). Here, we investigated the feasibility of simulating online performance to better guide decoder parameter selection and design. Three participants in the BrainGate2 pilot clinical trial controlled a computer cursor using a linear velocity decoder under different gain (speed scaling) and temporal smoothing parameters and acquired targets with different radii and distances. We show that a user-specific iBCI feedback control model can predict how performance changes under these different decoder and task parameters in held-out data. We also used the model to optimize a nonlinear speed scaling function for the decoder. When used online with two participants, it increased the dynamic range of decoded speeds and decreased the time taken to acquire targets (compared to an optimized standard decoder). These results suggest that it is feasible to simulate iBCI performance accurately enough to be useful for quantitative decoder optimization and design.


Subject(s)
Biofeedback, Psychology; Brain-Computer Interfaces; Models, Neurological; Algorithms; Calibration; Humans; Psychomotor Performance
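
The decoder parameters swept in the study above (gain and temporal smoothing of a linear velocity decoder) correspond to a very simple update rule, sketched below with placeholder weights and parameter values; it is meant only to make the parameterization concrete, not to reproduce the study's decoder.

```python
# Sketch of a gain-scaled, exponentially smoothed linear velocity decoder.
# Weights, gain, and smoothing values are placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_units, n_steps = 192, 100
W = rng.normal(size=(2, n_units)) * 0.01     # decoder weights (placeholder)
gain, alpha = 1.5, 0.9                       # speed scaling and smoothing (placeholders)

velocity = np.zeros(2)
cursor = np.zeros(2)
for t in range(n_steps):
    rates = rng.poisson(5, size=n_units)                        # binned firing rates this step
    raw_velocity = gain * (W @ rates)                           # scaled decoder output
    velocity = alpha * velocity + (1 - alpha) * raw_velocity    # temporal smoothing
    cursor += velocity * 0.02                                   # integrate at a 20 ms bin width
```
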
17.
Neuron ; 103(2): 292-308.e4, 2019 07 17.
Article in English | MEDLINE | ID: mdl-31171448

ABSTRACT

A central goal of systems neuroscience is to relate an organism's neural activity to behavior. Neural population analyses often reduce the data dimensionality to focus on relevant activity patterns. A major hurdle to data analysis is spike sorting, and this problem is growing as the number of recorded neurons increases. Here, we investigate whether spike sorting is necessary to estimate neural population dynamics. The theory of random projections suggests that we can accurately estimate the geometry of low-dimensional manifolds from a small number of linear projections of the data. We recorded data using Neuropixels probes in motor cortex of nonhuman primates and reanalyzed data from three previous studies and found that neural dynamics and scientific conclusions are quite similar using multiunit threshold crossings rather than sorted neurons. This finding unlocks existing data for new analyses and informs the design and use of new electrode arrays for laboratory and clinical use.


Subject(s)
Action Potentials/physiology; Models, Neurological; Motor Cortex/cytology; Neurons/physiology; Nonlinear Dynamics; Algorithms; Animals; Computer Simulation; Macaca mulatta; Male
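
The multiunit threshold crossings analyzed above in place of sorted units can be computed with a short routine: band-passed voltage is thresholded at a multiple of its RMS and crossings are counted per time bin. The sketch below uses common default parameter values (e.g., -4.5 x RMS, 10 ms bins) that are assumptions, not values taken from the paper.

```python
# Sketch: binned multiunit threshold-crossing counts from band-passed voltage.
import numpy as np

def threshold_crossing_counts(voltage, fs=30000, bin_ms=10, rms_multiple=-4.5):
    """voltage: (n_channels, n_samples) band-passed signal; returns (n_channels, n_bins)."""
    thresholds = rms_multiple * np.sqrt(np.mean(voltage ** 2, axis=1, keepdims=True))
    crossings = (voltage[:, 1:] < thresholds) & (voltage[:, :-1] >= thresholds)
    samples_per_bin = int(fs * bin_ms / 1000)
    n_bins = crossings.shape[1] // samples_per_bin
    trimmed = crossings[:, : n_bins * samples_per_bin]
    return trimmed.reshape(voltage.shape[0], n_bins, samples_per_bin).sum(axis=2)

counts = threshold_crossing_counts(np.random.randn(96, 30000))  # 1 s of synthetic data
print(counts.shape)  # (96, 99)
```
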
18.
Sci Rep ; 9(1): 5528, 2019 Mar 28.
Article in English | MEDLINE | ID: mdl-30918269

ABSTRACT

A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.

19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 93-97, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440349

ABSTRACT

Neural prostheses are being developed to restore speech to people with neurological injury or disease. A key design consideration is where and how to access neural correlates of intended speech. Most prior work has examined cortical field potentials at a coarse resolution using electroencephalography (EEG) or medium resolution using electrocorticography (ECoG). The few studies of speech with single-neuron resolution recorded from ventral areas known to be part of the speech network. Here, we recorded from two 96-electrode arrays chronically implanted into the 'hand knob' area of motor cortex while a person with tetraplegia spoke. Despite being located in an area previously demonstrated to modulate during attempted arm movements, many electrodes' neuronal firing rates responded to speech production. In offline analyses, we could classify which of 9 phonemes (plus silence) was spoken with 81% single-trial accuracy using a combination of spike rate and local field potential (LFP) power. This suggests that high-fidelity speech prostheses may be possible using large-scale intracortical recordings in motor cortical areas involved in controlling speech articulators.


Subject(s)
Electrocorticography; Electroencephalography; Speech; Arm; Electrodes; Electrodes, Implanted; Hand; Humans; Motor Cortex/physiology; Quadriplegia; Speech/physiology
20.
Sci Rep ; 8(1): 16357, 2018 11 05.
Article in English | MEDLINE | ID: mdl-30397281

ABSTRACT

Brain-machine interfaces (BMIs) that decode movement intentions should ignore neural modulation sources distinct from the intended command. However, neurophysiology and control theory suggest that motor cortex reflects the motor effector's position, which could be a nuisance variable. We investigated motor cortical correlates of BMI cursor position with or without concurrent arm movement. We show in two monkeys that subtracting away estimated neural correlates of position improves online BMI performance only if the animals were allowed to move their arm. To understand why, we compared the neural variance attributable to cursor position when the same task was performed using arm reaching, versus arms-restrained BMI use. Firing rates correlated with both BMI cursor and hand positions, but hand positional effects were greater. To examine whether BMI position influences decoding in people with paralysis, we analyzed data from two intracortical BMI clinical trial participants and performed an online decoder comparison in one participant. We found only small motor cortical correlates, which did not affect performance. These results suggest that arm movement and proprioception are the major contributors to position-related motor cortical correlates. Cursor position visual feedback is therefore unlikely to affect the performance of BMI-driven prosthetic systems being developed for people with paralysis.


Subject(s)
Brain-Computer Interfaces; Motor Cortex/physiology; Animals; Arm/physiology; Humans; Macaca mulatta; Male; Motor Cortex/physiopathology; Movement; Paralysis/physiopathology; Time Factors
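
The position-subtraction approach evaluated in the abstract above amounts to estimating each unit's linear dependence on position and removing it before decoding. The sketch below illustrates this with synthetic data and ordinary least squares; it is a schematic of the idea, not the study's procedure.

```python
# Sketch: regress firing rates against cursor/hand position and keep residuals.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_units = 2000, 96
position = rng.normal(size=(n_samples, 2))                 # x, y position over time
true_gain = rng.normal(size=(2, n_units)) * 0.5
rates = position @ true_gain + rng.normal(size=(n_samples, n_units))

# Fit firing rate ~ position (with intercept) and keep the residuals for decoding
design = np.hstack([position, np.ones((n_samples, 1))])
coefs, *_ = np.linalg.lstsq(design, rates, rcond=None)
residual_rates = rates - design @ coefs

print("variance removed:", 1 - residual_rates.var() / rates.var())
```
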