Results 1 - 20 of 56
1.
J Neural Eng ; 21(2)2024 May 02.
Article in English | MEDLINE | ID: mdl-38513289

ABSTRACT

The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data, and as such our objective in this work is to develop a method to meet this need. Here we address this challenge by developing the multimodal event detector (MED) algorithm which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
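
To make the estimator concrete, a minimal NumPy sketch of the joint likelihood scan follows. All names are illustrative rather than from the published code, the cross-modal weight `alpha` stands in for the paper's cross-modal scaling parameter, and the sketch assumes one LFP channel and one spike train with per-class templates and rate profiles.

```python
import numpy as np

def multimodal_event_scan(lfp, spikes, templates, rates, sigma2=1.0, alpha=1.0):
    """Scan candidate event times/classes with a joint Gaussian + Poisson log-likelihood.

    lfp:       (T,) continuous signal modeled as template + Gaussian noise
    spikes:    (T,) spike counts per bin
    templates: dict class -> (L,) LFP template for that event class
    rates:     dict class -> (L,) expected spike counts per bin around the event
    alpha:     illustrative cross-modal scaling weight on the point-process term
    """
    T = len(lfp)
    best = (None, None, -np.inf)
    for c, tmpl in templates.items():
        L = len(tmpl)
        lam = np.clip(rates[c], 1e-6, None)
        for t0 in range(T - L + 1):
            seg_lfp = lfp[t0:t0 + L]
            seg_spk = spikes[t0:t0 + L]
            # Gaussian matched-filter term (up to additive constants)
            ll_gauss = -0.5 * np.sum((seg_lfp - tmpl) ** 2) / sigma2
            # Poisson point-process log-likelihood (up to additive constants)
            ll_point = np.sum(seg_spk * np.log(lam) - lam)
            ll = ll_gauss + alpha * ll_point
            if ll > best[2]:
                best = (t0, c, ll)
    return best  # (estimated event time, estimated class, log-likelihood)
```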


Subject(s)
Algorithms , Neurons/physiology , Normal Distribution , Animals , Models, Neurological , Action Potentials/physiology , Computer Simulation , Humans
2.
Proc Natl Acad Sci U S A ; 121(7): e2212887121, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38335258

ABSTRACT

Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
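
The model class at issue can be written as a linear state space driven by the measured input, with separate readouts for neural activity and behavior. The NumPy sketch below only simulates this model class to make the equations concrete; it is not the paper's learning method, and all names and noise settings are illustrative.

```python
import numpy as np

def simulate_input_driven_lds(A, B, C, Cz, u, Q=0.01, R=0.01, rng=None):
    """Simulate the model class: latent x_t driven by measured input u_t,
    with neural observations y_t and behavior z_t read out from x_t.
        x_{t+1} = A x_t + B u_t + w_t,   y_t = C x_t + v_t,   z_t = Cz x_t
    Q and R are scalar process/observation noise variances (illustrative).
    """
    rng = np.random.default_rng(rng)
    T = u.shape[0]
    nx = A.shape[0]
    x = np.zeros((T, nx))
    y = np.zeros((T, C.shape[0]))
    z = np.zeros((T, Cz.shape[0]))
    for t in range(T):
        y[t] = C @ x[t] + np.sqrt(R) * rng.standard_normal(C.shape[0])
        z[t] = Cz @ x[t]
        if t + 1 < T:
            x[t + 1] = A @ x[t] + B @ u[t] + np.sqrt(Q) * rng.standard_normal(nx)
    return x, y, z
```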


Subject(s)
Brain , Neurons , Humans , Learning , Models, Neurological
3.
Nat Biomed Eng ; 8(1): 85-108, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38082181

ABSTRACT

Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
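
One way to see why linear dynamics on a learned manifold enable flexible inference: once a trained network maps observations to manifold factors, causal and missing-data inference over the dynamic factors reduces to a standard Kalman filter that simply skips updates on missing samples. The NumPy sketch below illustrates only that filtering step under illustrative names; it is not the DFINE network or training procedure.

```python
import numpy as np

def kalman_filter_missing(a, A, C, Q, R, x0, P0):
    """Causal inference of dynamic latent factors x_t from manifold factors a_t,
    skipping the measurement update whenever a_t is missing (NaN).
    a: (T, na) manifold factors; A, C, Q, R: linear model on the manifold.
    """
    T, nx = a.shape[0], A.shape[0]
    x, P = x0.copy(), P0.copy()
    xs = np.zeros((T, nx))
    for t in range(T):
        # predict from the previous step
        if t > 0:
            x = A @ x
            P = A @ P @ A.T + Q
        # update only if the observation is available
        if not np.any(np.isnan(a[t])):
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (a[t] - C @ x)
            P = (np.eye(nx) - K @ C) @ P
        xs[t] = x
    return xs
```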


Subject(s)
Brain , Neurosciences , Neural Networks, Computer , Nonlinear Dynamics
4.
J Neural Eng ; 21(2)2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38016450

ABSTRACT

Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better in identifying the dynamical modes and having a better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs to track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
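
For intuition, the combined observation model can be sketched as a shared latent state generating Poisson spike counts through a log-linear rate and Gaussian field features through a linear readout. The NumPy simulation below illustrates this model class only; it is not the multiscale SID learning algorithm, and all parameters are illustrative.

```python
import numpy as np

def simulate_multiscale_observations(A, C_spk, C_lfp, T, Q=0.01, R=0.05, dt=0.01, rng=None):
    """Simulate the multiscale observation model: a shared latent state x_t
    generates Poisson spike counts (log-linear rate) and Gaussian field features.
    A: (nx, nx); C_spk: (n_units, nx); C_lfp: (n_features, nx); Q, R scalar variances.
    """
    rng = np.random.default_rng(rng)
    nx = A.shape[0]
    x = np.zeros(nx)
    spikes = np.zeros((T, C_spk.shape[0]), dtype=int)
    fields = np.zeros((T, C_lfp.shape[0]))
    for t in range(T):
        rate = np.exp(C_spk @ x) * dt            # Poisson rate per time bin
        spikes[t] = rng.poisson(rate)
        fields[t] = C_lfp @ x + np.sqrt(R) * rng.standard_normal(C_lfp.shape[0])
        x = A @ x + np.sqrt(Q) * rng.standard_normal(nx)
    return spikes, fields
```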


Subject(s)
Brain-Computer Interfaces , Neurosciences , Algorithms , Normal Distribution
5.
J Neural Eng ; 20(6)2023 12 12.
Article in English | MEDLINE | ID: mdl-38083862

ABSTRACT

Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states in neural activity using the Laplace approximation, which may not capture the broader properties of densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that can enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method based on deterministic sampling for Poisson observations called the Poisson Cubature Filter (PCF) and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data and unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
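
To illustrate the general idea of deterministic-sampling inference with Poisson observations, the sketch below reweights standard spherical-radial cubature points by the Poisson likelihood to approximate posterior moments. This is a generic cubature-style update under illustrative names, not the exact published PCF equations.

```python
import numpy as np

def cubature_points(m, P):
    """Standard spherical-radial cubature points: 2n equally weighted points."""
    n = len(m)
    L = np.linalg.cholesky(P)
    return np.concatenate([m + np.sqrt(n) * L.T, m - np.sqrt(n) * L.T], axis=0)

def poisson_cubature_update(m, P, y, C, dt=0.01):
    """Approximate posterior mean/covariance of latent x given Poisson counts y
    by weighting cubature points drawn from the Gaussian prior N(m, P) with the
    Poisson likelihood (a generic sketch, not the published PCF derivation)."""
    pts = cubature_points(m, P)                            # (2n, nx)
    rates = np.exp(pts @ C.T) * dt                         # (2n, n_units)
    loglik = np.sum(y * np.log(rates) - rates, axis=1)     # Poisson log-lik per point
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    m_post = w @ pts
    diff = pts - m_post
    P_post = (diff * w[:, None]).T @ diff
    return m_post, P_post
```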


Subject(s)
Motor Cortex , Unsupervised Machine Learning , Poisson Distribution
6.
bioRxiv ; 2023 May 30.
Article in English | MEDLINE | ID: mdl-37398400

ABSTRACT

Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach and grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while being better in identifying the dynamical modes and having a better or similar accuracy in predicting neural activity. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.

7.
J Neural Eng ; 20(5)2023 09 19.
Article in English | MEDLINE | ID: mdl-37524073

ABSTRACT

Objective. When making decisions, humans can evaluate how likely they are to be correct. If this subjective confidence could be reliably decoded from brain activity, it would be possible to build a brain-computer interface (BCI) that improves decision performance by automatically providing more information to the user if needed based on their confidence. But this possibility depends on whether confidence can be decoded right after stimulus presentation and before the response so that a corrective action can be taken in time. Although prior work has shown that decision confidence is represented in brain signals, it is unclear if the representation is stimulus-locked or response-locked, and whether stimulus-locked pre-response decoding is sufficiently accurate for enabling such a BCI. Approach. We investigate the neural correlates of confidence by collecting high-density electroencephalography (EEG) during a perceptual decision task with realistic stimuli. Importantly, we design our task to include a post-stimulus gap that prevents the confounding of stimulus-locked activity by response-locked activity and vice versa, and then compare with a task without this gap. Main results. We perform event-related potential and source-localization analyses. Our analyses suggest that the neural correlates of confidence are stimulus-locked, and that an absence of a post-stimulus gap could cause these correlates to incorrectly appear as response-locked. By preventing response-locked activity from confounding stimulus-locked activity, we then show that confidence can be reliably decoded from single-trial stimulus-locked pre-response EEG alone. We also identify a high-performance classification algorithm by comparing a battery of algorithms. Lastly, we design a simulated BCI framework to show that the EEG classification is accurate enough to build a BCI and that the decoded confidence could be used to improve decision making performance particularly when the task difficulty and cost of errors are high. Significance. Our results show feasibility of non-invasive EEG-based BCIs to improve human decision making.
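
A generic cross-validated baseline for single-trial decoding of high versus low confidence from stimulus-locked pre-response epochs might look like the scikit-learn sketch below (shrinkage LDA on flattened channel-by-time features). It is a stand-in illustration, not the specific classifier battery evaluated in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_confidence(epochs, labels, n_folds=5):
    """Cross-validated decoding of binary confidence labels from stimulus-locked
    pre-response EEG epochs of shape (n_trials, n_channels, n_times).
    Returns per-fold ROC-AUC scores for a shrinkage-LDA baseline classifier.
    """
    X = epochs.reshape(len(epochs), -1)   # flatten channel-by-time features
    clf = make_pipeline(StandardScaler(),
                        LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto'))
    return cross_val_score(clf, X, labels, cv=n_folds, scoring='roc_auc')
```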


Subject(s)
Brain-Computer Interfaces , Humans , Electroencephalography/methods , Evoked Potentials/physiology , Brain/physiology , Decision Making/physiology
8.
Brain Stimul ; 16(3): 867-878, 2023.
Article in English | MEDLINE | ID: mdl-37217075

ABSTRACT

OBJECTIVE: Despite advances in the treatment of psychiatric diseases, currently available therapies do not provide sufficient and durable relief for as many as 30-40% of patients. Neuromodulation, including deep brain stimulation (DBS), has emerged as a potential therapy for persistent disabling disease; however, it has not yet gained widespread adoption. In 2016, the American Society for Stereotactic and Functional Neurosurgery (ASSFN) convened a meeting with leaders in the field to discuss a roadmap for the path forward. A follow-up meeting in 2022 aimed to review the current state of the field and to identify critical barriers and milestones for progress. DESIGN: The ASSFN convened a meeting on June 3, 2022 in Atlanta, Georgia and included leaders from the fields of neurology, neurosurgery, and psychiatry along with colleagues from industry, government, ethics, and law. The goal was to review the current state of the field, assess for advances or setbacks in the interim six years, and suggest a future path forward. The participants focused on five areas of interest: interdisciplinary engagement, regulatory pathways and trial design, disease biomarkers, ethics of psychiatric surgery, and resource allocation/prioritization. The proceedings are summarized here. CONCLUSION: The field of surgical psychiatry has made significant progress since our last expert meeting. Although weaknesses and threats to the development of novel surgical therapies exist, the identified strengths and opportunities promise to move the field through methodically rigorous and biologically-based approaches. The experts agree that ethics, law, patient engagement, and multidisciplinary teams will be critical to any potential growth in this area.


Subject(s)
Deep Brain Stimulation , Mental Disorders , Neurosurgery , Psychosurgery , Humans , United States , Neurosurgical Procedures , Mental Disorders/surgery
9.
Nat Neurosci ; 26(6): 1090-1099, 2023 06.
Article in English | MEDLINE | ID: mdl-37217725

ABSTRACT

Chronic pain syndromes are often refractory to treatment and cause substantial suffering and disability. Pain severity is often measured through subjective report, while objective biomarkers that may guide diagnosis and treatment are lacking. Also, which brain activity underlies chronic pain on clinically relevant timescales, or how this relates to acute pain, remains unclear. Here four individuals with refractory neuropathic pain were implanted with chronic intracranial electrodes in the anterior cingulate cortex and orbitofrontal cortex (OFC). Participants reported pain metrics coincident with ambulatory, direct neural recordings obtained multiple times daily over months. We successfully predicted intraindividual chronic pain severity scores from neural activity with high sensitivity using machine learning methods. Chronic pain decoding relied on sustained power changes from the OFC, which tended to differ from transient patterns of activity associated with acute, evoked pain states during a task. Thus, intracranial OFC signals can be used to predict spontaneous, chronic pain state in patients.


Subject(s)
Chronic Pain , Humans , Chronic Pain/diagnosis , Electrodes, Implanted , Prefrontal Cortex/physiology , Gyrus Cinguli
10.
bioRxiv ; 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36993213

ABSTRACT

Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.

11.
bioRxiv ; 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36993605

ABSTRACT

Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.

12.
J Neural Eng ; 19(6)2022 11 28.
Article in English | MEDLINE | ID: mdl-36261030

ABSTRACT

Objective. Realizing neurotechnologies that enable long-term neural recordings across multiple spatial-temporal scales during naturalistic behaviors requires new modeling and inference methods that can simultaneously address two challenges. First, the methods should aggregate information across all activity scales from multiple recording sources such as spiking and field potentials. Second, the methods should detect changes in the regimes of behavior and/or neural dynamics during naturalistic scenarios and long-term recordings. Prior regime detection methods are developed for a single scale of activity rather than multiscale activity, and prior multiscale methods have not considered regime switching and are for stationary cases. Approach. Here, we address both challenges by developing a switching multiscale dynamical system model and the associated filtering and smoothing methods. This model describes the encoding of an unobserved brain state in multiscale spike-field activity. It also allows for regime-switching dynamics using an unobserved regime state that dictates the dynamical and encoding parameters at every time-step. We also design the associated switching multiscale inference methods that estimate both the unobserved regime and brain states from simultaneous spike-field activity. Main results. We validate the methods in both extensive numerical simulations and prefrontal spike-field data recorded in a monkey performing saccades for fluid rewards. We show that these methods can successfully combine the spiking and field potential observations to simultaneously track the regime and brain states accurately. Doing so, these methods lead to better state estimation compared with single-scale switching methods or stationary multiscale methods. Also, for single-scale linear Gaussian observations, the new switching smoother can better generalize to diverse system settings compared to prior switching smoothers. Significance. These modeling and inference methods effectively incorporate both regime-detection and multiscale observations. As such, they could facilitate investigation of latent switching neural population dynamics and improve future brain-machine interfaces by enabling inference in naturalistic scenarios where regime-dependent multiscale activity and behavior arise.
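
For intuition on joint regime-and-state tracking, the sketch below implements one GPB(1)-style step of a switching Kalman filter for Gaussian observations: each candidate regime propagates the estimate, regimes are reweighted by the observation likelihood, and the result is moment-matched into a single Gaussian. The published method additionally fuses Poisson spike observations; the names and simplifications here are illustrative.

```python
import numpy as np

def switching_kalman_step(x, P, probs, y, models, Pi):
    """One GPB(1)-style step of a switching Kalman filter.

    models: list of dicts with keys A, C, Q, R (one per regime)
    Pi:     regime transition matrix, Pi[i, j] = P(s_t = j | s_{t-1} = i)
    probs:  previous regime posterior probabilities, shape (K,)
    """
    K = len(models)
    xs, Ps, liks = [], [], np.zeros(K)
    pred_probs = probs @ Pi
    for k, m in enumerate(models):
        xp = m['A'] @ x
        Pp = m['A'] @ P @ m['A'].T + m['Q']
        S = m['C'] @ Pp @ m['C'].T + m['R']
        innov = y - m['C'] @ xp
        Kg = Pp @ m['C'].T @ np.linalg.inv(S)
        xs.append(xp + Kg @ innov)
        Ps.append((np.eye(len(x)) - Kg @ m['C']) @ Pp)
        # Gaussian likelihood of the innovation under regime k
        liks[k] = np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) \
                  / np.sqrt(np.linalg.det(2 * np.pi * S))
    new_probs = pred_probs * liks
    new_probs /= new_probs.sum()
    # moment-match the regime-conditional estimates into one Gaussian
    x_new = sum(p * xk for p, xk in zip(new_probs, xs))
    P_new = sum(p * (Pk + np.outer(xk - x_new, xk - x_new))
                for p, xk, Pk in zip(new_probs, xs, Ps))
    return x_new, P_new, new_probs
```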


Subject(s)
Brain-Computer Interfaces , Models, Neurological , Algorithms , Normal Distribution , Brain
13.
J Neural Eng ; 19(2)2022 03 07.
Article in English | MEDLINE | ID: mdl-35073530

ABSTRACT

Objective. Brain recordings exhibit dynamics at multiple spatiotemporal scales, which are measured with spike trains and larger-scale field potential signals. To study neural processes, it is important to identify and model causal interactions not only at a single scale of activity, but also across multiple scales, i.e. between spike trains and field potential signals. Standard causality measures are not directly applicable here because spike trains are binary-valued but field potentials are continuous-valued. It is thus important to develop computational tools to recover multiscale neural causality during behavior, assess their performance on neural datasets, and study whether modeling multiscale causalities can improve the prediction of neural signals beyond what is possible with single-scale causality. Approach. We design a multiscale model-based Granger-like causality method based on directed information and evaluate its success both in realistic biophysical spike-field simulations and in motor cortical datasets from two non-human primates (NHP) performing a motor behavior. To compute multiscale causality, we learn point-process generalized linear models that predict the spike events at a given time based on the history of both spike trains and field potential signals. We also learn linear Gaussian models that predict the field potential signals at a given time based on their own history as well as either the history of binary spike events or that of latent firing rates. Main results. We find that our method reveals the true multiscale causality network structure in biophysical simulations despite the presence of model mismatch. Further, models with the identified multiscale causalities in the NHP neural datasets lead to better prediction of both spike trains and field potential signals compared to just modeling single-scale causalities. Finally, we find that latent firing rates are better predictors of field potential signals compared with the binary spike events in the NHP datasets. Significance. This multiscale causality method can reveal the directed functional interactions across spatiotemporal scales of brain activity to inform basic science investigations and neurotechnologies.
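
A simplified flavor of the spike-prediction component: fit a point-process GLM for a spike train from its own history alone and again with LFP history added, and use the log-likelihood gain as a Granger-like measure of LFP-to-spike influence. The scikit-learn sketch below uses a logit link and an in-sample likelihood ratio for brevity; it is not the paper's directed-information estimator, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_history(x, lags):
    """Stack lagged copies of a 1D signal into a (T - lags, lags) design matrix."""
    return np.column_stack([x[lags - k - 1:len(x) - k - 1] for k in range(lags)])

def granger_like_lfp_to_spike(spikes, lfp, lags=10):
    """Log-likelihood gain (per bin) from adding LFP history to a spike-history
    GLM predicting binary spike events; a simplified Granger-like measure."""
    y = spikes[lags:]
    H_self = build_history(spikes.astype(float), lags)
    H_full = np.column_stack([H_self, build_history(lfp, lags)])

    def loglik(H):
        m = LogisticRegression(max_iter=1000).fit(H, y)
        p = np.clip(m.predict_proba(H)[:, 1], 1e-12, 1 - 1e-12)
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    return (loglik(H_full) - loglik(H_self)) / len(y)
```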


Subject(s)
Models, Neurological , Neurons , Action Potentials , Algorithms , Animals , Causality , Linear Models
14.
Elife ; 10, 2021 12 21.
Article in English | MEDLINE | ID: mdl-34932466

ABSTRACT

Investigating how an artificial network of neurons controls a simulated arm suggests that rotational patterns of activity in the motor cortex may rely on sensory feedback from the moving limb.


Subject(s)
Feedback, Sensory , Motor Cortex , Neurons
15.
J Neural Eng ; 18(1): 016011, 2021 02 24.
Article in English | MEDLINE | ID: mdl-33624610

ABSTRACT

OBJECTIVE: Extracting and modeling the low-dimensional dynamics of multi-site electrocorticogram (ECoG) network activity is important in studying brain functions and dysfunctions and for developing translational neurotechnologies. Dynamic latent state models can be used to describe the ECoG network dynamics with low-dimensional latent states. But so far, non-stationarity of ECoG network dynamics has largely not been addressed in these latent state models. Such non-stationarity can happen due to a change in brain state or recording instability over time. A critical question is whether adaptive tracking of ECoG network dynamics can lead to further dimensionality reduction and more parsimonious and precise modeling. This question is largely unaddressed. APPROACH: We investigate this question by employing an adaptive linear state-space model for ECoG network activity constructed from ECoG power feature time-series over tens of hours from 10 human subjects with epilepsy. We study how adaptive modeling affects the prediction and dimensionality reduction for ECoG network dynamics compared with prior non-adaptive models, which do not track non-stationarity. MAIN RESULTS: Across the 10 subjects, adaptive modeling significantly improved the prediction of ECoG network dynamics compared with non-adaptive modeling, especially for lower latent state dimensions. Also, compared with non-adaptive modeling, adaptive modeling allowed for additional dimensionality reduction without degrading prediction performance. Finally, these results suggested that ECoG network dynamics over our recording periods exhibit non-stationarity, which can be tracked with adaptive modeling. SIGNIFICANCE: These results have important implications for studying low-dimensional neural representations using ECoG, and for developing future adaptive neurotechnologies for more precise decoding and modulation of brain states in neurological and neuropsychiatric disorders.
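
The modeling operates on ECoG power feature time-series; a minimal SciPy sketch of building such features (log band power per channel over sliding windows) is shown below. The band edges and window length are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.signal import spectrogram

def ecog_power_features(ecog, fs, bands=((1, 8), (8, 12), (12, 30), (30, 55), (65, 100)),
                        win_s=1.0):
    """Compute log band-power feature time-series from raw ECoG of shape
    (n_channels, n_samples); returns (n_windows, n_channels * n_bands)."""
    feats = []
    for ch in ecog:
        f, t, Sxx = spectrogram(ch, fs=fs, nperseg=int(win_s * fs))
        # mean power within each band, per time window, on a log scale
        feats.append([np.log(Sxx[(f >= lo) & (f < hi)].mean(axis=0) + 1e-12)
                      for lo, hi in bands])
    F = np.array(feats)                     # (n_channels, n_bands, n_windows)
    return F.reshape(-1, F.shape[-1]).T     # (n_windows, n_channels * n_bands)
```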


Subject(s)
Brain Mapping , Electrocorticography , Brain , Humans
16.
Nat Biomed Eng ; 5(4): 324-345, 2021 04.
Article in English | MEDLINE | ID: mdl-33526909

ABSTRACT

Direct electrical stimulation can modulate the activity of brain networks for the treatment of several neurological and neuropsychiatric disorders and for restoring lost function. However, precise neuromodulation in an individual requires the accurate modelling and prediction of the effects of stimulation on the activity of their large-scale brain networks. Here, we report the development of dynamic input-output models that predict multiregional dynamics of brain networks in response to temporally varying patterns of ongoing microstimulation. In experiments with two awake rhesus macaques, we show that the activities of brain networks are modulated by changes in both stimulation amplitude and frequency, that they exhibit damping and oscillatory response dynamics, and that variabilities in prediction accuracy and in estimated response strength across brain regions can be explained by an at-rest functional connectivity measure computed without stimulation. Input-output models of brain dynamics may enable precise neuromodulation for the treatment of disease and facilitate the investigation of the functional organization of large-scale brain networks.
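
As a simple stand-in for a dynamic input-output model, one can fit an ARX model by least squares, predicting a neural feature from its own history and the history of the stimulation input. The NumPy sketch below is illustrative only and much simpler than the multiregional models developed in the paper.

```python
import numpy as np

def fit_arx(y, u, na=5, nb=5):
    """Least-squares fit of an ARX input-output model
        y_t = sum_i a_i * y_{t-i} + sum_j b_j * u_{t-j} + e_t
    y: (T,) neural feature; u: (T,) stimulation input (e.g. amplitude per bin).
    Returns the AR coefficients a and input coefficients b.
    """
    p = max(na, nb)
    rows = []
    for t in range(p, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)
    return theta[:na], theta[na:]
```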


Subject(s)
Brain Mapping , Brain/physiology , Models, Neurological , Animals , Electric Stimulation , Macaca mulatta , Signal Processing, Computer-Assisted , Stochastic Processes
17.
Nat Commun ; 12(1): 607, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33504797

ABSTRACT

Motor function depends on neural dynamics spanning multiple spatiotemporal scales of population activity, from spiking of neurons to larger-scale local field potentials (LFP). How multiple scales of low-dimensional population dynamics are related in control of movements remains unknown. Multiscale neural dynamics are especially important to study in naturalistic reach-and-grasp movements, which are relatively under-explored. We learn novel multiscale dynamical models for spike-LFP network activity in monkeys performing naturalistic reach-and-grasps. We show low-dimensional dynamics of spiking and LFP activity exhibited several principal modes, each with a unique decay-frequency characteristic. One principal mode dominantly predicted movements. Despite distinct principal modes existing at the two scales, this predictive mode was multiscale and shared between scales, and was shared across sessions and monkeys, yet did not simply replicate behavioral modes. Further, this multiscale mode's decay-frequency explained behavior. We propose that multiscale, low-dimensional motor cortical state dynamics reflect the neural control of naturalistic reach-and-grasp behaviors.
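
The decay-frequency characterization of a principal mode follows directly from the eigenvalues of a learned discrete-time dynamics matrix: the eigenvalue magnitude gives the per-step decay and its angle gives the frequency. A small NumPy sketch, with illustrative names:

```python
import numpy as np

def mode_decay_frequency(A, dt):
    """Convert eigenvalues of a discrete-time dynamics matrix A into each mode's
    decay (eigenvalue magnitude per step) and frequency in Hz, for time step dt."""
    eigvals = np.linalg.eigvals(A)
    decay = np.abs(eigvals)
    freq_hz = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
    return decay, freq_hz
```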


Subject(s)
Behavior, Animal/physiology , Hand Strength/physiology , Motor Cortex/physiology , Action Potentials/physiology , Animals , Macaca mulatta , Models, Neurological , Task Performance and Analysis
18.
J Neural Eng ; 18(3)2021 03 09.
Article in English | MEDLINE | ID: mdl-33254159

ABSTRACT

Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary for example due to learning, plasticity or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult due to the state being latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy. Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM. Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities. Significance. These algorithms can be used to study time-varying neural dynamics underlying various brain functions and enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
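
One ingredient of adaptive fitting can be sketched as a recursive least-squares update of the observation matrix with a forgetting factor, where the forgetting factor plays the role of a learning rate. The NumPy sketch below shows only that single, simplified ingredient under illustrative names; the published algorithm updates all LSSM parameters in the presence of the latent state and optimizes the rate online.

```python
import numpy as np

class RecursiveObservationUpdate:
    """Recursive least-squares (RLS) update of an observation matrix C with
    forgetting factor lam; smaller lam forgets old data faster, i.e. a higher
    effective learning rate."""

    def __init__(self, ny, nx, lam=0.99):
        self.C = np.zeros((ny, nx))
        self.P = np.eye(nx) * 1e3    # RLS inverse-correlation matrix
        self.lam = lam

    def update(self, y, x_hat):
        """Update C given a new observation y and a latent-state estimate x_hat."""
        Px = self.P @ x_hat
        k = Px / (self.lam + x_hat @ Px)      # RLS gain
        err = y - self.C @ x_hat
        self.C += np.outer(err, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.C
```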


Subject(s)
Brain-Computer Interfaces , Brain , Algorithms , Brain/physiology , Learning , Stereotaxic Techniques
19.
Nat Neurosci ; 24(1): 140-149, 2021 01.
Article in English | MEDLINE | ID: mdl-33169030

ABSTRACT

Neural activity exhibits complex dynamics related to various brain functions, internal states and behaviors. Understanding how neural dynamics explain specific measured behaviors requires dissociating behaviorally relevant and irrelevant dynamics, which is not achieved with current neural dynamic models as they are learned without considering behavior. We develop preferential subspace identification (PSID), which is an algorithm that models neural activity while dissociating and prioritizing its behaviorally relevant dynamics. Modeling data in two monkeys performing three-dimensional reach and grasp tasks, PSID revealed that the behaviorally relevant dynamics are significantly lower-dimensional than otherwise implied. Moreover, PSID discovered distinct rotational dynamics that were more predictive of behavior. Furthermore, PSID more accurately learned behaviorally relevant dynamics for each joint and recording channel. Finally, modeling data in two monkeys performing saccades demonstrated the generalization of PSID across behaviors, brain regions and neural signal types. PSID provides a general new tool to reveal behaviorally relevant neural dynamics that can otherwise go unnoticed.
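
A conceptual sketch of the prioritization idea: project future behavior onto past neural activity by least squares and take an SVD of the prediction to obtain low-dimensional, behaviorally relevant latent states. The NumPy sketch below is a simplified illustration of that projection step only, not the full PSID algorithm, which learns a complete state-space model and a second stage for the remaining neural dynamics.

```python
import numpy as np

def behaviorally_relevant_states(Y, Z, horizon=10, n_states=4):
    """Simplified projection sketch. Y: (T, ny) neural features; Z: (T, nz) behavior.
    Predict windows of future behavior from windows of past neural activity, then
    SVD the prediction to extract low-dimensional behaviorally relevant states."""
    T = Y.shape[0]
    rows_past, rows_future = [], []
    for t in range(horizon, T - horizon):
        rows_past.append(Y[t - horizon:t].ravel())      # past neural window
        rows_future.append(Z[t:t + horizon].ravel())     # future behavior window
    Yp = np.array(rows_past)
    Zf = np.array(rows_future)
    W, *_ = np.linalg.lstsq(Yp, Zf, rcond=None)           # least-squares projection
    Zf_hat = Yp @ W
    U, S, Vt = np.linalg.svd(Zf_hat, full_matrices=False)
    return U[:, :n_states] * S[:n_states]                  # latent state estimates
```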


Subject(s)
Behavior, Animal/physiology , Models, Neurological , Space Perception/physiology , Algorithms , Animals , Electrophysiological Phenomena , Hand Strength/physiology , Learning/physiology , Macaca mulatta , Machine Learning , Motor Cortex/physiology , Prefrontal Cortex/physiology , Psychomotor Performance/physiology , Rotation , Saccades/physiology
20.
Nat Neurosci ; 22(10): 1554-1564, 2019 10.
Article in English | MEDLINE | ID: mdl-31551595

ABSTRACT

Brain-machine interfaces (BMIs) create closed-loop control systems that interact with the brain by recording and modulating neural activity and aim to restore lost function, most commonly motor function in paralyzed patients. Moreover, by precisely manipulating the elements within the control loop, motor BMIs have emerged as new scientific tools for investigating the neural mechanisms underlying control and learning. Beyond motor BMIs, recent work highlights the opportunity to develop closed-loop mood BMIs for restoring lost emotional function in neuropsychiatric disorders and for probing the neural mechanisms of emotion regulation. Here we review significant advances toward functional restoration and scientific discovery in motor BMIs that have been guided by a closed-loop control view. By focusing on this unifying view of BMIs and reviewing recent work, we then provide a perspective on how BMIs could extend to the neuropsychiatric domain.


Subject(s)
Affect/physiology , Brain-Computer Interfaces , Movement/physiology , Animals , Humans , Learning/physiology , Mental Disorders/physiopathology , Mental Disorders/psychology , Mental Disorders/therapy