Results 1 - 2 of 2
1.
bioRxiv; 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38405828

ABSTRACT

How does the motor cortex (MC) produce purposeful and generalizable movements from the complex musculoskeletal system in a dynamic environment? To elucidate the underlying neural dynamics, we use a goal-driven approach to model MC, treating it as a controller that drives the musculoskeletal system through desired states to achieve movement. Specifically, we formulate the MC as a recurrent neural network (RNN) controller that produces muscle commands while receiving sensory feedback from biologically accurate musculoskeletal models. Given this real-time simulated feedback, implemented in advanced physics simulation engines, we use deep reinforcement learning to train the RNN to achieve desired movements under specified neural and musculoskeletal constraints. Activity of the trained model can accurately decode experimentally recorded neural population dynamics and single-unit MC activity, while generalizing well to testing conditions significantly different from training. Simultaneous goal- and data-driven modeling, in which we use the recorded neural activity as observed states of the MC, further enhances direct and generalizable single-unit decoding. Finally, we show that this framework elucidates computational principles of how neural dynamics enable flexible control of movement, and we make the framework easy to use for future experiments.
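The abstract describes an RNN controller that receives simulated sensory feedback, issues muscle commands, and is trained with deep reinforcement learning in a closed loop with a musculoskeletal model. The following is a minimal illustrative sketch of that setup, not the authors' code: the toy environment, network sizes, and REINFORCE-style update are assumptions standing in for the physics-based musculoskeletal simulation and the DRL algorithm actually used.

```python
# Minimal sketch (assumed, not the authors' implementation): an RNN "motor cortex"
# controller that maps sensory feedback to muscle activations, trained with a
# simple policy-gradient loop against a toy stand-in for the physics simulator.
import torch
import torch.nn as nn

N_SENSORY, N_MUSCLES, N_HIDDEN = 12, 6, 64

class RNNController(nn.Module):
    """GRU-based controller: sensory feedback in, muscle activations out."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(N_SENSORY, N_HIDDEN)
        self.readout = nn.Linear(N_HIDDEN, N_MUSCLES)

    def forward(self, feedback, hidden):
        hidden = self.rnn(feedback, hidden)
        muscles = torch.sigmoid(self.readout(hidden))  # activations in [0, 1]
        return muscles, hidden

class ToyArmEnv:
    """Placeholder for the musculoskeletal simulator used in the paper."""
    def __init__(self):
        self.state = torch.zeros(N_SENSORY)
        self.target = torch.randn(N_SENSORY)

    def step(self, muscles):
        # Fake linear dynamics; a real model would integrate muscle/joint physics.
        self.state = 0.9 * self.state + 0.1 * muscles.repeat(2).detach()
        reward = -torch.norm(self.state - self.target).item()
        return self.state.clone(), reward

controller = RNNController()
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)

for episode in range(200):
    env, hidden = ToyArmEnv(), torch.zeros(1, N_HIDDEN)
    feedback = env.state.unsqueeze(0)
    log_probs, rewards = [], []
    for t in range(50):
        mean, hidden = controller(feedback, hidden)
        dist = torch.distributions.Normal(mean, 0.1)   # exploration noise
        action = dist.sample().clamp(0.0, 1.0)
        log_probs.append(dist.log_prob(action).sum())
        obs, reward = env.step(action.squeeze(0))
        rewards.append(reward)
        feedback = obs.unsqueeze(0)
    # REINFORCE: weight log-probabilities by normalized reward-to-go.
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key structural point the sketch tries to capture is the closed loop: the controller's output changes the simulated body, and the resulting sensory state is fed back as the controller's next input, so the RNN is trained on its own consequences rather than on fixed input sequences.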

2.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3350-3356, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086532

ABSTRACT

Goal-driven networks trained to perform tasks analogous to those performed by biological neural populations are increasingly used as insightful computational models of motor control. The dynamics of the trained networks are then analyzed to uncover the neural strategies employed by the motor cortex to produce movements. However, these networks do not take into account the role of sensory feedback in producing movement, nor do they consider the complex biophysical underpinnings of the underlying musculoskeletal system. Moreover, these models cannot be used in the context of predictive neuromechanical simulations for hypothesis generation and prediction of neural strategies during novel movements. In this work, we adapt state-of-the-art deep reinforcement learning (DRL) algorithms to train a controller that drives an anatomically accurate monkey arm model to track experimentally recorded kinematics. We validate that the trained controller mimics biologically observed neural strategies to produce movement. The trained controller generalizes well to unobserved conditions as well as to perturbation analyses. The recorded firing rates of motor cortex neurons can be predicted from the controller activity with high accuracy, even for unseen conditions. Finally, we show that the trained controller outperforms existing goal-driven and representational models of motor cortex in single-neuron decoding accuracy, demonstrating the utility of the complex underpinnings of anatomically accurate models in shaping motor cortex neural activity during limb movements. The learned controller can be used for hypothesis generation and prediction of neural strategies during novel movements and unobserved conditions.
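Both abstracts report that recorded motor-cortex firing rates can be decoded from the trained controller's activity. The sketch below illustrates one common way such decoding is done; cross-validated ridge regression, the array shapes, and the random stand-in data are assumptions for illustration, not the papers' actual analysis pipeline.

```python
# Minimal sketch (assumed analysis): predict recorded firing rates from the
# controller's hidden-unit activity with cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_timepoints, n_hidden_units, n_neurons = 500, 64, 30

# Stand-ins for real data: time-aligned controller activity from closed-loop
# rollouts and firing rates recorded from motor cortex during the same movements.
controller_activity = np.random.randn(n_timepoints, n_hidden_units)
firing_rates = np.random.randn(n_timepoints, n_neurons)

decoder = Ridge(alpha=1.0)

# Held-out R^2 per fold; high scores would indicate that the controller's
# dynamics capture variance in single-neuron activity, as the abstract reports.
scores = cross_val_score(decoder, controller_activity, firing_rates,
                         cv=5, scoring="r2")
print(f"Mean held-out R^2: {scores.mean():.3f}")
```

In practice the decoding matrices would be fit on training conditions and evaluated on held-out or entirely unseen conditions, which is the generalization test emphasized in the abstract.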


Subjects
Motor Cortex, Musculoskeletal System, Motor Cortex/physiology, Motor Neurons, Movement/physiology, Neural Networks, Computer