Results 1 - 2 of 2
1.
Front Behav Neurosci ; 18: 1399394, 2024.
Article in English | MEDLINE | ID: mdl-39188591

ABSTRACT

Learning to make adaptive decisions involves making choices, assessing their consequences, and leveraging this assessment to attain higher-rewarding states. Despite a vast literature on value-based decision-making, relatively little is known about the cognitive processes underlying decisions in highly uncertain contexts. Real-world decisions are rarely accompanied by immediate feedback, explicit rewards, or complete knowledge of the environment. Making informed decisions in such contexts requires substantial knowledge of the environment, which can only be gained through exploration. Here we aim to understand and formalize the brain mechanisms underlying these processes. To this end, we first designed and performed an experimental task in which human participants had to learn to maximize reward while making sequences of decisions with only basic knowledge of the environment and in the absence of explicit performance cues. Participants had to rely on their own internal assessment of performance to uncover a covert relationship between their choices and their subsequent consequences, and thereby find the strategy leading to the highest cumulative reward. Our results show that participants' reaction times were longer whenever the decision involved a future consequence, suggesting greater introspection whenever a delayed value had to be considered; learning time varied significantly across participants. Second, we formalized the neurocognitive processes underlying decision-making in this task by combining mean-field representations of competing neural populations with a reinforcement learning mechanism. This model provided a plausible characterization of the brain dynamics underlying these processes and reproduced each aspect of the participants' behavior, from their reaction times and choices to their learning rates. In summary, both the experimental results and the model provide a principled explanation of how delayed value may be computed and incorporated into the neural dynamics of decision-making, and of how learning occurs in these uncertain scenarios.
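To make the modeling approach described in this abstract concrete, here is a minimal sketch that pairs a toy two-population mean-field decision stage with a standard Q-learning update, so that delayed consequences bias the competition between populations on later trials. It is an illustrative reconstruction under stated assumptions, not the authors' published model: the environment (step_env), parameter values, and the coupling between Q-values and population inputs are all hypothetical.

import numpy as np

# Illustrative sketch: two competing neural populations race to a decision
# threshold; their inputs are biased by Q-values learned with a simple
# temporal-difference rule. All parameters are assumptions for demonstration.

rng = np.random.default_rng(0)

def mean_field_decision(bias_a, bias_b, dt=1e-3, tau=0.02, noise=0.02,
                        w_self=2.0, w_cross=-1.5, threshold=0.6, t_max=2.0):
    """Race between two competing populations; returns (choice, reaction_time)."""
    r = np.zeros(2)  # firing rates of populations A and B
    for step in range(int(t_max / dt)):
        inp = np.array([bias_a, bias_b]) + w_self * r + w_cross * r[::-1]
        drive = 1.0 / (1.0 + np.exp(-inp))            # sigmoidal transfer function
        r += dt / tau * (-r + drive) + noise * np.sqrt(dt) * rng.standard_normal(2)
        if r.max() > threshold:
            return int(np.argmax(r)), (step + 1) * dt
    return int(np.argmax(r)), t_max

def step_env(state, action, n_states=4):
    """Hypothetical environment with a covert mapping from choices to future reward."""
    next_state = (state + action + 1) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

# Q-learning over states and actions; delayed consequences enter through the
# bootstrapped value of the next state (the gamma * max term).
n_states, n_actions, alpha, gamma = 4, 2, 0.1, 0.9
Q = np.zeros((n_states, n_actions))
state = 0
for trial in range(500):
    choice, rt = mean_field_decision(Q[state, 0], Q[state, 1])
    next_state, reward = step_env(state, choice)
    Q[state, choice] += alpha * (reward + gamma * Q[next_state].max() - Q[state, choice])
    state = next_state

In this toy setup, reaction times tend to be longer when the two value biases are close, loosely echoing the slower responses reported when a delayed value has to be weighed.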

2.
J Neurophysiol ; 127(5): 1348-1362, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35171745

ABSTRACT

Nonhuman primate (NHP) movement kinematics have been decoded from spikes and local field potentials (LFPs) recorded during motor tasks. However, the potential of LFPs to provide network-like characterizations of neural dynamics during the planning and execution of sequential movements requires further exploration. Is the aggregate nature of LFPs suitable for constructing informative brain-state descriptors of movement preparation and execution? To investigate this, we developed a framework to process LFPs based on machine-learning classifiers and analyzed LFPs from a primate implanted with several microelectrode arrays covering the premotor cortex in both hemispheres and the primary motor cortex on one side. The monkey performed a reach-to-grasp task consisting of five consecutive states, starting from rest until a rewarding target (food) was attained. We used this five-state task to characterize neural activity within eight frequency bands, using spectral amplitude and pairwise correlations across electrodes as features. Our results show that all five movement-related states were best distinguished using the highest frequency band (200-500 Hz), yielding 87% accuracy with spectral amplitude and 60% with pairwise electrode correlations. Further analyses characterized each movement-related state, showing differential neuronal population activity at above-γ frequencies during the various stages of movement. Furthermore, the topological distribution of the high-frequency LFPs yielded a highly significant set of pairwise correlations, strongly suggesting that movement planning and execution functions are distributed across the premotor and primary motor cortices in a specific fashion, most markedly in the low-ripple (100-150 Hz), high-ripple (150-200 Hz), and multiunit frequency bands. In summary, our results show that novel machine-learning techniques applied to coarse-grained, broadband signals such as LFPs can successfully track and decode fine aspects of movement, including preparation, reach, grasp, and reward retrieval, across several brain regions.

NEW & NOTEWORTHY Local field potentials (LFPs), despite their lower spatial resolution compared with single-neuron recordings, can be used with machine-learning classifiers to decode sequential movements involving motor preparation, execution, and reward retrieval. Our results revealed heterogeneity of neural activity on small spatial scales, further evidencing the utility of microelectrode array recordings for complex movement decoding. With further advancement, high-dimensional LFPs may become the gold standard for brain-computer interfaces such as neural prostheses in the near future.
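As a rough illustration of the decoding pipeline described in this abstract, the sketch below band-pass filters multi-electrode LFPs, extracts per-electrode spectral amplitude (Hilbert envelope) and pairwise electrode correlations as features, and evaluates a linear classifier with cross-validation. The sampling rate, band edges, classifier choice, and synthetic data are assumptions for demonstration and are not taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 2000.0                                   # sampling rate in Hz (assumed)
bands = {"low_ripple": (100, 150), "high_ripple": (150, 200), "multiunit": (200, 500)}

def band_features(lfp, low, high):
    """lfp: (n_electrodes, n_samples). Returns amplitude and correlation features."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, lfp, axis=1)
    amplitude = np.abs(hilbert(filtered, axis=1)).mean(axis=1)  # per-electrode envelope
    corr = np.corrcoef(filtered)                                # electrode-by-electrode correlations
    iu = np.triu_indices_from(corr, k=1)
    return np.concatenate([amplitude, corr[iu]])

# Synthetic stand-in data: trials x electrodes x samples, one of five state labels
# (rest, preparation, reach, grasp, reward retrieval).
rng = np.random.default_rng(0)
n_trials, n_electrodes, n_samples = 100, 16, 500
trials = rng.standard_normal((n_trials, n_electrodes, n_samples))
labels = rng.integers(0, 5, n_trials)

X = np.array([band_features(t, *bands["multiunit"]) for t in trials])
clf = LogisticRegression(max_iter=2000)
print(cross_val_score(clf, X, labels, cv=5).mean())

Swapping the band passed to band_features gives the kind of per-band comparison described above; on real recordings, the amplitude and correlation features would be computed per trial epoch rather than on synthetic noise.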


Subjects
Brain-Computer Interfaces, Motor Cortex, Animals, Machine Learning, Microelectrodes, Motor Cortex/physiology, Movement/physiology