1. Brain Struct Funct; 226(5): 1553-1569, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33839955

ABSTRACT

Reward prediction error, the difference between the expected and obtained reward, is known to act as a neural reinforcement learning signal. In the current study, we propose a model fitting approach that combines behavioral and neural data to fit computational models of reinforcement learning. Briefly, we penalized subject-specific fitted parameters that deviated too far from the group median, except when that deviation improved the model's fit to neural responses. Using a probabilistic monetary learning task and fMRI, we compared our approach with standard model fitting methods. Q-learning outperformed actor-critic at both the behavioral and neural levels, although including neuroimaging data in model fitting improved the fit of the actor-critic models. We observed both action-value and state-value prediction error signals in the striatum, whereas standard model fitting approaches failed to capture state-value signals. Finally, the left ventral striatum correlated with reward prediction error and the right ventral striatum with fictive prediction error, suggesting a functional hemispheric asymmetry in prediction-error-driven learning.
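The fitting scheme summarized above lends itself to a penalized-likelihood formulation. The sketch below is a minimal illustration, not the authors' implementation: the Q-learning likelihood follows the standard softmax/delta-rule form, while the quadratic pull toward the group median and the neural-fit offset (the `neural_fit_fn` callback, the weight `lam`, and the clamping at zero) are hypothetical choices, included only to show the general shape of such an objective.

```python
import numpy as np

def q_learning_nll(params, choices, rewards, n_options=2):
    """Negative log-likelihood of one subject's choices under simple Q-learning.

    params:  (alpha, beta) -- learning rate and softmax inverse temperature.
    choices: sequence of chosen option indices, one per trial.
    rewards: sequence of obtained rewards, one per trial.
    """
    alpha, beta = params
    q = np.zeros(n_options)
    nll = 0.0
    for c, r in zip(choices, rewards):
        # Softmax choice probabilities from the current action values.
        logits = beta * q
        p = np.exp(logits - logits.max())
        p /= p.sum()
        nll -= np.log(p[c] + 1e-12)
        # Reward prediction error: obtained minus expected reward.
        delta = r - q[c]
        q[c] += alpha * delta
    return nll

def penalized_objective(params, choices, rewards, group_median,
                        neural_fit_fn=None, lam=1.0):
    """Behavioral NLL plus a penalty shrinking parameters toward the group
    median; a better neural fit (hypothetical neural_fit_fn, larger is better)
    offsets the penalty, so deviations that help explain neural responses
    are not punished. The exact functional form here is an assumption."""
    params = np.asarray(params, dtype=float)
    nll = q_learning_nll(params, choices, rewards)
    penalty = lam * np.sum((params - np.asarray(group_median)) ** 2)
    if neural_fit_fn is not None:
        penalty = max(penalty - neural_fit_fn(params), 0.0)
    return nll + penalty
```

Under these assumptions, per-subject parameters could be estimated by minimizing this objective with a generic optimizer such as scipy.optimize.minimize.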


Subjects
Reward; Ventral Striatum; Learning; Magnetic Resonance Imaging; Reinforcement, Psychology; Ventral Striatum/diagnostic imaging