Enhancing reinforcement learning models by including direct and indirect pathways improves performance on striatal dependent tasks.
Blackwell, Kim T; Doya, Kenji.
Affiliation
  • Blackwell KT; Department of Bioengineering, Volgenau School of Engineering, George Mason University, Fairfax, Virginia, United States of America.
  • Doya K; Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan.
PLoS Comput Biol; 19(8): e1011385, Aug 2023.
Article in English | MEDLINE | ID: mdl-37594982
ABSTRACT
A major advance in understanding learning behavior stems from experiments showing that reward learning requires dopamine inputs to striatal neurons and arises from synaptic plasticity of cortico-striatal synapses. Numerous reinforcement learning models mimic this dopamine-dependent synaptic plasticity by using the reward prediction error, which resembles dopamine neuron firing, to learn the best action in response to a set of cues. Though these models can explain many facets of behavior, reproducing some types of goal-directed behavior, such as renewal and reversal, requires additional model components. Here we present a reinforcement learning model, TD2Q, which better corresponds to the basal ganglia with two Q matrices, one representing direct pathway neurons (G) and another representing indirect pathway neurons (N). Unlike previous two-Q architectures, a novel and critical aspect of TD2Q is that it updates the G and N matrices using the temporal difference reward prediction error. A best action is selected for each of N and G using a softmax with a reward-dependent adaptive exploration parameter, and differences are then resolved using a second selection step applied to the two action probabilities. The model is tested on a range of multi-step tasks including extinction, renewal, and discrimination; switching reward probability learning; and sequence learning. Simulations show that TD2Q produces behaviors similar to those of rodents in choice and sequence learning tasks, and that use of the temporal difference reward prediction error is required to learn multi-step tasks. Blocking the update rule on the N matrix blocks discrimination learning, as observed experimentally. Performance in the sequence learning task is dramatically improved with two matrices.
These results suggest that including additional aspects of basal ganglia physiology can improve the performance of reinforcement learning models, better reproduce animal behaviors, and provide insight into the roles of direct- and indirect-pathway striatal neurons.
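The architecture described in the abstract can be sketched in code. The following is a hypothetical, minimal illustration only: the class name `TD2QSketch`, the parameter values, the opponent update on N, and the tie-breaking rule in the second selection step are all assumptions for illustration, not the published TD2Q equations (which, per the abstract, also use a reward-dependent adaptive exploration parameter rather than the fixed inverse temperature used here).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, beta):
    """Softmax over action values with inverse temperature beta."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

class TD2QSketch:
    """Illustrative two-Q-matrix agent loosely following the abstract.

    G stands in for direct-pathway (Go) neurons, N for indirect-pathway
    (NoGo) neurons; both are updated from the temporal-difference reward
    prediction error. All specifics below are assumptions.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, beta=1.0):
        self.G = np.zeros((n_states, n_actions))  # direct pathway values
        self.N = np.zeros((n_states, n_actions))  # indirect pathway values
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def choose(self, s):
        # One softmax selection per matrix; N values oppose the action.
        p_g = softmax(self.G[s], self.beta)
        p_n = softmax(-self.N[s], self.beta)
        a_g = rng.choice(len(p_g), p=p_g)
        a_n = rng.choice(len(p_n), p=p_n)
        if a_g == a_n:
            return a_g
        # Second selection step: resolve disagreement by sampling from the
        # two candidate actions' probabilities (illustrative rule).
        p2 = np.array([p_g[a_g], p_n[a_n]])
        p2 /= p2.sum()
        return (a_g, a_n)[rng.choice(2, p=p2)]

    def update(self, s, a, r, s_next):
        # Temporal-difference reward prediction error (dopamine-like signal).
        delta = r + self.gamma * self.G[s_next].max() - self.G[s, a]
        self.G[s, a] += self.alpha * delta    # direct pathway strengthened by +delta
        self.N[s, a] -= self.alpha * delta    # indirect pathway: opponent update
```

Under this sketch, a positive prediction error strengthens the chosen action in G while weakening its opposition in N, which is one common way to model the opposing dopamine sensitivities of the two pathways.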

Full text: 1 Database: MEDLINE Main subject: Dopamine / Learning Study type: Prognostic studies Limits: Animals Language: English Journal: PLoS Comput Biol Journal subject: Biology / Medical Informatics Year of publication: 2023 Document type: Article Country of affiliation: United States