Reward-Mediated, Model-Free Reinforcement-Learning Mechanisms in Pavlovian and Instrumental Tasks Are Related.
Moin Afshar, Neema; Cinotti, François; Martin, David; Khamassi, Mehdi; Calu, Donna J; Taylor, Jane R; Groman, Stephanie M.
Affiliation
  • Moin Afshar N; Department of Psychiatry, Yale School of Medicine, New Haven, Connecticut 06511.
  • Cinotti F; Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom.
  • Martin D; Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland 21201.
  • Khamassi M; Institute of Intelligent Systems and Robotics, Centre National de la Recherche Scientifique, Sorbonne University, 75005 Paris, France.
  • Calu DJ; Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland 21201.
  • Taylor JR; Program in Neuroscience, University of Maryland School of Medicine, Baltimore, Maryland 21201.
  • Groman SM; Department of Psychiatry, Yale School of Medicine, New Haven, Connecticut 06511.
J Neurosci; 43(3): 458-471, 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36216504
Model-free and model-based computations are argued to distinctly update the action values that guide decision-making processes. It is not known, however, whether the model-free and model-based reinforcement-learning mechanisms recruited in operant-based instrumental tasks parallel those engaged by Pavlovian-based behavioral procedures. Recently, computational work has suggested that individual differences in the attribution of incentive salience to reward-predictive cues, that is, sign- and goal-tracking behaviors, are also governed by variations in the model-free and model-based value representations that guide behavior. It is also not known whether the systems characterized computationally by model-free and model-based algorithms are conserved across tasks within individual animals. In the current study, we used a within-subject design to assess sign-tracking and goal-tracking behaviors with a Pavlovian conditioned approach task and then characterized behavior in male rats with an instrumental multistage decision-making (MSDM) task. We hypothesized that Pavlovian and instrumental learning processes may be driven by common reinforcement-learning mechanisms. Our data confirm that sign-tracking behavior was associated with greater reward-mediated, model-free reinforcement learning and that it was also linked to model-free reinforcement learning in the MSDM task. Computational analyses revealed that Pavlovian model-free updating was correlated with model-free reinforcement learning in the MSDM task. These data provide key insights into the computational mechanisms mediating associative learning that could have important implications for normal and abnormal states.

SIGNIFICANCE STATEMENT Model-free and model-based computations that guide instrumental decision-making processes may also be recruited in Pavlovian-based behavioral procedures. Here, we used a within-subject design to test the hypothesis that Pavlovian and instrumental learning processes are driven by common reinforcement-learning mechanisms. Sign-tracking and goal-tracking behaviors were assessed in rats using a Pavlovian conditioned approach task, and instrumental behavior was then characterized using an MSDM task. We report that sign-tracking behavior was associated with greater model-free, but not model-based, learning in the MSDM task. These data suggest that Pavlovian and instrumental behaviors may be driven by conserved reinforcement-learning mechanisms.
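For readers unfamiliar with the distinction drawn in the abstract, the following is a minimal, self-contained sketch (not the authors' model) contrasting a model-free delta-rule update with a model-based value computed from a learned transition model, in the style typical of two-stage or multistage decision tasks. All names and values (alpha, the two-action/two-state layout, the transition-learning rate) are illustrative assumptions.

import numpy as np

# Illustrative parameters and value stores (assumed, not from the paper)
alpha = 0.5                      # learning rate
q_mf = np.zeros(2)               # model-free values of the two first-stage actions
q_stage2 = np.zeros(2)           # values of the two second-stage states
trans = np.full((2, 2), 0.5)     # learned P(second-stage state | first-stage action)

def model_free_update(action, reward):
    # Delta-rule (model-free) update: credit the chosen first-stage action directly
    q_mf[action] += alpha * (reward - q_mf[action])

def model_based_values():
    # Model-based values: expected second-stage value under the learned transition model
    return trans @ q_stage2

# Example trial: first-stage action 0 leads to second-stage state 1 and is rewarded
state2, reward = 1, 1.0
q_stage2[state2] += alpha * (reward - q_stage2[state2])    # update second-stage value
model_free_update(0, reward)                               # model-free credit assignment
trans[0] = 0.9 * trans[0] + 0.1 * np.eye(2)[state2]        # update transition estimate
print(q_mf, model_based_values())

In hybrid accounts of such tasks, choice is typically modeled as a weighted mixture of the model-free and model-based value estimates, with the weight fit per subject; the sketch above omits that fitting step.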
Full text: 1 Database: MEDLINE Main subject: Reinforcement, Psychology / Reward Study type: Prognostic_studies Limit: Animals Language: En Journal: J Neurosci Year: 2023 Document type: Article