A flexible and generalizable model of online latent-state learning.
PLoS Comput Biol; 15(9): e1007331, Sep 2019.
Article | Language: En | MEDLINE | ID: mdl-31525176
Many models of classical conditioning fail to describe important phenomena, notably the rapid return of fear after extinction. To address this shortcoming, evidence converged on the idea that learning agents rely on latent-state inferences, i.e., the ability to index disparate associations from cues to rewards (or penalties) and infer which index (i.e., latent state) is presently active. Our goal was to develop a model of latent-state inferences that uses latent states to predict rewards from cues efficiently and that can describe behavior in a diverse set of experiments. The resulting model combines a Rescorla-Wagner rule, in which updates to associations are proportional to prediction error, with an approximate Bayesian rule, in which beliefs in latent states are proportional to prior beliefs and an approximate likelihood based on current associations. In simulation, we demonstrate the model's ability to reproduce learning effects both famously explained and not explained by the Rescorla-Wagner model, including rapid return of fear after extinction, the Hall-Pearce effect, the partial reinforcement extinction effect, backwards blocking, and memory modification. Lastly, we derive our model as an online algorithm for maximum likelihood estimation, demonstrating that it is an efficient approach to outcome prediction. Establishing such a framework is a key step towards quantifying normative and pathological ranges of latent-state inferences in various contexts.
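The combination described in the abstract can be sketched in code: per-state Rescorla-Wagner updates weighted by beliefs, plus a Bayesian belief update in which the posterior over latent states is proportional to the prior times an approximate likelihood of the observed outcome. The sketch below is illustrative only and not the authors' exact model; the Gaussian error model, function name, and parameter values (`alpha`, `sigma`) are assumptions for the example.

```python
import math

def latent_state_update(V, b, x, r, alpha=0.1, sigma=1.0):
    """One trial of an illustrative latent-state learner (not the paper's exact model).

    V: list of per-state association dicts (cue name -> weight)
    b: list of beliefs over latent states (non-negative, sums to 1)
    x: dict of currently active cues -> salience
    r: observed outcome (reward or penalty)
    """
    # Each latent state's prediction is its weighted sum over active cues.
    preds = [sum(w.get(c, 0.0) * s for c, s in x.items()) for w in V]

    # Approximate likelihood of the outcome under each state,
    # here a Gaussian error model (an illustrative choice).
    like = [math.exp(-(r - p) ** 2 / (2 * sigma ** 2)) for p in preds]

    # Approximate Bayesian rule: posterior belief ∝ prior belief × likelihood.
    post = [bi * li for bi, li in zip(b, like)]
    z = sum(post) or 1.0
    b = [p / z for p in post]

    # Rescorla-Wagner rule: each state's associations move in proportion
    # to its prediction error, scaled by the belief that it is active.
    for k, w in enumerate(V):
        delta = r - preds[k]  # prediction error for state k
        for c, s in x.items():
            w[c] = w.get(c, 0.0) + alpha * b[k] * delta * s
    return V, b
```

Under this sketch, rapid return of fear after extinction falls out naturally: extinction trials shift belief toward a second latent state rather than erasing the first state's cue-fear association, so a context change that restores belief in the original state restores its intact association.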
Full text: 1
Database: MEDLINE
Main subject: Computational Biology / Learning / Models, Psychological
Study type: Prognostic studies
Limit: Humans
Language: En
Journal: PLoS Comput Biol
Journal subject: Biology / Medical Informatics
Year: 2019
Document type: Article
Affiliation country: United States