Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics.
Kwon, Minhae; Daptardar, Saurabh; Schrater, Paul; Pitkow, Xaq.
Affiliation
  • Kwon M; School of Electronic Engineering, Soongsil University, Seoul, Republic of Korea.
  • Daptardar S; Google Inc., Mountain View, CA, USA.
  • Schrater P; Department of Computer Science, University of Minnesota, Minneapolis, MN, USA.
  • Pitkow X; Electrical and Computer Engineering, Rice University, Houston, TX, USA.
Adv Neural Inf Process Syst; 33: 7898-7909, 2020 Dec.
Article in En | MEDLINE | ID: mdl-34712038
ABSTRACT
A fundamental question in neuroscience is how the brain creates an internal model of the world to guide actions using sequences of ambiguous sensory information. This is naturally formulated as a reinforcement learning problem under partial observations, where an agent must estimate relevant latent variables in the world from its evidence, anticipate possible future states, and choose actions that optimize total expected reward. This problem can be solved by control theory, which allows us to find the optimal actions for a given system dynamics and objective function. However, animals often appear to behave suboptimally. Why? We hypothesize that animals have their own flawed internal model of the world, and choose actions with the highest expected subjective reward according to that flawed model. We describe this behavior as rational but not optimal. The problem of Inverse Rational Control (IRC) aims to identify which internal model would best explain an agent's actions. Our contribution here generalizes past work on Inverse Rational Control which solved this problem for discrete control in partially observable Markov decision processes. Here we accommodate continuous nonlinear dynamics and continuous actions, and impute sensory observations corrupted by unknown noise that is private to the animal. We first build an optimal Bayesian agent that learns an optimal policy generalized over the entire model space of dynamics and subjective rewards using deep reinforcement learning. Crucially, this allows us to compute a likelihood over models for experimentally observable action trajectories acquired from a suboptimal agent. We then find the model parameters that maximize the likelihood using gradient ascent. Our method successfully recovers the true model of rational agents. This approach provides a foundation for interpreting the behavioral and neural dynamics of animal brains during complex tasks.
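To make the procedure in the last few sentences concrete, the sketch below illustrates the outer loop of Inverse Rational Control under simplifying assumptions: a policy network pre-trained over the space of internal-model parameters theta is held fixed, and theta is fit by gradient ascent on the log-likelihood of observed action trajectories (equivalently, by minimizing the negative log-likelihood). All names here (GaussianPolicy, fit_internal_model, the synthetic trajectories) are hypothetical placeholders for illustration and do not reproduce the authors' implementation.

# Hedged sketch of the IRC outer loop: gradient ascent on the likelihood of
# observed actions with respect to the agent's internal-model parameters theta.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Stand-in for a policy pi(a | s, theta) trained over the model space."""
    def __init__(self, state_dim, theta_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + theta_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def log_prob(self, states, theta, actions):
        # Condition the action distribution on both the state and theta.
        theta_rep = theta.expand(states.shape[0], -1)
        mean = self.net(torch.cat([states, theta_rep], dim=-1))
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        return dist.log_prob(actions).sum(-1)

def fit_internal_model(policy, trajectories, theta_dim, steps=500, lr=1e-2):
    """Maximize sum_t log pi(a_t | s_t, theta) over theta by gradient ascent."""
    for p in policy.parameters():          # the pre-trained policy stays fixed
        p.requires_grad_(False)
    theta = torch.zeros(theta_dim, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nll = 0.0
        for states, actions in trajectories:  # one (T, dim) pair per trial
            nll = nll - policy.log_prob(states, theta, actions).sum()
        nll.backward()                         # minimizing NLL = ascending the likelihood
        opt.step()
    return theta.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = GaussianPolicy(state_dim=4, theta_dim=2, action_dim=2)
    # Fake "observed" trajectories standing in for recorded animal behavior.
    trajectories = [(torch.randn(50, 4), torch.randn(50, 2)) for _ in range(3)]
    theta_hat = fit_internal_model(policy, trajectories, theta_dim=2)
    print("recovered internal-model parameters:", theta_hat)

In the paper the policy is obtained by deep reinforcement learning over the joint space of dynamics and subjective rewards, and the likelihood also marginalizes over latent beliefs and private sensory noise; the sketch collapses those steps into a fixed conditional policy purely to show the shape of the likelihood-maximization loop.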

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Year of publication: 2020 Document type: Article