Results 1 - 20 of 381
2.
Nature ; 634(8035): 763-764, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39394347
4.
Nature ; 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36944775
8.
Nature ; 617(7960): 242-243, 2023 May.
Article in English | MEDLINE | ID: mdl-37165245
9.
Nature ; 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-36216951
12.
Nature ; 2021 Jul 23.
Article in English | MEDLINE | ID: mdl-34302154
13.
Nature ; 595(7868): 479-480, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34262196
14.
Nature ; 599(7885): 362-366, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34789909
15.
Nature ; 2021 Oct 13.
Article in English | MEDLINE | ID: mdl-34646027
17.
18.
Nature ; 2021 Nov 29.
Article in English | MEDLINE | ID: mdl-34845382
19.
Neural Comput ; 33(3): 674-712, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33400903

ABSTRACT

Active inference is a first-principles account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also addressed in reinforcement learning, but little work compares the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors of active inference agents that must typically be engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI Gym baseline. We begin with a condensed overview of the active inference literature, viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that by operating in a purely belief-based setting, active inference agents can carry out epistemic exploration, accounting for uncertainty about their environment, in a Bayes-optimal fashion. Furthermore, active inference removes reinforcement learning's reliance on an explicit reward signal: reward can simply be treated as another observation over which the agent has a preference, and even in the total absence of rewards, behaviors can be learned through preference learning. We make these properties explicit in two scenarios: active inference agents inferring behaviors in reward-free environments, compared against both Q-learning and Bayesian model-based reinforcement learning agents; and agents that place zero prior preferences over rewards and instead learn prior preferences over the observations corresponding to reward. We conclude by noting that this formalism extends to more complex settings (e.g., robotic arm movement, Atari games) provided that appropriate generative models can be formulated.
In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and discrete-time formulation, and we demonstrate these behaviors in an OpenAI Gym environment alongside reinforcement learning agents.
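The discrete-state formulation described in this abstract typically scores candidate actions by their expected free energy, which combines a pragmatic (preference-seeking) term with an epistemic (information-seeking) term; this is what lets the agent explore without an explicit reward signal. A minimal NumPy sketch of a one-step version, with hypothetical names `A` (likelihood matrix P(o|s)), `B_u` (transition matrix for action u), and `log_C` (log prior preferences over observations), might look like:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expected_free_energy(qs, A, B_u, log_C):
    """One-step expected free energy G for taking action u.

    qs    : posterior over hidden states, shape (S,)
    A     : likelihood P(o|s), shape (O, S), columns sum to 1
    B_u   : transition P(s'|s, u), shape (S, S), columns sum to 1
    log_C : log prior preferences over observations, shape (O,)
    """
    qs_next = B_u @ qs              # predicted state distribution
    qo = A @ qs_next                # predicted observation distribution
    # Pragmatic value: expected log preference over predicted observations
    pragmatic = qo @ log_C
    # Epistemic value: expected information gain about hidden states,
    # i.e. predicted observation entropy minus expected ambiguity
    H_A = -(A * np.log(A + 1e-16)).sum(axis=0)   # ambiguity H[P(o|s)] per state
    epistemic = -(qo * np.log(qo + 1e-16)).sum() - qs_next @ H_A
    return -(pragmatic + epistemic)  # lower G = better action

# Action selection: a softmax over negative expected free energies
# yields a posterior over actions favoring low-G (preferred/informative) moves.
```

Lower expected free energy marks actions that are both likely to yield preferred observations and likely to reduce uncertainty; treating reward as just another observation with a high prior preference in `log_C` is what replaces the explicit reward signal of reinforcement learning in this sketch.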

20.
Nature ; 588(7838): 380, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33273711