Evolutionary reinforcement learning of dynamical large deviations.
Whitelam, Stephen; Jacobson, Daniel; Tamblyn, Isaac.
Affiliations
  • Whitelam S; Molecular Foundry, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, USA.
  • Jacobson D; Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125, USA.
  • Tamblyn I; National Research Council of Canada, Ottawa, Ontario K1N 5A2, Canada.
J Chem Phys; 153(4): 044113, 2020 Jul 28.
Article in En | MEDLINE | ID: mdl-32752661
We show how to bound and calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning. An agent, a stochastic model, propagates a continuous-time Monte Carlo trajectory and receives a reward conditioned upon the values of certain path-extensive quantities. Evolution produces progressively fitter agents, potentially allowing the calculation of a piece of a large-deviation rate function for a particular model and path-extensive quantity. For models with small state spaces, the evolutionary process acts directly on rates, and for models with large state spaces, the process acts on the weights of a neural network that parameterizes the model's rates. This approach shows how path-extensive physics problems can be considered within a framework widely used in machine learning.
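The abstract describes an evolutionary loop: an agent (a stochastic model) generates a continuous-time Monte Carlo trajectory, is rewarded according to a path-extensive observable, and mutants that earn higher rewards replace their parents. A minimal sketch of that idea, not the authors' implementation, is given below for an assumed two-state model whose path-extensive quantity is the dynamical activity (jumps per unit time); the target value, mutation scale, and trajectory length are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory_activity(rates, t_max=100.0):
    """Continuous-time Monte Carlo (Gillespie) trajectory of a two-state model.

    rates[i] is the escape rate out of state i; returns jumps per unit time."""
    state, t, jumps = 0, 0.0, 0
    while t < t_max:
        t += rng.exponential(1.0 / rates[state])  # waiting time in current state
        state = 1 - state                         # jump to the other state
        jumps += 1
    return jumps / t_max

def reward(rates, a_target=2.0):
    """Reward is larger when the time-averaged activity is close to a_target.

    This is a noisy (single-trajectory) estimate, as in a hypothetical setup."""
    return -abs(trajectory_activity(rates) - a_target)

# Evolutionary loop acting directly on the rates (small state space):
# mutate, evaluate, and keep the fitter agent.
rates = np.array([1.0, 1.0])
best = reward(rates)
for generation in range(500):
    mutant = rates * np.exp(0.1 * rng.standard_normal(2))  # multiplicative mutation
    r = reward(mutant)
    if r > best:
        rates, best = mutant, r

print("evolved rates:", rates, "reward:", best)
```

For larger state spaces, the same loop would mutate the weights of a neural network that outputs the rates, rather than the rates themselves.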

Full text: 1 Database: MEDLINE Language: En Year of publication: 2020 Document type: Article