Designing optimal behavioral experiments using machine learning.
Valentin, Simon; Kleinegesse, Steven; Bramley, Neil R; Seriès, Peggy; Gutmann, Michael U; Lucas, Christopher G.
Affiliation
  • Valentin S; School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
  • Kleinegesse S; School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
  • Bramley NR; Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom.
  • Seriès P; School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
  • Gutmann MU; School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
  • Lucas CG; School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
eLife; 13, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38261382
ABSTRACT
Computational models are powerful tools for understanding human cognition and behavior. They let us express our theories clearly and precisely and offer predictions that can be subtle and often counter-intuitive. However, this same richness and ability to surprise mean that our scientific intuitions and traditional tools are ill-suited to designing experiments to test and compare these models. To avoid these pitfalls and realize the full potential of computational modeling, we require tools to design experiments that provide clear answers about which models explain human behavior and the auxiliary assumptions those models must make. Bayesian optimal experimental design (BOED) formalizes the search for optimal experimental designs by identifying experiments that are expected to yield informative data. In this work, we provide a tutorial on leveraging recent advances in BOED and machine learning to find optimal experiments for any kind of model from which we can simulate data, and show how by-products of this procedure allow for quick and straightforward evaluation of models and their parameters against real experimental data. As a case study, we consider theories of how people balance exploration and exploitation in multi-armed bandit decision-making tasks. We validate the presented approach using simulations and a real-world experiment. Compared with experimental designs commonly used in the literature, our optimal designs more efficiently determine which of a set of models best accounts for individual human behavior, and more efficiently characterize behavior given a preferred model. At the same time, formalizing a scientific question so that it can be adequately addressed with BOED can be challenging, and we discuss several potential caveats and pitfalls that practitioners should be aware of. We provide code to replicate all analyses, as well as tutorial notebooks and pointers for adapting the methodology to different experimental settings.
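To make the core idea concrete, below is a minimal, hypothetical sketch of how expected information gain can be estimated for candidate bandit designs. It uses a simple softmax agent, a gamma prior over its inverse-temperature parameter, and a generic nested Monte Carlo estimator of the mutual information between parameter and data; all function names, the agent model, and the prior are illustrative assumptions, not the specific machine-learning estimator developed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, reward_probs, n_trials=20):
    """Simulate one session of a hypothetical softmax bandit agent.
    The design is the vector of per-arm reward probabilities."""
    n_arms = len(reward_probs)
    values = np.zeros(n_arms)          # running mean reward per arm
    counts = np.zeros(n_arms)
    choices = np.empty(n_trials, dtype=int)
    rewards = np.empty(n_trials)
    for t in range(n_trials):
        logits = theta * values
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        arm = rng.choice(n_arms, p=probs)
        r = float(rng.random() < reward_probs[arm])
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
        choices[t], rewards[t] = arm, r
    return choices, rewards

def log_likelihood(choices, rewards, theta, reward_probs):
    """Log p(choices, rewards | theta, design) under the same agent model."""
    n_arms = len(reward_probs)
    values = np.zeros(n_arms)
    counts = np.zeros(n_arms)
    ll = 0.0
    for arm, r in zip(choices, rewards):
        logits = theta * values
        log_z = np.log(np.exp(logits - logits.max()).sum()) + logits.max()
        ll += logits[arm] - log_z                          # choice term
        ll += np.log(reward_probs[arm] if r else 1.0 - reward_probs[arm])
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
    return ll

def expected_information_gain(reward_probs, n_outer=200, n_inner=200):
    """Nested Monte Carlo estimate of the mutual information between
    the agent's parameter and the data generated under this design."""
    outer_thetas = rng.gamma(2.0, 2.0, size=n_outer)      # assumed prior
    inner_thetas = rng.gamma(2.0, 2.0, size=n_inner)
    eig = 0.0
    for th in outer_thetas:
        c, r = simulate(th, reward_probs)
        ll = log_likelihood(c, r, th, reward_probs)
        marg = [log_likelihood(c, r, th2, reward_probs) for th2 in inner_thetas]
        eig += ll - (np.logaddexp.reduce(marg) - np.log(n_inner))
    return eig / n_outer

# Score two candidate designs; the one with higher expected
# information gain is preferred for estimating the agent's parameter.
for design in ([0.5, 0.5], [0.2, 0.8]):
    print(design, expected_information_gain(np.array(design)))
```

This nested Monte Carlo estimator is only tractable because the toy agent has an explicit likelihood; the article's contribution is precisely to handle simulator models without this requirement, using machine-learning-based mutual information estimation instead.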
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Cognition / Machine Learning Study type: Prognostic_studies Limit: Humans Language: English Journal: eLife Publication year: 2024 Document type: Article Country of affiliation: United Kingdom