Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling.
Ghosh, Susobhan; Kim, Raphael; Chhabria, Prasidh; Dwivedi, Raaz; Klasnja, Predrag; Liao, Peng; Zhang, Kelly; Murphy, Susan.
Affiliation
  • Ghosh S; Department of Computer Science, Harvard University.
  • Kim R; Department of Biostatistics, Harvard University.
  • Chhabria P; Work done by these authors while they were at Harvard University.
  • Dwivedi R; Department of Computer Science, Harvard University.
  • Klasnja P; School of Information, University of Michigan.
  • Liao P; Department of Electrical Engineering and Computer Science, MIT.
  • Zhang K; Department of Statistics, Harvard University.
  • Murphy S; Departments of Statistics and Computer Science, Harvard University.
Mach Learn; 113(7): 3961-3997, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39221170
ABSTRACT
There is a growing interest in using reinforcement learning (RL) to personalize sequences of treatments in digital health to support users in adopting healthier behaviors. Such sequential decision-making problems involve decisions about when to treat and how to treat based on the user's context (e.g., prior activity level, location, etc.). Online RL is a promising data-driven approach for this problem as it learns based on each user's historical responses and uses that knowledge to personalize these decisions. However, to decide whether the RL algorithm should be included in an "optimized" intervention for real-world deployment, we must assess the data evidence indicating that the RL algorithm is actually personalizing the treatments to its users. Due to the stochasticity in the RL algorithm, one may get a false impression that it is learning in certain states and using this learning to provide specific treatments. We use a working definition of personalization and introduce a resampling-based methodology for investigating whether the personalization exhibited by the RL algorithm is an artifact of the RL algorithm's stochasticity. We illustrate our methodology with a case study analyzing data from HeartSteps, a physical activity clinical trial that included the use of an online RL algorithm. We demonstrate how our approach enhances data-driven truth-in-advertising of algorithm personalization both across all users and within specific users in the study.
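The abstract describes the general shape of a resampling-based test. The following Python sketch illustrates that shape under simplifying assumptions; all names (personalization_stat, resampling_pvalue, the synthetic log) are hypothetical and this is not the authors' actual procedure. In particular, it holds the logged action probabilities fixed and only redraws the algorithm's randomization, whereas the paper's methodology resamples the behavior of the RL algorithm itself.

import numpy as np

rng = np.random.default_rng(0)

def personalization_stat(actions, contexts):
    # Toy personalization metric: gap in treatment rates between two
    # context groups (e.g., user was recently active vs. not).
    g1 = actions[contexts == 1]
    g0 = actions[contexts == 0]
    return abs(g1.mean() - g0.mean())

def resampling_pvalue(actions, contexts, probs, n_resamples=1000):
    # Compare the observed statistic against its distribution when the
    # treatment decisions are redrawn from the algorithm's own logged
    # randomization probabilities, i.e., a null world where any apparent
    # personalization is purely an artifact of algorithm stochasticity.
    observed = personalization_stat(actions, contexts)
    resampled = np.empty(n_resamples)
    for b in range(n_resamples):
        fake_actions = rng.binomial(1, probs)  # redraw the coin flips
        resampled[b] = personalization_stat(fake_actions, contexts)
    return (1 + np.sum(resampled >= observed)) / (1 + n_resamples)

# Illustrative synthetic log: a policy whose treatment probability does
# not depend on context, so observed "personalization" is chance alone.
T = 500
contexts = rng.integers(0, 2, size=T)
probs = np.full(T, 0.4)
actions = rng.binomial(1, probs)
print(resampling_pvalue(actions, contexts, probs))

In this sketch, a small p-value would indicate that the context-dependence in the logged actions is unlikely to be explained by the algorithm's stochasticity alone.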
Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Mach Learn Year: 2024 Document type: Article Country of publication: United States of America