A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization.
Otalvaro, J D; Yamada, W M; Hernandez, A M; Zuluaga, A F; Chen, R; Neely, M N.
Affiliation
  • Otalvaro JD; Laboratory of Applied Pharmacokinetics and Bioinformatics, Department of Infectious Diseases, Children's Hospital Los Angeles, Los Angeles, CA, USA.
  • Yamada WM; Bioinstrumentation and Clinical Engineering Research Group, Engineering Department, University of Antioquia, Medellín, Colombia.
  • Hernandez AM; Laboratory of Integrated and Specialized Medicine, Medical School, University of Antioquia, Medellín, Colombia.
  • Zuluaga AF; Laboratory of Applied Pharmacokinetics and Bioinformatics, Department of Infectious Diseases, Children's Hospital Los Angeles, Los Angeles, CA, USA.
  • Chen R; Bioinstrumentation and Clinical Engineering Research Group, Engineering Department, University of Antioquia, Medellín, Colombia.
  • Neely MN; Laboratory of Integrated and Specialized Medicine, Medical School, University of Antioquia, Medellín, Colombia.
J Pharmacokinet Pharmacodyn; 50(1): 33-43, 2023 Feb.
Article in En | MEDLINE | ID: mdl-36478350
The building of population pharmacokinetic models can be described as an iterative process: given a model and a dataset, the pharmacometrician introduces changes to the model specification, evaluates the result, and, based on the predictions obtained, performs further optimization. This cycle (perform an action, observe a result, update your knowledge) is a natural fit for reinforcement learning (RL) algorithms. In this paper we present the conceptual background and an implementation of one such algorithm, aiming to show pharmacometricians how to automate (to a certain extent) the iterative model-building process.

We present the selected discretization of the action and state spaces. SARSA (State-Action-Reward-State-Action) was chosen as the RL algorithm, configured with a window of 1000 episodes and a limit of 30 actions per episode. SARSA was configured to control an interface to the Non-Parametric Optimal Design algorithm, which performed the actual parameter optimization.

The RL-based agent obtained the same likelihood and number of support points, with a distribution similar to that reported in the original paper. The total time needed to train the agent was 5.5 h, although we believe this can be further improved. It is possible to automatically find the structural model that maximizes the final likelihood for a specific pharmacokinetic dataset by using an RL algorithm. The framework provided could allow the integration of additional actions, e.g., adding or removing covariates, non-linear compartments, or the execution of secondary analyses. Several limitations were identified while performing this study, and we hope to address them in future work.
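To make the setup described in the abstract concrete, the following is a minimal tabular SARSA sketch of the loop it outlines: an agent takes a structural-model-editing action, the environment runs the parameter fit and returns a reward, and the Q-table is updated on-policy. The episode count (1000) and per-episode action limit (30) are taken from the abstract; everything else (the action names, the hyperparameters, and the `pk_environment` stub standing in for the Non-Parametric Optimal Design fit) is an illustrative assumption, not the authors' implementation.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning-rate, discount, exploration
N_EPISODES, MAX_ACTIONS = 1000, 30      # window and action limit reported in the abstract

# Hypothetical discretized action space over structural-model edits.
ACTIONS = ["add_compartment", "remove_compartment", "add_elimination_path", "stop"]

Q = defaultdict(float)  # tabular Q-values over (state, action) pairs

def choose_action(state):
    """Epsilon-greedy policy over the discretized action space."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def pk_environment(state, action):
    """Placeholder for the real environment: apply `action` to the model
    specification, run the non-parametric fit, and return (next_state,
    reward). In the paper the reward would derive from the change in
    log-likelihood; here it is stubbed with random values."""
    return hash((state, action)) % 100, random.gauss(0.0, 1.0)

for episode in range(N_EPISODES):
    state = 0                        # start each episode from the simplest model
    action = choose_action(state)
    for _ in range(MAX_ACTIONS):     # at most 30 actions per episode
        next_state, reward = pk_environment(state, action)
        next_action = choose_action(next_state)
        # SARSA update: on-policy, bootstraps from the action actually taken next
        Q[(state, action)] += ALPHA * (
            reward + GAMMA * Q[(next_state, next_action)] - Q[(state, action)]
        )
        state, action = next_state, next_action
        if action == "stop":
            break
```

Because SARSA is on-policy, the update uses the action the exploratory policy actually selects next, which tends to favor conservative model edits while exploration is still active.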

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Reinforcement, Psychology / Algorithms Study type: Prognostic_studies Language: En Journal: J Pharmacokinet Pharmacodyn Journal subject: PHARMACOLOGY Publication year: 2023 Document type: Article Country of affiliation: United States