Deep Q-learning for the selection of optimal isocratic scouting runs in liquid chromatography.
Kensert, Alexander; Collaerts, Gilles; Efthymiadis, Kyriakos; Desmet, Gert; Cabooter, Deirdre.
Affiliation
  • Kensert A; University of Leuven (KU Leuven), Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium.
  • Collaerts G; University of Leuven (KU Leuven), Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium.
  • Efthymiadis K; Vrije Universiteit Brussel, Department of Computer Science, Artificial Intelligence Lab, Pleinlaan 9, 1050 Brussel, Belgium.
  • Desmet G; Vrije Universiteit Brussel, Department of Chemical Engineering, Pleinlaan 2, 1050 Brussel, Belgium.
  • Cabooter D; University of Leuven (KU Leuven), Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium. Electronic address: deirdre.cabooter@kuleuven.be.
J Chromatogr A; 1638: 461900, 2021 Feb 08.
Article in English | MEDLINE | ID: mdl-33485027
An important challenge in chromatography is the development of adequate separation methods for complex mixtures. Accurate retention models can significantly simplify and expedite this process. The purpose of this study was to introduce reinforcement learning to chromatographic method development, by training a double deep Q-learning algorithm to select optimal isocratic scouting runs for generating accurate retention models. The retention data from these scouting runs were fitted to the Neue-Kuss retention model, which was then used to predict retention factors under both isocratic and gradient conditions. The quality of these predictions was assessed against experimental data points by computing a mean relative percentage error (MRPE) between the predicted and actual retention factors. By providing the reinforcement learning algorithm with a reward whenever the scouting runs led to accurate retention models, and a penalty when the analysis time of a selected scouting run was too high (> 1 h), it was hypothesized that the algorithm would, over time, learn to select good scouting runs for compounds displaying a variety of characteristics. The reinforcement learning algorithm developed in this work was first trained on simulated data and then evaluated on experimental data for 57 small molecules, each run at 10 different fractions of organic modifier (0.05 to 0.90) and four different linear gradients. The results showed that the MRPE of these retention models (3.77% for isocratic runs and 1.93% for gradient runs), mostly obtained via 3 isocratic scouting runs per compound, was comparable to that of retention models obtained by fitting the Neue-Kuss model to all 10 available isocratic data points (3.26% for isocratic runs and 4.97% for gradient runs) and of retention models obtained via a "chromatographer's selection" of three scouting runs (3.86% for isocratic runs and 6.66% for gradient runs). It was therefore concluded that the reinforcement learning algorithm learned to select optimal scouting runs for retention modeling: the 3 (out of 10) isocratic scouting runs it selected per compound were informative enough to successfully capture the retention behavior of each compound.
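
To make the modeling pipeline in the abstract concrete, below is a minimal Python sketch (not the authors' code) of its two core quantities: fitting the Neue-Kuss retention model, here taken in its common log form ln k = ln k0 + 2 ln(1 + S2*phi) - S1*phi / (1 + S2*phi), to a small set of isocratic scouting runs, and scoring the fitted model with the mean relative percentage error (MRPE). The scouting-run data, held-out test data, tolerance values, and the reward shaping at the end are hypothetical illustrations, not values from the study.

# Minimal sketch, assuming the common log-form parameterization of the
# Neue-Kuss retention model. All data values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def neue_kuss_lnk(phi, ln_k0, s1, s2):
    """Neue-Kuss model: ln of retention factor vs. organic modifier fraction."""
    return ln_k0 + 2.0 * np.log1p(s2 * phi) - s1 * phi / (1.0 + s2 * phi)

def mrpe(k_pred, k_obs):
    """Mean relative percentage error between predicted and observed retention factors."""
    return 100.0 * np.mean(np.abs(k_pred - k_obs) / k_obs)

def reward(mrpe_value, analysis_time_h, time_limit_h=1.0, mrpe_tol=5.0):
    """Hypothetical shaping of the reward scheme described in the abstract:
    positive when the fitted model is accurate, negative when the selected
    scouting run's analysis time exceeds the 1 h limit."""
    if analysis_time_h > time_limit_h:
        return -1.0
    return 1.0 if mrpe_value <= mrpe_tol else 0.0

# Hypothetical scouting runs: three (phi, k) measurements selected by the agent.
phi_scout = np.array([0.20, 0.40, 0.60])
k_scout = np.array([25.0, 4.5, 1.2])

# Fit the three model parameters to the scouting runs (fit in log space).
params, _ = curve_fit(neue_kuss_lnk, phi_scout, np.log(k_scout),
                      p0=[4.0, 15.0, 2.0], bounds=(0.0, np.inf))

# Predict retention factors over the composition range and score against
# (hypothetical) held-out isocratic measurements.
phi_test = np.array([0.05, 0.10, 0.30, 0.50, 0.70, 0.90])
k_test = np.array([80.0, 55.0, 10.0, 2.3, 0.7, 0.25])
k_pred = np.exp(neue_kuss_lnk(phi_test, *params))
print(f"MRPE on held-out isocratic runs: {mrpe(k_pred, k_test):.2f}%")

In the study itself, a double deep Q-learning agent chooses which isocratic runs to acquire; the sketch above only illustrates how each candidate set of runs would be turned into a fitted model, an MRPE score, and a scalar reward.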

Full text: 1 Database: MEDLINE Main subject: Liquid Chromatography Study type: Prognostic_studies Language: English Journal: J Chromatogr A Year: 2021 Document type: Article Country of affiliation: Belgium