A perspective on the use of deep deterministic policy gradient reinforcement learning for retention time modeling in reversed-phase liquid chromatography.
Kensert, Alexander; Desmet, Gert; Cabooter, Deirdre.
Affiliation
  • Kensert A; University of Leuven (KU Leuven), Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium; Vrije Universiteit Brussel, Department of Chemical Engineering, Pleinlaan 2, 1050 Brussel, Belgium.
  • Desmet G; Vrije Universiteit Brussel, Department of Chemical Engineering, Pleinlaan 2, 1050 Brussel, Belgium.
  • Cabooter D; University of Leuven (KU Leuven), Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium. Electronic address: deirdre.cabooter@kuleuven.be.
J Chromatogr A ; 1713: 464570, 2024 Jan 04.
Article in En | MEDLINE | ID: mdl-38101304
ABSTRACT
Artificial intelligence and machine learning techniques are increasingly used for different tasks related to method development in liquid chromatography. In this study, the possibilities of a reinforcement learning algorithm, more specifically a deep deterministic policy gradient algorithm, are evaluated for the selection of scouting runs for retention time modeling. As a theoretical exercise, it is investigated whether such an algorithm can be trained to select scouting runs for any compound of interest, allowing its correct retention parameters for the three-parameter Neue-Kuss retention model to be retrieved. It is observed that three scouting runs are generally sufficient to retrieve the retention parameters with an accuracy (mean relative percentage error, MRPE) of 1 % or less. When given the opportunity to select additional scouting runs, the agent does not achieve a significantly improved accuracy. It is also observed that the agent tends to prefer isocratic scouting runs for retention time modeling, and is only motivated to select gradient scouting runs when penalized (strongly) for long analysis/gradient times. This seems to reinforce the general power and usefulness of isocratic scouting runs for retention time modeling. Finally, the best results (lowest MRPE) are obtained when the agent manages to retrieve retention time data for % ACN at elution of the compound under consideration that span the entire relevant range of ACN (5 % ACN to 95 % ACN) as well as possible, i.e., resulting in retention data at a low, intermediate and high % ACN. Based on the obtained results, we believe reinforcement learning holds great potential to automate and rationalize method development in liquid chromatography in the future.
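To illustrate the modeling task the agent is trained on, the following is a minimal sketch (not the authors' code) of fitting the three-parameter Neue-Kuss retention model, k(φ) = k0·(1 + a·φ)²·exp(−b·φ/(1 + a·φ)), to retention factors measured at three isocratic scouting runs spread over a low, intermediate and high % ACN, and computing the resulting MRPE. The "true" parameter values and the fraction levels are hypothetical, chosen only for demonstration.

```python
# Sketch, not the authors' implementation: fit the three-parameter
# Neue-Kuss retention model to synthetic isocratic scouting-run data
# and report the mean relative percentage error (MRPE) of the fit.
import numpy as np
from scipy.optimize import curve_fit

def neue_kuss(phi, k0, a, b):
    """Retention factor k as a function of organic-modifier fraction phi."""
    return k0 * (1.0 + a * phi) ** 2 * np.exp(-b * phi / (1.0 + a * phi))

# Hypothetical "true" parameters for a test compound.
true_params = (50.0, 1.5, 12.0)

# Three scouting runs spanning the relevant range: 5 %, 50 % and 95 % ACN.
phi = np.array([0.05, 0.50, 0.95])
k_obs = neue_kuss(phi, *true_params)

# Fit in log space so errors are balanced across the wide range of k values.
popt, _ = curve_fit(
    lambda p, k0, a, b: np.log(neue_kuss(p, k0, a, b)),
    phi, np.log(k_obs), p0=(10.0, 1.0, 5.0), maxfev=10000,
)

# MRPE of the fitted model against the observed retention factors.
mrpe = 100.0 * np.mean(np.abs(neue_kuss(phi, *popt) - k_obs) / k_obs)
print(popt, mrpe)
```

With three noise-free measurements and three parameters, the fit is exactly determined and the MRPE falls well below the 1 % threshold reported in the abstract; with experimental noise, the spread of the selected % ACN levels governs how well the parameters are recovered.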

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Reversed-Phase Chromatography Language: En Publication year: 2024 Document type: Article