Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy.
van Nuland, Merel; Erdogan, Abdullah; Açar, Cenkay; Contrucci, Ramon; Hilbrants, Sven; Maanach, Lamyae; Egberts, Toine; van der Linden, Paul D.
Affiliation
  • van Nuland M; Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands.
  • Erdogan A; Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands.
  • Açar C; Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands.
  • Contrucci R; Department of Clinical Pharmacy, Amphia Hospital, Breda, The Netherlands.
  • Hilbrants S; Department of Clinical Pharmacy, Leeuwarden Medical Center, Leeuwarden, The Netherlands.
  • Maanach L; Department of Clinical Pharmacy, Haga Hospital, The Hague, The Netherlands.
  • Egberts T; Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
  • van der Linden PD; Division of Pharmacoepidemiology and Clinical Pharmacology, Department of Pharmaceutical Sciences, Faculty of Science, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, The Netherlands.
J Clin Pharmacol ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38623909
ABSTRACT
ChatGPT is a language model trained on a large dataset that includes medical literature. Several studies have described its performance on medical exams. In this study, we examine its performance in answering factual knowledge questions regarding clinical pharmacy. Questions were obtained from a Dutch application that features multiple-choice questions to maintain a basic knowledge level for clinical pharmacists. In total, 264 clinical pharmacy-related questions were presented to ChatGPT, and responses were evaluated for accuracy, concordance, quality of the substantiation, and reproducibility. Accuracy was defined as the correctness of the answer, and results were compared with the overall score achieved by pharmacists in 2022. Responses were marked concordant if no contradictions were present. The quality of the substantiation was graded by two independent pharmacists using a 4-point scale. Reproducibility was established by presenting questions multiple times and on various days. ChatGPT yielded accurate responses for 79% of the questions, surpassing the pharmacists' accuracy of 66%. Concordance was 95%, and the quality of the substantiation was deemed good or excellent for 73% of the questions. Reproducibility was consistently high, both within and between days (>92%), as well as across different users. ChatGPT demonstrated higher accuracy and reproducibility than pharmacists on factual knowledge questions related to clinical pharmacy practice. Consequently, we posit that ChatGPT could serve as a valuable resource for pharmacists. We hope the technology will further improve, which may lead to enhanced future performance.
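The two quantitative metrics in the abstract (accuracy as the fraction of correct answers, and reproducibility as answer stability across repeated presentations of the same questions) can be sketched as follows. This is an illustrative sketch only; the question identifiers, data, and function names are hypothetical and not taken from the study.

```python
def accuracy(responses, answer_key):
    """Fraction of questions answered correctly against an answer key."""
    correct = sum(1 for q, ans in responses.items() if ans == answer_key[q])
    return correct / len(answer_key)

def reproducibility(runs):
    """Fraction of questions answered identically across repeated runs.

    `runs` is a list of dicts mapping question id -> chosen option,
    one dict per repeated presentation of the same question set.
    """
    questions = runs[0].keys()
    stable = 0
    for q in questions:
        answers = [run[q] for run in runs]
        # A question counts as reproducible if every run gave the same answer.
        if len(set(answers)) == 1:
            stable += 1
    return stable / len(questions)

# Toy data: four multiple-choice questions, an answer key, and two runs.
key = {"q1": "A", "q2": "C", "q3": "B", "q4": "D"}
run1 = {"q1": "A", "q2": "C", "q3": "B", "q4": "A"}
run2 = {"q1": "A", "q2": "C", "q3": "D", "q4": "A"}

print(accuracy(run1, key))            # 3 of 4 correct -> 0.75
print(reproducibility([run1, run2]))  # q3 differs between runs -> 0.75
```

In the study itself, within-day and between-day reproducibility would correspond to choosing `runs` collected on the same day versus on different days.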
Full text: 1 Database: MEDLINE Language: English Journal: J Clin Pharmacol Year: 2024 Document type: Article Country of affiliation: Netherlands