Large Language Models and the North American Pharmacist Licensure Examination (NAPLEX) Practice Questions.
Ehlert, Alexa; Ehlert, Benjamin; Cao, Binxin; Morbitzer, Kathryn.
Affiliation
  • Ehlert A; University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Division of Pharmaceutical Outcomes and Policy, Chapel Hill, NC, USA. Electronic address: lexyehlert@unc.edu.
  • Ehlert B; Stanford University School of Medicine, Department of Biomedical Data Science, Stanford, CA, USA.
  • Cao B; University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Division of Pharmaceutical Outcomes and Policy, Chapel Hill, NC, USA.
  • Morbitzer K; University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Chapel Hill, NC, USA; University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Center for Innovative Pharmacy Education and Research, Chapel Hill, NC, USA.
Am J Pharm Educ; 88(11): 101294, 2024 Nov.
Article in En | MEDLINE | ID: mdl-39307190
ABSTRACT

OBJECTIVE:

This study aims to test the accuracy of large language models (LLMs) in answering standardized pharmacy examination practice questions.

METHODS:

The performance of 3 LLMs (generative pretrained transformer [GPT]-3.5, GPT-4, and Chatsonic) was evaluated on 2 independent North American Pharmacist Licensure Examination practice question sets sourced from McGraw Hill and RxPrep. Questions were further classified into binary categories: adverse drug reaction (ADR) vs non-ADR, scenario vs non-scenario, treatment vs non-treatment, and select-all vs non-select-all. Python was used to run χ2 tests to compare accuracy across models and question types.
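
The abstract does not include the analysis code; a minimal sketch of such a χ2 comparison in Python, assuming scipy's chi2_contingency and purely hypothetical contingency counts (not the study's data), might look like this:

    # Sketch of a chi-square test comparing two models' accuracy.
    # Counts below are hypothetical placeholders for illustration only.
    from scipy.stats import chi2_contingency

    # 2x2 contingency table: rows = models, columns = (correct, incorrect)
    table = [
        [174, 26],   # hypothetical "model A": 174 correct, 26 incorrect
        [136, 64],   # hypothetical "model B": 136 correct, 64 incorrect
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")

The same 2x2 layout would apply to question-type comparisons (e.g., select-all vs non-select-all correct/incorrect counts for a given model).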

RESULTS:

Of the 3 LLMs tested, GPT-4 achieved the highest accuracy, with 87.0% accuracy on the McGraw Hill question set and 83.5% accuracy on the RxPrep question set. In comparison, GPT-3.5 had 68.0% and 60.0% accuracy on those question sets, respectively, and Chatsonic had 60.5% and 62.5%, respectively. All models performed worse on select-all questions than on non-select-all questions (GPT-3.5, 42.3% vs 66.2%; GPT-4, 73.1% vs 87.2%; Chatsonic, 36.5% vs 71.6%). GPT-4 had statistically higher accuracy on ADR questions (96.1%) than on non-ADR questions (83.9%).

CONCLUSION:

Our study found that GPT-4 outperformed GPT-3.5 and Chatsonic in answering North American Pharmacist Licensure Examination practice questions, particularly excelling on questions related to ADRs. These results suggest that advanced LLMs such as GPT-4 could be used for applications in pharmacy education.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Pharmacists / Education, Pharmacy / Educational Measurement / Licensure, Pharmacy Limit: Humans Country/Region as subject: North America Language: En Journal: Am J Pharm Educ Year of publication: 2024 Document type: Article