Pure Wisdom or Potemkin Villages? A Comparison of ChatGPT 3.5 and ChatGPT 4 on USMLE Step 3 Style Questions: Quantitative Analysis.
Knoedler, Leonard; Alfertshofer, Michael; Knoedler, Samuel; Hoch, Cosima C; Funk, Paul F; Cotofana, Sebastian; Maheta, Bhagvat; Frank, Konstantin; Brébant, Vanessa; Prantl, Lukas; Lamby, Philipp.
Affiliation
  • Knoedler L; Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany.
  • Alfertshofer M; Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians University Munich, Munich, Germany.
  • Knoedler S; Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany.
  • Hoch CC; Division of Plastic Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Funk PF; Department of Otolaryngology, Head and Neck Surgery, School of Medicine, Technical University of Munich, Munich, Germany.
  • Cotofana S; Department of Otolaryngology, Head and Neck Surgery, University Hospital Jena, Friedrich Schiller University Jena, Jena, Germany.
  • Maheta B; Department of Dermatology, Erasmus Hospital, Rotterdam, Netherlands.
  • Frank K; Centre for Cutaneous Research, Blizard Institute, Queen Mary University of London, London, United Kingdom.
  • Brébant V; College of Medicine, California Northstate University, Elk Grove, CA, United States.
  • Prantl L; Ocean Clinic, Marbella, Spain.
  • Lamby P; Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany.
JMIR Med Educ ; 10: e51148, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38180782
ABSTRACT

BACKGROUND:

The United States Medical Licensing Examination (USMLE) has been critical in medical education since 1992, testing various aspects of a medical student's knowledge and skills through different steps, based on their training level. Artificial intelligence (AI) tools, including chatbots like ChatGPT, are emerging technologies with potential applications in medicine. However, comprehensive studies analyzing ChatGPT's performance on USMLE Step 3 in large-scale scenarios and comparing different versions of ChatGPT are limited.

OBJECTIVE:

This paper aimed to analyze ChatGPT's performance on USMLE Step 3 practice test questions to better elucidate the strengths and weaknesses of AI use in medical education and deduce evidence-based strategies to counteract AI cheating.

METHODS:

A total of 2069 USMLE Step 3 practice questions were extracted from the AMBOSS study platform. After excluding 229 image-based questions, the remaining 1840 text-based questions were categorized and entered into ChatGPT 3.5, while a subset of 229 questions was also entered into ChatGPT 4. Responses were recorded, and the accuracy of ChatGPT answers, as well as its performance across test question categories and difficulty levels, was compared between both versions.

RESULTS:

Overall, ChatGPT 4 demonstrated statistically significantly superior performance to ChatGPT 3.5, with accuracies of 84.7% (194/229) and 56.9% (1047/1840), respectively. A noteworthy correlation was observed between the length of test questions and the performance of ChatGPT 3.5 (ρ=-0.069; P=.003), which was absent in ChatGPT 4 (P=.87). Additionally, the difficulty of test questions, as categorized by AMBOSS hammer ratings, showed a statistically significant correlation with performance for both ChatGPT versions, with ρ=-0.289 for ChatGPT 3.5 and ρ=-0.344 for ChatGPT 4. ChatGPT 4 surpassed ChatGPT 3.5 at all levels of test question difficulty, except for the 2 highest difficulty tiers (4 and 5 hammers), where statistical significance was not reached.
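The accuracy figures above follow directly from the reported correct/total counts. As an illustrative sketch (not part of the study's analysis code; the `accuracy` helper is hypothetical), the percentages can be reproduced as:

```python
def accuracy(correct: int, total: int) -> float:
    """Return percentage accuracy rounded to one decimal place."""
    return round(100 * correct / total, 1)

# ChatGPT 4: 194 of 229 questions answered correctly
acc_gpt4 = accuracy(194, 229)     # 84.7
# ChatGPT 3.5: 1047 of 1840 questions answered correctly
acc_gpt35 = accuracy(1047, 1840)  # 56.9

print(acc_gpt4, acc_gpt35)
```

The reported ρ values are Spearman rank correlations between question difficulty (or length) and correctness; with the per-question data, they could be computed with, for example, `scipy.stats.spearmanr`.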

CONCLUSIONS:

In this study, ChatGPT 4 demonstrated remarkable proficiency in taking the USMLE Step 3, with an accuracy rate of 84.7% (194/229), outshining ChatGPT 3.5 with an accuracy rate of 56.9% (1047/1840). Although ChatGPT 4 performed exceptionally, it encountered difficulties in questions requiring the application of theoretical concepts, particularly in cardiology and neurology. These insights are pivotal for the development of examination strategies that are resilient to AI and underline the promising role of AI in the realm of medical education and diagnostics.

Full text: 1 | Database: MEDLINE | Main subject: Cardiology / Medical Education / Medicine | Study type: Prognostic studies | Limits: Humans | Language: English | Journal: JMIR Med Educ | Year: 2024 | Document type: Article | Country of affiliation: Germany