Can ChatGPT generate practice question explanations for medical students, a new faculty teaching tool?
Tong, Lilin; Wang, Jennifer; Rapaka, Srikar; Garg, Priya S.
Affiliation
  • Tong L; Boston University Chobanian and Avedisian School of Medicine, Boston, MA, USA.
  • Wang J; Boston University Chobanian and Avedisian School of Medicine, Boston, MA, USA.
  • Rapaka S; Boston University Chobanian and Avedisian School of Medicine, Boston, MA, USA.
  • Garg PS; Medical Education Office and Department of Pediatrics, Boston University Chobanian and Avedisian School of Medicine, Boston, MA, USA.
Med Teach ; : 1-5, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38900675
ABSTRACT

INTRODUCTION:

Multiple-choice questions (MCQs) are frequently used for formative assessment in medical school but often lack sufficient answer explanations, given faculty time constraints. Chat Generative Pre-trained Transformer (ChatGPT) has emerged as a potential student learning aid and faculty teaching tool. This study aims to evaluate ChatGPT's performance in answering and providing explanations for MCQs.

METHOD:

Ninety-four faculty-generated MCQs were collected from the pre-clerkship curriculum at a US medical school. ChatGPT's accuracy in answering MCQs was tracked on first attempt without an answer prompt (Pass 1) and after being given a prompt for the correct answer (Pass 2). Explanations provided by ChatGPT were compared with faculty-generated explanations, and a 3-point scale was used to rate their accuracy and thoroughness against the faculty-generated answers.
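The two-pass protocol above can be sketched in code. This is a minimal illustration, not the authors' actual pipeline: `ask_model` is a hypothetical stand-in for a ChatGPT API call (stubbed here so the tallying logic runs), and the explanation grading on the 3-point scale would be done manually by reviewers afterwards.

```python
# Hedged sketch of the study's two-pass MCQ evaluation protocol.
from dataclasses import dataclass, field

@dataclass
class MCQ:
    stem: str
    options: dict            # e.g. {"A": "...", "B": "..."}
    correct: str             # key of the correct option

def ask_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real API client.
    Stubbed to always answer "A" so the example is runnable."""
    return "A"

def evaluate(questions: list) -> dict:
    """Pass 1: ask each question cold and record accuracy.
    Pass 2: for misses, reveal the correct answer and request an
    explanation, to be graded later on the study's 3-point scale."""
    results = {"pass1_correct": 0, "pass2_explanations": []}
    for q in questions:
        answer = ask_model(f"{q.stem}\n{q.options}\nAnswer with one letter.")
        if answer == q.correct:
            results["pass1_correct"] += 1
        else:
            explanation = ask_model(
                f"{q.stem}\nThe correct answer is {q.correct}. Explain why."
            )
            results["pass2_explanations"].append((q.stem, explanation))
    return results
```

With 94 questions, `pass1_correct / 94` would correspond to the first-attempt accuracy reported in the results.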

RESULTS:

On first attempt, ChatGPT answered 75% of faculty-generated MCQs correctly. Among correctly answered questions, 66.4% of ChatGPT's explanations matched faculty explanations, and 89.1% captured some key aspects without providing inaccurate information. The proportion of inaccurate explanations increased significantly when the question was not answered correctly on the first pass (2.7% if correct on first pass vs. 34.6% if incorrect on first pass, p < 0.001).

CONCLUSION:

ChatGPT shows promise in assisting faculty and students with explanations for practice MCQs but should be used with caution. Faculty should review the explanations and supplement them to ensure coverage of learning objectives. Students can benefit from ChatGPT's immediate feedback through explanations when it answers the question correctly on the first try. If the question is answered incorrectly, students should remain cautious of the explanation and seek clarification from instructors.
Full text: 1 Database: MEDLINE Language: English Year of publication: 2024 Document type: Article