ChatGPT Knowledge Evaluation in Basic and Clinical Medical Sciences: Multiple Choice Question Examination-Based Performance.
Meo, Sultan Ayoub; Al-Masri, Abeer A; Alotaibi, Metib; Meo, Muhammad Zain Sultan; Meo, Muhammad Omair Sultan.
Affiliation
  • Meo SA; Department of Physiology, College of Medicine, King Saud University, Riyadh 11461, Saudi Arabia.
  • Al-Masri AA; Department of Physiology, College of Medicine, King Saud University, Riyadh 11461, Saudi Arabia.
  • Alotaibi M; University Diabetes Unit, Department of Medicine, College of Medicine, King Saud University, Riyadh 11461, Saudi Arabia.
  • Meo MZS; College of Medicine, Alfaisal University, Riyadh 11533, Saudi Arabia.
  • Meo MOS; College of Medicine, Alfaisal University, Riyadh 11533, Saudi Arabia.
Healthcare (Basel); 11(14), 2023 Jul 17.
Article in En | MEDLINE | ID: mdl-37510487
ABSTRACT
The Chatbot Generative Pre-Trained Transformer (ChatGPT) has garnered great attention from the public, academics, and the scientific community. It responds with appropriate and articulate answers and explanations across various disciplines. Perspectives on the use of ChatGPT in education, research, and healthcare differ, and there is some ambiguity about its acceptability and ideal uses. However, the literature offers little evidence on the knowledge level of ChatGPT in the medical sciences. Therefore, the present study aimed to investigate the knowledge level of ChatGPT in both basic and clinical medical sciences through its performance on a multiple-choice question (MCQ) examination, and its implications for the medical examination system. A subject-wise question bank was first established from a pool of MCQs drawn from medical textbooks and university examination pools. The research team reviewed each MCQ to ensure that it was relevant to the subject content. Each question was scenario-based, with four sub-stems and a single correct answer. From this bank, 100 MCQs were randomly selected: 50 in basic medical sciences and 50 in clinical medical sciences. The MCQs were manually entered one by one, and a fresh ChatGPT session was started for each entry to avoid memory retention bias. ChatGPT's first response to each question was taken as its final answer. Each answer was scored against a pre-determined answer key on a scale of 0 to 1, with 0 representing an incorrect and 1 a correct answer. The results revealed that ChatGPT attempted all 100 MCQs and obtained 37/50 (74%) marks in basic medical sciences and 35/50 (70%) marks in clinical medical sciences, for an overall score of 72/100 (72%). It is concluded that ChatGPT obtained a satisfactory score in both basic and clinical medical sciences subjects and demonstrated a degree of understanding and explanation. These findings suggest that ChatGPT may be able to assist medical students and faculty in medical education settings, given its potential as an innovation within the framework of medical sciences and education.
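The scoring procedure in the abstract is a simple 0/1 mark against a fixed answer key, aggregated per subject and overall. The sketch below illustrates that bookkeeping only; the question IDs, subject labels, answer key, and recorded responses are hypothetical, and the study itself entered questions manually into the ChatGPT interface rather than using any script.

```python
# Hypothetical illustration of the 0/1 scoring and subject-wise aggregation
# described in the abstract. All question IDs, subjects, and responses below
# are invented for demonstration; they are not the study's data.
from collections import defaultdict

# Pre-determined answer key: question ID -> correct option
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}

# Subject of each question ("basic" or "clinical" medical sciences)
subjects = {"Q1": "basic", "Q2": "basic", "Q3": "clinical", "Q4": "clinical"}

# First response recorded for each question (taken as the final answer)
chatgpt_responses = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C"}

def score(responses, key, subject_map):
    """Score each response 0 or 1 against the key and tally per subject."""
    per_subject = defaultdict(lambda: {"correct": 0, "total": 0})
    for qid, correct_option in key.items():
        marks = 1 if responses.get(qid) == correct_option else 0
        per_subject[subject_map[qid]]["correct"] += marks
        per_subject[subject_map[qid]]["total"] += 1
    return per_subject

results = score(chatgpt_responses, answer_key, subjects)
for subject, tally in results.items():
    pct = 100 * tally["correct"] / tally["total"]
    print(f"{subject}: {tally['correct']}/{tally['total']} ({pct:.0f}%)")

overall_correct = sum(t["correct"] for t in results.values())
overall_total = sum(t["total"] for t in results.values())
print(f"overall: {overall_correct}/{overall_total} "
      f"({100 * overall_correct / overall_total:.0f}%)")
```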
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Healthcare (Basel) Year: 2023 Document type: Article Country of affiliation: Saudi Arabia