Probing artificial intelligence in neurosurgical training: ChatGPT takes a neurosurgical residents' written exam.
Bartoli, A; May, A T; Al-Awadhi, A; Schaller, K.
Affiliation
  • Bartoli A; Department of Clinical Neurosciences, Division of Neurosurgery, Geneva University Medical Center, Geneva, Switzerland.
  • May AT; Faculty of Medicine, University of Geneva, Switzerland.
  • Al-Awadhi A; Department of Clinical Neurosciences, Division of Neurosurgery, Geneva University Medical Center, Geneva, Switzerland.
  • Schaller K; Faculty of Medicine, University of Geneva, Switzerland.
Brain Spine; 4: 102715, 2024.
Article in En | MEDLINE | ID: mdl-38163001
ABSTRACT

Introduction:

Artificial intelligence tools are being introduced in almost every field of human life, including medical sciences and medical education, amid scepticism and enthusiasm. The research question was to assess how a generative language tool (Generative Pretrained Transformer 3.5, ChatGPT) performs at both generating questions for and answering a neurosurgical residents' written exam: namely, how ChatGPT generates questions, how it answers human-generated questions, how residents answer AI-generated questions, and how the AI answers its own self-generated questions.

Materials and methods:

50 questions were included in the written exam: 46 were generated by humans (senior staff members) and 4 by ChatGPT. Eleven participants took the exam (ChatGPT and 10 residents). Questions were both open-ended and multiple-choice. Eight questions were not submitted to ChatGPT because they contained images or schematic drawings to interpret.

Results:

Formulating requests to ChatGPT required an iterative process to refine both the questions and the answers. ChatGPT ranked among the lowest of all participants (9th of 11). There was no difference in residents' response rates between human-generated and AI-generated questions that could be attributed to lower clarity of the questions. ChatGPT answered all of its self-generated questions correctly.

Discussion and conclusions:

AI is a promising and powerful tool for medical education and for specific medical purposes, which remain to be determined. To have AI generate logical and sound questions, the request must be formulated as precisely as possible, framing the content, the type of question, and its correct answers.
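
For illustration only (not part of the original study), a minimal sketch of such a framed request, assuming the OpenAI Python client (openai >= 1.0), the gpt-3.5-turbo model, and a hypothetical exam topic:

# Illustrative sketch only; not from the study. Assumes the OpenAI Python
# client (openai >= 1.0) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Frame the request precisely, as the conclusions recommend: content domain,
# question type, and the required correct answer. The topic is hypothetical.
prompt = (
    "You are preparing a written exam for neurosurgical residents.\n"
    "Topic: management of acute subdural haematoma.\n"
    "Write ONE multiple-choice question with four options (A-D).\n"
    "Exactly one option must be correct; state which one and briefly justify it."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)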

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Year of publication: 2024 Document type: Article
