Could ChatGPT Pass the UK Radiology Fellowship Examinations?
Ariyaratne, Sisith; Jenko, Nathan; Mark Davies, A; Iyengar, Karthikeyan P; Botchu, Rajesh.
Affiliation
  • Ariyaratne S; Department of Musculoskeletal Radiology, The Royal Orthopaedic Hospital NHS Foundation Trust, Northfield, Birmingham, UK (S.A., N.J., A.M.D., R.B.). Electronic address: sisithariyaratne@gmail.com.
  • Jenko N; Department of Musculoskeletal Radiology, The Royal Orthopaedic Hospital NHS Foundation Trust, Northfield, Birmingham, UK (S.A., N.J., A.M.D., R.B.).
  • Mark Davies A; Department of Musculoskeletal Radiology, The Royal Orthopaedic Hospital NHS Foundation Trust, Northfield, Birmingham, UK (S.A., N.J., A.M.D., R.B.).
  • Iyengar KP; Department of Trauma & Orthopaedics, Southport & Ormskirk Hospitals, Mersey and West Lancashire Teaching NHS Trust, Southport, UK (K.P.I.).
  • Botchu R; Department of Musculoskeletal Radiology, The Royal Orthopaedic Hospital NHS Foundation Trust, Northfield, Birmingham, UK (S.A., N.J., A.M.D., R.B.).
Acad Radiol; 31(5): 2178-2182, 2024 May.
Article in English | MEDLINE | ID: mdl-38160089
ABSTRACT
RATIONALE AND OBJECTIVES:

Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence (AI) tool that utilises machine learning to generate original text resembling human language. AI models have recently demonstrated a remarkable ability to analyse and solve problems, including passing professional examinations. We investigated the performance of ChatGPT on questions equivalent to those of the UK radiology fellowship examinations.

METHODS:

ChatGPT was asked to answer questions from question banks resembling the Fellowship of the Royal College of Radiologists (FRCR) examination. The entire physics part 1 question bank (203 five-part true/false questions) was answered by the GPT-4 model and the answers recorded. A further 240 single best answer (SBA) questions (representing the true length of the FRCR 2A examination) were answered by both the GPT-3.5 and GPT-4 models.

RESULTS:

GPT-4 answered 74.8% of the part 1 true/false statements correctly. The spring 2023 pass mark for the part 1 examination was 75.5%, so GPT-4 narrowly failed. In the 2A examination, GPT-3.5 answered 50.8% of the SBAs correctly, while GPT-4 answered 74.2% correctly. The winter 2022 2A pass mark was 63.3%, so GPT-4 clearly passed.
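The pass/fail outcomes above follow directly from comparing each reported score against the published pass mark. A minimal sketch (not from the study itself; the dictionary names are illustrative) reproducing that comparison:

```python
# Published pass marks for the two examination sittings cited in the study.
pass_marks = {
    "FRCR Part 1 (spring 2023)": 75.5,
    "FRCR 2A (winter 2022)": 63.3,
}

# Percentage scores reported for each model on each examination.
scores = {
    ("GPT-4", "FRCR Part 1 (spring 2023)"): 74.8,
    ("GPT-3.5", "FRCR 2A (winter 2022)"): 50.8,
    ("GPT-4", "FRCR 2A (winter 2022)"): 74.2,
}

# A candidate passes when the score meets or exceeds the pass mark.
outcomes = {
    (model, exam): "pass" if score >= pass_marks[exam] else "fail"
    for (model, exam), score in scores.items()
}
```

Running this yields a fail for GPT-4 on part 1 (74.8 vs 75.5) and a pass on 2A (74.2 vs 63.3), matching the study's conclusions.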

CONCLUSION:

AI models such as ChatGPT are able to answer the majority of questions in an FRCR-style examination. It is reasonable to assume that further developments in AI will make such models even more capable of comprehending and solving questions related to medicine, and specifically clinical radiology.

ADVANCES IN KNOWLEDGE:

Our findings outline the unprecedented capabilities of AI, adding to the currently small body of literature on the subject, which in turn can play a role in medical training, evaluation and practice. This can undoubtedly have implications for radiology.

Full text: 1 Database: MEDLINE Main subject: Radiology / Artificial Intelligence / Educational Measurement / Fellowships Limits: Humans Country/Region as subject: Europe Language: En Journal: Acad Radiol Journal subject: RADIOLOGY Year: 2024 Document type: Article