Can ChatGPT pass the MRCP (UK) written examinations? Analysis of performance and errors using a clinical decision-reasoning framework.
Maitland, Amy; Fowkes, Ross; Maitland, Stuart.
Affiliation
  • Maitland A; Health Education England North East, Newcastle upon Tyne, UK.
  • Fowkes R; Health Education England North East, Newcastle upon Tyne, UK.
  • Maitland S; The Newcastle Upon Tyne NHS Hospitals Foundation Trust, Newcastle upon Tyne, UK stu.maitland@newcastle.ac.uk.
BMJ Open; 14(3): e080558, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38490655
ABSTRACT

OBJECTIVE:

Large language models (LLMs) such as ChatGPT are being developed for use in research, medical education and clinical decision systems. However, as their usage increases, LLMs face ongoing regulatory concerns. This study aims to analyse ChatGPT's performance on a postgraduate medical examination to identify areas of strength and weakness, which may provide further insight into the role of LLMs in healthcare.

DESIGN:

We evaluated the performance of ChatGPT 4 (24 May 2023 version) on official MRCP (Membership of the Royal College of Physicians) parts 1 and 2 written examination practice questions. Statistical analysis was performed in Python. Spearman rank correlation assessed the relationship between the probability of answering a question correctly and two variables: question difficulty and question length. Incorrectly answered questions were analysed further using a clinical reasoning framework to characterise the errors made.
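
As a rough illustration of the analysis described above (a minimal sketch, not the authors' code: the scipy/pandas usage and the per-question variables below are assumptions), the two Spearman rank correlations could be computed as follows:

# Minimal sketch of the Spearman analysis described above.
# The column names and values are hypothetical stand-ins for
# the study's per-question data.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "correct":    [1, 1, 0, 1, 0, 1, 1, 0],                   # 1 = answered correctly
    "difficulty": [0.9, 0.8, 0.4, 0.7, 0.3, 0.85, 0.6, 0.5],  # e.g. just-passing rate
    "length":     [420, 310, 780, 350, 910, 280, 500, 660],   # question length (characters)
})

# Correlate correctness with each candidate predictor.
for var in ("difficulty", "length"):
    rho, p = spearmanr(df["correct"], df[var])
    print(f"correct vs {var}: r={rho:.2f}, p={p:.4f}")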

SETTING:

Online, using the ChatGPT web interface.

PRIMARY AND SECONDARY OUTCOME MEASURES:

The primary outcome was the score (percentage of questions answered correctly) in the MRCP postgraduate written examinations. The secondary outcome was the qualitative categorisation of errors using a clinical decision-making framework.

RESULTS:

ChatGPT achieved accuracy rates of 86.3% (part 1) and 70.3% (part 2). Weak but significant correlations were found between ChatGPT's accuracy and both just-passing rates in part 2 (r=0.34, p=0.0001) and question length in part 1 (r=-0.19, p=0.008). Eight types of error were identified, with the most frequent being factual errors, context errors and omission errors.

CONCLUSION:

ChatGPT's performance greatly exceeded the passing mark in both examinations. Multiple-choice examinations provide a benchmark of LLM performance that is comparable to human demonstrations of knowledge, while also highlighting the errors LLMs make. Understanding the reasons behind ChatGPT's errors allows strategies to be developed to prevent them in medical devices that incorporate LLM technology.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Cholangiopancreatography, Magnetic Resonance / Clinical Reasoning Limits: Humans Country/Region as subject: Europe Language: English Journal: BMJ Open Year: 2024 Document type: Article
