Can large language models pass official high-grade exams of the European Society of Neuroradiology courses? A direct comparison between OpenAI chatGPT 3.5, OpenAI GPT4 and Google Bard.
D'Anna, Gennaro; Van Cauter, Sofie; Thurnher, Majda; Van Goethem, Johan; Haller, Sven.
Affiliation
  • D'Anna G; Neuroimaging Unit, ASST Ovest Milanese, Legnano, Milan, Italy. gennaro.danna@gmail.com.
  • Van Cauter S; Department of Medical Imaging, Ziekenhuis Oost-Limburg, Genk, Belgium.
  • Thurnher M; Department of Medicine and Life Sciences, Hasselt University, Hasselt, Belgium.
  • Van Goethem J; Department for Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria.
  • Haller S; Department of Medical and Molecular Imaging, VITAZ, Sint-Niklaas, Belgium.
Neuroradiology ; 66(8): 1245-1250, 2024 Aug.
Article in En | MEDLINE | ID: mdl-38705899
ABSTRACT
We compared three different LLMs, namely chatGPT 3.5, GPT4, and Google Bard, and tested whether their performance differs across subspecialty domains, using examinations from four courses of the European Society of Neuroradiology (ESNR): anatomy/embryology, neuro-oncology, head and neck, and pediatrics. Written ESNR exams served as input data: anatomy/embryology (30 questions), neuro-oncology (50 questions), head and neck (50 questions), and pediatrics (50 questions). All exams together, and each exam separately, were presented to the three LLMs chatGPT 3.5, GPT4, and Google Bard. Statistical analyses included a group-wise Friedman test followed by pair-wise Wilcoxon tests with correction for multiple comparisons. Overall, there was a significant difference between the three LLMs (p < 0.0001), with GPT4 achieving the highest accuracy (70%), followed by chatGPT 3.5 (54%) and Google Bard (36%). Pair-wise comparisons showed significant differences between chatGPT 3.5 vs GPT4 (p < 0.0001), chatGPT 3.5 vs Bard (p < 0.0023), and GPT4 vs Bard (p < 0.0001). Analyses per subspecialty showed the largest gap between the best LLM (GPT4, 70%) and the worst LLM (Google Bard, 24%) in the head and neck exam, while the difference was least pronounced in neuro-oncology (GPT4, 62% vs Google Bard, 48%). We observed significant differences in the performance of the three LLMs on official exams organized by the ESNR. Overall, GPT4 performed best and Google Bard worst. This difference varied by subspecialty and was most pronounced in head and neck.
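The group-wise comparison described above can be sketched in code. The following is a minimal illustration of the Friedman test the authors report, computed by hand on invented placeholder data (binary per-question correctness for three models); it is NOT the study's actual answer sheets, and the function omits the tie correction that a full statistics package would apply before deriving a p-value.

```python
# Hedged sketch of a group-wise Friedman test over k paired groups.
# Toy data only; real analyses would use a library such as
# scipy.stats.friedmanchisquare, which also handles the p-value.

def friedman_statistic(*groups):
    """Friedman chi-square statistic (average ranks for ties,
    no tie correction)."""
    k = len(groups)          # number of treatments (here: LLMs)
    n = len(groups[0])       # number of blocks (here: questions)
    rank_sums = [0.0] * k
    for i in range(n):
        row = [g[i] for g in groups]
        # rank the k scores within this question, averaging ties
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        j = 0
        while j < k:
            m = j
            while m + 1 < k and row[order[m + 1]] == row[order[j]]:
                m += 1
            avg = (j + m) / 2 + 1          # ranks are 1-based
            for idx in order[j:m + 1]:
                ranks[idx] = avg
            j = m + 1
        for j2 in range(k):
            rank_sums[j2] += ranks[j2]
    return (12.0 / (n * k * (k + 1))
            * sum(r * r for r in rank_sums)
            - 3.0 * n * (k + 1))

# Invented binary correctness vectors (1 = correct answer):
gpt4    = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
chatgpt = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]
bard    = [0, 0, 1, 0, 0, 0, 0, 1, 0, 1]

chi2 = friedman_statistic(gpt4, chatgpt, bard)
print(round(chi2, 2))  # prints 2.85 for this toy data
```

The pair-wise follow-up in the paper (Wilcoxon tests with multiple-comparison correction) would then be run on each pair of models only if this omnibus statistic is significant against the chi-square distribution with k − 1 degrees of freedom.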
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Medical Societies Limits: Humans Country as subject: Europe Language: En Publication year: 2024 Document type: Article
