Capabilities of GPT-4 in ophthalmology: an analysis of model entropy and progress towards human-level medical question answering.
Antaki, Fares; Milad, Daniel; Chia, Mark A; Giguère, Charles-Édouard; Touma, Samir; El-Khoury, Jonathan; Keane, Pearse A; Duval, Renaud.
Affiliation
  • Antaki F; Moorfields Eye Hospital NHS Foundation Trust, London, UK.
  • Milad D; Institute of Ophthalmology, UCL, London, UK.
  • Chia MA; The CHUM School of Artificial Intelligence in Healthcare, Montreal, Quebec, Canada.
  • Giguère CÉ; Department of Ophthalmology, University of Montreal, Montreal, Quebec, Canada.
  • Touma S; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada.
  • El-Khoury J; Department of Ophthalmology, University of Montreal, Montreal, Quebec, Canada.
  • Keane PA; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada.
  • Duval R; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Quebec, Canada.
Br J Ophthalmol; 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37923374
ABSTRACT

BACKGROUND:

Evidence on the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model (LLM), in the ophthalmology question-answering domain is needed.

METHODS:

We tested GPT-4 on two 260-question multiple-choice sets drawn from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions question banks. We compared the accuracy of GPT-4 models run at varying temperatures (the sampling setting often described as creativity) and qualitatively evaluated their responses on a subset of questions. We also compared the best-performing GPT-4 model with GPT-3.5 and with historical human performance.
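
As a rough illustration of this setup, the sketch below shows how one might query GPT-4 at a fixed temperature for a single multiple-choice question using the OpenAI Python SDK. The prompt wording, system message and question format here are assumptions for illustration, not the authors' exact protocol.

    # Minimal sketch of an MCQ evaluation call, assuming the OpenAI Python SDK
    # (openai>=1.0). Prompt and formatting are illustrative, not the study's setup.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_mcq(question: str, options: dict[str, str], temperature: float = 0.3) -> str:
        """Pose one multiple-choice question and return the model's raw reply."""
        formatted = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
        resp = client.chat.completions.create(
            model="gpt-4",            # the study's best-performing setting used temperature 0.3
            temperature=temperature,  # lower values sample more deterministically
            messages=[
                {"role": "system", "content": "Answer with the single best option letter."},
                {"role": "user", "content": formatted},
            ],
        )
        return resp.choices[0].message.content.strip()

    # Hypothetical usage; the real question banks (BCSC, OphthoQuestions) are licensed.
    # answer = ask_mcq("Which retinal layer contains the photoreceptor nuclei?",
    #                  {"A": "Outer nuclear layer", "B": "Inner nuclear layer",
    #                   "C": "Ganglion cell layer", "D": "Nerve fibre layer"})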

RESULTS:

GPT-4-0.3 (GPT-4 with a temperature of 0.3) achieved the highest accuracy among the GPT-4 models, scoring 75.8% on the BCSC set and 70.0% on the OphthoQuestions set. Its combined accuracy was 72.9%, an 18.3 percentage point improvement over GPT-3.5 (p<0.001). Human graders preferred responses from models with a temperature above 0 (more creative). Exam section, question difficulty and cognitive level were all predictive of GPT-4-0.3's answer accuracy. GPT-4-0.3 was numerically superior to historical human performance on both the BCSC (75.8% vs 73.3%) and OphthoQuestions (70.0% vs 63.0%) sets, but neither difference was statistically significant (p=0.55 and p=0.09, respectively).
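
The abstract does not name the statistical test behind these paired comparisons; McNemar's test is a standard choice when two models answer the same question set, and the sketch below applies it with statsmodels. The 2x2 cell counts are placeholders chosen only to reproduce the reported marginal accuracies (72.9% vs 54.6% over the 520 combined questions); the actual correct/incorrect split is not given in the abstract.

    # Hedged sketch: paired accuracy comparison via McNemar's test (statsmodels).
    # The 2x2 counts are illustrative placeholders, not the study's data.
    from statsmodels.stats.contingency_tables import mcnemar

    # Rows: GPT-4 correct / incorrect; columns: GPT-3.5 correct / incorrect.
    table = [[250, 129],   # GPT-4 correct:   GPT-3.5 correct, GPT-3.5 incorrect
             [34,  107]]   # GPT-4 incorrect: GPT-3.5 correct, GPT-3.5 incorrect

    result = mcnemar(table, exact=False, correction=True)
    print(f"statistic={result.statistic:.2f}, p={result.pvalue:.4g}")
    # GPT-4 accuracy: (250 + 129) / 520 = 72.9%; GPT-3.5: (250 + 34) / 520 = 54.6%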

CONCLUSION:

GPT-4, an LLM trained on non-ophthalmology-specific data, performs significantly better than its predecessor on simulated ophthalmology board-style exams. Remarkably, its performance tended to be superior to historical human performance, but that difference was not statistically significant in our study.

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Br J Ophthalmol Year: 2023 Document type: Article Country of affiliation: United Kingdom
