Comparative Performance of ChatGPT 3.5 and GPT4 on Rhinology Standardized Board Examination Questions.
Patel, Evan A; Fleischer, Lindsay; Filip, Peter; Eggerstedt, Michael; Hutz, Michael; Michaelides, Elias; Batra, Pete S; Tajudeen, Bobby A.
Affiliation
  • Patel EA; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Fleischer L; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Filip P; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Eggerstedt M; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Hutz M; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Michaelides E; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Batra PS; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
  • Tajudeen BA; Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA.
OTO Open; 8(2): e164, 2024.
Article in En | MEDLINE | ID: mdl-38938507
ABSTRACT

Objective:

Advances in deep learning and artificial intelligence (AI) have led to the emergence of large language models (LLMs) such as OpenAI's ChatGPT. This study aimed to evaluate the performance of ChatGPT 3.5 and GPT4 on otolaryngology (rhinology) standardized board examination questions in comparison to otolaryngology residents.

Methods:

This study selected all 127 standardized rhinology questions from www.boardvitals.com, a question bank commonly used by otolaryngology residents preparing for board examinations. Ninety-three text-based questions were administered to ChatGPT 3.5 and GPT4, and their answers were compared with the question bank's average user results, drawn primarily from otolaryngology residents. Thirty-four image-based questions were provided to GPT4 and underwent the same analysis. Based on the findings of an earlier study, a pass-fail cutoff was set at the 10th percentile.
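The abstract does not specify how questions were delivered to the models (web chat interface versus API). Purely as an illustrative sketch, and not the authors' actual protocol, the snippet below shows one way a text-based multiple-choice question could be submitted programmatically with the OpenAI Python client; the model name, prompt wording, example question, and ask_model helper are all assumptions introduced here.

    # Illustrative sketch only: one way to pose a board-style multiple-choice
    # question to an OpenAI model. Not the study's actual methodology.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    def ask_model(model: str, stem: str, choices: dict[str, str]) -> str:
        """Pose one question and return the model's single-letter answer."""
        options = "\n".join(f"{letter}. {text}" for letter, text in choices.items())
        prompt = (
            "Answer the following multiple-choice question "
            "with a single letter only.\n\n"
            f"{stem}\n\n{options}"
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content.strip()[0]

    # Hypothetical example question (not from the BoardVitals bank):
    question = {
        "stem": "Which sinus is most often involved in acute rhinosinusitis?",
        "choices": {"A": "Frontal", "B": "Maxillary", "C": "Sphenoid", "D": "Ethmoid"},
        "answer": "B",
    }
    picked = ask_model("gpt-4", question["stem"], question["choices"])
    print("correct" if picked == question["answer"] else f"model chose {picked}")

Under such a setup, scoring would simply compare each returned letter against the bank's answer key and divide by the number of questions administered.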

Results:

On text-based questions, ChatGPT 3.5 answered 45.2% correctly (8th percentile, P = .0001), while GPT4 answered 86.0% correctly (66th percentile, P = .001). GPT4 answered 64.7% of image-based questions correctly. These projections suggest that ChatGPT 3.5 would not pass the American Board of Otolaryngology Written Question Exam (ABOto WQE), whereas GPT4 stands a strong chance of passing.
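Because the abstract reports percentages only, the raw counts in the sketch below are back-calculated approximations rather than figures from the study; it simply shows how the reported percentages and the 10th-percentile cutoff from the Methods translate into the stated pass-fail projections.

    # Back-calculated approximations; the abstract reports percentages only.
    text_questions, image_questions = 93, 34

    gpt35_text_correct = round(0.452 * text_questions)   # ~42 of 93
    gpt4_text_correct  = round(0.860 * text_questions)   # ~80 of 93
    gpt4_image_correct = round(0.647 * image_questions)  # ~22 of 34

    PASS_PERCENTILE = 10  # cutoff taken from the earlier study cited in Methods

    for name, percentile in [("ChatGPT 3.5", 8), ("GPT4", 66)]:
        verdict = "pass" if percentile >= PASS_PERCENTILE else "fail"
        print(f"{name}: {percentile}th percentile -> projected {verdict}")

Running this prints a projected fail for ChatGPT 3.5 (8th percentile) and a projected pass for GPT4 (66th percentile), matching the abstract's conclusions.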

Discussion:

The older LLM, ChatGPT 3.5, is unlikely to pass the ABOto WQE, whereas the more advanced GPT4 model exhibits a much higher likelihood of success. This rapid progression in AI indicates its potential future role in otolaryngology education.

Implications for Practice:

As AI technology rapidly advances, AI-assisted medical education, diagnosis, and treatment planning may become commonplace in the medical and surgical landscape.

Level of Evidence:

Level 5.
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: OTO Open Year: 2024 Type: Article