Does ChatGPT Answer Otolaryngology Questions Accurately?
Maksimoski, Matthew; Noble, Anisha Rhea; Smith, David F.
Affiliation
  • Maksimoski M; Division of Pediatric Otolaryngology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, U.S.A.
  • Noble AR; Department of Otolaryngology - Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, U.S.A.
  • Smith DF; Division of Pediatric Otolaryngology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, U.S.A.
Laryngoscope ; 134(9): 4011-4015, 2024 Sep.
Article in En | MEDLINE | ID: mdl-38545679
ABSTRACT

OBJECTIVE:

To investigate the accuracy of ChatGPT in answering medical questions related to otolaryngology.

METHODS:

A ChatGPT session was opened in which 93 questions related to otolaryngology topics were asked. Questions were drawn from all major domains within otolaryngology and were based on key action statements (KAS) from clinical practice guidelines (CPGs). Twenty-one "patient-level" questions were also asked of the program. Answers were graded as "correct," "partially correct," "incorrect," or "non-answer."
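
For illustration only, a minimal Python sketch of how graded answers might be tallied into category rates follows; the grade labels mirror the four grading categories above, the example data are hypothetical, and this is not the authors' actual workflow.

    # Minimal sketch (assumption): the study's grading workflow is not
    # published; this only illustrates tallying the four grade categories
    # into percentage rates like those reported below.
    from collections import Counter

    grades = ["correct", "partially correct", "incorrect", "correct"]  # hypothetical data
    counts = Counter(grades)
    total = len(grades)
    for category in ("correct", "partially correct", "incorrect", "non-answer"):
        print(f"{category}: {100 * counts[category] / total:.1f}%")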

RESULTS:

Correct answers were given at a rate of 45.5% overall (71.4% of patient-level questions, 37.3% of CPG-based questions); partially correct answers at 31.8% (28.6% patient-level, 32.8% CPG); incorrect answers at 21.6% (0% patient-level, 28.4% CPG); and non-answers at 1.1% (0% patient-level, 1.5% CPG). There was no difference in the rate of correct answers between CPGs published before and after the data-collection cutoff cited by ChatGPT. CPG-based questions were less likely to be answered correctly than patient-level questions (p = 0.003).
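
The abstract reports rates and a p-value but not raw counts or the statistical test used. As a hedged illustration, the Python sketch below back-calculates counts consistent with the reported percentages (assuming 67 CPG-based and 21 patient-level graded questions) and applies Fisher's exact test, one common choice for comparing two proportions; the result need not match the published p = 0.003.

    # Minimal sketch (assumptions): counts are reconstructed from the reported
    # percentages (25/67 = 37.3% CPG correct; 15/21 = 71.4% patient-level
    # correct); the paper does not state which statistical test it used.
    from scipy.stats import fisher_exact

    table = [
        [25, 67 - 25],  # CPG-based: correct vs. not correct
        [15, 21 - 15],  # patient-level: correct vs. not correct
    ]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")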

CONCLUSION:

Publicly available artificial intelligence software has become increasingly popular with consumers for everything from storytelling to data collection. In this study, we examined the accuracy of ChatGPT's responses to otolaryngology questions across 7 domains and 21 published CPGs. Physicians and patients should understand the limitations of this software as it applies to otolaryngology, and programmers of future iterations should consider giving greater weight to information published in well-established journals and written by national content experts. LEVEL OF EVIDENCE: N/A. Laryngoscope, 134:4011-4015, 2024.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Otolaryngology Limits: Humans Language: English Journal: Laryngoscope Journal subject: Otolaryngology Year: 2024 Document type: Article Affiliation country: United States
