ChatGPT Responses to Frequently Asked Questions on Ménière's Disease: A Comparison to Clinical Practice Guideline Answers.
Ho, Rebecca A; Shaari, Ariana L; Cowan, Paul T; Yan, Kenneth.
Affiliation
  • Ho RA; Department of Otolaryngology-Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA.
  • Shaari AL; Department of Otolaryngology-Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA.
  • Cowan PT; Department of Otolaryngology-Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA.
  • Yan K; Department of Otolaryngology-Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA.
OTO Open ; 8(3): e163, 2024.
Article in En | MEDLINE | ID: mdl-38974175
ABSTRACT

Objective:

Evaluate the quality of responses from Chat Generative Pre-Trained Transformer (ChatGPT) models compared to the answers for "Frequently Asked Questions" (FAQs) from the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) Clinical Practice Guidelines (CPG) for Ménière's disease (MD).

Study Design:

Comparative analysis.

Setting:

The AAO-HNS CPG for MD includes FAQs that clinicians can give to patients for MD-related questions. The ability of ChatGPT to properly educate patients regarding MD is unknown.

Methods:

ChatGPT-3.5 and 4.0 were each prompted with 16 questions from the MD FAQs. Each response was rated in terms of (1) comprehensiveness, (2) extensiveness, (3) presence of misleading information, and (4) quality of resources. Readability was assessed using Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES).
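For reference, FRES and FKGL follow the standard Flesch formulas based on words per sentence and syllables per word; the sketch below is an illustrative computation only, using a naive vowel-group syllable counter (an assumption, not the tool used in the study).

    import re

    def count_syllables(word):
        # Naive heuristic: count groups of consecutive vowels
        # (an assumption, not the study's syllable counter).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_scores(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps = len(words) / sentences   # words per sentence
        spw = syllables / len(words)   # syllables per word
        fres = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease Score
        fkgl = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level
        return fres, fkgl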

Results:

ChatGPT-3.5 was comprehensive in 5 responses whereas ChatGPT-4.0 was comprehensive in 9 (31.3% vs 56.3%, P = .2852). Both ChatGPT-3.5 and 4.0 were extensive in all responses (P = 1.0000). ChatGPT-3.5 was misleading in 5 responses whereas ChatGPT-4.0 was misleading in 3 (31.3% vs 18.75%, P = .6851). ChatGPT-3.5 provided quality resources in 10 responses whereas ChatGPT-4.0 did so in all 16 (62.5% vs 100%, P = .0177). The AAO-HNS CPG met the recommended FRES readability threshold of at least 60 (62.4 ± 16.6), whereas both ChatGPT-3.5 (39.1 ± 7.3) and 4.0 (42.8 ± 8.5) fell short. Mean FKGL for all platforms exceeded the recommended sixth-grade reading level.
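The abstract does not name the statistical test used for the categorical comparisons; the reported P values (e.g., P = .0177 for 10/16 vs 16/16 and P = .2852 for 5/16 vs 9/16) are consistent with two-sided Fisher's exact tests on 2 x 2 tables. A minimal sketch under that assumption:

    from scipy.stats import fisher_exact

    # Quality-of-resources comparison: 10/16 (ChatGPT-3.5) vs 16/16 (ChatGPT-4.0).
    # Rows = model, columns = (quality resources: yes, no). Assumes a two-sided
    # Fisher's exact test, which reproduces the reported P = .0177.
    table = [[10, 6],
             [16, 0]]
    _, p = fisher_exact(table, alternative="two-sided")
    print(round(p, 4))  # ~0.0177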

Conclusion:

While ChatGPT-4.0 had significantly better resource reporting, both models have room for improvement in being more comprehensive, more readable, and less misleading for patients.
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: OTO Open Year: 2024 Document type: Article
