A Comparative Study of Responses to Retina Questions from Either Experts, Expert-Edited Large Language Models, or Large Language Models Alone.
Tailor, Prashant D; Dalvin, Lauren A; Chen, John J; Iezzi, Raymond; Olsen, Timothy W; Scruggs, Brittni A; Barkmeier, Andrew J; Bakri, Sophie J; Ryan, Edwin H; Tang, Peter H; Parke, D Wilkin; Belin, Peter J; Sridhar, Jayanth; Xu, David; Kuriyan, Ajay E; Yonekawa, Yoshihiro; Starr, Matthew R.
Affiliation
  • Tailor PD; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Dalvin LA; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Chen JJ; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Iezzi R; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Olsen TW; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Scruggs BA; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Barkmeier AJ; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Bakri SJ; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
  • Ryan EH; Retina Consultants of Minnesota, Edina, Minnesota.
  • Tang PH; Department of Ophthalmology & Visual Neurosciences, University of Minnesota Medical School, Minneapolis, Minnesota.
  • Parke DW; Retina Consultants of Minnesota, Edina, Minnesota.
  • Belin PJ; Department of Ophthalmology & Visual Neurosciences, University of Minnesota Medical School, Minneapolis, Minnesota.
  • Sridhar J; Olive View Medical Center, University of California Los Angeles, Los Angeles, California.
  • Xu D; Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania.
  • Kuriyan AE; Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania.
  • Yonekawa Y; Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania.
  • Starr MR; Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota.
Ophthalmol Sci; 4(4): 100485, 2024.
Article in En | MEDLINE | ID: mdl-38660460
ABSTRACT

Objective:

To assess the quality, empathy, and safety of expert-edited large language model (LLM) responses, human expert-created responses, and LLM-generated responses to common retina patient questions.

Design:

Randomized, masked, multicenter study.

Participants:

Twenty-one common retina patient questions were randomly assigned among 13 retina specialists.

Methods:

Each expert created a response (Expert) and then edited an LLM (ChatGPT-4)-generated response to the same question (Expert + artificial intelligence [AI]), timing themselves for both tasks. Five LLMs (ChatGPT-3.5, ChatGPT-4, Claude 2, Bing, and Bard) also generated responses to each question. The original question, along with the anonymized and randomized Expert + AI, Expert, and LLM responses, was evaluated by the other experts, who had not written an expert response to that question. Evaluators judged quality and empathy (very poor, poor, acceptable, good, or very good) along with safety metrics (incorrect information, likelihood to cause harm, extent of harm, and missing content).
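
As a rough illustration of the anonymization and randomization step described above, the sketch below pools one question's Expert, Expert + AI, and LLM responses, shuffles them, and hides the source labels behind masked identifiers that are revealed only at analysis. The function name, data layout, and identifier scheme are hypothetical and are not taken from the study.

```python
import random

# Likert labels used by the evaluators, mapped to numeric scores for analysis.
LIKERT = {"very poor": 1, "poor": 2, "acceptable": 3, "good": 4, "very good": 5}

def build_grading_items(question, responses_by_source, rng=random):
    """responses_by_source maps a source label (e.g. 'Expert', 'ChatGPT-4')
    to its response text; returns anonymized items plus a hidden answer key."""
    sources = list(responses_by_source)
    rng.shuffle(sources)                               # randomize presentation order
    items, key = [], {}
    for idx, source in enumerate(sources, start=1):
        item_id = f"{hash(question) & 0xffff}-{idx}"   # masked identifier
        items.append({"id": item_id, "question": question,
                      "response": responses_by_source[source]})
        key[item_id] = source                          # revealed only at analysis
    return items, key
```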

Main Outcome Measures:

Mean quality and empathy scores, and the proportion of responses with incorrect information, likelihood to cause harm, extent of harm, and missing content, for each response type.

Results:

There were 4008 total grades collected (2608 for quality and empathy; 1400 for safety metrics), with significant differences in both quality and empathy (P < 0.001 for each) among the LLM, Expert, and Expert + AI groups. For quality, Expert + AI (3.86 ± 0.85) performed best overall, while ChatGPT-3.5 (3.75 ± 0.79) was the top-performing LLM. For empathy, ChatGPT-3.5 (3.75 ± 0.69) had the highest mean score, followed by Expert + AI (3.73 ± 0.63). By mean score, Expert ranked fourth of 7 for quality and sixth of 7 for empathy. For both quality (P < 0.001) and empathy (P < 0.001), expert-edited LLM responses performed better than expert-created responses. Editing an LLM response also took less time than creating an expert response (P = 0.02). ChatGPT-4 performed similarly to Expert for inappropriate content (P = 0.35), missing content (P = 0.001), extent of possible harm (P = 0.356), and likelihood of possible harm (P = 0.129).
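
For illustration only, a comparison of grade distributions like the one reported above could be tabulated as sketched below. The grades are placeholder values, and the use of a Kruskal-Wallis test is an assumption made here for ordinal Likert data; the abstract does not state which statistical methods the study actually used.

```python
import numpy as np
from scipy.stats import kruskal

# Placeholder quality grades (1-5 Likert values) for three of the seven
# response types; the real study pooled grades from 13 masked specialists.
grades = {
    "Expert":      np.array([4, 3, 4, 5, 3, 4]),
    "Expert + AI": np.array([4, 4, 5, 4, 4, 3]),
    "ChatGPT-3.5": np.array([4, 3, 4, 4, 5, 3]),
}

# Mean ± SD per response type, as reported in the Results.
for group, values in grades.items():
    print(f"{group}: {values.mean():.2f} ± {values.std(ddof=1):.2f}")

# Omnibus comparison across groups (assumed rank-based test for ordinal data).
stat, p = kruskal(*grades.values())
print(f"Kruskal-Wallis H = {stat:.2f}, P = {p:.3f}")
```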

Conclusions:

In this randomized, masked, multicenter study, LLM responses were comparable with expert-created responses in terms of quality, empathy, and safety metrics, warranting further exploration of their potential benefits in clinical settings.

Financial Disclosures:

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of the article.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Ophthalmol Sci Year: 2024 Document type: Article Country of publication: Netherlands
