Large language models: a new frontier in paediatric cataract patient education.
Dihan, Qais; Chauhan, Muhammad Z; Eleiwa, Taher K; Brown, Andrew D; Hassan, Amr K; Khodeiry, Mohamed M; Elsheikh, Reem H; Oke, Isdin; Nihalani, Bharti R; VanderVeen, Deborah K; Sallam, Ahmed B; Elhusseiny, Abdelrahman M.
Affiliation
  • Dihan Q; Rosalind Franklin University of Medicine and Science Chicago Medical School, North Chicago, Illinois, USA.
  • Chauhan MZ; Department of Ophthalmology, University of Arkansas for Medical Sciences, Little Rock, AR, USA.
  • Eleiwa TK; Department of Ophthalmology, University of Arkansas for Medical Sciences, Little Rock, AR, USA.
  • Brown AD; Department of Ophthalmology, Benha University, Benha, Egypt.
  • Hassan AK; University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA.
  • Khodeiry MM; Department of Ophthalmology, South Valley University, Qena, Egypt.
  • Elsheikh RH; Department of Ophthalmology, University of Kentucky, Lexington, Kentucky, USA.
  • Oke I; Department of Ophthalmology, University of Arkansas for Medical Sciences, Little Rock, AR, USA.
  • Nihalani BR; Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
  • VanderVeen DK; Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
  • Sallam AB; Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
  • Elhusseiny AM; Department of Ophthalmology, University of Arkansas for Medical Sciences, Little Rock, AR, USA.
Br J Ophthalmol ; 108(10): 1470-1476, 2024 Sep 20.
Article in En | MEDLINE | ID: mdl-39174290
ABSTRACT
BACKGROUND/AIMS:

This was a cross-sectional comparative study. We evaluated the ability of three large language models (LLMs) (ChatGPT-3.5, ChatGPT-4, and Google Bard) to generate novel patient education materials (PEMs) and improve the readability of existing PEMs on paediatric cataract.

METHODS:

We compared LLMs' responses to three prompts. Prompt A requested they write a handout on paediatric cataract that was 'easily understandable by an average American.' Prompt B modified prompt A and requested the handout be written at a 'sixth-grade reading level, using the Simple Measure of Gobbledygook (SMOG) readability formula.' Prompt C rewrote existing PEMs on paediatric cataract 'to a sixth-grade reading level using the SMOG readability formula'. Responses were compared on their quality (DISCERN; 1 (low quality) to 5 (high quality)), understandability and actionability (Patient Education Materials Assessment Tool (≥70% understandable, ≥70% actionable)), accuracy (Likert misinformation; 1 (no misinformation) to 5 (high misinformation)) and readability (SMOG, Flesch-Kincaid Grade Level (FKGL); grade level <7 highly readable).
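The two readability formulas used above are standard published metrics. As an illustration only (not the authors' scoring pipeline, which is not described at code level), the following sketch computes SMOG and FKGL grades from raw text; the vowel-group syllable counter is a rough approximation of the dictionary-based counts that dedicated readability tools use.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Approximate syllables by counting contiguous vowel groups (min 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def smog(text: str) -> float:
    """SMOG grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

def fkgl(text: str) -> float:
    """FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

Under both formulas, a score below 7 corresponds to the "highly readable" (below seventh-grade) threshold applied in the study.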

RESULTS:

All LLM-generated responses were of high quality (median DISCERN ≥4), understandability (≥70%) and accuracy (Likert=1); however, none met the actionability threshold (<70%). ChatGPT-3.5 and ChatGPT-4 prompt B responses were more readable than prompt A responses (p<0.001). ChatGPT-4 generated more readable responses (lower SMOG and FKGL scores; 5.59±0.5 and 4.31±0.7, respectively) than the other two LLMs (p<0.001) and consistently rewrote existing PEMs to or below the specified sixth-grade reading level (SMOG 5.14±0.3).

CONCLUSION:

LLMs, particularly ChatGPT-4, proved valuable in generating high-quality, readable, accurate PEMs and in improving the readability of existing materials on paediatric cataract.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Cataract / Patient Education as Topic / Comprehension Limits: Child / Humans Language: En Journal: Br J Ophthalmol Year: 2024 Document type: Article Affiliation country: United States Country of publication: United Kingdom