1.
J Vitreoretin Dis; 8(4): 421-427, 2024.
Article in English | MEDLINE | ID: mdl-39148568

ABSTRACT

Purpose: To evaluate the readability, accountability, accessibility, and source of online patient education materials for treatment of age-related macular degeneration (AMD) and to quantify public interest in Syfovre and geographic atrophy after US Food and Drug Administration (FDA) approval. Methods: Websites were classified into 4 categories by information source. Readability was assessed using 5 validated readability indices. Accountability was assessed using 4 benchmarks of the Journal of the American Medical Association (JAMA). Accessibility was evaluated using 3 established criteria. The Google Trends tool was used to evaluate temporal trends in public interest in "Syfovre" and "geographic atrophy" in the months after FDA approval. Results: Of 100 websites analyzed, 22% were written below the recommended sixth-grade reading level. The mean (±SD) grade level of analyzed articles was 9.76 ± 3.35. Websites averaged 1.40 ± 1.39 (of 4) JAMA accountability metrics. The majority of articles (67%) were from private practice/independent organizations. A significant increase in the public interest in the terms "Syfovre" and "geographic atrophy" after FDA approval was found with the Google Trends tool (P < .001). Conclusions: Patient education materials related to AMD treatment are often written at inappropriate reading levels and lack established accountability and accessibility metrics. Articles from national organizations ranked highest on accessibility metrics but were less visible on a Google search, suggesting the need for visibility-enhancing measures. Patient education materials related to the term "Syfovre" had the highest average reading level and low accountability, suggesting the need to modify resources to best address the needs of an increasingly curious public.

2.
Semin Ophthalmol; 39(6): 472-479, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38516983

ABSTRACT

PURPOSE: Patients are using online search modalities to learn about their eye health. While Google remains the most popular search engine, the use of large language models (LLMs) like ChatGPT has increased. Cataract surgery is the most common surgical procedure in the US, and there are limited data on the quality of the online information that appears after searches related to cataract surgery on search engines such as Google and LLM platforms such as ChatGPT. We identified the most common patient frequently asked questions (FAQs) about cataracts and cataract surgery and evaluated the accuracy, safety, and readability of the answers to these questions provided by both Google and ChatGPT. We demonstrated the utility of ChatGPT in writing notes and creating patient education materials. METHODS: The top 20 FAQs related to cataracts and cataract surgery were recorded from Google. Responses to the questions provided by Google and ChatGPT were evaluated by a panel of ophthalmologists for accuracy and safety. Evaluators were also asked to distinguish between Google and LLM chatbot answers. Five validated readability indices were used to assess the readability of responses. ChatGPT was instructed to generate operative notes, post-operative instructions, and customizable patient education materials according to specific readability criteria. RESULTS: Responses to 20 patient FAQs generated by ChatGPT were significantly longer and written at a higher reading level than responses provided by Google (p < .001), with an average grade level of 14.8 (college level). Expert reviewers correctly distinguished between a human-reviewed and a chatbot-generated response an average of 31% of the time. Google answers contained incorrect or inappropriate material 27% of the time, compared with 6% of LLM-generated answers (p < .001). When expert reviewers were asked to compare the responses directly, chatbot responses were favored (66%).
CONCLUSIONS: When comparing the responses to patients' cataract FAQs provided by ChatGPT and Google, practicing ophthalmologists overwhelmingly preferred ChatGPT responses. LLM chatbot responses were less likely to contain inaccurate information. ChatGPT represents a viable source of eye health information for patients with higher health literacy. ChatGPT may also be used by ophthalmologists to create customizable patient education materials for patients with varying health literacy.


Subject(s)
Artificial Intelligence , Cataract Extraction , Comprehension , Ophthalmology , Patient Education as Topic , Humans , Patient Education as Topic/methods , Cataract , Search Engine , Health Literacy , Internet , Surveys and Questionnaires
3.
Am J Ophthalmol; 257: 38-45, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37714282

ABSTRACT

PURPOSE: To describe the association between visual field loss and frailty in a nationally representative cohort of US adults. DESIGN: Retrospective cross-sectional study. METHODS: The cohort included adults 40 years or older with complete eye examination data from the 2005-2006 and 2007-2008 National Health and Nutrition Examination Surveys (NHANES). Visual field loss (VFL) was determined by frequency doubling technology and a 2-2-1 algorithm. A 36-item deficit accumulation-based frailty index was used to divide subjects into 4 categories of increasing frailty severity. RESULTS: Of the 4897 participants, 4402 (93.2%) had no VFL, 301 (4.1%) had unilateral VFL, and 194 (2.7%) had bilateral VFL. Within the sample, 2197 subjects (53.1%) were categorized as non-frail, 1659 (31.3%) as vulnerable, 732 (11.3%) as mildly frail, and 312 (4.3%) as most frail. In multivariable models adjusted for demographics, visual acuity, and history of cataract surgery, subjects with unilateral VFL had higher adjusted odds of being in a more frail category (adjusted odds ratio [aOR], 2.07; 95% CI, 1.42-3.02) than subjects without VFL. Subjects with bilateral VFL also had higher odds of a more frail category compared with subjects without VFL (aOR, 1.74; 95% CI, 1.20-2.52). CONCLUSIONS: In the 2005-2008 NHANES adult population, VFL is associated with higher odds of frailty, independent of central visual acuity loss. Frail individuals may be more susceptible to diseases that can cause VFL, and/or VFL may predispose to frailty. Additional studies are needed to determine the directionality of this relationship and to assess potential interventions.


Subject(s)
Frailty , Adult , Humans , Frailty/diagnosis , Frailty/epidemiology , Nutrition Surveys , Visual Fields , Cross-Sectional Studies , Retrospective Studies , Vision Disorders/diagnosis , Vision Disorders/epidemiology
4.
Clin Ophthalmol; 17: 779-788, 2023.
Article in English | MEDLINE | ID: mdl-36923248

ABSTRACT

Purpose: To assess the readability and accountability of online patient education materials related to glaucoma diagnosis and treatment. Methods: We conducted a Google search for 10 search terms related to glaucoma diagnosis and 10 search terms related to glaucoma treatment. For each search term, the first 10 patient education websites populated after Google search were assessed for readability and accountability. Readability was assessed using five validated measures: Flesch Reading Ease (FRE), Gunning Fog Index (GFI), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and New Dale-Chall (NDC). Accountability was assessed using the Journal of the American Medical Association (JAMA) benchmarks. The source of information for each article analyzed was recorded. Results: Of the 200 total websites analyzed, only 11% were written at or below the recommended 6th grade reading level. The average FRE and grade level for 100 glaucoma diagnosis-related articles were 42.02 ± 1.08 and 10.53 ± 1.30, respectively. The average FRE and grade level for 100 glaucoma treatment-related articles were 43.86 ± 1.01 and 11.29 ± 1.54, respectively. Crowdsourced articles were written at the highest average grade level (12.32 ± 0.78), followed by articles written by private practice/independent users (11.22 ± 1.74), national organizations (10.92 ± 1.24), and educational institutions (10.33 ± 1.35). Websites averaged 1.12 ± 1.15 of 4 JAMA accountability metrics. Conclusion: Despite wide variation in the readability and accountability of online patient education materials related to glaucoma diagnosis and treatment, patient education materials are consistently written at levels above the recommended reading level and often lack accountability. Articles from educational institutions and national organizations were often written at lower reading levels but are less frequently encountered after Google search. 
There is a need for accurate and understandable online information that glaucoma patients can use to inform decisions about their eye health.
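The readability indices used across these studies (FRE, FKGL, SMOG, and the others) all reduce to simple ratios of word, sentence, and syllable counts. As a minimal sketch, the two published Flesch formulas can be computed as below; the vowel-group syllable heuristic is an assumption of this sketch, not the validated calculators the studies themselves used:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, drop a trailing
    # silent "e", and give every word at least one syllable. The published
    # studies used validated readability tools, not this approximation.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(1, n)

def readability(text: str) -> tuple[float, float]:
    # Returns (Flesch Reading Ease, Flesch-Kincaid Grade Level).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)          # words per sentence
    spw = syllables / len(words)               # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return round(fre, 2), round(fkgl, 2)
```

Short sentences of monosyllabic words score high on FRE (easy) and low on FKGL, while long polysyllabic clinical prose does the opposite, which is why patient education text about terms like "pegcetacoplan" tends to land well above a sixth-grade level.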
