Results 1-5 of 5
1.
Am J Ophthalmol; 265: 28-38, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38614196

ABSTRACT

PURPOSE: To evaluate the quality, readability, and accuracy of large language model (LLM)-generated patient education materials (PEMs) on childhood glaucoma, and the models' ability to improve the readability of existing online information.

DESIGN: Cross-sectional comparative study.

METHODS: We evaluated responses of ChatGPT-3.5, ChatGPT-4, and Bard to 3 separate prompts requesting that they write PEMs on "childhood glaucoma." Prompt A required that PEMs be "easily understandable by the average American." Prompt B required that PEMs be written "at a 6th-grade level using the Simple Measure of Gobbledygook (SMOG) readability formula." We then compared responses' quality (DISCERN questionnaire, Patient Education Materials Assessment Tool [PEMAT]), readability (SMOG, Flesch-Kincaid Grade Level [FKGL]), and accuracy (Likert misinformation scale). To assess improvement of the readability of existing online information, Prompt C requested that the LLMs rewrite 20 resources from a Google search of the keyword "childhood glaucoma" to the American Medical Association-recommended "6th-grade level." Rewrites were compared on key metrics such as readability, complex words (≥3 syllables), and sentence count.

RESULTS: All 3 LLMs generated PEMs that were of high quality, understandability, and accuracy (DISCERN ≥4, ≥70% PEMAT understandability, misinformation score = 1). Prompt B responses were more readable than Prompt A responses for all 3 LLMs (P ≤ .001). ChatGPT-4 generated the most readable PEMs compared with ChatGPT-3.5 and Bard (P ≤ .001). Although Prompt C responses showed consistent reductions in mean SMOG and FKGL scores, only ChatGPT-4 rewrote to the specified 6th-grade reading level (SMOG: 4.8 ± 0.8; FKGL: 3.7 ± 1.9).

CONCLUSIONS: LLMs can serve as strong supplemental tools for generating high-quality, accurate, and novel PEMs and for improving the readability of existing PEMs on childhood glaucoma.
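For readers unfamiliar with the two readability formulas above, both reduce to simple counts of sentences, words, and syllables. A minimal sketch in Python (the study does not publish its scoring code; the counts below are hypothetical):

    import math

    def smog_grade(polysyllables: int, sentences: int) -> float:
        # SMOG grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
        return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

    def fkgl(words: int, sentences: int, syllables: int) -> float:
        # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    # Hypothetical counts for a short patient handout
    print(round(smog_grade(polysyllables=12, sentences=30), 1))    # ~6.7
    print(round(fkgl(words=450, sentences=45, syllables=630), 1))  # ~4.8

Both formulas map text statistics onto US school-grade levels, which is why a "6th-grade level" target can be checked mechanically.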


Subjects
Comprehension, Glaucoma, Patient Education as Topic, Humans, Cross-Sectional Studies, Glaucoma/physiopathology, Child, Surveys and Questionnaires, Language, Teaching Materials/standards, Health Literacy
2.
Semin Ophthalmol; 1-6, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39350691

ABSTRACT

PURPOSE: To quantify the risk of posterior capsule rupture (PCR) in fellow-eye phacoemulsification surgery and to determine risk factors.

METHODS: We pooled data from 8 United Kingdom sites for patients undergoing bilateral non-simultaneous phacoemulsification. Main outcome measures were the incidence of, and risk factors for, PCR during fellow-eye phacoemulsification.

RESULTS: We included 66,288 patients with a mean age of 75.3 ± 10.2 years. PCR occurred in the first eye in 932 patients (1.4%) and in the fellow eye in 1,039 patients (1.5%). The risk of the fellow eye developing PCR was significantly higher in patients with first-eye PCR than in those without: 30 patients (3.2%) vs. 1,009 (1.5%), respectively (odds ratio [OR] = 1.7, 95% confidence interval [CI] = 1.1-2.7). Other risk factors for fellow-eye PCR included zonular dialysis (OR = 5.4, CI = 3.3-7.8) and advanced cataract (OR = 2.8, CI = 2.1-3.7).

CONCLUSIONS: A history of PCR in the first-operated eye is an independent risk factor for PCR in the fellow eye.
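As a plausibility check, a crude (unadjusted) odds ratio can be reconstructed from the 2 × 2 counts implied by the abstract; the reported OR of 1.7 is presumably covariate-adjusted, so the crude value differs. A sketch in Python:

    import math

    # Counts implied above: fellow-eye PCR yes/no, split by first-eye PCR status
    a, b = 30, 932 - 30                # first-eye PCR: 30 fellow-eye PCR, 902 none
    c, d = 1009, (66288 - 932) - 1009  # no first-eye PCR: 1,009 fellow-eye PCR

    crude_or = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Wald approximation
    lo = math.exp(math.log(crude_or) - 1.96 * se)
    hi = math.exp(math.log(crude_or) + 1.96 * se)
    print(f"crude OR = {crude_or:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~2.12 (1.47-3.07)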

3.
JAMA Ophthalmol; 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39235786

ABSTRACT

Importance: Racial, ethnic, and sex disparities exist in US clinical study enrollment, and the prevalence of these disparities in Pediatric Eye Disease Investigator Group (PEDIG) clinical studies has not been thoroughly assessed.

Objective: To evaluate racial, ethnic, and sex representation in PEDIG clinical studies compared with the 2010 US Census pediatric population.

Design, Setting, and Participants: This cross-sectional analysis examined US-based PEDIG clinical studies from December 1, 1997, to September 12, 2022; 41 met the inclusion criteria of a completed study, a study population younger than 18 years, and 1 or more accompanying publications. Data analysis was performed between November 2023 and February 2024.

Exposure: Study participant race, ethnicity, and sex for each clinical study, as collected from peer-reviewed publications, patient-enrollment datasets, and ClinicalTrials.gov.

Main Outcomes and Measures: Median enrollment percentages of female, White, Black, Hispanic, Asian, and other race participants were calculated and compared with the 2010 US Census pediatric population using a 1-sample Wilcoxon signed rank test. Proportionate enrollment was defined as no significant difference (P ≥ .05) on this test. If P < .05, we determined whether the median enrollment percentage was greater or less than the 2010 US Census proportion to classify enrollees as overrepresented or underrepresented. To quantify the magnitude of overrepresentation or underrepresentation, the enrollment-census difference (ECD) was defined as the difference between a group's median enrollment percentage and its percentage representation in the 2010 US Census. Compound annual growth rate (CAGR) was used to measure temporal trends in enrollment, and logistic regression analysis was used to analyze factors that may have contributed to proportionate representation outcomes.

Results: A total of 11,658 study participants in 41 clinical studies were included; mean (SD) participant age was 5.9 (2.8) years, and 5,918 study participants (50.8%) were female. In clinical studies meeting inclusion criteria, White participants were overrepresented (ECD, 0.19; 95% CI, 0.10-0.28; P < .001). Black participants (ECD, -0.07; 95% CI, -0.10 to -0.03; P < .001), Asian participants (ECD, -0.03; 95% CI, -0.04 to -0.02; P < .001), and Hispanic participants (ECD, -0.09; 95% CI, -0.13 to -0.05; P < .001) were underrepresented. Female participants were represented proportionately (ECD, 0.004; 95% CI, -0.036 to 0.045; P = .21). White and Asian participants demonstrated a decreasing trend in study enrollment from 1997 to 2022 (White: CAGR, -1.5%; 95% CI, -2.3% to -0.6%; Asian: CAGR, -1.7%; 95% CI, -2.0% to -1.4%), while Hispanic participants demonstrated an increasing enrollment trend (CAGR, 7.2%; 95% CI, 3.7%-10.7%).

Conclusions and Relevance: In this retrospective cross-sectional study of PEDIG clinical studies from December 1, 1997, to September 12, 2022, Black, Hispanic, and Asian participants were underrepresented, White participants were overrepresented, and female participants were represented proportionately. Trends suggested increasing enrollment of Hispanic participants and decreasing enrollment of White participants over time. This study demonstrates an opportunity to advocate for increased enrollment of underrepresented groups in pediatric ophthalmology clinical studies.
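The CAGR used for the enrollment trends is the standard compound-growth formula. A minimal sketch in Python, with hypothetical enrollment shares rather than the study's data:

    def cagr(start: float, end: float, years: float) -> float:
        # Compound annual growth rate: (end / start) ** (1 / years) - 1
        return (end / start) ** (1 / years) - 1

    # Hypothetical: a group's enrollment share falling from 80% (1997) to 55% (2022)
    print(f"{cagr(80.0, 55.0, 25):+.1%}")  # -1.5% per year

Likewise, each enrollment-census difference (ECD) above is simply the group's median enrollment percentage minus its 2010 US Census share, so a positive ECD signals overrepresentation and a negative ECD underrepresentation.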

4.
Br J Ophthalmol; 108(10): 1470-1476, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39174290

ABSTRACT

BACKGROUND/AIMS: This was a cross-sectional comparative study. We evaluated the ability of three large language models (LLMs) (ChatGPT-3.5, ChatGPT-4, and Google Bard) to generate novel patient education materials (PEMs) and to improve the readability of existing PEMs on paediatric cataract.

METHODS: We compared the LLMs' responses to three prompts. Prompt A requested they write a handout on paediatric cataract that was 'easily understandable by an average American.' Prompt B modified prompt A and requested the handout be written at a 'sixth-grade reading level, using the Simple Measure of Gobbledygook (SMOG) readability formula.' Prompt C rewrote existing PEMs on paediatric cataract 'to a sixth-grade reading level using the SMOG readability formula'. Responses were compared on quality (DISCERN; 1 (low quality) to 5 (high quality)), understandability and actionability (Patient Education Materials Assessment Tool; ≥70%: understandable, ≥70%: actionable), accuracy (Likert misinformation scale; 1 (no misinformation) to 5 (high misinformation)) and readability (SMOG, Flesch-Kincaid Grade Level (FKGL); grade level <7: highly readable).

RESULTS: All LLM-generated responses were of high quality (median DISCERN ≥4), understandability (≥70%), and accuracy (Likert = 1). No LLM-generated responses met the actionability threshold (<70%). ChatGPT-3.5 and ChatGPT-4 prompt B responses were more readable than prompt A responses (p<0.001). ChatGPT-4 generated more readable responses (lower SMOG and FKGL scores; 5.59±0.5 and 4.31±0.7, respectively) than the other two LLMs (p<0.001) and consistently rewrote existing PEMs to or below the specified sixth-grade reading level (SMOG: 5.14±0.3).

CONCLUSION: LLMs, particularly ChatGPT-4, proved valuable in generating high-quality, readable, accurate PEMs and in improving the readability of existing materials on paediatric cataract.
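The prompt A versus prompt B comparisons above are tests on paired sets of grade-level scores. A sketch of one such comparison in Python with SciPy, using made-up scores and a paired Wilcoxon signed-rank test (the abstract does not restate the exact statistical procedure, so treat the test choice as an assumption):

    from scipy import stats

    # Hypothetical SMOG scores for one LLM's paired responses to prompts A and B
    prompt_a = [8.1, 7.9, 8.4, 7.6, 8.0, 8.3, 7.8, 8.2]
    prompt_b = [5.2, 5.6, 5.1, 5.4, 5.0, 5.5, 5.3, 4.9]

    stat, p = stats.wilcoxon(prompt_a, prompt_b)
    print(f"W = {stat}, p = {p:.4f}")  # small p: prompt B responses are more readable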


Subjects
Cataract, Comprehension, Patient Education as Topic, Humans, Cross-Sectional Studies, Patient Education as Topic/methods, Child, Health Literacy, Language, Reading, Cataract Extraction
5.
J Pediatr Ophthalmol Strabismus; 61(5): 332-338, 2024.
Article in English | MEDLINE | ID: mdl-38815099

ABSTRACT

PURPOSE: To evaluate the quality, reliability, and readability of online patient educational materials on leukocoria.

METHODS: In this cross-sectional study, the Google search engine was queried with the terms "leukocoria" and "white pupil." The first 50 results for each term were screened against predefined inclusion criteria, excluding duplicates, peer-reviewed papers, forum posts, paywalled content, and multimedia links. Sources were categorized as "institutional" or "private." Three independent raters assessed each website for quality and reliability using the DISCERN, Health on the Net Code of Conduct (HONcode), and JAMA criteria. Readability was evaluated using seven formulas: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), Linsear Write (LW), Gunning Fog Index (GFI), and Coleman-Liau Index (CLI).

RESULTS: A total of 51 websites were included. Quality, assessed by the DISCERN tool, showed a median score of 4, denoting moderate to high quality, with no significant differences between institutional and private sites or between search terms. HONcode scores indicated variable reliability and trustworthiness (median: 10; range: 3 to 16), with institutional sites excelling in financial disclosure and advertisement differentiation. Both institutional and private sites performed well in reliability and accountability, as measured by the JAMA benchmark criteria (median: 3; range: 1 to 4). Readability, averaging an 11.3 ± 3.7 grade level, did not differ significantly between site types or search terms, consistently falling short of the recommended sixth-grade level for patient educational materials.

CONCLUSIONS: Patient educational materials on leukocoria demonstrated moderate to high quality and commendable reliability and accountability. However, readability scores were above the recommended level for the layperson. [J Pediatr Ophthalmol Strabismus. 2024;61(5):332-338.].
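All seven readability formulas are implemented in the third-party Python package textstat (an assumption of convenience: the authors do not state their tooling, and automated syllable counting can differ slightly from hand scoring):

    import textstat  # pip install textstat

    text = open("leukocoria_page.txt").read()  # hypothetical saved website text

    scores = {
        "FRE":  textstat.flesch_reading_ease(text),
        "FKGL": textstat.flesch_kincaid_grade(text),
        "SMOG": textstat.smog_index(text),
        "ARI":  textstat.automated_readability_index(text),
        "LW":   textstat.linsear_write_formula(text),
        "GFI":  textstat.gunning_fog(text),
        "CLI":  textstat.coleman_liau_index(text),
    }
    for name, value in scores.items():
        print(f"{name}: {value:.1f}")  # all but FRE are US grade-level estimates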


Subjects
Comprehension, Internet, Humans, Cross-Sectional Studies, Reproducibility of Results, Patient Education as Topic/standards, Health Literacy