1.
Clin Pract ; 14(4): 1507-1514, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39194925

ABSTRACT

Background: Inferior Vena Cava (IVC) filters have become an advantageous treatment modality for patients with venous thromboembolism. As the use of these filters continues to grow, it is imperative for providers to educate patients in a comprehensive yet understandable manner. Likewise, generative artificial intelligence models are a growing tool in patient education, but little is known about the readability of the IVC filter materials these tools produce. Methods: This study aimed to determine the Flesch Reading Ease (FRE), Flesch-Kincaid, and Gunning Fog readability of IVC filter patient educational materials generated by these artificial intelligence models. Results: The ChatGPT cohort had the highest mean Gunning Fog score at 17.76 ± 1.62, while the Copilot cohort had the lowest at 11.58 ± 1.55. The difference between groups in Flesch Reading Ease scores (p = 8.70408 × 10⁻⁸) was statistically significant, albeit with a priori power found to be low at 0.392. Conclusions: The results of this study indicate that the answers generated by Microsoft Copilot offer a greater degree of readability than those generated by ChatGPT regarding IVC filters. Nevertheless, the mean Flesch-Kincaid readability for both cohorts does not meet the recommended U.S. grade reading levels.
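The three readability indices named in the abstract are defined by standard published formulas over sentence length, word length, and syllable counts. A minimal sketch of how such scores are computed is shown below; the syllable counter is a simple vowel-group heuristic of my own, not the specific tooling the study used, so exact scores will differ from dedicated readability software.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discount a silent trailing 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    """Compute FRE, Flesch-Kincaid grade, and Gunning Fog for a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Gunning Fog counts "complex" words: three or more syllables.
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "gunning_fog": 0.4 * (wps + 100 * complex_words / len(words)),
    }
```

Higher FRE means easier text, while the Flesch-Kincaid and Gunning Fog scores approximate the U.S. school grade needed to understand the passage, which is why a Fog score near 17 (as reported for the ChatGPT cohort) corresponds to graduate-level reading.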

2.
Kans J Med ; 16: 309-315, 2023.
Article in English | MEDLINE | ID: mdl-38298385

ABSTRACT

Introduction: Internet-based resources are increasingly utilized as a first line of medical knowledge. Among patients with cardiovascular disease, these resources are often relied upon for information on numerous diagnostic and therapeutic modalities. However, the reliability of this information is not fully understood. The aim of this study was to provide a descriptive profile of the literacy quality, readability, and transparency of publicly available educational resources in cardiology. Methods: The frequently asked questions and associated online educational articles on common cardiovascular diagnostic and therapeutic interventions were investigated using publicly available data from the Google RankBrain machine learning algorithm after applying inclusion and exclusion criteria. Independent raters evaluated questions for Rothwell's Classification and performed readability calculations. Results: Collectively, 520 questions and articles were evaluated across 13 cardiac interventions, resulting in 3,120 readability scores. The articles were most frequently sourced from academic institutions, followed by commercial sources. Most questions were classified as "Fact" at 76.0% (n = 395), and questions regarding the "Technical Details" of each intervention were the most common subclassification at 56.3% (n = 293). Conclusions: Our data show that patients most often use online search query programs to seek specific knowledge of each cardiovascular intervention rather than to form an evaluation of the intervention. Additionally, these online patient educational resources continue to fall short of grade-level reading recommendations.
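The reported percentages follow directly from the counts given in the abstract: 395 of 520 questions classified as "Fact" and 293 of 520 in the "Technical Details" subclassification. A quick arithmetic check (the variable names are mine, for illustration only):

```python
# Counts taken from the abstract; percentages rounded to one decimal place.
total_questions = 520
fact_questions = 395       # "Fact" under Rothwell's Classification
technical_detail = 293     # "Technical Details" subclassification

fact_pct = fact_questions / total_questions
tech_pct = technical_detail / total_questions
print(f"Fact: {fact_pct:.1%}")               # 76.0%
print(f"Technical Details: {tech_pct:.1%}")  # 56.3%
```

The 3,120 readability scores are likewise consistent with 520 items each scored six times (e.g., multiple readability indices per item).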
