Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study.
Rouhi, Armaun D; Ghanem, Yazid K; Yolchieva, Laman; Saleh, Zena; Joshi, Hansa; Moccia, Matthew C; Suarez-Pierre, Alejandro; Han, Jason J.
Affiliations
  • Rouhi AD; Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
  • Ghanem YK; Department of Surgery, Cooper University Hospital, Camden, NJ, USA.
  • Yolchieva L; College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA.
  • Saleh Z; Department of Surgery, Cooper University Hospital, Camden, NJ, USA.
  • Joshi H; Department of Surgery, Cooper University Hospital, Camden, NJ, USA.
  • Moccia MC; Department of Surgery, Cooper University Hospital, Camden, NJ, USA.
  • Suarez-Pierre A; Department of Surgery, University of Colorado School of Medicine, Aurora, CO, USA.
  • Han JJ; Division of Cardiovascular Surgery, Department of Surgery, Perelman School of Medicine, Hospital of the University of Pennsylvania, Philadelphia, PA, USA. Jason.Han@pennmedicine.upenn.edu.
Cardiol Ther; 13(1): 137-147, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38194058
ABSTRACT

INTRODUCTION:

Generative artificial intelligence (AI) dialogue platforms built on large language models (LLMs) may support ongoing efforts to improve health literacy, and recent studies have highlighted inadequate health literacy among patients with cardiac disease. The aim of the present study was to determine whether two freely available generative AI dialogue platforms could rewrite online aortic stenosis (AS) patient education materials (PEMs) to meet the reading skill levels recommended for the general public.

METHODS:

Online PEMs were gathered from a professional cardiothoracic surgical society and academic institutions in the USA. Each PEM was then entered into two AI-powered LLMs, ChatGPT-3.5 and Bard, with the prompt "translate to 5th-grade reading level". Readability before and after AI conversion was measured with four validated scores: the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook Index (SMOGI), and Gunning-Fog Index (GFI).
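For readers who want to reproduce this kind of scoring, the sketch below computes the same four readability measures, assuming the open-source textstat Python package; the abstract does not state which implementation the authors used, and the sample sentences are illustrative only, not taken from the study's PEMs.

    # Minimal sketch (assumption): textstat is one open-source implementation of the
    # four readability formulas reported in the study; the authors' actual tooling is not stated.
    import textstat

    def readability_profile(text: str) -> dict:
        """Score one patient education material (PEM) with the study's four measures."""
        return {
            "FRE": textstat.flesch_reading_ease(text),    # 0-100 scale; higher = easier to read
            "FKGL": textstat.flesch_kincaid_grade(text),  # US school grade level
            "SMOGI": textstat.smog_index(text),           # grade level based on polysyllabic words
            "GFI": textstat.gunning_fog(text),            # grade level from sentence length and complex words
        }

    # Illustrative text only.
    original = ("Aortic stenosis is a progressive narrowing of the aortic valve orifice "
                "that obstructs left ventricular outflow and may necessitate valve replacement.")
    simplified = ("Aortic stenosis means the valve in your heart has become too narrow. "
                  "This makes it hard for blood to leave your heart. Some people need a new valve.")

    print("Before conversion:", readability_profile(original))
    print("After conversion: ", readability_profile(simplified))

Lower FRE and higher grade-level indices indicate harder text, so a successful conversion should raise FRE and lower FKGL, SMOGI, and GFI.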

RESULTS:

Overall, 21 PEMs on AS were gathered. Baseline scores indicated difficult readability, at the 10th-12th grade reading level. ChatGPT-3.5 improved readability across all four measures (p < 0.001), to approximately the 6th-7th grade reading level. Bard improved readability across all measures (p < 0.001) except SMOGI (p = 0.729), to approximately the 8th-9th grade level. Neither platform generated PEMs written below the recommended 6th-grade reading level. ChatGPT-3.5 demonstrated significantly more favorable post-conversion readability scores, percentage change in readability scores, and conversion time compared with Bard (all p < 0.001).
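The abstract reports p values but does not name the statistical test; as a purely hypothetical sketch, a paired pre- versus post-conversion comparison of per-PEM scores could be run with a Wilcoxon signed-rank test, with made-up FKGL values standing in for the study's data.

    # Hypothetical sketch: the study's statistical test is not specified in the abstract.
    # A paired comparison of pre- vs. post-conversion readability scores across PEMs
    # could use a Wilcoxon signed-rank test.
    from scipy.stats import wilcoxon

    # Made-up FKGL values, one per PEM, before and after AI conversion (not the study's data).
    fkgl_before = [11.2, 10.8, 12.1, 10.5, 11.9, 10.2, 12.4, 11.0, 10.7, 11.5]
    fkgl_after  = [6.8, 6.5, 7.1, 6.2, 7.4, 6.0, 7.3, 6.6, 6.4, 6.9]

    stat, p = wilcoxon(fkgl_before, fkgl_after)
    print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p:.4f}")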

CONCLUSION:

AI dialogue platforms can enhance the readability of PEMs for patients with AS, although the converted materials may not fully meet recommended reading skill levels. These platforms are therefore potential tools for strengthening cardiac health literacy in the future.
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: Cardiol Ther Year: 2024 Document type: Article Country of affiliation: United States

...