Evaluation of Generative Language Models in Personalizing Medical Information: Instrument Validation Study.
Spina, Aidin; Andalib, Saman; Flores, Daniel; Vermani, Rishi; Halaseh, Faris F; Nelson, Ariana M.
Affiliation
  • Spina A; School of Medicine, University of California, Irvine, Irvine, CA, United States.
  • Andalib S; School of Medicine, University of California, Irvine, Irvine, CA, United States.
  • Flores D; School of Medicine, University of California, Irvine, Irvine, CA, United States.
  • Vermani R; School of Medicine, University of California, Irvine, Irvine, CA, United States.
  • Halaseh FF; School of Medicine, University of California, Irvine, Irvine, CA, United States.
  • Nelson AM; School of Medicine, University of California, Irvine, Irvine, CA, United States.
JMIR AI; 3: e54371, 2024 Aug 13.
Article in En | MEDLINE | ID: mdl-39137416
ABSTRACT

BACKGROUND:

Although uncertainties remain regarding implementation, artificial intelligence-driven generative language models (GLMs) have enormous potential in medicine. Deploying GLMs could improve patient comprehension of clinical texts and help address low health literacy.

OBJECTIVE:

The goal of this study is to evaluate the ability of GPT-3.5 and GPT-4 to tailor the complexity of medical information to a patient-specified education level, a capability that is crucial if these models are to serve as tools for addressing low health literacy.

METHODS:

Input templates were designed for 2 prevalent chronic diseases, type II diabetes and hypertension. Each clinical vignette was adjusted for hypothetical patient education levels to evaluate output personalization. To assess the success of each GLM (GPT-3.5 and GPT-4) in tailoring output writing, the readability of pre- and posttransformation outputs was quantified using the Flesch-Kincaid reading ease (FKRE) score and the Flesch-Kincaid grade level (FKGL).
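
For reference, FKRE and FKGL are both computed from average sentence length and average syllables per word. The following is a minimal Python sketch of the two standard formulas, not the study's actual scoring code (the abstract does not name an implementation); the count_syllables helper is a hypothetical vowel-group heuristic, whereas exact counting would normally rely on a pronunciation dictionary.

import re

def count_syllables(word: str) -> int:
    """Approximate syllables by counting vowel groups (heuristic)."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:  # drop a likely silent trailing 'e'
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FKRE, FKGL) for a block of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence")
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)  # mean words per sentence
    spw = syllables / len(words)       # mean syllables per word
    fkre = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch reading ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid grade level
    return fkre, fkgl

if __name__ == "__main__":
    sample = ("High blood pressure makes the heart work harder. "
              "Medicine and a healthy diet can help.")
    fkre, fkgl = readability(sample)
    print(f"FKRE: {fkre:.1f}, FKGL: {fkgl:.1f}")

Scoring a pre- and posttransformation output pair with a function like this yields the per-level FKRE and FKGL means the study compares; higher FKRE indicates easier text, while higher FKGL indicates a higher US school grade level.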

RESULTS:

Responses (n=80) were generated using GPT-3.5 and GPT-4 across 2 clinical vignettes. For GPT-3.5, mean FKRE scores were 57.75 (SD 4.75), 51.28 (SD 5.14), 32.28 (SD 4.52), and 28.31 (SD 5.22) for 6th grade, 8th grade, high school, and bachelor's degree, respectively; mean FKGL scores were 9.08 (SD 0.90), 10.27 (SD 1.06), 13.4 (SD 0.80), and 13.74 (SD 1.18). GPT-3.5 aligned with the prespecified education level only at the bachelor's degree. Conversely, GPT-4's mean FKRE scores were 74.54 (SD 2.6), 71.25 (SD 4.96), 47.61 (SD 6.13), and 13.71 (SD 5.77), with mean FKGL scores of 6.3 (SD 0.73), 6.7 (SD 1.11), 11.09 (SD 1.26), and 17.03 (SD 1.11) for the same respective education levels. GPT-4 met the target readability for all groups except the 6th-grade FKRE average. Both GLMs produced outputs with statistically significant differences in mean FKRE and FKGL across input education levels (FKRE: 6th grade P<.001; 8th grade P<.001; high school P<.001; bachelor's P=.003; FKGL: 6th grade P=.001; 8th grade P<.001; high school P<.001; bachelor's P<.001).

CONCLUSIONS:

GLMs can change the structure and readability of medical text outputs according to the input-specified education level. However, the GLMs collapsed the input education designations into 3 broad tiers of output readability: easy (6th and 8th grade), medium (high school), and difficult (bachelor's degree). This is the first result to suggest that GLMs simplify output text only within such broad boundaries rather than at finer-grained levels. Future research must establish how GLMs can reliably personalize medical texts to prespecified education levels to enable a broader impact on health care literacy.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: JMIR AI Year: 2024 Document type: Article Affiliation country: United States Country of publication: Canada