Bridging the Gap Between Urological Research and Patient Understanding: The Role of Large Language Models in Automated Generation of Layperson's Summaries.
Eppler, Michael B; Ganjavi, Conner; Knudsen, J Everett; Davis, Ryan J; Ayo-Ajibola, Oluwatobiloba; Desai, Aditya; Storino Ramacciotti, Lorenzo; Chen, Andrew; De Castro Abreu, Andre; Desai, Mihir M; Gill, Inderbir S; Cacciamani, Giovanni E.
Affiliations
  • Eppler MB; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Ganjavi C; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Knudsen JE; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Davis RJ; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Ayo-Ajibola O; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Desai A; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Storino Ramacciotti L; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Chen A; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • De Castro Abreu A; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Desai MM; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Gill IS; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Cacciamani GE; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
Urol Pract; 10(5): 436-443, Sep 2023.
Article in English | MEDLINE | ID: mdl-37410015
ABSTRACT

INTRODUCTION:

This study assessed ChatGPT's ability to generate readable, accurate, and clear layperson summaries of urological studies, and compared the ChatGPT-generated summaries with the original abstracts and author-written patient summaries to determine whether the model offers an effective way to make the medical literature accessible to the public.

METHODS:

Articles from the top 5 ranked urology journals were selected. A ChatGPT prompt was developed following guidelines to maximize readability, accuracy, and clarity while minimizing variability. Readability scores and grade-level indicators were calculated for the ChatGPT summaries, original abstracts, and author-written patient summaries. Two physicians independently rated the accuracy and clarity of the ChatGPT-generated layperson summaries. Statistical analyses compared the readability scores, and Cohen's κ coefficient was used to assess interrater reliability of the correctness and clarity evaluations.
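
For illustration only (this is not the authors' actual workflow), the readability indices and interrater agreement described above could be computed in Python with the textstat and scikit-learn packages; the summary text and reviewer ratings in this sketch are hypothetical placeholders.

  # Hedged sketch: compute the readability indices and Cohen's kappa used in this study.
  # Requires: pip install textstat scikit-learn
  import textstat
  from sklearn.metrics import cohen_kappa_score

  summary = "Placeholder layperson summary text generated by ChatGPT."  # hypothetical input

  scores = {
      "Flesch Reading Ease": textstat.flesch_reading_ease(summary),
      "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(summary),
      "Gunning Fog Score": textstat.gunning_fog(summary),
      "SMOG Index": textstat.smog_index(summary),
      "Coleman-Liau Index": textstat.coleman_liau_index(summary),
      "Automated Readability Index": textstat.automated_readability_index(summary),
  }
  for name, value in scores.items():
      print(f"{name}: {value:.1f}")

  # Interrater reliability: two physicians' correctness ratings (1 = correct, 0 = incorrect)
  # for the same set of summaries (hypothetical example data).
  rater_a = [1, 1, 0, 1, 1, 0, 1, 1]
  rater_b = [1, 1, 0, 1, 0, 0, 1, 1]
  print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))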

RESULTS:

A total of 256 journal articles were included. The ChatGPT-generated summaries were produced in an average of 17.5 (SD 15.0) seconds. The readability scores (mean [SD]) of the ChatGPT-generated summaries were significantly better than those of the original abstracts: Global Readability Score 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Reading Ease 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Grade Level 10.4 (2.2) vs 13.5 (4.0), Gunning Fog Score 12.9 (2.6) vs 16.6 (4.1), SMOG Index 9.1 (2.0) vs 12.0 (3.0), Coleman-Liau Index 12.9 (2.1) vs 14.9 (3.7), and Automated Readability Index 11.1 (2.5) vs 12.0 (5.7); P < .0001 for all except the Automated Readability Index (P = .037). The correctness rate of the ChatGPT outputs exceeded 85% across all categories assessed, with interrater agreement (Cohen's κ) between the 2 independent physician reviewers ranging from 0.76 to 0.95.
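
For reference, the Flesch Reading Ease, Flesch-Kincaid Grade Level, and Cohen's κ reported above follow standard published formulas (a notational reminder, not taken from this article):

  $\mathrm{FRE} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)$

  $\mathrm{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59$

  $\kappa = \frac{p_o - p_e}{1 - p_e}$, where $p_o$ is the observed agreement between the two raters and $p_e$ is the agreement expected by chance.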

CONCLUSIONS:

ChatGPT can create accurate summaries of scientific abstracts for patients, and well-crafted prompts enhance their user-friendliness. Although the summaries are satisfactory, expert verification remains necessary to ensure accuracy.

Full text: 1 Database: MEDLINE Main subject: Urology / Health Literacy Language: English Year of publication: 2023 Document type: Article