Evaluation of the Current Status of Artificial Intelligence for Endourology Patient Education: A Blind Comparison of ChatGPT and Google Bard Against Traditional Information Resources.
Connors, Christopher; Gupta, Kavita; Khusid, Johnathan A; Khargi, Raymond; Yaghoubian, Alan J; Levy, Micah; Gallante, Blair; Atallah, William; Gupta, Mantu.
Affiliation
  • Connors C; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Gupta K; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Khusid JA; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Khargi R; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Yaghoubian AJ; Department of Urology, David Geffen School of Medicine at University of California, Los Angeles, California, USA.
  • Levy M; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Gallante B; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Atallah W; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
  • Gupta M; Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
J Endourol; 38(8): 843-851, 2024 Aug.
Article in En | MEDLINE | ID: mdl-38441078
ABSTRACT

Introduction:

Artificial intelligence (AI) platforms such as ChatGPT and Bard are increasingly utilized to answer patient health care questions. We present the first study to blindly evaluate AI-generated responses to common endourology patient questions against official patient education materials.

Methods:

Thirty-two questions and answers spanning kidney stones, ureteral stents, benign prostatic hyperplasia (BPH), and upper tract urothelial carcinoma were extracted from official Urology Care Foundation (UCF) patient education documents. The same questions were input into ChatGPT 4.0 and Bard, limiting responses to within ±10% of the word count of the corresponding UCF response to ensure fair comparison. Six endourologists blindly evaluated responses from each platform using Likert scales for accuracy, clarity, comprehensiveness, and patient utility. Reviewers identified which response they believed was not AI generated. Finally, Flesch-Kincaid Reading Grade Level formulas assessed the readability of each platform response. Ratings were compared using analysis of variance (ANOVA) and chi-square tests.

Results:

ChatGPT responses were rated the highest across all categories, including accuracy, comprehensiveness, clarity, and patient utility, while UCF answers were consistently scored the lowest, all p < 0.01. A subanalysis revealed that this trend was consistent across question categories (i.e., kidney stones, BPH, etc.). However, AI-generated responses were more likely to be classified at an advanced reading level, while UCF responses showed improved readability (college or higher reading level ChatGPT = 100%, Bard = 66%, and UCF = 19%), p < 0.001. When asked to identify which answer was not AI generated, 54.2% of responses indicated ChatGPT, 26.6% indicated Bard, and only 19.3% correctly identified it as the UCF response.

Conclusions:

In a blind evaluation, AI-generated responses from ChatGPT and Bard surpassed the quality of official patient education materials in endourology, suggesting that current AI platforms are already a reliable resource for basic urologic care information. AI-generated responses do, however, tend to require a higher reading level, which may limit their applicability to a broader audience.

Full text: 1 Database: MEDLINE Main subject: Urology / Artificial Intelligence / Patient Education as Topic Limits: Humans Language: En Publication year: 2024 Document type: Article