Evaluating the Effectiveness of Artificial Intelligence-powered Large Language Models Application in Disseminating Appropriate and Readable Health Information in Urology.
Davis, Ryan; Eppler, Michael; Ayo-Ajibola, Oluwatobiloba; Loh-Doyle, Jeffrey C; Nabhani, Jamal; Samplaski, Mary; Gill, Inderbir; Cacciamani, Giovanni E.
Affiliation
  • Davis R; USC Institute of Urology, and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Eppler M; AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Ayo-Ajibola O; USC Institute of Urology, and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Loh-Doyle JC; AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Nabhani J; USC Institute of Urology, and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Samplaski M; AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
  • Gill I; USC Institute of Urology, and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
  • Cacciamani GE; USC Institute of Urology, and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California.
J Urol; 210(4): 688-694, 2023 Oct.
Article in En | MEDLINE | ID: mdl-37428117
ABSTRACT

PURPOSE:

The Internet is a ubiquitous source of medical information, and natural language processors are gaining popularity as alternatives to traditional search engines. However, the suitability of their generated content for patients is not well understood. We aimed to evaluate the appropriateness and readability of natural language processor-generated responses to urology-related medical inquiries.

MATERIALS AND METHODS:

Eighteen patient questions were developed based on Google Trends and were used as inputs in ChatGPT. Three categories were assessed: oncologic, benign, and emergency. Questions in each category were either treatment- or sign/symptom-related. Three native English-speaking, board-certified urologists independently assessed the appropriateness of ChatGPT outputs for patient counseling, using accuracy, comprehensiveness, and clarity as proxies for appropriateness. Readability was assessed using the Flesch Reading Ease and Flesch-Kincaid Reading Grade Level formulas. Additional measures were created based on validated tools and assessed by 3 independent reviewers.
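The two readability measures named above are standard published formulas that depend only on word, sentence, and syllable counts. As a minimal sketch (not the authors' actual scoring pipeline, which is not described in the abstract), they can be computed as follows; the function names are illustrative:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.

    Scores of 30-50 (as reported in this study) correspond to
    college-level difficulty.
    """
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade
    required to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Example: a passage of 100 words in 5 sentences with 150 syllables.
ease = flesch_reading_ease(100, 5, 150)
grade = flesch_kincaid_grade(100, 5, 150)
```

In practice, syllable counting is the error-prone step and is usually delegated to a library such as `textstat`; the formulas themselves are fixed. Note that the study's mean grade level of 13.5 far exceeds the roughly sixth-grade level commonly recommended for patient-facing health materials.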

RESULTS:

Fourteen of 18 (77.8%) responses were deemed appropriate, with clarity receiving the most scores of 4 and 5 (P = .01). There was no significant difference in the appropriateness of responses between treatments and symptoms, or between the different categories of conditions. The most common reason urologists gave for low scores was that responses lacked information, sometimes vital information. The mean (SD) Flesch Reading Ease score was 35.5 (10.2), and the mean (SD) Flesch-Kincaid Reading Grade Level score was 13.5 (1.74). Additional quality assessment scores showed no significant differences between the different categories of conditions.

CONCLUSIONS:

Despite impressive capabilities, natural language processors have limitations as sources of medical information. Refinement is crucial before adoption for this purpose.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Urology / Health Literacy Study type: Prognostic_studies Limits: Humans Language: En Publication year: 2023 Document type: Article
