Can ChatGPT provide high-quality patient information on male lower urinary tract symptoms suggestive of benign prostate enlargement?
Puerto Nino, Angie K; Garcia Perez, Valentina; Secco, Silvia; De Nunzio, Cosimo; Lombardo, Riccardo; Tikkinen, Kari A O; Elterman, Dean S.
Affiliation
  • Puerto Nino AK; Faculty of Medicine, University of Helsinki, Helsinki, Finland; Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada. angie.puerto-nino@helsinki.fi.
  • Garcia Perez V; Faculty of Medicine, University of the Andes, Bogota, Colombia.
  • Secco S; Department of Urology, Niguarda Hospital, Milan, Italy.
  • De Nunzio C; Urology Unit, Ospedale Sant'Andrea, La Sapienza University of Rome, Rome, Italy.
  • Lombardo R; Urology Unit, Ospedale Sant'Andrea, La Sapienza University of Rome, Rome, Italy.
  • Tikkinen KAO; Faculty of Medicine, University of Helsinki, Helsinki, Finland.
  • Elterman DS; Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada.
Article in En | MEDLINE | ID: mdl-38871841
ABSTRACT

BACKGROUND:

ChatGPT has recently emerged as a novel resource for patients' disease-specific inquiries. There is, however, limited evidence assessing the quality of the information it provides. We evaluated the accuracy and quality of ChatGPT's responses on male lower urinary tract symptoms (LUTS) suggestive of benign prostate enlargement (BPE) against two reference resources.

METHODS:

Using patient information websites from the European Association of Urology and the American Urological Association as reference material, we formulated 88 BPE-centric questions for ChatGPT 4.0+. Independently and in duplicate, we compared ChatGPT's responses with the reference material, calculating accuracy through F1 score, precision, and recall metrics. We rated quality on a 5-point Likert scale. We evaluated examiner agreement using the intraclass correlation coefficient (ICC) and assessed the difference in quality scores with the Wilcoxon signed-rank test.
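For readers who want a concrete sense of these metrics, the Python sketch below (not the authors' code; the statement counts and Likert ratings are illustrative placeholders) shows how per-question precision, recall, and F1 could be derived from statement-level true/false positive counts, and how a Wilcoxon signed-rank test could compare two examiners' quality ratings. The ICC would typically be obtained from a standard statistics package.

```python
# Minimal sketch of the accuracy and rater statistics described above.
# All counts and ratings below are hypothetical placeholders.
from scipy.stats import wilcoxon

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from statement-level counts for one question."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical statement counts for three questions: (true pos, false pos, false neg)
counts = [(6, 2, 0), (4, 3, 1), (5, 1, 0)]
per_question = [precision_recall_f1(*c) for c in counts]
mean_precision = sum(p for p, _, _ in per_question) / len(per_question)
mean_recall = sum(r for _, r, _ in per_question) / len(per_question)
mean_f1 = sum(f for _, _, f in per_question) / len(per_question)
print(f"precision={mean_precision:.2f} recall={mean_recall:.2f} F1={mean_f1:.2f}")

# Hypothetical 5-point Likert quality ratings from two independent examiners
rater_a = [4, 5, 3, 4, 4, 5, 2, 4]
rater_b = [4, 4, 3, 5, 4, 5, 3, 4]

# Wilcoxon signed-rank test for a systematic difference between the raters' scores
stat, p_value = wilcoxon(rater_a, rater_b)
print(f"Wilcoxon statistic={stat:.2f}, p={p_value:.2f}")
```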

RESULTS:

ChatGPT addressed all (88/88) LUTS/BPE-related questions. Across the 88 questions, the F1 score was 0.79 (range 0-1), precision 0.66 (range 0-1), and recall 0.97 (range 0-1); the median quality score was 4 (range 1-5). Examiners showed good agreement (ICC = 0.86), and the overall quality scores assigned by the examiners did not differ significantly (p = 0.72).

DISCUSSION:

ChatGPT demonstrated potential utility in educating patients about LUTS/BPE, its prognosis, and its treatment, supporting shared decision-making. Prudence is warranted, however, before recommending it as a patient's sole source of information. Additional studies are needed to fully understand the extent of AI's efficacy in delivering patient education in urology.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Prostate Cancer Prostatic Dis Journal subject: ENDOCRINOLOGIA / NEOPLASIAS / UROLOGIA Year: 2024 Document type: Article Affiliation country: Finland Country of publication: United Kingdom