Quality of information about urologic pathology in English and Spanish from ChatGPT, BARD, and Copilot.
Szczesniewski, J J; Ramos Alba, A; Rodríguez Castro, P M; Lorenzo Gómez, M F; Sainz González, J; Llanes González, L.
Affiliation
  • Szczesniewski JJ; Servicio de Urología, Hospital Universitario de Getafe, Getafe, Madrid, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain. Electronic address: Juliusz.szcz@gmail.com.
  • Ramos Alba A; DXC Technology, Las Rozas, Madrid, Spain; Departamento de Economía Aplicada I e Historia e Instituciones Económicas, Universidad Rey Juan Carlos, Madrid, Spain.
  • Rodríguez Castro PM; Servicio de Urología, Hospital Universitario de Getafe, Getafe, Madrid, Spain.
  • Lorenzo Gómez MF; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain; Servicio de Urología, Hospital Universitario de Salamanca, Salamanca, Spain.
  • Sainz González J; Departamento de Economía Aplicada I e Historia e Instituciones Económicas, Universidad Rey Juan Carlos, Madrid, Spain.
  • Llanes González L; Servicio de Urología, Hospital Universitario de Getafe, Getafe, Madrid, Spain; Universidad Francisco de Vitoria, Madrid, Spain.
Actas Urol Esp (Engl Ed) ; 48(5): 398-403, 2024 Jun.
Article in En, Es | MEDLINE | ID: mdl-38373482
ABSTRACT
INTRODUCTION AND OBJECTIVE:

Generative artificial intelligence makes it possible to ask about medical pathologies in dialog boxes. Our objective was to analyze the quality of information about the most common urological pathologies provided by ChatGPT (OpenAI), BARD (Google), and Copilot (Microsoft).

METHODS:

We analyzed the information provided by the AI chatbots on the following pathologies and their treatments: prostate cancer, kidney cancer, bladder cancer, urinary lithiasis, and benign prostatic hypertrophy (BPH). Questions in English and Spanish were posed in the dialog boxes; the answers were collected and analyzed with the DISCERN questionnaire and rated for overall appropriateness. Answers about surgical procedures were assessed with an informed consent questionnaire.

RESULTS:

The responses from the three chatbots explained the pathology, detailed risk factors, and described treatments. The difference was that BARD and Copilot provide citations to external sources, whereas ChatGPT does not. Copilot obtained the highest DISCERN scores in absolute numbers; however, on the appropriateness scale, its responses were not rated the most appropriate. For surgical treatments, the best scores were obtained by BARD, followed by ChatGPT, and finally Copilot.

CONCLUSIONS:

The answers obtained from generative AI on urological diseases depended on the formulation of the question. The information provided had significant biases, depending on pathology, language, and above all, the dialog box consulted.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Urologic Diseases / Language Limits: Humans Language: En / Es Journal: Actas Urol Esp (Engl Ed) Year: 2024 Document type: Article