Evaluating the Performance of ChatGPT in Urology: A Comparative Study of Knowledge Interpretation and Patient Guidance.
Sahin, Bahadir; Emre Genç, Yunus; Dogan, Kader; Emre Sener, Tarik; Sekerci, Çagri Akin; Tanidir, Yilören; Yücel, Selçuk; Tarcan, Tufan; Çam, Haydar Kamil.
Affiliation
  • Sahin B; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Emre Genç Y; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Dogan K; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Emre Sener T; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Sekerci ÇA; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Tanidir Y; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Yücel S; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Tarcan T; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
  • Çam HK; Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
J Endourol; 38(8): 799-808, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38815140
ABSTRACT
Background/Aim:

To evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT), a large language model developed by OpenAI, in the field of urology.

Materials and Methods:

This study consisted of three main steps to evaluate the effectiveness of ChatGPT in the urologic field. The first step involved 35 questions prepared by our institution's experts, each with at least 10 years of experience in their field. The responses of the ChatGPT versions were qualitatively compared with the responses of urology residents to the same questions. The second step assessed the reliability of the ChatGPT versions in answering current debate topics. The third step assessed their reliability in providing medical recommendations and directives in response to questions commonly asked by patients in the outpatient and inpatient clinics.
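For illustration, a minimal sketch of how the question-answering portion of the first step could be automated against the OpenAI API is given below. The model identifiers, the question file, and the prompt wording are assumptions for the example only; the study itself posed the questions through the ChatGPT interface, and correctness was graded manually by the experts.

```python
# A minimal sketch, assuming a plain-text file with one exam question per line
# and the OpenAI Python client (pip install openai). Model names and file path
# are illustrative stand-ins for "version 3.5" and "version 4".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODELS = ["gpt-3.5-turbo", "gpt-4"]

def ask(model: str, question: str) -> str:
    """Send a single urology question to the given model and return its answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are answering a urology board-style question."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

with open("urology_questions.txt") as f:  # hypothetical 35-item question set
    questions = [line.strip() for line in f if line.strip()]

# Collect each model's answers; correctness is still judged by human experts.
answers = {model: [ask(model, q) for q in questions] for model in MODELS}
```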

Results:

In the first step, version 4 answered 25 of the 35 questions correctly, while version 3.5 answered only 19 (71.4% vs 54.3%). Final-year residents in our clinic likewise provided a mean of 25 correct answers, and fourth-year residents a mean of 19.3. The second step evaluated both versions' responses to debate situations in urology, and both versions produced variable and inappropriate results. In the last step, both versions had a similar success rate in providing recommendations and guidance to patients, based on expert ratings.
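As a quick arithmetic check, the reported rates follow directly from the raw counts in the abstract (19/35 rounds to 54.3%, not the 54% sometimes quoted):

```python
# Recompute the step-1 success rates from the raw counts reported above.
correct = {"version 4": 25, "version 3.5": 19}
total = 35
for model, n in correct.items():
    print(f"{model}: {n}/{total} = {100 * n / total:.1f}%")
# version 4: 25/35 = 71.4%
# version 3.5: 19/35 = 54.3%
```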

Conclusion:

The difference between the two versions on the 35 questions in the first step of the study was attributed to improvements in ChatGPT's literature and data synthesis abilities. Using ChatGPT versions to give non-health care providers quick and safe answers to their questions may be a reasonable approach, but they should not be used as a diagnostic tool or to choose among different treatment modalities.

Full text: 1 | Databases: MEDLINE | Main subject: Urology | Limits: Humans | Language: English | Journal: J Endourol | Journal subject: UROLOGY | Year: 2024 | Document type: Article | Affiliation country: Turkey
