Evaluating the Efficacy of AI Chatbots as Tutors in Urology: A Comparative Analysis of Responses to the 2022 In-Service Assessment of the European Board of Urology.
May, Matthias; Körner-Riffard, Katharina; Kollitsch, Lisa; Burger, Maximilian; Brookman-May, Sabine D; Rauchenwald, Michael; Marszalek, Martin; Eredics, Klaus.
Affiliations
  • May M; Department of Urology, St. Elisabeth Hospital Straubing, Brothers of Mercy Hospital, Straubing, Germany.
  • Körner-Riffard K; Department of Urology, Caritas St. Josef Medical Centre, University of Regensburg, Regensburg, Germany.
  • Kollitsch L; Department of Urology and Andrology, Klinik Donaustadt, Vienna, Austria.
  • Burger M; Department of Urology, Caritas St. Josef Medical Centre, University of Regensburg, Regensburg, Germany.
  • Brookman-May SD; Department of Urology, University of Munich, LMU, Munich, Germany.
  • Rauchenwald M; Johnson and Johnson Innovative Medicine, Research and Development, Spring House, Pennsylvania, USA.
  • Marszalek M; Department of Urology and Andrology, Klinik Donaustadt, Vienna, Austria.
  • Eredics K; European Board of Urology, Arnhem, The Netherlands.
Urol Int; 108(4): 359-366, 2024.
Article in English | MEDLINE | ID: mdl-38555637
ABSTRACT

INTRODUCTION:

This study assessed the potential of large language models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics.

METHODS:

Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were examined in two testing rounds, separated by 48 h, using 100 multiple-choice questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five subtopics. Selecting the designated single best answer (SBA) among the four options was scored as "formal accuracy" (FA). Alternative answers chosen by the LLMs that were not the SBA but were still deemed correct were labeled "extended accuracy" (EA), and their capacity to raise the overall accuracy rate when combined with FA was examined.
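As a minimal sketch of this scoring scheme (not the authors' actual evaluation pipeline; all data structures and values below are invented for illustration), the FA and combined FA+EA rates for one testing round could be computed as follows:

```python
# Illustrative sketch only: field names, data structures, and values are
# assumptions, not taken from the study's evaluation pipeline.

def score_round(responses, key, acceptable_alternatives):
    """Return (fa_rate, fa_plus_ea_rate) for one testing round.

    responses: question_id -> option chosen by the LLM ("A".."D")
    key: question_id -> designated single best answer (SBA)
    acceptable_alternatives: question_id -> set of non-SBA options
        still deemed correct (the EA category)
    """
    fa = sum(1 for q, ans in responses.items() if ans == key[q])
    ea = sum(1 for q, ans in responses.items()
             if ans != key[q] and ans in acceptable_alternatives.get(q, set()))
    n = len(responses)
    return fa / n, (fa + ea) / n

# Toy example with 3 of the 100 MCQs (invented answers):
key = {"Q1": "B", "Q2": "D", "Q3": "A"}
alternatives = {"Q2": {"C"}}              # Q2 has one acceptable non-SBA option
answers = {"Q1": "B", "Q2": "C", "Q3": "D"}
print(score_round(answers, key, alternatives))  # -> (0.33..., 0.66...)
```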

RESULTS:

In the two rounds of testing, the FA scores were as follows: ChatGPT-3.5, 58% and 62%; ChatGPT-4, 63% and 77%; and Bing AI, 81% and 73%. Incorporating EA did not yield a significant enhancement in overall performance: the resulting gains for ChatGPT-3.5, ChatGPT-4, and Bing AI were 7% and 5%, 5% and 2%, and 3% and 1%, respectively (p > 0.3). Across urological subtopics, the LLMs performed best in Pediatrics/Congenital and comparatively worst in Functional/BPS/Incontinence.
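The abstract does not name the statistical test behind p > 0.3. Assuming a plain chi-square comparison of the FA and FA+EA proportions over the same 100 MCQs (one plausible reading, not the authors' confirmed analysis), the reported ChatGPT-3.5 round-1 figures reproduce a p-value in that range:

```python
# Assumption: a chi-square test on a 2x2 correct/incorrect table; the
# abstract does not state which test was actually used.
from scipy.stats import chi2_contingency

def compare_fa_vs_combined(fa_correct, ea_gain, n=100):
    """p-value for FA vs FA+EA accuracy over the same n MCQs."""
    combined = fa_correct + ea_gain
    table = [[fa_correct, n - fa_correct],   # FA scoring
             [combined, n - combined]]       # FA + EA scoring
    _, p, _, _ = chi2_contingency(table)
    return p

# ChatGPT-3.5, round 1: FA 58/100, EA gain of 7 points -> 65/100 combined.
print(f"p = {compare_fa_vs_combined(58, 7):.2f}")  # ~0.38, i.e., p > 0.3
```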

CONCLUSION:

LLMs exhibit suboptimal urology knowledge and unsatisfactory proficiency for educational purposes. Overall accuracy did not improve significantly when EA was combined with FA, and error rates remained high, ranging from 16% to 35%. Proficiency varies substantially across subtopics. Further development of medicine-specific LLMs is required before they can be integrated into urological training programs.

Full text: 1 | Database: MEDLINE | Main subject: Urology | Language: English | Publication year: 2024 | Document type: Article
