Still Using Only ChatGPT? The Comparison of Five Different Artificial Intelligence Chatbots' Answers to the Most Common Questions About Kidney Stones.
Sahin, Mehmet Fatih; Topkaç, Erdem Can; Dogan, Çagri; Seramet, Serkan; Özcan, Ridvan; Akgül, Murat; Yazici, Cenk Murat.
Affiliation
  • Sahin MF; Faculty of Medicine Department of Urology, Tekirdag Namik Kemal University, Tekirdag, Turkey.
  • Topkaç EC; Faculty of Medicine Department of Urology, Tekirdag Namik Kemal University, Tekirdag, Turkey.
  • Dogan Ç; Faculty of Medicine Department of Urology, Tekirdag Namik Kemal University, Tekirdag, Turkey.
  • Seramet S; Faculty of Medicine Department of Urology, Tekirdag Namik Kemal University, Tekirdag, Turkey.
  • Özcan R; Department of Urology, Bursa State Hospital, Nilufer, Turkey.
  • Akgül M; Department of Urology, Ümraniye Research and Training Hospital, Istanbul, Turkey.
  • Yazici CM; Faculty of Medicine Department of Urology, Tekirdag Namik Kemal University, Tekirdag, Turkey.
J Endourol; 2024 Sep 06.
Article in En | MEDLINE | ID: mdl-39212674
ABSTRACT

Objective:

To evaluate and compare the quality and comprehensibility of answers produced by five distinct artificial intelligence (AI) chatbots—GPT-4, Claude, Mistral, Google PaLM, and Grok—in response to the most frequently searched questions about kidney stones (KS).

Materials and Methods:

Google Trends facilitated the identification of pertinent terms related to KS. Each AI chatbot was provided with a unique sequence of 25 commonly searched phrases as input. The responses were assessed using DISCERN, the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P), the Flesch-Kincaid Grade Level (FKGL), and the Flesch-Kincaid Reading Ease (FKRE) criteria.
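The two Flesch-Kincaid metrics used above are defined by standard formulas over words-per-sentence and syllables-per-word. A minimal sketch of both, assuming a simple vowel-group heuristic for syllable counting (the study's exact tooling is not stated):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, drop a trailing silent "e",
    # and guarantee at least one syllable per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (FKRE, FKGL) for a block of English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)                      # words per sentence
    spw = sum(count_syllables(w) for w in words) / len(words)  # syllables per word
    fkre = 206.835 - 1.015 * wps - 84.6 * spw  # Reading Ease: higher = easier
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Grade Level: US school grade
    return fkre, fkgl
```

Under these formulas, Grok's FKRE of 55.6 corresponds to "fairly difficult" prose, and its FKGL of 10.0 to roughly a tenth-grade reading level.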

Results:

The three most frequently searched terms were "stone in kidney," "kidney stone pain," and "kidney pain." Nepal, India, and Trinidad and Tobago were the countries with the highest search volume for KS. None of the AI chatbots attained the requisite level of comprehensibility. Grok demonstrated the highest FKRE (55.6 ± 7.1) and lowest FKGL (10.0 ± 1.1) ratings (p = 0.001), whereas Claude outperformed the other chatbots in its DISCERN scores (47.6 ± 1.2) (p = 0.001). PEMAT-P understandability was the lowest in GPT-4 (53.2 ± 2.0), and actionability was the highest in Claude (61.8 ± 3.5) (p = 0.001).

Conclusion:

GPT-4 had the most complex language structure of the five chatbots, making its answers the most difficult to read and comprehend, whereas Grok's were the simplest. Claude produced the highest-quality KS text. Chatbot technology can improve healthcare material and make it easier to grasp.

Full text: 1 | Database: MEDLINE | Language: En | Publication year: 2024 | Document type: Article