Accuracy and consistency of chatbots versus clinicians for answering pediatric dentistry questions: A pilot study.
Rokhshad, Rata; Zhang, Ping; Mohammad-Rahimi, Hossein; Pitchika, Vinay; Entezari, Niloufar; Schwendicke, Falk.
Affiliations
  • Rokhshad R; Department of Pediatric Dentistry, University of Alabama at Birmingham, Birmingham, AL, USA. Electronic address: Ratarokhshad@gmail.com.
  • Zhang P; Department of Pediatric Dentistry, University of Alabama at Birmingham, Birmingham, AL, USA.
  • Mohammad-Rahimi H; Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany.
  • Pitchika V; Department of Conservative Dentistry and Periodontology, LMU Klinikum Munich, Germany.
  • Entezari N; Department of Pediatric Dentistry, School of Dentistry, Qom University of Medical Sciences, Qom, Iran.
  • Schwendicke F; Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany; Department of Conservative Dentistry and Periodontology, LMU Klinikum Munich, Germany.
J Dent. 2024 May;144:104938.
Article in English | MEDLINE | ID: mdl-38499280
ABSTRACT

OBJECTIVES:

Artificial intelligence applications include large language models (LLMs), which simulate human-like conversation. The potential of LLMs in healthcare has not been fully evaluated. This pilot study assessed the accuracy and consistency of chatbots and clinicians in answering common questions in pediatric dentistry.

METHODS:

Two expert pediatric dentists developed thirty true-or-false questions covering different aspects of pediatric dentistry. Publicly accessible chatbots (Google Bard, ChatGPT-4, ChatGPT-3.5, Llama, Sage, Claude 2 100k, Claude-instant, Claude-instant-100k, and Google Palm) were employed to answer the questions (three independent new conversations each). Three groups of clinicians (general dentists, pediatric specialists, and students; n = 20/group) also answered. Responses were graded by two pediatric dentistry faculty members, along with a third independent pediatric dentist. Resulting accuracies (percentage of correct responses) were compared using analysis of variance (ANOVA), and post-hoc pairwise group comparisons were corrected using Tukey's HSD method. Cronbach's alpha was calculated to determine consistency.
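The statistical workflow described above (one-way ANOVA across groups, Tukey's HSD post-hoc comparisons, and Cronbach's alpha for consistency) can be sketched as follows. The group sizes and score distributions below are synthetic stand-ins chosen to resemble the reported means and SDs, not the study's data.

```python
# Hedged sketch of the reported analysis: one-way ANOVA, Tukey's HSD,
# and Cronbach's alpha. All data here are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant accuracy scores (%) for three clinician groups.
pediatric = rng.normal(96.7, 4.3, 20)
general = rng.normal(88.0, 6.1, 20)
students = rng.normal(80.8, 6.9, 20)

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(pediatric, general, students)

# Tukey's HSD for corrected pairwise group comparisons (SciPy >= 1.8).
posthoc = stats.tukey_hsd(pediatric, general, students)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Consistency across three repeated chatbot conversations: synthetic 0/1
# grades for 30 questions, repeated identically (perfect consistency).
runs = rng.integers(0, 2, size=(30, 1)).repeat(3, axis=1).astype(float)
print(f"ANOVA p = {p_value:.2g}, alpha = {cronbach_alpha(runs):.2f}")
```

With group means this far apart relative to their SDs, the ANOVA p-value falls well below 0.001, and identical repeated runs yield an alpha of 1.0; the study's threshold of alpha > 0.7 marks acceptable consistency.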

RESULTS:

Pediatric dentists were significantly more accurate (mean ± SD: 96.67 % ± 4.3 %) than other clinicians and chatbots (p < 0.001). General dentists (88.0 % ± 6.1 %) also demonstrated significantly higher accuracy than chatbots (p < 0.001), followed by students (80.8 % ± 6.9 %). ChatGPT showed the highest accuracy (78 % ± 3 %) among chatbots. All chatbots except ChatGPT-3.5 showed acceptable consistency (Cronbach's alpha > 0.7).

CLINICAL SIGNIFICANCE:

Based on this pilot study, chatbots may be valuable adjuncts for educational purposes and for distributing information to patients. However, they are not yet ready to serve as substitutes for human clinicians in diagnostic decision-making.

CONCLUSION:

In this pilot study, chatbots showed lower accuracy than dentists. Chatbots may not yet be recommended for clinical pediatric dentistry.

Full text: 1 Database: MEDLINE Main subject: Pediatric Dentistry / Dentists Language: En Publication year: 2024 Document type: Article