Performance evaluation of ChatGPT in detecting diagnostic errors and their contributing factors: an analysis of 545 case reports of diagnostic errors.
Harada, Yukinori; Suzuki, Tomoharu; Harada, Taku; Sakamoto, Tetsu; Ishizuka, Kosuke; Miyagami, Taiju; Kawamura, Ren; Kunitomo, Kotaro; Nagano, Hiroyuki; Shimizu, Taro; Watari, Takashi.
Affiliations
  • Harada Y; Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga-gun, Tochigi, Japan yuki.gym23@gmail.com.
  • Suzuki T; Urasoe General Hospital, Urasoe, Okinawa, Japan.
  • Harada T; Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga-gun, Tochigi, Japan.
  • Sakamoto T; Nerima Hikarigaoka Hospital, Nerima-ku, Tokyo, Japan.
  • Ishizuka K; Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga-gun, Tochigi, Japan.
  • Miyagami T; Yokohama City University School of Medicine Graduate School of Medicine, Yokohama, Kanagawa, Japan.
  • Kawamura R; Department of General Medicine, Faculty of Medicine, Juntendo University, Bunkyo-ku, Tokyo, Japan.
  • Kunitomo K; Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga-gun, Tochigi, Japan.
  • Nagano H; NHO Kumamoto Medical Center, Kumamoto, Kumamoto, Japan.
  • Shimizu T; Department of General Internal Medicine, Tenri Hospital, Tenri, Nara, Japan.
  • Watari T; Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga-gun, Tochigi, Japan.
BMJ Open Qual. 2024 Jun 3;13(2).
Article in English | MEDLINE | ID: mdl-38830730
ABSTRACT

BACKGROUND:

Manual chart review using validated assessment tools is a standardised methodology for detecting diagnostic errors, but it requires considerable human resources and time. ChatGPT, a recently developed artificial intelligence chatbot based on a large language model, can effectively classify text given suitable prompts. ChatGPT may therefore be able to assist manual chart review in detecting diagnostic errors.

OBJECTIVE:

This study aimed to clarify whether ChatGPT could correctly detect diagnostic errors and possible factors contributing to them based on case presentations.

METHODS:

We analysed 545 published case reports that included diagnostic errors. We entered the text of each case presentation and the final diagnosis, together with original prompts, into ChatGPT (GPT-4) to generate responses comprising a judgement of whether a diagnostic error had occurred and the factors contributing to it. Factors contributing to diagnostic errors were coded according to three taxonomies: Diagnosis Error Evaluation and Research (DEER), Reliable Diagnosis Challenges (RDC) and Generic Diagnostic Pitfalls (GDP). The contributing factors identified by ChatGPT were compared with those identified by physicians.
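A minimal sketch of the kind of classification pipeline the Methods describe, using the OpenAI Python SDK. The prompt wording, model settings and output format here are illustrative assumptions; the abstract does not reproduce the authors' original prompts.

    # Sketch only: prompt text and settings are assumptions, not the study's actual prompts.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a physician reviewing a published case report. "
        "Given the case presentation and the final diagnosis, answer: "
        "(1) Did a diagnostic error occur? (yes/no) "
        "(2) List the contributing factors, coded with DEER, RDC and GDP taxonomy labels."
    )

    def review_case(case_presentation: str, final_diagnosis: str) -> str:
        """Ask GPT-4 to judge diagnostic error and code contributing factors."""
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,  # keep classification output as stable as possible
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": (
                    f"Case presentation:\n{case_presentation}\n\n"
                    f"Final diagnosis: {final_diagnosis}"
                )},
            ],
        )
        return response.choices[0].message.content

Each model response would then be parsed and its codes tallied against the physician-assigned codes for the same case.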

RESULTS:

ChatGPT correctly detected diagnostic errors in 519/545 cases (95%) and coded significantly more factors contributing to diagnostic errors per case than physicians: DEER (median 5 vs 1, p<0.001), RDC (median 4 vs 2, p<0.001) and GDP (median 4 vs 1, p<0.001). The contributing factors most frequently coded by ChatGPT were 'failure/delay in considering the diagnosis' (315, 57.8%) in DEER, 'atypical presentation' (365, 67.0%) in RDC and 'atypical presentation' (264, 48.4%) in GDP.
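A minimal sketch of the per-case comparison reported above, assuming a Wilcoxon signed-rank test on paired counts; the abstract does not name the statistical test, and the counts below are hypothetical placeholders, not study data.

    # Sketch only: the test choice is an assumption and the data are made up.
    from scipy.stats import wilcoxon

    # Paired per-case counts of DEER factors: ChatGPT vs physicians (hypothetical)
    chatgpt_counts   = [5, 6, 4, 5, 7, 3, 5]
    physician_counts = [1, 2, 1, 1, 3, 1, 2]

    stat, p_value = wilcoxon(chatgpt_counts, physician_counts)
    print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")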

CONCLUSION:

ChatGPT accurately detects diagnostic errors from case presentations. ChatGPT may be more sensitive than manual review in detecting factors contributing to diagnostic errors, especially 'atypical presentation'.

Full text: 1 | Database: MEDLINE | Main subject: Diagnostic Errors | Limits: Humans | Language: En | Journal: BMJ Open Qual | Publication year: 2024 | Document type: Article | Affiliation country: Japan
