ChatGPT4's Proficiency in Addressing Patients' Questions on Systemic Lupus Erythematosus: A Blinded Comparative Study with Specialists.
Xu, Dan; Zhao, Jinxia; Liu, Rui; Dai, Yijun; Sun, Kai; Wong, Priscilla; Ming, Samuel Lee Shang; Wearn, Koh Li; Wang, Jiangyuan; Xie, Shasha; Zeng, Lin; Mu, Rong; Xu, Chuanhui.
Affiliation
  • Xu D; Department of Rheumatology and Immunology, Peking University Third Hospital, Beijing, China.
  • Zhao J; Department of Rheumatology and Immunology, Peking University Third Hospital, Beijing, China.
  • Liu R; Department of Rheumatology and Immunology, Peking University Third Hospital, Beijing, China.
  • Dai Y; Department of Rheumatology and Immunology, Fujian Provincial Hospital, Fuzhou, China.
  • Sun K; Department of Medicine, Division of Rheumatology and Immunology, Duke University, North Carolina, USA.
  • Wong P; Division of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong, China.
  • Ming SLS; Department of Rheumatology, Allergy and Immunology, Tan Tock Seng Hospital, Singapore.
  • Wearn KL; Department of Rheumatology, Allergy and Immunology, Tan Tock Seng Hospital, Singapore.
  • Wang J; Beijing Kidney Health Technology Co., Ltd, Beijing, China.
  • Xie S; Beijing Kidney Health Technology Co., Ltd, Beijing, China.
  • Zeng L; Research Center of Clinical Epidemiology, Peking University Third Hospital, Beijing, China.
  • Mu R; Department of Rheumatology and Immunology, Peking University Third Hospital, Beijing, China.
  • Xu C; Department of Rheumatology, Allergy and Immunology, Tan Tock Seng Hospital, Singapore.
Article in En | MEDLINE | ID: mdl-38648756
ABSTRACT

OBJECTIVES:

The efficacy of artificial intelligence (AI)-driven chatbots like ChatGPT4 in specialized medical consultations, particularly in rheumatology, remains underexplored. This study compares the proficiency of ChatGPT4's responses with those of practicing rheumatologists to inquiries from patients with systemic lupus erythematosus (SLE).

METHODS:

In this cross-sectional study, we curated 95 frequently asked questions (FAQs), including 55 in Chinese and 40 in English. Responses to the FAQs from ChatGPT4 and 5 rheumatologists were scored separately by a panel of rheumatologists and a group of patients with SLE across 6 domains (scientific validity, logical consistency, comprehensibility, completeness, satisfaction level, and empathy) on a 0-10 scale, where 0 indicates an entirely incorrect response and 10 indicates an accurate and comprehensive answer.

RESULTS:

Rheumatologists' scoring revealed that ChatGPT4-generated responses outperformed those from rheumatologists in satisfaction level and empathy, with mean differences of 0.537 (95% CI, 0.252-0.823; p < 0.01) and 0.460 (95% CI, 0.227-0.693; p < 0.01), respectively. From the SLE patients' perspective, ChatGPT4-generated responses were comparable to the rheumatologist-provided answers in all 6 domains. Subgroup analysis revealed that ChatGPT4 responses were more logically consistent and complete regardless of language, and exhibited greater comprehensibility, satisfaction, and empathy in Chinese. However, ChatGPT4 responses were inferior in comprehensibility for English FAQs.

CONCLUSION:

ChatGPT4 demonstrated comparable, and in certain domains possibly better, proficiency in addressing FAQs from patients with SLE when compared with the answers provided by specialists. This study shows the potential of applying ChatGPT4 to improve consultations for patients with SLE.
Keywords

Full text: 1 Database: MEDLINE Language: En Year of publication: 2024 Document type: Article