Physician Assessment of ChatGPT and Bing Answers to American Cancer Society's Questions to Ask About Your Cancer.
Janopaul-Naylor, James R; Koo, Andee; Qian, David C; McCall, Neal S; Liu, Yuan; Patel, Sagar A.
Affiliation
  • Janopaul-Naylor JR; Department of Radiation Oncology, Emory University.
  • Koo A; Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center.
  • Qian DC; Department of Radiation Oncology, Emory University.
  • McCall NS; Department of Radiation Oncology, Emory University.
  • Liu Y; Department of Radiation Oncology, Emory University.
  • Patel SA; Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University.
Am J Clin Oncol ; 47(1): 17-21, 2024 Jan 01.
Article in En | MEDLINE | ID: mdl-37823708
ABSTRACT

OBJECTIVES:

Artificial intelligence (AI) chatbots are a new, publicly available tool through which patients can access health care-related information, but their reliability for cancer-related questions is unknown. This study assesses the quality of chatbot responses to common questions from patients with cancer.

METHODS:

From February to March 2023, we queried Chat Generative Pre-trained Transformer (ChatGPT) from OpenAI and Bing AI from Microsoft with questions from the American Cancer Society's recommended "Questions to Ask About Your Cancer," customized for all stages of breast, colon, lung, and prostate cancer. Questions were additionally grouped by type (prognosis, treatment, or miscellaneous). The quality of AI chatbot responses was assessed by an expert panel using the validated DISCERN criteria.

RESULTS:

Of the 117 questions presented to ChatGPT and Bing, the average scores across all questions were 3.9 and 3.2, respectively (P < 0.001), and the overall DISCERN scores were 4.1 and 4.4, respectively. By disease site, the average scores for ChatGPT and Bing, respectively, were 3.9 and 3.6 for prostate cancer (P = 0.02), 3.7 and 3.3 for lung cancer (P < 0.001), 4.1 and 2.9 for breast cancer (P < 0.001), and 3.8 and 3.0 for colorectal cancer (P < 0.001). By question type, the average scores for ChatGPT and Bing, respectively, were 3.6 and 3.4 for prognostic questions (P = 0.12), 3.9 and 3.1 for treatment questions (P < 0.001), and 4.2 and 3.3 for miscellaneous questions (P = 0.001). For 3 responses (3%) by ChatGPT and 18 responses (15%) by Bing, at least one panelist rated them as having serious or extensive shortcomings.

CONCLUSIONS:

AI chatbots offer multiple opportunities for innovation in health care. This analysis suggests a critical need, particularly around cancer prognostication, for continual refinement to limit misleading counseling, confusion, and emotional distress for patients and families.
Subjects

Full text: 1 Database: MEDLINE Main subject: Physicians / Prostatic Neoplasms Study type: Prognostic studies Limits: Humans / Male Country as subject: North America Language: En Publication year: 2024 Document type: Article