Assessing Accuracy of ChatGPT on Addressing Helicobacter pylori Infection-Related Questions: A National Survey and Comparative Study.
Hu, Yi; Lai, Yongkang; Liao, Foqiang; Shu, Xu; Zhu, Yin; Du, Yi-Qi; Lu, Nong-Hua.
Affiliations
  • Hu Y; Department of Gastroenterology, Jiangxi Medical College, The First Affiliated Hospital, Digestive Disease Hospital, Nanchang University, Nanchang, Jiangxi, China.
  • Lai Y; Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China.
  • Liao F; Department of Gastroenterology, Jiangxi Medical College, The First Affiliated Hospital, Digestive Disease Hospital, Nanchang University, Nanchang, Jiangxi, China.
  • Shu X; Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai, China.
  • Zhu Y; Department of Gastroenterology, Jiangxi Medical College, Ganzhou People's Hospital, Nanchang University, Nanchang, China.
  • Du YQ; Department of Gastroenterology, Jiangxi Medical College, The First Affiliated Hospital, Digestive Disease Hospital, Nanchang University, Nanchang, Jiangxi, China.
  • Lu NH; Department of Gastroenterology, Jiangxi Medical College, The First Affiliated Hospital, Digestive Disease Hospital, Nanchang University, Nanchang, Jiangxi, China.
Helicobacter ; 29(4): e13116, 2024.
Article in En | MEDLINE | ID: mdl-39080910
ABSTRACT

BACKGROUND:

ChatGPT is a novel online large language model used as a source of up-to-date and useful health-related knowledge for patients and clinicians. However, its performance on Helicobacter pylori infection-related questions remains unknown. This study aimed to evaluate the accuracy of ChatGPT's responses to H. pylori-related questions compared with that of gastroenterologists surveyed during the same period.

METHODS:

Twenty-five H. pylori-related questions from five domains (Indication, Diagnostics, Treatment, Gastric Cancer and Prevention, and Gut Microbiota) were selected based on the Maastricht VI Consensus report. Each question was tested three times with ChatGPT3.5 and ChatGPT4. Two independent H. pylori experts assessed the responses from ChatGPT, with discrepancies resolved by a third reviewer. Simultaneously, a nationwide survey with the same questions was conducted among 1279 gastroenterologists and 154 medical students. The accuracy of responses from ChatGPT3.5 and ChatGPT4 was compared with that of the gastroenterologists.

RESULTS:

Overall, both ChatGPT3.5 and ChatGPT4 demonstrated high accuracy, with a median accuracy rate of 92% for each of the three responses, surpassing the accuracy of nationwide gastroenterologists (median 80%) and matching that of senior gastroenterologists. Compared with ChatGPT3.5, ChatGPT4 provided more concise responses with the same accuracy. ChatGPT3.5 performed well in the Indication, Treatment, and Gut Microbiota domains, whereas ChatGPT4 excelled in the Diagnostics, Gastric Cancer and Prevention, and Gut Microbiota domains.

CONCLUSION:

ChatGPT exhibited high accuracy and reproducibility in addressing H. pylori-related questions, except for decisions regarding H. pylori treatment. It performed at the level of senior gastroenterologists and could serve as an auxiliary information tool for patients and clinicians.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Helicobacter pylori / Helicobacter Infections Limits: Adult / Female / Humans / Male / Middle aged Language: En Publication year: 2024 Document type: Article