ChatGPT on guidelines: Providing contextual knowledge to GPT allows it to provide advice on appropriate colonoscopy intervals.
Lim, Daniel Yan Zheng; Tan, Yu Bin; Koh, Jonathan Tian En; Tung, Joshua Yi Min; Sng, Gerald Gui Ren; Tan, Damien Meng Yew; Tan, Chee-Kiat.
Affiliation
  • Lim DYZ; Department of Gastroenterology and Hepatology, Singapore General Hospital, Singapore.
  • Tan YB; Medicine Academic Clinical Programme, Duke-NUS Medical School, Singapore, Singapore.
  • Koh JTE; Department of Gastroenterology and Hepatology, Singapore General Hospital, Singapore.
  • Tung JYM; Department of Gastroenterology and Hepatology, Singapore General Hospital, Singapore.
  • Sng GGR; Department of Urology, Singapore General Hospital, Singapore.
  • Tan DMY; Department of Endocrinology, Singapore General Hospital, Singapore.
  • Tan CK; Department of Gastroenterology and Hepatology, Singapore General Hospital, Singapore.
J Gastroenterol Hepatol ; 39(1): 81-106, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37855067
ABSTRACT
BACKGROUND AND AIM:

Colonoscopy is commonly used in screening and surveillance for colorectal cancer. Multiple guidelines provide recommendations on the interval between colonoscopies, and navigating them can be challenging for non-specialist healthcare providers. Large language models like ChatGPT are a potential tool for parsing patient histories and providing advice. However, the standard GPT model is not designed for medical use and can hallucinate. One way to overcome these challenges is to supply the model with contextual information from medical guidelines, helping it respond accurately to queries. Our study compared standard GPT-4 against a contextualized model provided with the relevant screening guidelines, evaluating whether each model could give correct advice on screening and surveillance intervals for colonoscopy.

METHODS:

Relevant guidelines on colorectal cancer screening and surveillance were compiled into a knowledge base for GPT. We tested 62 example case scenarios (three times each) on standard GPT-4 and on a contextualized model supplied with the knowledge base.
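The general technique described here (retrieving relevant guideline text and prepending it to the query so the model answers from supplied context rather than its own, possibly hallucinated, knowledge) can be sketched as below. The snippet text, function names (`retrieve`, `build_contextualized_prompt`), and the naive word-overlap ranking are illustrative assumptions, not the study's actual knowledge base or retrieval method:

```python
# Sketch of a "contextualized" prompt: rank guideline snippets by relevance
# to the case, then prepend the top matches to the question. All guideline
# text below is a hypothetical placeholder, not clinical advice.

# Hypothetical knowledge base of guideline excerpts.
GUIDELINE_SNIPPETS = [
    "Average-risk screening: repeat colonoscopy in 10 years if normal.",
    "1-2 small (<10 mm) tubular adenomas: repeat colonoscopy in 7-10 years.",
    "Adenoma with high-grade dysplasia: repeat colonoscopy in 3 years.",
]


def retrieve(case: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Rank snippets by naive word overlap with the case description."""
    case_words = set(case.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(case_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_contextualized_prompt(case: str) -> str:
    """Assemble the text sent to the model: guidelines first, then the case."""
    context = "\n".join(retrieve(case, GUIDELINE_SNIPPETS))
    return (
        "Answer using ONLY the guidelines below.\n\n"
        f"Guidelines:\n{context}\n\n"
        f"Case: {case}\n"
        "What is the recommended interval to the next colonoscopy?"
    )


prompt = build_contextualized_prompt(
    "Patient with two small tubular adenomas removed at colonoscopy."
)
print(prompt)
```

In a real pipeline this prompt would be passed to the model as context (e.g. in a system message); the point of the design is that the model is constrained to cite the supplied guideline text, which is what the study evaluates.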

RESULTS:

The contextualized GPT-4 model outperformed standard GPT-4 in all domains. No high-risk features were missed, and additional high-risk features were hallucinated in only two cases. A correct interval to colonoscopy was provided in the majority of cases, and guidelines were appropriately cited in almost all cases.

CONCLUSIONS:

A contextualized GPT-4 model could identify high-risk features and quote appropriate guidelines without significant hallucination, and it gave a correct interval to the next colonoscopy in the majority of cases. This is proof of concept that ChatGPT, with appropriate refinement, can serve as an accurate physician assistant.

Full text: 1 Database: MEDLINE Main subject: Colorectal Neoplasms / Colonoscopy Limits: Humans Language: English Journal: J Gastroenterol Hepatol Journal subject: Gastroenterology Year: 2024 Document type: Article Country of affiliation: Singapore