ABSTRACT
BACKGROUND AND AIM: Colonoscopy is commonly used in screening and surveillance for colorectal cancer. Multiple guidelines provide recommendations on the interval between colonoscopies, which can be challenging for non-specialist healthcare providers to navigate. Large language models such as ChatGPT are a potential tool for parsing patient histories and providing advice. However, the standard GPT model is not designed for medical use and can hallucinate. One way to overcome these challenges is to provide contextual information from medical guidelines to help the model respond accurately to queries. Our study compares standard GPT-4 against a contextualized model provided with relevant screening guidelines. We evaluated whether the models could provide correct advice on screening and surveillance intervals for colonoscopy. METHODS: Relevant guidelines pertaining to colorectal cancer screening and surveillance were formulated into a knowledge base for GPT. We tested 62 example case scenarios (three times each) on standard GPT-4 and on a contextualized model with the knowledge base. RESULTS: The contextualized GPT-4 model outperformed standard GPT-4 in all domains. No high-risk features were missed, and only two cases had hallucination of additional high-risk features. A correct interval to colonoscopy was provided in the majority of cases. Guidelines were appropriately cited in almost all cases. CONCLUSIONS: A contextualized GPT-4 model could identify high-risk features and cite appropriate guidelines without significant hallucination. It gave a correct interval to the next colonoscopy in the majority of cases. This provides proof of concept that ChatGPT, with appropriate refinement, can serve as an accurate physician assistant.
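The "contextualized" approach described above amounts to grounding the model in a supplied knowledge base rather than its general training data. A minimal sketch of such prompt assembly is shown below; the guideline snippet, case vignette, and function name are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of context-augmented prompting: guideline text is prepended
# to the case so the model answers from the supplied knowledge base.
# The guideline and case strings below are illustrative, not from the study.

def build_contextualized_prompt(guidelines: str, case: str) -> str:
    """Assemble a prompt that grounds the model in the supplied guidelines."""
    return (
        "You are a colonoscopy screening assistant. Answer ONLY using the "
        "guidelines below, and cite the guideline you rely on.\n\n"
        f"GUIDELINES:\n{guidelines}\n\n"
        f"CASE:\n{case}\n\n"
        "QUESTION: What is the recommended interval to the next colonoscopy?"
    )

guidelines = "1-2 tubular adenomas <10 mm: repeat colonoscopy in 7-10 years."
case = "55-year-old with two 5 mm tubular adenomas, completely resected."
prompt = build_contextualized_prompt(guidelines, case)
```

The assembled prompt would then be sent to the model as a single message; retrieval of the relevant guideline section (rather than pasting all guidelines) is the usual refinement of this design.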
Subjects
Colonoscopy, Colorectal Neoplasms, Humans, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/prevention & control, Colorectal Neoplasms/epidemiology, Risk Factors, Early Detection of Cancer, Hallucinations

ABSTRACT
BACKGROUND: Discharge letters are a critical component of continuity of care between specialists and primary care providers. However, these letters are time-consuming to write, underprioritized relative to direct clinical care, and often tasked to junior doctors. Prior studies assessing the quality of discharge summaries written for inpatient hospital admissions show inadequacies in many domains. Large language models such as GPT can summarize large volumes of unstructured free text, such as electronic medical records, and have the potential to automate such tasks, providing time savings and consistency in quality. OBJECTIVE: The aim of this study was to assess the performance of GPT-4 in generating discharge letters written from urology specialist outpatient clinics to primary care providers and to compare their quality against letters written by junior clinicians. METHODS: Fictional electronic records were written by physicians simulating five common urology outpatient cases with long-term follow-up. Records comprised simulated consultation notes, referral letters and replies, and relevant discharge summaries from inpatient admissions. GPT-4 was tasked with writing discharge letters for these cases for a specified target audience of primary care providers who would be continuing the patient's care. Prompts were written for safety, content, and style. Concurrently, junior clinicians were provided with the same case records and instructional prompts. GPT-4 output was assessed for instances of hallucination. A blinded panel of primary care physicians then evaluated the letters using a standardized questionnaire tool. RESULTS: GPT-4 outperformed human counterparts in information provision (mean 4.32, SD 0.95 vs 3.70, SD 1.27; P=.03) and had no instances of hallucination. There were no statistically significant differences between the GPT-4 and human letters in mean clarity (4.16, SD 0.95 vs 3.68, SD 1.24; P=.12), collegiality (4.36, SD 1.00 vs 3.84, SD 1.22; P=.05), conciseness (3.60, SD 1.12 vs 3.64, SD 1.27; P=.71), follow-up recommendations (4.16, SD 1.03 vs 3.72, SD 1.13; P=.08), and overall satisfaction (3.96, SD 1.14 vs 3.62, SD 1.34; P=.36). CONCLUSIONS: Discharge letters written by GPT-4 were of quality equivalent to those written by junior clinicians, without any hallucinations. This study provides proof of concept that large language models can be useful and safe tools in clinical documentation.
Subjects
Patient Discharge, Humans, Patient Discharge/standards, Electronic Health Records/standards, Single-Blind Method, Language

ABSTRACT
BACKGROUND: Frailty is an important predictor of health outcomes, characterized by increased vulnerability due to physiological decline. The Clinical Frailty Scale (CFS) is commonly used for frailty assessment but may be influenced by rater bias. The use of artificial intelligence (AI), particularly large language models (LLMs), offers a promising method for efficient and reliable frailty scoring. METHODS: The study used seven standardized patient scenarios to evaluate the consistency and reliability of CFS scoring by OpenAI's GPT-3.5-turbo model. Two methods were tested: a basic prompt and an instruction-tuned prompt incorporating the CFS definition, a directive for accurate responses, and temperature control. The outputs were compared using the Mann-Whitney U test and Fleiss' Kappa for inter-rater reliability, and were also compared with historical human scores of the same scenarios. RESULTS: The LLM's median scores were similar to those of human raters, with differences of no more than one point. Significant differences in score distributions were observed between the basic and instruction-tuned prompts in five of seven scenarios. The instruction-tuned prompt showed high inter-rater reliability (Fleiss' Kappa of 0.887) and produced consistent responses in all scenarios. Difficulty in scoring was noted in scenarios with less explicit information on activities of daily living (ADLs). CONCLUSIONS: This study demonstrates the potential of LLMs to score clinical frailty consistently and with high reliability. It shows that prompt engineering via instruction-tuning can be a simple but effective approach to optimizing LLMs for healthcare applications. The LLM may overestimate frailty scores when less information about ADLs is provided, possibly because it is less subject to implicit assumptions and extrapolation than humans. Future research could explore the integration of LLMs in clinical research and frailty-related outcome prediction.
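The Fleiss' Kappa statistic reported above measures chance-corrected agreement among multiple raters over categorical ratings. A minimal self-contained sketch of the standard computation, from a subjects-by-categories count matrix (the example data are illustrative, not from the study):

```python
def fleiss_kappa(ratings):
    """Fleiss' Kappa for a list of per-subject category counts,
    e.g. [[3, 0], [0, 3]] for 2 subjects, 2 categories, 3 raters.
    Each row must sum to the same number of raters n."""
    N = len(ratings)          # number of subjects
    n = sum(ratings[0])       # raters per subject
    k = len(ratings[0])       # number of categories
    # Marginal proportion of all ratings falling in each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Observed agreement for each subject
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N              # mean observed agreement
    P_e = sum(pj * pj for pj in p)  # expected agreement by chance
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement among 3 raters on 3 subjects yields kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # → 1.0
```

A value near 0.887, as in the instruction-tuned condition, indicates near-perfect agreement on conventional interpretation scales.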
ABSTRACT
BACKGROUND: The optimal management of patients with end-stage renal disease (ESRD) on dialysis who have severe coronary artery disease (CAD) has not been determined. METHODS: Between 2013 and 2017, all patients with ESRD on dialysis who had left main (LM) disease, triple vessel disease (TVD), and/or severe CAD considered for coronary artery bypass graft (CABG) were included. Patients were divided into three groups based on final treatment modality: CABG, percutaneous coronary intervention (PCI), or optimal medical therapy (OMT). Outcome measures included in-hospital, 180-day, 1-year, and overall mortality and major adverse cardiac events (MACE). RESULTS: In total, 418 patients were included (CABG 11.0%, PCI 65.6%, OMT 23.4%). Overall 1-year mortality and MACE rates were 27.5% and 55.0%, respectively. Patients who underwent CABG were significantly younger, more likely to have LM disease, and less likely to have prior heart failure. In this non-randomized setting, treatment modality did not affect 1-year mortality, although the CABG group had significantly lower 1-year MACE rates (CABG 32.6%, PCI 57.3%, OMT 59.2%; CABG vs. OMT p < 0.01, CABG vs. PCI p < 0.001). Independent predictors of overall mortality included STEMI presentation (HR 2.31, 95% CI 1.38-3.86), prior heart failure (HR 1.84, 95% CI 1.22-2.75), LM disease (HR 1.71, 95% CI 1.26-2.31), NSTE-ACS presentation (HR 1.40, 95% CI 1.03-1.91), and increased age (HR 1.02, 95% CI 1.01-1.04). CONCLUSION: Treatment decisions for patients with severe CAD and ESRD on dialysis are complex. Understanding independent predictors of mortality and MACE in specific treatment subgroups may provide valuable insights into the selection of optimal treatment options.
Subjects
Coronary Artery Disease, Heart Failure, Kidney Failure, Chronic, Percutaneous Coronary Intervention, Humans, Coronary Artery Disease/complications, Coronary Artery Disease/diagnosis, Coronary Artery Disease/surgery, Renal Dialysis, Percutaneous Coronary Intervention/adverse effects, Treatment Outcome, Kidney Failure, Chronic/epidemiology, Kidney Failure, Chronic/therapy, Heart Failure/etiology

ABSTRACT
Introduction: Elevated low-density lipoprotein cholesterol (LDL-C) is an important risk factor for atherosclerotic cardiovascular disease (ASCVD). Direct LDL-C measurement is not widely performed. LDL-C is routinely calculated using the Friedewald equation (FLDL), which is inaccurate at high triglyceride (TG) or low LDL-C levels. We aimed to compare this routine method with other estimation methods in patients with type 2 diabetes mellitus (T2DM), who typically have elevated TG levels and ASCVD risk. Method: We performed a retrospective cohort study of T2DM patients from a multi-institutional diabetes registry in Singapore from 2013 to 2020. LDL-C values estimated by the Friedewald (FLDL), Martin/Hopkins (MLDL), and Sampson (SLDL) equations were compared using measures of agreement and correlation. A subgroup analysis comparing estimated LDL-C with directly measured LDL-C (DLDL) was conducted in patients from a single institution. Estimated LDL-C was considered discordant if LDL-C was <1.8 mmol/L for the index equation and ≥1.8 mmol/L for the comparator. Results: A total of 154,877 patients were included in the final analysis, and 11,475 patients in the subgroup analysis. All three equations demonstrated strong overall correlation and goodness of fit. Discordance was 4.21% for FLDL-SLDL and 6.55% for FLDL-MLDL. In the subgroup analysis, discordance was 21.57% for DLDL-FLDL, 17.31% for DLDL-SLDL, and 14.44% for DLDL-MLDL. All discordance rates increased at TG levels >4.5 mmol/L. Conclusion: We demonstrated strong correlations among the newer estimation methods, FLDL, and DLDL. At higher TG concentrations, no equation performed well. The Martin/Hopkins equation had the least discordance with DLDL and may minimise misclassification compared with FLDL and SLDL.
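The Friedewald calculation and the study's discordance definition can be sketched concretely. The Friedewald formula in mmol/L (LDL-C = TC − HDL-C − TG/2.2, conventionally invalid above TG 4.5 mmol/L) is standard; the discordance rule follows the definition in the abstract. Function names and the example values are illustrative assumptions.

```python
def friedewald_ldl(tc, hdl, tg):
    """Friedewald estimate of LDL-C, all inputs in mmol/L:
    LDL-C = TC - HDL-C - TG/2.2.
    Conventionally considered unreliable when TG > 4.5 mmol/L (~400 mg/dL)."""
    if tg > 4.5:
        raise ValueError("Friedewald equation is unreliable at TG > 4.5 mmol/L")
    return tc - hdl - tg / 2.2

def discordant(ldl_index, ldl_comparator, threshold=1.8):
    """Discordance as defined in the study: the index equation estimates
    LDL-C below the treatment threshold while the comparator is at or above it."""
    return ldl_index < threshold and ldl_comparator >= threshold

# Illustrative values: TC 5.0, HDL 1.0, TG 2.2 mmol/L
ldl = friedewald_ldl(5.0, 1.0, 2.2)  # → 3.0 mmol/L
```

Such a pairwise discordance check, applied over a cohort, yields the percentage discordance rates reported in the results.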