ABSTRACT
BACKGROUND: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), with that of conventional logistic regression (LR) models in predicting leak and VTE after bariatric surgery. METHODS: ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from 2015 to 2017 from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model. RESULTS: The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%, respectively. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65; p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001). CONCLUSIONS: ANN and XGB outperformed traditional LR in predicting leak. These results suggest that machine learning has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.
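As a rough illustration of the modeling setup described above (not the authors' code), the sketch below trains an XGB and an LR classifier on a random train/validation/test split and compares test-set AUCs; the synthetic data, feature count, and hyperparameters are placeholders standing in for the preoperative MBSAQIP variables.

```python
# Minimal sketch, assuming a preprocessed feature matrix and a rare binary
# outcome (e.g., leak); data here are synthetic placeholders, not MBSAQIP.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Synthetic stand-in: ~1% positive class to mimic a rare complication
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99], random_state=0)

# Random split into training, validation, and testing populations
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
xgb = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(
    X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

# Compare models by AUC on the held-out test set
for name, model in [("LR", lr), ("XGB", xgb)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name} test AUC: {auc:.3f}")
```

An ANN could be compared in the same loop by adding any classifier exposing predict_proba (e.g., scikit-learn's MLPClassifier), keeping the split and AUC evaluation identical.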
Subject(s)
Anastomotic Leak/etiology , Bariatric Surgery/adverse effects , Machine Learning , Postoperative Complications/etiology , Venous Thromboembolism/etiology , Adult , Cohort Studies , Databases, Factual , Diagnosis, Computer-Assisted , Humans , Logistic Models , Neural Networks, Computer
ABSTRACT
Background: Women continue to have worse Coronary Artery Disease (CAD) outcomes than men. The causes of this discrepancy have yet to be fully elucidated. The main objective of this study is to detect gender discrepancies in the diagnosis and treatment of CAD. Methods: We used data analytics to risk-stratify ~32,000 patients with CAD out of the 960,129 patients treated at the UCSF Medical Center over an 8-year period. We implemented a multidimensional data analytics framework to trace patients from admission through treatment to create a path of events, where an event is any medication or any noninvasive or invasive procedure. The time between events was calculated for similar sets of paths, and the average waiting time for each step of treatment was then computed. Finally, we applied statistical analysis to determine differences in time between diagnosis and treatment steps for men and women. Results: There is a significant difference between genders in the time from first admission to diagnostic cardiac catheterization (p-value = 0.000119), while the time from diagnostic cardiac catheterization to coronary artery bypass grafting (CABG) does not differ significantly. Conclusion: Women had a significantly longer interval between their first physician encounter indicative of CAD and their first diagnostic cardiac catheterization compared with men. Avoiding this delay in diagnosis may provide more timely treatment and better outcomes for patients at risk. We conclude by discussing the study's implications for improving patient care through early detection and for managing individual patients at risk of rapid CAD progression.
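A minimal sketch of how per-patient waiting times between path events could be computed and compared between genders is shown below; the file name, event labels, column names, and choice of a nonparametric test are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: long-format event table with hypothetical columns
# patient_id, sex, event, timestamp; not the authors' framework.
import pandas as pd
from scipy.stats import mannwhitneyu

events = pd.read_csv("cad_event_paths.csv", parse_dates=["timestamp"])  # hypothetical file

# First occurrence of each milestone per patient
first_admit = (events[events.event == "first_cad_encounter"]
               .groupby("patient_id")["timestamp"].min())
first_cath = (events[events.event == "diagnostic_cardiac_catheterization"]
              .groupby("patient_id")["timestamp"].min())

# Waiting time (days) from first CAD-indicative encounter to catheterization
waits = (first_cath - first_admit).dt.days.rename("days_to_cath")
sex = events.groupby("patient_id")["sex"].first()
df = pd.concat([waits, sex], axis=1).dropna()

# Compare waiting times between women and men (one possible test choice)
stat, p = mannwhitneyu(df.loc[df.sex == "F", "days_to_cath"],
                       df.loc[df.sex == "M", "days_to_cath"])
print(f"p-value for gender difference in time to catheterization: {p:.6f}")
```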
ABSTRACT
Early detection plays a key role in improving outcomes in Coronary Artery Disease (CAD). We utilized a big data analytics platform on ~32,000 patients to trace each patient from the first encounter to CAD treatment. There are significant gender-based differences among patients younger than 60 in the time from the first encounter to Coronary Artery Bypass Grafting (p = 0.03). Recognizing this gap can meaningfully improve outcomes by avoiding delays in treatment.
Subject(s)
Coronary Artery Disease , Coronary Artery Bypass/adverse effects , Coronary Artery Disease/diagnosis , Coronary Artery Disease/surgery , Data Science , Electronic Health Records , Female , Humans , Risk Factors , Time-to-Treatment , Treatment Outcome
ABSTRACT
BACKGROUND: Early prediction of whether a liver allograft will be utilized for transplantation may allow better resource deployment during donor management and improve organ allocation. The national Donor Management Goals (DMG) registry contains critical care data collected during donor management. We developed a machine learning model to predict transplantation of a liver graft based on data from the DMG registry. METHODS: Several machine learning classifiers were trained to predict transplantation of a liver graft. We utilized 127 variables available in the DMG dataset and included data from potential deceased organ donors between April 2012 and January 2019. The outcome was defined as liver recovery for transplantation in the operating room. The prediction was made based on data available 12-18 h after the time of authorization for transplantation. The data were randomly separated into training (60%), validation (20%), and test (20%) sets. We compared the performance of our models to the Liver Discard Risk Index. RESULTS: Of 13,629 donors in the dataset, 9,255 (68%) livers were recovered and transplanted, 1,519 were recovered but used for research or discarded, and 2,855 were not recovered. The optimized gradient boosting machine classifier achieved an area under the receiver operating characteristic curve of 0.84 on the test set, outperforming all other classifiers. CONCLUSIONS: This model predicts successful liver recovery for transplantation in the operating room using data available early during donor management. It performs favorably when compared to existing models and may provide real-time decision support during organ donor management and transplant logistics.
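The sketch below illustrates one way such a gradient boosting classifier could be trained and evaluated with a 60/20/20 split; the file name, column names, and estimator choice are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, not the registry's actual pipeline: gradient boosting to
# predict liver recovery for transplant from donor-management features.
# "dmg_donors.csv" and "liver_transplanted" are hypothetical names; features
# are assumed numeric or already encoded.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

dmg = pd.read_csv("dmg_donors.csv")              # hypothetical export of the DMG registry
y = dmg["liver_transplanted"]                    # 1 = recovered and transplanted in the OR
X = dmg.drop(columns=["liver_transplanted"])     # the donor-management variables

# 60% training, 20% validation, 20% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.6, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)

# HistGradientBoostingClassifier tolerates missing values natively, which is
# convenient for sparsely documented critical-care fields.
model = HistGradientBoostingClassifier(max_iter=300).fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```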
ABSTRACT
Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory-scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill this void, we present a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight testing aboard a parabolic-flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs, and some custom components, such as the microvolume sample loader and the micromixer, may be of particular interest. The method then shifts focus to flight preparation, offering guidelines and suggestions for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.