ABSTRACT
BACKGROUND: Current classification of acute kidney injury (AKI) in critically ill patients with sepsis relies solely on severity, measured by maximum creatinine, and overlooks the inherent complexity and longitudinal evolution of this heterogeneous syndrome. The value of classifying AKI by early creatinine trajectories is unclear. METHODS: This retrospective study used the Medical Information Mart for Intensive Care-IV database to identify patients with Sepsis-3 who developed AKI within 48 h of intensive care unit admission. We used latent class mixed modelling to identify early creatinine trajectory-based classes of AKI in critically ill patients with sepsis. Our primary outcome was development of acute kidney disease (AKD). Secondary outcomes were the composite of AKD or all-cause in-hospital mortality by day 7, and AKD or all-cause in-hospital mortality by hospital discharge. We used multivariable regression to assess the impact of trajectory-based classification on outcomes, and the eICU database for external validation. RESULTS: Among 4197 critically ill patients with sepsis who developed AKI, we identified eight creatinine trajectory-based classes with distinct characteristics. Compared to the class with transient AKI, the class showing severe AKI with mild improvement but persistence had the highest adjusted risks of developing AKD (OR 5.16; 95% CI 2.87-9.24) and of the composite 7-day outcome (HR 4.51; 95% CI 2.69-7.56). The class demonstrating late mild AKI with persistence and worsening had the highest risk of the composite hospital discharge outcome (HR 2.04; 95% CI 1.41-2.94). These associations were similar on external validation. CONCLUSIONS: These eight trajectory-based classes were good predictors of key outcomes in critically ill patients with sepsis and AKI, independent of AKI staging.
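The study derived its classes statistically with latent class mixed models; the underlying idea of grouping patients by early creatinine trajectory can nonetheless be illustrated with a deliberately simplified rule-based sketch. The class names and thresholds below are hypothetical (loosely modeled on KDIGO fold-change cut-offs), not the paper's eight data-driven classes:

```python
def classify_trajectory(creatinine, baseline):
    """Toy classifier for an early (first-48-h) creatinine trajectory.

    Illustrative only: the study used latent class mixed modelling,
    not fixed rules, and identified eight data-driven classes.
    """
    peak = max(creatinine)
    last = creatinine[-1]
    if peak / baseline < 1.5:
        return "no AKI by creatinine criteria"
    if last / baseline < 1.5:
        return "transient AKI"  # rose, then returned toward baseline
    if peak / baseline >= 3.0:
        return "severe, persistent AKI"
    if last >= peak:
        return "mild AKI, persistent and worsening"
    return "mild AKI, persistent with improvement"


print(classify_trajectory([1.0, 1.9, 1.2], baseline=1.0))  # transient AKI
print(classify_trajectory([1.1, 2.4, 3.6], baseline=1.0))  # severe, persistent AKI
```

In the actual analysis, class membership comes from the fitted mixture model's posterior probabilities rather than hand-set thresholds.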
Subject(s)
Acute Kidney Injury , Creatinine , Critical Illness , Machine Learning , Sepsis , Humans , Acute Kidney Injury/blood , Acute Kidney Injury/diagnosis , Acute Kidney Injury/etiology , Acute Kidney Injury/classification , Male , Sepsis/blood , Sepsis/complications , Sepsis/classification , Female , Retrospective Studies , Creatinine/blood , Creatinine/analysis , Middle Aged , Aged , Machine Learning/trends , Intensive Care Units/statistics & numerical data , Intensive Care Units/organization & administration , Biomarkers/blood , Biomarkers/analysis , Hospital Mortality
ABSTRACT
Remote monitoring and artificial intelligence will become common and intertwined in anesthesiology by 2050. In the intraoperative period, integrated monitoring systems will combine multiple data streams and allow anesthesiologists to track patients more effectively, freeing them to focus on more complex tasks such as managing risk and making value-based decisions. This technology will also enable the continued integration of remote monitoring and control towers, with profound effects on coverage and practice models. In the PACU and ICU, early warning systems will identify patients at risk of complications, enabling earlier interventions and more proactive care. The integration of augmented reality will allow better synthesis of diverse types of data and better decision-making. Postoperatively, the proliferation of wearable devices that monitor vital signs and track patient progress will allow patients to be discharged from the hospital sooner and receive care at home, which will require increased use of telemedicine so that patients can consult with doctors remotely. All of these advances will require changes to legal and regulatory frameworks to enable workflows that differ from those familiar to today's providers.
Subject(s)
Artificial Intelligence , Telemedicine , Humans , Monitoring, Physiologic , Vital Signs , Anesthesiologists
ABSTRACT
BACKGROUND: Substantial effort has been directed toward demonstrating uses of predictive models in health care. However, implementation of these models into clinical practice may influence patient outcomes, which in turn are captured in electronic health record data. As a result, deployed models may affect the predictive ability of current and future models. OBJECTIVE: To estimate changes in predictive model performance with use through 3 common scenarios: model retraining, sequentially implementing 1 model after another, and intervening in response to a model when 2 are simultaneously implemented. DESIGN: Simulation of model implementation and use in critical care settings at various levels of intervention effectiveness and clinician adherence. Models were either trained or retrained after simulated implementation. SETTING: Admissions to the intensive care unit (ICU) at Mount Sinai Health System (New York, New York) and Beth Israel Deaconess Medical Center (Boston, Massachusetts). PATIENTS: 130 000 critical care admissions across both health systems. INTERVENTION: Across 3 scenarios, interventions were simulated at varying levels of clinician adherence and effectiveness. MEASUREMENTS: Statistical measures of performance, including threshold-independent (area under the curve) and threshold-dependent measures. RESULTS: At fixed 90% sensitivity, in scenario 1 a mortality prediction model lost 9% to 39% specificity after retraining once and in scenario 2 a mortality prediction model lost 8% to 15% specificity when created after the implementation of an acute kidney injury (AKI) prediction model; in scenario 3, models for AKI and mortality prediction implemented simultaneously, each led to reduced effective accuracy of the other by 1% to 28%. LIMITATIONS: In real-world practice, the effectiveness of and adherence to model-based recommendations are rarely known in advance. Only binary classifiers for tabular ICU admissions data were simulated. 
CONCLUSION: In simulated ICU settings, a universally effective model-updating approach for maintaining model performance does not seem to exist. Model use may have to be recorded to maintain viability of predictive modeling. PRIMARY FUNDING SOURCE: National Center for Advancing Translational Sciences.
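The core feedback mechanism (successful interventions convert would-be positives into recorded negatives, which then contaminate the labels used for retraining) can be sketched with simple arithmetic. This is a toy model assuming alerts, adherence, and effectiveness act independently; it is not the paper's simulation code:

```python
def averted_fraction(sensitivity, adherence, effectiveness):
    """Expected fraction of true positives whose outcome is averted:
    the case must be flagged (sensitivity), acted upon (adherence),
    and the intervention must work (effectiveness). These averted
    cases then appear as negatives in data used for retraining."""
    return sensitivity * adherence * effectiveness


# at 90% sensitivity, 50% adherence, 50% effectiveness,
# 22.5% of true positives are mislabelled for the retrained model
print(averted_fraction(0.9, 0.5, 0.5))  # 0.225
```

Even modest adherence and effectiveness therefore corrupt a substantial share of the positive class, which is consistent with the specificity losses the simulations report.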
Subject(s)
Acute Kidney Injury , Artificial Intelligence , Humans , Intensive Care Units , Critical Care , Delivery of Health Care
ABSTRACT
BACKGROUND: International Classification of Diseases (ICD) and diagnosis related group (DRG) codes are recorded in a patient's chart by a certified medical coder who manually reviews the medical record at the completion of an admission. High-acuity ICD codes justify DRG modifiers, indicating the need for escalated hospital resources. In this manuscript, we demonstrate the value of rules-based computer algorithms that audit for omission of administrative codes and quantify the downstream financial and demographic effects. METHODS: All study data were acquired via the UCLA Department of Anesthesiology and Perioperative Medicine's Perioperative Data Warehouse. The DataMart is a structured reporting schema that contains all the relevant clinical data entered into the EPIC (EPIC Systems, Verona, WI) electronic health record. Computer algorithms were created for eighteen disease states that met criteria for DRG modifiers. Each algorithm was run against all hospital admissions with completed billing from 2019, scanning for the existence of disease, appropriate ICD coding, and DRG modifier appropriateness. Secondarily, the potential financial impact of ICD omissions was estimated by payor class, and an analysis of ICD miscoding was done by ethnicity, sex, age, and financial class. RESULTS: Data from 34,104 hospital admissions were analyzed from January 1, 2019, to December 31, 2019. 11,520 (32.9%) hospital admissions were algorithm positive for a disease state with no corresponding ICD code. 1,990 (5.8%) admissions were potentially eligible for DRG modification/upgrade, with an estimated lost revenue of $22,680,584.50. ICD code omission rates compared against reference groups (private payors, Caucasians, middle-aged patients) demonstrated significant p-values < 0.05; similarly significant p-values were demonstrated when comparing patients of opposite sexes.
CONCLUSIONS: We successfully used rules-based algorithms and raw structured EHR data to identify ICD codes omitted from inpatient medical record claims. These missing ICD codes often had downstream effects, such as inaccurate DRG modifiers and missed reimbursement. Embedding augmented intelligence into this problematic workflow has the potential to improve administrative data accuracy and, in turn, financial outcomes.
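The audit pattern described above, comparing rule-detected disease against billed codes, can be sketched as follows. The rules and ICD-10 codes here are drastically simplified placeholders, not the eighteen production algorithms:

```python
# Hypothetical, oversimplified detection rules keyed by ICD-10 code.
RULES = {
    "N18.6": lambda adm: adm.get("on_dialysis", False),            # ESRD
    "E11.9": lambda adm: adm.get("hba1c", 0) >= 6.5,               # type 2 diabetes
    "D62":   lambda adm: adm.get("hemoglobin_drop_g_dl", 0) >= 2,  # acute blood loss anemia
}


def audit_admission(admission):
    """Return ICD codes the rules flag as present but not billed."""
    return sorted(
        code for code, rule in RULES.items()
        if rule(admission) and code not in admission["billed_codes"]
    )


adm = {"on_dialysis": True, "hba1c": 7.2, "billed_codes": {"E11.9"}}
print(audit_admission(adm))  # ['N18.6']
```

Running such rules over all admissions with completed billing yields the omission counts that drive the DRG-upgrade and lost-revenue estimates.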
Subject(s)
Algorithms , Comorbidity , Diagnosis-Related Groups , International Classification of Diseases , Humans , Retrospective Studies , Software , Electronic Health Records/standards , Male , Female , Middle Aged , Adult
ABSTRACT
BACKGROUND AND AIMS: Ectopic lipid storage is implicated in type 2 diabetes pathogenesis; hence, exercise to deplete stores (i.e., at the intensity that allows for maximal rate of lipid oxidation; MLO) might be optimal for restoring metabolic health. This intensity ("Fatmax") is estimated during incremental exercise ("Fatmax test"). However, in "the field" general recommendations exist regarding a range of percentages of maximal heart rate (HR) to elicit MLO. The degree to which this range is aligned with measured Fatmax has not been investigated. We compared measured HR at Fatmax with maximal HR percentages within the typically recommended range in a sample of 26 individuals (Female: n = 11, European ancestry: n = 17). METHODS AND RESULTS: Subjects completed a modified Fatmax test with a 5-min warmup, followed by incremental stages starting at 15 W with work rate increased by 15 W every 5 min until termination criteria were reached. Pulmonary gas exchange was recorded and average values for V̇O2 and V̇CO2 for the final minute of each stage were used to estimate substrate-oxidation rates. We modeled lipid-oxidation kinetics using a sinusoidal model and expressed MLO relative to peak V̇O2 and HR. Bland-Altman analysis demonstrated lack of concordance between HR at Fatmax and at 50%, 70%, and 80% of age-predicted maximum with a mean difference of 23 beats·min-1. CONCLUSION: Our results indicate that estimated "fat-burning" heart rate zones are inappropriate for prescribing exercise to elicit MLO and we recommend direct individual exercise lipid oxidation measurements to elicit these values.
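Substrate-oxidation rates from pulmonary gas exchange are conventionally estimated with the Frayn stoichiometric equations, which is presumably what underlies the stage-by-stage estimates here. A sketch, ignoring protein oxidation:

```python
def frayn_oxidation(vo2_l_min, vco2_l_min):
    """Frayn (1983) equations for whole-body oxidation rates (g/min),
    neglecting protein oxidation. Inputs are gas-exchange rates in L/min."""
    fat = 1.67 * vo2_l_min - 1.67 * vco2_l_min
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min
    return fat, cho


fat, cho = frayn_oxidation(2.0, 1.7)  # RER = 0.85
print(round(fat, 3), round(cho, 3))  # 0.501 1.315
```

The sinusoidal Fatmax model is then fitted to fat-oxidation rates computed this way across the incremental stages to locate the intensity of MLO.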
ABSTRACT
BACKGROUND: A single laboratory range for all individuals may fail to take into account underlying physiologic differences based on sex and genetic factors. We hypothesized that laboratory distributions differ based on self-reported sex and ethnicity and that ranges stratified by these factors better correlate with postoperative mortality and acute kidney injury (AKI). METHODS: Results from metabolic panels, complete blood counts, and coagulation panels for patients in outpatient encounters were identified from our electronic health record. Patients were grouped based on self-reported sex (2 groups) and ethnicity (6 groups). Stratified ranges were set to be the 2.5th/97.5th percentile for each sex/ethnic group. For patients undergoing procedures, each patient/laboratory result was classified as normal/abnormal using the stratified and nonstratified (traditional) ranges; overlap in the definitions was assessed between the 2 classifications by looking for the percentage of agreement in result classifications of normal/abnormal using the 2 methods. To assess which definitions of normal are most associated with adverse postoperative outcomes, the odds ratio (OR) for each outcome/laboratory result pair was assessed, and the frequency that the confidence intervals of ORs for the stratified versus nonstratified range did not overlap was examined. RESULTS: Among the 300 unique combinations (race × sex × laboratory type), median proportion overlap (meaning patient was either "normal" or "abnormal" for both methodologies) was 0.86 [q1, 0.80; q3, 0.89]. All laboratory results except 6 overlapped at least 80% of the time. The frequency of overlap did not differ among the racial/ethnic groups. In cases where the ORs were different, the stratified range was better associated with both AKI and mortality (P < .001). There was no trend of bias toward any specific sex/ethnic group. 
CONCLUSIONS: Baseline "normal" laboratory values differ across sex and ethnic groups, and ranges stratified by these groups are better associated with postoperative AKI and mortality as compared to the standard reference ranges.
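The stratified "normal" range described above is simply the central 95% of each sex/ethnicity group's outpatient distribution. A sketch using linear-interpolation percentiles (one common convention; the abstract does not specify the interpolation method):

```python
def stratified_range(values, lo=0.025, hi=0.975):
    """2.5th/97.5th percentile reference range for one sex/ethnicity group,
    using linear interpolation between order statistics."""
    xs = sorted(values)

    def pct(p):
        k = (len(xs) - 1) * p
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (xs[c] - xs[f]) * (k - f)

    return pct(lo), pct(hi)


# invented example: 100 outpatient results for a hypothetical group
print(stratified_range(list(range(1, 101))))  # roughly (3.475, 97.525)
```

Each patient result is then labelled normal/abnormal against the range of their own group rather than a single population-wide range.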
Subject(s)
Acute Kidney Injury , Ethnicity , Humans , Retrospective Studies , Reference Values , Patient Reported Outcome Measures
ABSTRACT
BACKGROUND: Visual analytics is the science of analytical reasoning supported by interactive visual interfaces called dashboards. In this report, we describe our experience addressing the challenges in visual analytics of anesthesia electronic health record (EHR) data using a commercially available business intelligence (BI) platform. As a primary outcome, we discuss some performance metrics of the dashboards, and as a secondary outcome, we outline some operational enhancements and financial savings associated with deploying the dashboards. METHODS: Data were transferred from the EHR to our departmental servers using several parallel processes. A custom structured query language (SQL) query was written to extract the relevant data fields and to clean the data. Tableau was used to design multiple dashboards for clinical operation, performance improvement, and business management. RESULTS: Before deployment of the dashboards, detailed case counts and attributions were available for the operating rooms (ORs) from perioperative services; however, the same level of detail was not available for non-OR locations. Deployment of the yearly case count dashboards provided near-real-time case count information from both central and non-OR locations among multiple campuses, which was not previously available. The visual presentation of monthly data for each year allowed us to recognize seasonality in case volumes and adjust our supply chain to prevent shortages. The dashboards highlighted the systemwide volume of cases in our endoscopy suites, which allowed us to target these supplies for pricing negotiations, with an estimated annual cost savings of $250,000. Our central venous pressure (CVP) dashboard enabled us to provide individual practitioner feedback, thus increasing our monthly CVP checklist compliance from approximately 92% to 99%. 
CONCLUSIONS: The customization and visualization of EHR data are both possible and worthwhile for the leveraging of information into easily comprehensible and actionable data for the improvement of health care provision and practice management. Limitations inherent to EHR data presentation make this customization necessary, and continued open access to the underlying data set is essential.
Subject(s)
Anesthesia , Anesthesiology , Electronic Health Records , Benchmarking , Operating Rooms
ABSTRACT
BACKGROUND: The introduction of electronic health records (EHRs) has helped physicians access relevant medical information on their patients. However, the design of EHRs can make it hard for clinicians to easily find, review, and document all of the relevant data, leading to documentation that is not fully reflective of the complete history. We hypothesized that the incidence of undocumented key comorbid diseases (atrial fibrillation [afib], congestive heart failure [CHF], chronic obstructive pulmonary disease [COPD], diabetes, and chronic kidney disease [CKD]) in the anesthesia preoperative evaluation was associated with increased postoperative length of stay (LOS) and mortality. METHODS: Charts of patients >18 years who received anesthesia in an inpatient facility were reviewed in this retrospective study. For each disease, a precise algorithm was developed to look for key structured data (medications, lab results, structured medical history, etc) in the EHR. Additionally, the checkboxes from the anesthesia preoperative evaluation were queried to determine the presence or absence of the documentation of the disease. Differences in mortality were modeled with logistic regression, and LOS was analyzed using linear regression. RESULTS: A total of 91,011 cases met inclusion criteria (age 18-89 years; 52% women, 48% men; 70% admitted from home). Agreement between the algorithms and the preoperative note was >84% for all comorbidities other than chronic pain (63.5%). The algorithm-detected disease not documented by the anesthesia team in 34.5% of cases for chronic pain (vs 1.9% of cases where chronic pain was documented but not detected by the algorithm), 4.0% of cases for diabetes (vs 2.1%), 4.3% of cases for CHF (vs 0.7%), 4.3% of cases for COPD (vs 1.1%), 7.7% of cases for afib (vs 0.3%), and 10.8% of cases for CKD (vs 1.7%). 
To assess the association of missed documentation with outcomes, we compared patients where the disease was detected by the algorithm but not documented (A+/P-) with patients where the disease was documented (A+/P+). For all diseases except chronic pain, the missed documentation was associated with a longer LOS. For mortality, the discrepancy was associated with increased mortality for afib, while the differences were insignificant for the other diseases. For each missed disease, the odds of mortality increased 1.52 (95% confidence interval [CI], 1.42-1.63) and the LOS increased by approximately 11%, geometric mean ratio of 1.11 (95% CI, 1.10-1.12). CONCLUSIONS: Anesthesia preoperative evaluations not infrequently fail to document disease for which there is evidence of disease in the EHR data. This missed documentation is associated with an increased LOS and mortality in perioperative patients.
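The A/P comparison used above is a 2x2 cross-tabulation per comorbidity of algorithm detection against note documentation. A sketch with invented example data:

```python
def concordance(algorithm_flags, note_flags):
    """Cross-tabulate algorithm detection (A) against preoperative-note
    documentation (P); return the cell counts and fraction agreement."""
    cells = {"A+/P+": 0, "A+/P-": 0, "A-/P+": 0, "A-/P-": 0}
    for a, p in zip(algorithm_flags, note_flags):
        key = ("A+" if a else "A-") + "/" + ("P+" if p else "P-")
        cells[key] += 1
    agreement = (cells["A+/P+"] + cells["A-/P-"]) / len(algorithm_flags)
    return cells, agreement


# invented data: algorithm detects disease in 4 cases, note documents 3 of them
cells, agreement = concordance([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])
print(cells["A+/P-"], round(agreement, 4))  # 1 0.8333
```

The A+/P- cell is the "missed documentation" group whose outcomes are compared against A+/P+ in the mortality and length-of-stay models.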
Subject(s)
Anesthesia/adverse effects , Documentation , Electronic Health Records , Length of Stay , Postoperative Complications/etiology , Preoperative Care/adverse effects , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Anesthesia/mortality , Checklist , Comorbidity , Data Mining , Data Warehousing , Female , Humans , Male , Middle Aged , Postoperative Complications/mortality , Postoperative Complications/therapy , Preoperative Care/mortality , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , Treatment Outcome , Workflow , Young Adult
ABSTRACT
BACKGROUND: Many hospitals have replaced their legacy anesthesia information management system with an enterprise-wide electronic health record system. Integrating the anesthesia data within the context of the global hospital information infrastructure has created substantive challenges for many organizations. A process to build a perioperative data warehouse from Epic was recently published from the University of California Los Angeles (UCLA), but the generalizability of that process is unknown. We describe the implementation of their process at the University of Miami (UM). METHODS: The UCLA process was tested at UM, and performance was evaluated following the configuration of a reporting server and transfer of the required Clarity tables to that server. Modifications required for the code to execute correctly in the UM environment were identified and implemented, including the addition of locally specified elements in the database. RESULTS: The UCLA code to build the base tables in the perioperative data warehouse executed correctly after minor modifications to match the local server and database architecture at UM. The 26 stored procedures in the UCLA process all ran correctly using the default settings provided and populated the base tables. After modification of the item lists to reflect the UM implementation of Epic (eg, medications, laboratory tests, physiologic monitors, and anesthesia machine parameters), the UCLA code ran correctly and populated the base tables. The data from those tables were used successfully to populate the existing perioperative data warehouse at UM, which housed data from the legacy anesthesia information management system of the institution. The time to pull data from Epic and populate the perioperative data warehouse was 197 ± 47 minutes (standard deviation [SD]) on weekdays and 260 ± 56 minutes (SD) on weekend days, measured over 100 consecutive days. 
The longer times on weekends reflect the simultaneous execution of database maintenance tasks on the reporting server. The UCLA extract process has been in production at UM for the past 18 months and has been invaluable for quality assurance, business process, and research activities. CONCLUSIONS: The data schema developed at UCLA proved to be a practical and scalable method to extract information from the Epic electronic health system database into the perioperative data warehouse in use at UM. The UCLA process for building a comprehensive perioperative data warehouse from Epic is extensible, and other hospitals seeking more efficient access to their electronic health record data should consider implementing it.
Subject(s)
Data Warehousing , Database Management Systems , Electronic Health Records , Hospital Information Systems , Access to Information , Data Mining , Databases, Factual , Humans , Perioperative Care
ABSTRACT
BACKGROUND: Acute kidney injury (AKI) has been well documented in adults after noncardiac surgery and demonstrated to be associated with adverse outcomes. We report the prevalence of AKI after pediatric noncardiac surgery, the perioperative factors associated with postoperative AKI, and the association of AKI with postoperative outcomes in children undergoing noncardiac surgery. METHODS: Patients ≤18 years of age who underwent noncardiac surgery with a serum creatinine measurement during the 12 months preceding surgery and no history of end-stage renal disease were included in this retrospective observational study at a single tertiary academic hospital. Patients were evaluated during the first 7 days after surgery for development of any stage of AKI, according to Kidney Disease: Improving Global Outcomes (KDIGO) criteria. Patients were classified into stages of KDIGO AKI for the purposes of describing prevalence. For further analyses, patients were grouped into those who developed any stage of AKI postoperatively and those who did not. Additionally, the time point at which each patient was first diagnosed with stage I AKI or greater was also assessed. Pre-, intra-, and postoperative factors were compared between the 2 groups. A multivariable Cox proportional hazards regression model was created to examine the time to first diagnosis of AKI using all nonredundant covariates. Analysis of the association of AKI with postoperative outcomes, mortality and 30-day readmission, was undertaken utilizing propensity score-matched controls and a multivariable Cox proportional hazards regression model. RESULTS: A total of 25,203 cases occurred between 2013 and 2018; 8924 met inclusion criteria. Among this cohort, the observed prevalence of postoperative AKI was 3.2% (288 cases; confidence interval [CI], 2.9-3.6). The multivariable Cox model showed American Society of Anesthesiologists (ASA) status to be associated with the development of postoperative AKI.
Several other factors, including intraoperative hypotension, were significantly associated with postoperative AKI in univariable models but found not to be significantly associated after adjustment. The multivariable Cox analyses with propensity-matched controls showed an estimated hazard ratio of 3.28 for mortality (CI, 1.71-6.32, P < .001) and 1.55 for 30-day readmission (CI, 1.08-2.23, P = .018) in children who developed AKI versus those who did not. CONCLUSIONS: In children undergoing noncardiac surgery, postoperative AKI occurred in 3.2% of patients. Several factors, including intraoperative hypotension, were significantly associated with postoperative AKI in univariable models. After adjustment, only ASA status was found to be significantly associated with AKI in children after noncardiac surgery. Postoperative AKI was found to be associated with significantly higher rates of mortality and 30-day readmission in multivariable, time-varying models with propensity-matched controls.
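KDIGO creatinine-based staging, as applied over the 7-day postoperative window, reduces to a few thresholds. A sketch that omits the urine-output, renal-replacement, and 48-h timing criteria of the full definition:

```python
def kdigo_stage(scr_mg_dl, baseline_mg_dl):
    """KDIGO AKI stage from serum creatinine alone (simplified: the full
    criteria also use urine output, renal replacement therapy, and require
    the 0.3 mg/dL absolute rise to occur within 48 h)."""
    ratio = scr_mg_dl / baseline_mg_dl
    if ratio >= 3.0 or scr_mg_dl >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or scr_mg_dl - baseline_mg_dl >= 0.3:
        return 1
    return 0


# baseline 0.5 mg/dL (typical pediatric value): stages 0 through 3
print([kdigo_stage(s, 0.5) for s in (0.6, 0.9, 1.1, 1.6)])  # [0, 1, 2, 3]
```

Any stage >= 1 would place the patient in the postoperative AKI group for the analyses above.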
Subject(s)
Acute Kidney Injury/epidemiology , Surgical Procedures, Operative/adverse effects , Acute Kidney Injury/blood , Acute Kidney Injury/diagnosis , Adolescent , Age Factors , Biomarkers/blood , Child , Child, Preschool , Creatinine/blood , Female , Humans , Infant , Los Angeles/epidemiology , Male , Prevalence , Propensity Score , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , Treatment Outcome
ABSTRACT
BACKGROUND: Although prediction of hospital readmissions has been studied in medical patients, it has received relatively little attention in surgical patient populations. Published predictors require information only available at the moment of discharge. The authors hypothesized that machine learning approaches can be leveraged to accurately predict readmissions in postoperative patients from the emergency department. Further, the authors hypothesized that these approaches can accurately predict the risk of readmission much sooner than hospital discharge. METHODS: Using a cohort of surgical patients at a tertiary care academic medical center, surgical, demographic, lab, medication, care team, and current procedural terminology data were extracted from the electronic health record. The primary outcome was whether there existed a future hospital readmission originating from the emergency department within 30 days of surgery. Secondarily, the time interval from surgery to the prediction was analyzed at 0, 12, 24, 36, 48, and 60 h. Different machine learning models for predicting the primary outcome were evaluated with respect to the area under the receiver operating characteristic curve metric using different permutations of the available features. RESULTS: Surgical hospital admissions (N = 34,532) from April 2013 to December 2016 were included in the analysis. Surgical and demographic features led to moderate discrimination for prediction after discharge (area under the curve: 0.74 to 0.76), whereas medication, consulting team, and current procedural terminology features did not improve the discrimination. Lab features improved discrimination, with gradient-boosted trees attaining the best performance (area under the curve: 0.866, SD 0.006). This performance was sustained during temporal validation with 2017 to 2018 data (area under the curve: 0.85 to 0.88).
Lastly, the discrimination of the predictions calculated 36 h after surgery (area under the curve: 0.88 to 0.89) nearly matched those from time of discharge. CONCLUSIONS: A machine learning approach to predicting postoperative readmission can produce hospital-specific models for accurately predicting 30-day readmissions via the emergency department. Moreover, these predictions can be confidently calculated at 36 h after surgery without consideration of discharge-level data.
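The area under the ROC curve reported throughout equals the probability that a randomly chosen readmitted patient is scored above a randomly chosen non-readmitted one (the Mann-Whitney identity). A direct, if quadratic-time, sketch:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs the model ranks correctly,
    counting tied scores as half a correct ranking."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))


print(auc([0.9, 0.7], [0.1, 0.8]))  # 0.75
```

This threshold-free view explains why AUC is a natural metric for comparing feature sets and model families as done above; production implementations use a sort-based O(n log n) computation instead.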
Subject(s)
Machine Learning , Patient Readmission , Emergency Service, Hospital , Hospitalization , Humans , Patient Discharge
ABSTRACT
BACKGROUND: Rapid, preoperative identification of patients with the highest risk for medical complications is necessary to ensure that limited infrastructure and human resources are directed towards those most likely to benefit. Existing risk scores either lack specificity at the patient level or utilise the American Society of Anesthesiologists (ASA) physical status classification, which requires a clinician to review the chart. METHODS: We report on the use of machine learning algorithms, specifically random forests, to create a fully automated score that predicts postoperative in-hospital mortality based solely on structured data available at the time of surgery. Electronic health record data from 53 097 surgical patients (2.01% mortality rate) who underwent general anaesthesia between April 1, 2013 and December 10, 2018 in a large US academic medical centre were used to extract 58 preoperative features. RESULTS: Using a random forest classifier we found that automatically obtained preoperative features (area under the curve [AUC] of 0.932, 95% confidence interval [CI] 0.910-0.951) outperforms Preoperative Score to Predict Postoperative Mortality (POSPOM) scores (AUC of 0.660, 95% CI 0.598-0.722), Charlson comorbidity scores (AUC of 0.742, 95% CI 0.658-0.812), and ASA physical status (AUC of 0.866, 95% CI 0.829-0.897). Including the ASA physical status with the preoperative features achieves an AUC of 0.936 (95% CI 0.917-0.955). CONCLUSIONS: This automated score outperforms the ASA physical status score, the Charlson comorbidity score, and the POSPOM score for predicting in-hospital mortality. Additionally, we integrate this score with a previously published postoperative score to demonstrate the extent to which patient risk changes during the perioperative period.
Subject(s)
Electronic Health Records/statistics & numerical data , Health Status , Hospital Mortality , Machine Learning , Postoperative Complications/diagnosis , Adolescent , Adult , Aged , Aged, 80 and over , California , Comorbidity , Female , Humans , Male , Middle Aged , Preoperative Period , Risk Assessment , Risk Factors , Young Adult
ABSTRACT
BACKGROUND: Previous work in the field of medical informatics has shown that rules-based algorithms can be created to identify patients with various medical conditions; however, these techniques have not been compared to actual clinician notes nor has the ability to predict complications been tested. We hypothesize that a rules-based algorithm can successfully identify patients with the diseases in the Revised Cardiac Risk Index (RCRI). METHODS: Patients undergoing surgery at the University of California, Los Angeles Health System between April 1, 2013 and July 1, 2016 and who had at least 2 previous office visits were included. For each disease in the RCRI except renal failure (congestive heart failure, ischemic heart disease, cerebrovascular disease, and diabetes mellitus), diagnosis algorithms were created based on diagnostic and standard clinical treatment criteria. For each disease state, the prevalence of the disease as determined by the algorithm, International Classification of Disease (ICD) code, and anesthesiologist's preoperative note were determined. Additionally, 400 American Society of Anesthesiologists classes III and IV cases were randomly chosen for manual review by an anesthesiologist. The sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve were determined using the manual review as a gold standard. Last, the ability of the RCRI as calculated by each of the methods to predict in-hospital mortality was determined, and the time necessary to run the algorithms was calculated. RESULTS: A total of 64,151 patients met inclusion criteria for the study. In general, the incidence of definite or likely disease determined by the algorithms was higher than that detected by the anesthesiologist. Additionally, in all disease states, the prevalence of disease was always lowest for the ICD codes, followed by the preoperative note, followed by the algorithms.
In the subset of patients for whom the records were manually reviewed, the algorithms were generally the most sensitive and the ICD codes the most specific. When computing the modified RCRI using each of the methods, the modified RCRI from the algorithms predicted in-hospital mortality with an area under the receiver operating characteristic curve of 0.70 (0.67-0.73), which compared to 0.70 (0.67-0.72) for ICD codes and 0.64 (0.61-0.67) for the preoperative note. On average, the algorithms took 12.64 ± 1.20 minutes to run on 1.4 million patients. CONCLUSIONS: Rules-based algorithms for disease in the RCRI can be created that perform with a similar discriminative ability as compared to physician notes and ICD codes but with significantly increased economies of scale.
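The modified RCRI computed by each method is simply a count of index conditions present. A sketch (the standard six-component index is listed for completeness; in the study each component's presence comes from the algorithm, the ICD codes, or the preoperative note):

```python
RCRI_COMPONENTS = (
    "high-risk surgery",
    "ischemic heart disease",
    "congestive heart failure",
    "cerebrovascular disease",
    "insulin-treated diabetes mellitus",
    "creatinine > 2.0 mg/dL",
)


def rcri_score(conditions_present):
    """Revised Cardiac Risk Index: one point per component present."""
    return sum(c in conditions_present for c in RCRI_COMPONENTS)


print(rcri_score({"ischemic heart disease", "cerebrovascular disease"}))  # 2
```

Because the score is identical in form across methods, differences in discriminative ability trace entirely to how each method ascertains the component diseases.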
Subject(s)
Medical Informatics/methods , Myocardial Infarction/diagnosis , Risk Assessment/methods , Adult , Aged , Algorithms , Anesthesiology , Area Under Curve , Comorbidity , Databases, Factual , Diabetes Complications/therapy , Electronic Health Records , Female , Heart Failure/complications , Humans , Incidence , Male , Middle Aged , Myocardial Infarction/epidemiology , Myocardial Ischemia/complications , Pattern Recognition, Automated , Postoperative Complications/epidemiology , Prevalence , ROC Curve , Renal Insufficiency/complications , Risk Factors , Software
ABSTRACT
BACKGROUND: Affecting nearly 30% of all surgical patients, postoperative nausea and vomiting (PONV) can lead to patient dissatisfaction, prolonged recovery times, and unanticipated hospital admissions. There are well-established, evidence-based guidelines for the prevention of PONV, yet physicians adhere to them inconsistently. We hypothesized that an electronic medical record-based clinical decision support (CDS) approach that incorporates a new PONV pathway, an education initiative, and a personalized feedback reporting system could decrease the incidence of PONV. METHODS: Two years of data, from February 17, 2015 to February 16, 2016, were acquired from our customized University of California Los Angeles Anesthesiology perioperative data warehouse. We queried the entire subpopulation of surgical cases that received general anesthesia with volatile anesthetics, were ≥12 years of age, and spent time recovering in any of the postanesthesia care units (PACUs). We defined PONV as the administration of an antiemetic medication during that PACU recovery. Our CDS system added PONV-specific questions to the preoperative evaluation form, a real-time intraoperative pathway compliance indicator, preoperative PONV risk alerts, and individualized reports emailed weekly to clinical providers. The association between the intervention and PONV was assessed with interrupted time series regression models, after matching the groups on PONV-specific risk factors, by comparing the pre/postintervention slopes of PONV incidence and by comparing observed incidences in the postintervention period with those expected had the preintervention slope continued. RESULTS: After executing the PONV risk-balancing algorithm, the final cohort contained 36,796 cases, down from the 40,831 that met the inclusion criteria.
The incidence of PONV the week before the intervention was estimated at 19.1% (95% confidence interval [CI], 17.9%-20.2%). Directly after implementation of the CDS, the total incidence decreased to 16.9% (95% CI, 15.2%-18.5%; P = .007). Within the high-risk population, the incidence of PONV decreased from 29.3% (95% CI, 27.6%-31.1%) to 23.5% (95% CI, 20.5%-26.5%; P < .001). There was no significant difference in the PONV incidence slopes over the entire pre/postintervention periods in the high- or low-risk groups, despite an abrupt decline in PONV incidence among high-risk patients within the first month of the CDS implementation. CONCLUSIONS: We demonstrate an approach to reducing PONV using individualized emails and anesthesia-specific CDS tools integrated directly into a commercial electronic medical record. We found an associated decrease in PACU administration of rescue antiemetics for our high-risk patient population.
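The interrupted time series analysis described above reduces to a segmented regression with four terms: an intercept, a baseline trend, a level change at the intervention, and a trend change afterwards. A minimal sketch on synthetic weekly incidences (the data and effect size below are illustrative, not the study's):

```python
import numpy as np

# Segmented regression for an interrupted time series: weekly PONV incidence
# modelled with a preintervention slope, a level change at the intervention,
# and a slope change afterwards. Synthetic data stands in for the cohort.
rng = np.random.default_rng(0)
weeks = np.arange(104, dtype=float)            # two years of weekly points
post = (weeks >= 52).astype(float)             # intervention at week 52
weeks_after = np.where(post == 1, weeks - 52, 0.0)

# Illustrative incidence: flat ~19% before, ~2-point level drop after.
y = 0.19 - 0.02 * post + rng.normal(0, 0.005, weeks.size)

# Design matrix: intercept, baseline trend, level change, trend change.
X = np.column_stack([np.ones_like(weeks), weeks, post, weeks_after])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

level_change = beta[2]                         # abrupt change at the intervention
print(f"estimated level change at intervention: {level_change:.3f}")
```

Comparing observed postintervention points against the counterfactual line `beta[0] + beta[1] * weeks` is exactly the "expected had the preintervention slope continued" comparison the abstract describes.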
Subject(s)
Antiemetics/administration & dosage , Postoperative Nausea and Vomiting/drug therapy , Adolescent , Adult , Aged , Algorithms , Anesthesia, General , Child , Comparative Effectiveness Research , Data Collection , Decision Support Systems, Clinical , Dexamethasone/administration & dosage , Electronic Health Records , Feedback , Female , Humans , Incidence , Interrupted Time Series Analysis , Los Angeles , Male , Middle Aged , Ondansetron/administration & dosage , Propensity Score , Quality Improvement , Risk , Scopolamine/administration & dosage , Young Adult
ABSTRACT
BACKGROUND: The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. METHODS: The data used to train and validate the algorithm consisted of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed the improvement of the deep neural network from adding the American Society of Anesthesiologists (ASA) Physical Status Classification, and the robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores, including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and Risk Stratification Index. RESULTS: In-hospital mortality in the training and test sets was 0.81% and 0.73%, respectively. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90; 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99). CONCLUSIONS: Deep neural networks can predict in-hospital mortality from automatically extractable intraoperative data but are not (yet) superior to existing methods.
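The model family named in the methods, a feed-forward network with a logistic output trained by stochastic gradient descent with momentum, can be sketched compactly. Layer sizes, learning rate, momentum, and the synthetic data below are illustrative assumptions standing in for the paper's 87 features and architecture:

```python
import numpy as np

# Toy feed-forward network with one tanh hidden layer and a logistic output,
# trained by mini-batch SGD with momentum on synthetic, linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8))                       # stand-in "intraoperative features"
y = (X[:, 0] + X[:, 1] > 0).astype(float)           # stand-in binary outcome

W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]
velocity = [np.zeros_like(p) for p in params]
lr, mu = 0.05, 0.9                                  # illustrative hyperparameters

def forward(xb):
    h = np.tanh(xb @ W1 + b1)                       # hidden layer
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # logistic output

for epoch in range(200):
    for i in range(0, len(X), 64):                  # mini-batch SGD
        xb, yb = X[i:i + 64], y[i:i + 64, None]
        h, p = forward(xb)
        dz2 = (p - yb) / len(xb)                    # grad of cross-entropy + sigmoid
        dh = (dz2 @ W2.T) * (1 - h ** 2)            # backprop through tanh
        grads = [xb.T @ dh, dh.sum(0), h.T @ dz2, dz2.sum(0)]
        for p_, v, g in zip(params, velocity, grads):
            v *= mu; v -= lr * g                    # momentum update (in place)
            p_ += v

_, probs = forward(X)
accuracy = ((probs[:, 0] > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The logistic output makes the network directly comparable to logistic regression and to threshold-based scores via the receiver operating characteristic curve, which is how the paper reports its results.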
Subject(s)
Hospital Mortality/trends , Machine Learning/trends , Neural Networks, Computer , Postoperative Complications/diagnosis , Postoperative Complications/mortality , Adult , Aged , Female , Humans , Male , Middle Aged , Predictive Value of Tests , ROC Curve
ABSTRACT
Big data, smart data, predictive analytics, and similar terms are ubiquitous in the lay and scientific literature. Despite their frequency of use, however, these terms are often poorly understood, and evidence of their disruption of clinical care is hard to find. This article aims to address these issues by first defining the term big data and exploring the ways in which modern medical data, both inside and outside the electronic medical record, meet the established definitions of big data. We then define the term smart data and discuss the transformations necessary to turn big data into smart data. Finally, we examine the ways in which this transition from big to smart data will affect research, retrospective work, and ultimately patient care.
Subject(s)
Artificial Intelligence , Big Data , Data Mining/methods , Databases, Factual , Electronic Health Records , Medical Informatics/methods , Humans , Terminology as Topic
ABSTRACT
BACKGROUND: Perioperative hypothermia may increase the incidences of wound infection, blood loss, transfusion, and cardiac morbidity. US national quality programs for perioperative normothermia specify the presence of at least 1 "body temperature" ≥35.5°C during the interval from 30 minutes before to 15 minutes after the anesthesia end time. Using data from 4 academic hospitals, we evaluated timing and measurement considerations relevant to the current requirements to guide hospitals wishing to report perioperative temperature measures using electronic data sources. METHODS: Anesthesia information management system databases from 4 hospitals were queried to obtain intraoperative temperatures and intervals to the anesthesia end time from discontinuation of temperature monitoring, end of surgery, and extubation. Inclusion criteria included age >16 years, use of a tracheal tube or supraglottic airway, and case duration ≥60 minutes. The end-of-case temperature was determined as the maximum intraoperative temperature recorded within 30 minutes before the anesthesia end time (ie, the temperature that would be used for reporting purposes). The fractions of cases with intervals >30 minutes between the last intraoperative temperature and the anesthesia end time were determined. RESULTS: Among the hospitals, averages (binned by quarters) of 34.5% to 59.5% of cases had intraoperative temperature monitoring discontinued >30 minutes before the anesthesia end time. Even if temperature measurement had been continued until extubation, averages of 5.9% to 20.8% of cases would have exceeded the allowed 30-minute window. Averages of 8.9% to 21.3% of cases had end-of-case intraoperative temperatures <35.5°C (ie, a quality measure failure). CONCLUSIONS: Because of timing considerations, a substantial fraction of cases would have been ineligible to use the end-of-case intraoperative temperature for national quality program reporting. 
Thus, retrieval of postanesthesia care unit temperatures would have been necessary. A substantial percentage of cases had end-of-case intraoperative temperatures below the 35.5°C threshold, also requiring postoperative measurement to determine whether the quality measure was satisfied. Institutions considering reporting national quality measures for perioperative normothermia should consider the technical and logistical issues identified here to achieve a high level of compliance under the specified regulatory language.
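The reporting rule the study evaluates, the maximum intraoperative temperature within 30 minutes before the anesthesia end time, checked against the 35.5°C threshold, can be sketched as a small eligibility-and-pass check. The record layout (timestamped readings) is an assumption, not the anesthesia information management systems' actual schema:

```python
from datetime import datetime, timedelta

# Sketch of the end-of-case temperature logic described above: take the
# maximum temperature recorded within 30 minutes before the anesthesia end
# time, then flag ineligibility (monitoring stopped too early) or
# quality-measure failure (<35.5°C). Record layout is an assumption.
WINDOW = timedelta(minutes=30)
THRESHOLD_C = 35.5

def end_of_case_temperature(readings, anesthesia_end):
    """readings: list of (timestamp, temperature in °C) tuples."""
    in_window = [temp for ts, temp in readings
                 if anesthesia_end - WINDOW <= ts <= anesthesia_end]
    if not in_window:
        return None, "ineligible: no temperature within 30 min of anesthesia end"
    temp = max(in_window)                     # best temperature counts for reporting
    status = "pass" if temp >= THRESHOLD_C else "fail: below 35.5C"
    return temp, status

end = datetime(2016, 1, 1, 10, 0)
readings = [(end - timedelta(minutes=50), 36.1),   # too early to count
            (end - timedelta(minutes=20), 35.2),
            (end - timedelta(minutes=10), 35.7)]
print(end_of_case_temperature(readings, end))      # (35.7, 'pass')
```

Cases falling into the first branch are exactly the 34.5% to 59.5% the study found whose monitoring was discontinued more than 30 minutes before the anesthesia end time, and hence could not be reported from intraoperative data alone.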