1 - 20 of 32
1.
J Am Med Inform Assoc ; 31(6): 1322-1330, 2024 May 20.
Article En | MEDLINE | ID: mdl-38679906

OBJECTIVES: To compare and externally validate popular deep learning model architectures and data transformation methods for variable-length time series data in 3 clinical tasks (clinical deterioration, severe acute kidney injury [AKI], and suspected infection). MATERIALS AND METHODS: This multicenter retrospective study included admissions at 2 medical centers that spanned 2007-2022. Distinct datasets were created for each clinical task, with 1 site used for training and the other for testing. Three feature engineering methods (normalization, standardization, and piece-wise linear encoding with decision trees [PLE-DTs]) and 3 architectures (long short-term memory/gated recurrent unit [LSTM/GRU], temporal convolutional network, and time-distributed wrapper with convolutional neural network [TDW-CNN]) were compared in each clinical task. Model discrimination was evaluated using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC). RESULTS: The study comprised 373 825 admissions for training and 256 128 admissions for testing. LSTM/GRU models tied with TDW-CNN models, with each obtaining the highest mean AUPRC in 2 tasks, and LSTM/GRU had the highest mean AUROC across all tasks (deterioration: 0.81, AKI: 0.92, infection: 0.87). PLE-DT with LSTM/GRU achieved the highest AUPRC in all tasks. DISCUSSION: When externally validated in 3 clinical tasks, the LSTM/GRU model architecture with PLE-DT-transformed data demonstrated the highest AUPRC in all tasks. Multiple models achieved similar performance when evaluated using AUROC. CONCLUSION: The LSTM architecture performs as well as or better than some newer architectures, and PLE-DT may enhance the AUPRC in variable-length time series data for predicting clinical outcomes during external validation.
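
[Editor's note] To make the PLE-DT transformation concrete, a minimal sketch of piece-wise linear encoding with decision-tree bins, assuming scikit-learn and NumPy; the data, bin count, and function name are illustrative stand-ins for the idea, not the authors' implementation:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def ple_dt_encode(x_col, y, max_bins=8):
        # Bin edges come from a shallow tree fit against the task label, so the
        # bins are target-aware rather than equal-width.
        tree = DecisionTreeClassifier(max_leaf_nodes=max_bins).fit(x_col.reshape(-1, 1), y)
        split = tree.tree_.feature >= 0                  # internal nodes only
        edges = np.sort(tree.tree_.threshold[split])
        edges = np.concatenate([[x_col.min()], edges, [x_col.max()]])
        # Each scalar becomes a vector: 1 for bins below it, a fraction for the
        # bin containing it, 0 for bins above, preserving ordering information.
        enc = np.zeros((len(x_col), len(edges) - 1))
        for i, v in enumerate(x_col):
            for b in range(len(edges) - 1):
                lo, hi = edges[b], edges[b + 1]
                enc[i, b] = np.clip((v - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
        return enc

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)                             # hypothetical vital-sign column
    y = (x + rng.normal(size=500) > 0).astype(int)       # hypothetical binary label
    print(ple_dt_encode(x, y).shape)                     # (500, number_of_bins)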


Deep Learning , Humans , Retrospective Studies , Acute Kidney Injury , Neural Networks, Computer , ROC Curve , Male , Datasets as Topic , Female , Middle Aged
2.
medRxiv ; 2024 Mar 19.
Article En | MEDLINE | ID: mdl-38562803

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, only underwent retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate it externally, test it prospectively, and evaluate it across patient subgroups. Methods: All adult patients hospitalized on the wards in seven hospitals from 2008-2022 were used to develop eCARTv5, with demographics, vital signs, clinician documentation, and laboratory values utilized to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009-2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI 0.834, 0.835), followed by NEWS (0.766; 95% CI 0.766, 0.767) and MEWS (0.704; 95% CI 0.703, 0.704). eCART's performance remained high (AUROC ≥0.80) across a range of patient demographics and clinical conditions, and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than the NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.
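
[Editor's note] As an illustration of this kind of head-to-head AUROC comparison, a small sketch assuming scikit-learn and NumPy; the variable names and the bootstrap confidence interval are stand-ins, not the study's actual statistical code:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def auroc_ci(y, s, n_boot=1000, seed=0):
        # Point estimate plus a simple percentile-bootstrap 95% CI.
        y = np.asarray(y); s = np.asarray(s)
        rng = np.random.default_rng(seed)
        idx = np.arange(len(y)); aucs = []
        for _ in range(n_boot):
            b = rng.choice(idx, size=len(idx), replace=True)
            if len(np.unique(y[b])) == 2:          # need both classes in resample
                aucs.append(roc_auc_score(y[b], s[b]))
        lo, hi = np.percentile(aucs, [2.5, 97.5])
        return roc_auc_score(y, s), lo, hi

    rng = np.random.default_rng(0)
    y = (rng.random(1000) < 0.05).astype(int)                  # hypothetical outcome
    ecart = 2 * y + rng.normal(size=1000)                      # hypothetical risk score
    print(auroc_ci(y, ecart))                                  # (auc, ci_low, ci_high)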

3.
J Am Med Inform Assoc ; 31(6): 1291-1302, 2024 May 20.
Article En | MEDLINE | ID: mdl-38587875

OBJECTIVE: The timely stratification of trauma injury severity can enhance the quality of trauma care, but it requires labor-intensive manual annotation by certified trauma coders. The objective of this study is to develop machine learning models for the stratification of trauma injury severity across various body regions using clinical text and structured electronic health records (EHRs) data. MATERIALS AND METHODS: Our study utilized clinical documents and structured EHR variables linked with the trauma registry data to create 2 machine learning models with different approaches to representing text. The first fuses concept unique identifiers (CUIs) extracted from free text with structured EHR variables, while the second integrates free text with structured EHR variables. Temporal validation was undertaken to ensure the models' temporal generalizability. Additionally, analyses were conducted to assess variable importance. RESULTS: Both models demonstrated strong performance in categorizing leg injuries, achieving macro-F1 scores of over 0.8. They also showed considerable accuracy, with macro-F1 scores exceeding or near 0.7, in assessing injuries in the areas of the chest and head. Our variable importance analysis showed that the most important features in the model have strong face validity in determining clinically relevant trauma injuries. DISCUSSION: The CUI-based model achieves performance comparable to, if not higher than, the free-text-based model, with reduced complexity. Furthermore, integrating structured EHR data improves performance, particularly when the text modalities are insufficiently indicative. CONCLUSIONS: Our multimodal, multiclass models can provide accurate stratification of trauma injury severity and clinically relevant interpretations.
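
[Editor's note] Since macro-F1 is the headline metric here, a one-line sketch of how it differs from plain accuracy, assuming scikit-learn; the severity labels are hypothetical:

    from sklearn.metrics import f1_score

    # Macro-F1 averages per-class F1 without weighting by class frequency, so
    # rare severity levels count as much as common ones. Hypothetical 3-level labels:
    y_true = [0, 1, 2, 2, 1, 0, 2]
    y_pred = [0, 1, 2, 1, 1, 0, 2]
    print(f1_score(y_true, y_pred, average="macro"))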


Electronic Health Records , Machine Learning , Wounds and Injuries , Humans , Wounds and Injuries/classification , Injury Severity Score , Registries , Trauma Severity Indices , Natural Language Processing
4.
Crit Care Explor ; 6(3): e1066, 2024 Mar.
Article En | MEDLINE | ID: mdl-38505174

OBJECTIVES: Alcohol withdrawal syndrome (AWS) may progress to require high-intensity care. Approaches to identify hospitalized patients with AWS who received a higher level of care have not been previously examined. This study aimed to examine the utility of Clinical Institute Withdrawal Assessment for Alcohol, Revised (CIWA-Ar) scale scores and medication doses for alcohol withdrawal management in identifying patients who received high-intensity care. DESIGN: A multicenter observational cohort study of hospitalized adults with alcohol withdrawal. SETTING: University of Chicago Medical Center and University of Wisconsin Hospital. PATIENTS: Inpatient encounters between November 2008 and February 2022 with a CIWA-Ar score greater than 0 and a benzodiazepine or barbiturate administered within the first 24 hours. The primary composite outcome was progression to high-intensity care (intermediate care or ICU). INTERVENTIONS: None. MAIN RESULTS: Among the 8742 patients included in the study, 37.5% (n = 3280) progressed to high-intensity care. The odds ratio (OR) for the composite outcome increased above 1.0 when the CIWA-Ar score was 24. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) at this threshold were 0.12 (95% CI, 0.11-0.13), 0.95 (95% CI, 0.94-0.95), 0.58 (95% CI, 0.54-0.61), and 0.64 (95% CI, 0.63-0.65), respectively. The OR increased above 1.0 at a 24-hour lorazepam milligram equivalent dose cutoff of 15 mg. The sensitivity, specificity, PPV, and NPV at this threshold were 0.16 (95% CI, 0.14-0.17), 0.96 (95% CI, 0.95-0.96), 0.68 (95% CI, 0.65-0.72), and 0.65 (95% CI, 0.64-0.66), respectively. CONCLUSIONS: Neither CIWA-Ar scores nor medication dose cutoff points were effective measures for identifying patients with alcohol withdrawal who received high-intensity care. Research studies examining outcomes in patients who deteriorate with AWS will require better methods for cohort identification.
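
[Editor's note] The threshold metrics above all derive from a 2x2 confusion table. A minimal sketch, assuming NumPy, with hypothetical scores and outcomes in place of the study data:

    import numpy as np

    def threshold_metrics(y, score, cutoff):
        # Sensitivity, specificity, PPV, and NPV at a single cut-point.
        y = np.asarray(y); score = np.asarray(score)
        pred = score >= cutoff
        tp = np.sum(pred & (y == 1)); fp = np.sum(pred & (y == 0))
        fn = np.sum(~pred & (y == 1)); tn = np.sum(~pred & (y == 0))
        return {"sens": tp / (tp + fn), "spec": tn / (tn + fp),
                "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

    y = [1, 1, 0, 0, 1, 0, 0, 0]              # progressed to high-intensity care
    ciwa = [26, 18, 10, 25, 30, 8, 12, 5]     # hypothetical max CIWA-Ar in first 24 h
    print(threshold_metrics(y, ciwa, cutoff=24))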

5.
medRxiv ; 2024 Feb 06.
Article En | MEDLINE | ID: mdl-38370788

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.

6.
Resusc Plus ; 17: 100540, 2024 Mar.
Article En | MEDLINE | ID: mdl-38260119

Background and Objective: The Children's Early Warning Tool (CEWT), developed in Australia, is widely used in many countries to monitor the risk of deterioration in hospitalized children. Our objective was to compare CEWT prediction performance against a version of the Bedside Pediatric Early Warning Score (Bedside PEWS), Between the Flags (BTF), and the pediatric Calculated Assessment of Risk and Triage (pCART). Methods: We conducted a retrospective observational study of all patient admissions to the Comer Children's Hospital at the University of Chicago between 2009 and 2019. We compared performance for predicting the primary outcome of a direct ward-to-intensive care unit (ICU) transfer within the next 12 h using the area under the receiver operating characteristic curve (AUC). Alert rates at various score thresholds were also compared. Results: Of 50,815 ward admissions, 1,874 (3.7%) experienced the primary outcome. Among patients in Cohort 1 (years 2009-2017, on which the machine learning-based pCART was trained), CEWT performed slightly worse than Bedside PEWS but better than BTF (CEWT AUC 0.74 vs. Bedside PEWS 0.76, P < 0.001; vs. BTF 0.66, P < 0.001), while pCART performed best for patients in Cohort 2 (years 2018-2019; pCART AUC 0.84 vs. CEWT AUC 0.79, P < 0.001; vs. BTF AUC 0.67, P < 0.001; vs. Bedside PEWS 0.80, P < 0.001). Sensitivity, specificity, and positive predictive values varied across all four tools at the examined alert thresholds. Conclusion: CEWT has good discrimination for predicting which patients will likely be transferred to the ICU, while pCART performed best.

7.
JAMIA Open ; 6(4): ooad109, 2023 Dec.
Article En | MEDLINE | ID: mdl-38144168

Objectives: To develop and externally validate machine learning models using structured and unstructured electronic health record data to predict postoperative acute kidney injury (AKI) across inpatient settings. Materials and Methods: Data for adult postoperative admissions to the Loyola University Medical Center (2009-2017) were used for model development, and admissions to the University of Wisconsin-Madison (2009-2020) were used for validation. Structured features included demographics, vital signs, laboratory results, and nurse-documented scores. Unstructured text from clinical notes was converted into concept unique identifiers (CUIs) using the clinical Text Analysis and Knowledge Extraction System. The primary outcome was the development of Kidney Disease: Improving Global Outcomes stage 2 AKI within 7 days after leaving the operating room. We derived unimodal extreme gradient boosting machine (XGBoost) and elastic net logistic regression (GLMNET) models using structured-only data and multimodal models combining structured data with CUI features. Model comparison was performed using the area under the receiver operating characteristic curve (AUROC), with DeLong's test for statistical differences. Results: The study cohort included 138 389 adult patient admissions (mean [SD] age 58 [16] years; 11 506 [8%] African-American; and 70 826 [51%] female) across the 2 sites. Of those, 2959 (2.1%) developed stage 2 AKI or higher. Across all data types, XGBoost outperformed GLMNET (mean AUROC 0.81 [95% confidence interval (CI), 0.80-0.82] vs 0.78 [95% CI, 0.77-0.79]). The multimodal XGBoost model incorporating CUIs parameterized as term frequency-inverse document frequency (TF-IDF) showed the highest discrimination (AUROC 0.82 [95% CI, 0.81-0.83]) over unimodal models (AUROC 0.79 [95% CI, 0.78-0.80]). Discussion: A multimodal approach with structured data and TF-IDF weighting of CUIs increased model performance over structured-data-only models. Conclusion: These findings highlight the predictive power of CUIs when merged with structured data for clinical prediction models, which may improve the detection of postoperative AKI.
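
[Editor's note] A minimal sketch of the multimodal idea, assuming scikit-learn, SciPy, and xgboost: TF-IDF weighting of per-admission CUI "documents" concatenated with structured features. The CUI strings, feature values, and outcomes are toy stand-ins, not the study's pipeline:

    import numpy as np
    from scipy.sparse import hstack, csr_matrix
    from sklearn.feature_extraction.text import TfidfVectorizer
    from xgboost import XGBClassifier

    cui_docs = ["C0022660 C0020538", "C0027051 C0022660", "C0020538", "C0027051"]
    structured = np.array([[67, 1.4], [58, 0.9], [72, 2.1], [49, 1.1]])  # e.g., age, SCr
    y = np.array([1, 0, 1, 0])                                           # stage 2 AKI

    tfidf = TfidfVectorizer(token_pattern=r"C\d+")       # tokens are CUI codes
    X = hstack([tfidf.fit_transform(cui_docs), csr_matrix(structured)]).tocsr()
    model = XGBClassifier(n_estimators=50).fit(X, y)
    print(model.predict_proba(X)[:, 1])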

8.
Am J Respir Crit Care Med ; 207(10): 1300-1309, 2023 05 15.
Article En | MEDLINE | ID: mdl-36449534

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.


Neoplasms , Neutropenia , Sepsis , Adult , Humans , Retrospective Studies , Temperature , Neutropenia/complications , Sepsis/complications , Fever , Neoplasms/complications , Neoplasms/therapy
9.
Front Pediatr ; 11: 1284672, 2023.
Article En | MEDLINE | ID: mdl-38188917

Introduction: Critical deterioration in hospitalized children, defined as ward to pediatric intensive care unit (PICU) transfer followed by mechanical ventilation (MV) or vasoactive infusion (VI) within 12 h, has been used as a primary metric to evaluate the effectiveness of clinical interventions or quality improvement initiatives. We explore the association between critical events (CEs), i.e., MV or VI events, within the first 48 h of PICU transfer from the ward or emergency department (ED) and in-hospital mortality. Methods: We conducted a retrospective study of a cohort of PICU transfers from the ward or the ED at two tertiary-care academic hospitals. We determined the association between mortality and occurrence of CEs within 48 h of PICU transfer after adjusting for age, gender, hospital, and prior comorbidities. Results: Experiencing a CE within 48 h of PICU transfer was associated with an increased risk of mortality [OR 12.40 (95% CI: 8.12-19.23, P < 0.05)]. The increased risk of mortality was highest in the first 12 h [OR 11.32 (95% CI: 7.51-17.15, P < 0.05)] but persisted in the 12-48 h time interval [OR 2.84 (95% CI: 1.40-5.22, P < 0.05)]. Varying levels of risk were observed when considering ED or ward transfers only, when considering different age groups, and when considering individual 12-h time intervals. Discussion: We demonstrate that occurrence of a CE within 48 h of PICU transfer was associated with mortality after adjusting for confounders. Studies focusing on the impact of quality improvement efforts may benefit from using CEs within 48 h of PICU transfer as an additional evaluation metric, provided these events could have been influenced by the initiative.

10.
Int J Chron Obstruct Pulmon Dis ; 17: 2701-2709, 2022.
Article En | MEDLINE | ID: mdl-36299799

Background: Chronic obstructive pulmonary disease (COPD) is a leading cause of hospital readmissions. Few existing tools use electronic health record (EHR) data to forecast patients' readmission risk during index hospitalizations. Objective: We used machine learning and in-hospital data to model 90-day risk for and cause of readmission among inpatients with acute exacerbations of COPD (AE-COPD). Design: Retrospective cohort study. Participants: Adult patients admitted for AE-COPD at the University of Chicago Medicine between November 7, 2008 and December 31, 2018 meeting International Classification of Diseases (ICD)-9 or -10 criteria consistent with AE-COPD were included. Methods: Random forest models were fit to predict readmission risk and respiratory-related readmission cause. Predictor variables included demographics, comorbidities, and EHR data from patients' index hospital stays. Models were derived on 70% of observations and validated on a 30% holdout set. Performance of the readmission risk model was compared to that of the HOSPITAL score. Results: Among 3238 patients admitted for AE-COPD, 1103 patients were readmitted within 90 days. Of the readmission causes, 61% (n = 672) were respiratory-related and COPD (n = 452) was the most common. Our readmission risk model had a significantly higher area under the receiver operating characteristic curve (AUROC) (0.69 [0.66, 0.73]) compared to the HOSPITAL score (0.63 [0.59, 0.67]; p = 0.002). The respiratory-related readmission cause model had an AUROC of 0.73 [0.68, 0.79]. Conclusion: Our models improve on current tools by predicting 90-day readmission risk and cause at the time of discharge from index admissions for AE-COPD. These models could be used to identify patients at higher risk of readmission and direct tailored post-discharge transition of care interventions that lower readmission risk.
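
[Editor's note] A minimal sketch of the derivation/validation design described above, assuming scikit-learn: a random forest fit on 70% of encounters and scored by AUROC on the 30% holdout. The feature matrix and outcome are synthetic stand-ins for the index-stay EHR data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(3238, 20))               # stand-in index-stay features
    y = (rng.random(3238) < 0.34).astype(int)     # ~34% 90-day readmission rate

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=1)
    rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
    print(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))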


Patient Readmission , Pulmonary Disease, Chronic Obstructive , Adult , Humans , Pulmonary Disease, Chronic Obstructive/diagnosis , Pulmonary Disease, Chronic Obstructive/epidemiology , Pulmonary Disease, Chronic Obstructive/therapy , Retrospective Studies , Aftercare , Patient Discharge , Logistic Models , Risk Factors , Hospitalization , Machine Learning
11.
J Am Med Inform Assoc ; 29(10): 1696-1704, 2022 09 12.
Article En | MEDLINE | ID: mdl-35869954

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from nonchart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432 965 admissions, of which 2724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.
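
[Editor's note] A minimal sketch of the self-learning setup, assuming scikit-learn: fit on gold-standard labels, pseudo-label only the confident silver-standard admissions, refit, and check calibration with the Brier score. The confidence thresholds, single pseudo-labeling round, and all data are illustrative assumptions, not the paper's protocol:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import brier_score_loss
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X_gold = rng.normal(size=(2000, 10))                     # chart-reviewed admissions
    y_gold = (rng.random(2000) < 0.3).astype(int)            # gold-standard infection labels
    X_silver = rng.normal(size=(20000, 10))                  # nonchart-reviewed admissions

    X_tr, X_te, y_tr, y_te = train_test_split(X_gold, y_gold, test_size=0.3, random_state=2)
    base = GradientBoostingClassifier().fit(X_tr, y_tr)

    p = base.predict_proba(X_silver)[:, 1]
    keep = (p < 0.05) | (p > 0.95)                           # pseudo-label confident cases only
    X_aug = np.vstack([X_tr, X_silver[keep]])
    y_aug = np.concatenate([y_tr, (p[keep] > 0.5).astype(int)])

    self_trained = GradientBoostingClassifier().fit(X_aug, y_aug)
    print(brier_score_loss(y_te, self_trained.predict_proba(X_te)[:, 1]))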


Machine Learning , Sepsis , Humans , ROC Curve , Retrospective Studies , Sepsis/diagnosis
12.
BMC Pregnancy Childbirth ; 22(1): 295, 2022 Apr 06.
Article En | MEDLINE | ID: mdl-35387624

BACKGROUND: Early warning scores are designed to identify hospitalized patients who are at high risk of clinical deterioration. Although many general scores have been developed for the medical-surgical wards, specific scores have also been developed for obstetric patients due to differences in normal vital sign ranges and potential complications in this unique population. The comparative performance of general and obstetric early warning scores for predicting deterioration and infection on the maternal wards is not known. METHODS: This was an observational cohort study at the University of Chicago that included patients hospitalized on obstetric wards from November 2008 to December 2018. Obstetric scores (modified early obstetric warning system (MEOWS), maternal early warning criteria (MEWC), and maternal early warning trigger (MEWT)), paper-based general scores (Modified Early Warning Score (MEWS) and National Early Warning Score (NEWS)), and a general score developed using machine learning (electronic Cardiac Arrest Risk Triage (eCART) score) were compared using the area under the receiver operating characteristic curve (AUC) for predicting ward to intensive care unit (ICU) transfer and/or death and new infection. RESULTS: A total of 19,611 patients were included, with 43 (0.2%) experiencing deterioration (ICU transfer and/or death) and 88 (0.4%) experiencing an infection. eCART had the highest discrimination for deterioration (p < 0.05 for all comparisons), with an AUC of 0.86, followed by MEOWS (0.74), NEWS (0.72), MEWC (0.71), MEWS (0.70), and MEWT (0.65). MEWC, MEWT, and MEOWS had higher accuracy than MEWS and NEWS but lower accuracy than eCART at specific cut-off thresholds. For predicting infection, eCART (AUC 0.77) had the highest discrimination. CONCLUSIONS: Within the limitations of our retrospective study, eCART had the highest accuracy for predicting deterioration and infection in our ante- and postpartum patient population. Maternal early warning scores were more accurate than MEWS and NEWS. While institutional choice of an early warning system is complex, our results have important implications for the risk stratification of maternal ward patients, especially since the low prevalence of events means that small improvements in accuracy can lead to large decreases in false alarms.


Clinical Deterioration , Early Warning Score , Heart Arrest , Female , Heart Arrest/diagnosis , Humans , Intensive Care Units , Pregnancy , ROC Curve , Retrospective Studies , Risk Assessment/methods
13.
Crit Care Med ; 49(10): 1694-1705, 2021 10 01.
Article En | MEDLINE | ID: mdl-33938715

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
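
[Editor's note] A minimal sketch of the adjusted per-hour odds ratio, assuming statsmodels: logistic regression of in-hospital death on hours of order delay plus a confounder, with exponentiated coefficients giving the OR per additional hour. All variables are synthetic stand-ins, and only one confounder is shown:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5000
    delay_h = rng.exponential(2.0, n)                 # hours from infection criteria to order
    age = rng.normal(65, 15, n)                       # stand-in confounder
    logit = -4 + 0.04 * delay_h + 0.02 * (age - 65)   # synthetic data-generating model
    died = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([delay_h, age]))
    fit = sm.Logit(died.astype(int), X).fit(disp=0)
    print(np.exp(fit.params[1]))                      # OR per additional hour of delay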


Anti-Bacterial Agents/administration & dosage , Phenotype , Sepsis/genetics , Time-to-Treatment/statistics & numerical data , Aged , Aged, 80 and over , Anti-Bacterial Agents/therapeutic use , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/statistics & numerical data , Female , Hospitalization/statistics & numerical data , Humans , Illinois/epidemiology , Male , Middle Aged , Prospective Studies , Retrospective Studies , Sepsis/drug therapy , Sepsis/physiopathology , Time Factors
14.
Front Med (Lausanne) ; 8: 611989, 2021.
Article En | MEDLINE | ID: mdl-33898475

Rationale: Identifying patients hospitalized for acute exacerbations of COPD (AECOPD) who are at high risk for readmission is challenging. Traditional markers of disease severity such as pulmonary function have limited utility in predicting readmission. Handgrip strength, a component of the physical frailty phenotype, may be a simple tool to help predict readmission. Objective(s): To investigate whether handgrip strength, a component of the physical frailty phenotype and a surrogate for weakness, is a predictive biomarker of COPD readmission. Methods: This was a prospective, observational study of patients admitted to the inpatient general medicine unit at the University of Chicago Medicine, US. This study evaluated age, sex, ethnicity, degree of obstructive lung disease by spirometry (FEV1 percent predicted), and the physical frailty phenotype (components include handgrip strength and walk speed). The primary outcome was all-cause hospital readmission within 30 days of discharge. Results: Of 381 eligible patients with AECOPD, 70 consented to participate in this study. Twelve participants (17%) were readmitted within 30 days of discharge. Weak grip at index hospitalization, defined as grip strength lower than previously established cut-points for sex and body mass index (BMI), was predictive of readmission (OR 11.2, 95% CI 1.3, 93.2, p = 0.03). Degree of airway obstruction (FEV1 percent predicted) did not predict readmission (OR 1.0, 95% CI 0.95, 1.1, p = 0.7). No non-frail patients were readmitted. Conclusions: At a single academic center, weak grip strength was associated with increased 30-day readmission. Future studies should investigate whether geriatric measures can help risk-stratify patients for likelihood of readmission after admission for AECOPD.

15.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article En | MEDLINE | ID: mdl-33861547

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.


Data Accuracy , Electronic Health Records/standards , Infections/epidemiology , Information Storage and Retrieval/methods , Adult , Aged , Anti-Bacterial Agents/therapeutic use , Antibiotic Prophylaxis/statistics & numerical data , Blood Culture , Chicago/epidemiology , False Positive Reactions , Female , Humans , Infections/diagnosis , International Classification of Diseases , Male , Middle Aged , Organ Dysfunction Scores , Patient Admission/statistics & numerical data , Prevalence , Retrospective Studies , Sensitivity and Specificity , Sepsis/diagnosis
18.
Crit Care Med ; 48(11): 1645-1653, 2020 11.
Article En | MEDLINE | ID: mdl-32947475

OBJECTIVES: We recently found that distinct body temperature trajectories of infected patients correlated with survival. Understanding the relationship between the temperature trajectories and the host immune response to infection could allow us to immunophenotype patients at the bedside using temperature. The objective was to identify whether temperature trajectories have consistent associations with specific cytokine responses in two distinct cohorts of infected patients. DESIGN: Prospective observational study. SETTING: Large academic medical center between 2013 and 2019. SUBJECTS: Two cohorts of infected patients: 1) patients in the ICU with septic shock and 2) hospitalized patients with Staphylococcus aureus bacteremia. INTERVENTIONS: Clinical data (including body temperature) and plasma cytokine concentrations were measured. Patients were classified into four temperature trajectory subphenotypes using their temperature measurements in the first 72 hours from the onset of infection. Log-transformed cytokine levels were standardized to the mean and compared with the subphenotypes in both cohorts. MEASUREMENTS AND MAIN RESULTS: The cohorts consisted of 120 patients with septic shock (cohort 1) and 88 patients with S. aureus bacteremia (cohort 2). Patients from both cohorts were classified into one of four previously validated temperature subphenotypes: "hyperthermic, slow resolvers" (n = 19 cohort 1; n = 13 cohort 2), "hyperthermic, fast resolvers" (n = 18 C1; n = 24 C2), "normothermic" (n = 54 C1; n = 31 C2), and "hypothermic" (n = 29 C1; n = 20 C2). Both "hyperthermic, slow resolvers" and "hyperthermic, fast resolvers" had high levels of G-CSF, CCL2, and interleukin-10 compared with the "hypothermic" group when controlling for cohort and timing of cytokine measurement (p < 0.05). In contrast to the "hyperthermic, slow resolvers," the "hyperthermic, fast resolvers" showed significant decreases in the levels of several cytokines over a 24-hour period, including interleukin-1RA, interleukin-6, interleukin-8, G-CSF, and M-CSF (p < 0.001). CONCLUSIONS: Temperature trajectory subphenotypes are associated with consistent cytokine profiles in two distinct cohorts of infected patients. These subphenotypes could play a role in the bedside identification of cytokine profiles in patients with sepsis.
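
[Editor's note] To make the cytokine normalization step concrete, a small sketch of log-transforming and z-scoring each cytokine before comparison across subphenotypes, assuming pandas and NumPy; the values and column names are hypothetical:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"il6": [5.0, 120.0, 30.0, 8.0],      # hypothetical pg/mL values
                       "gcsf": [40.0, 900.0, 150.0, 60.0]})
    # Log-transform, then standardize each cytokine to mean 0 and SD 1 so that
    # levels on very different scales become comparable across subphenotypes.
    z = np.log(df).apply(lambda c: (c - c.mean()) / c.std())
    print(z.round(2))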


Body Temperature/physiology , Immunity/immunology , Sepsis/immunology , Aged , Bacteremia/immunology , Bacteremia/physiopathology , Body Temperature/immunology , Cytokines/blood , Female , Fever/immunology , Fever/physiopathology , Humans , Immunity/physiology , Male , Middle Aged , Prospective Studies , Sepsis/physiopathology , Shock, Septic/immunology , Shock, Septic/physiopathology , Staphylococcal Infections/immunology , Staphylococcal Infections/physiopathology
19.
Crit Care Med ; 48(11): e1020-e1028, 2020 11.
Article En | MEDLINE | ID: mdl-32796184

OBJECTIVES: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study is to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record. DESIGN: Retrospective analysis of a large, multicenter inpatient dataset. SETTING: Two academic tertiary medical centers between the years 2007 and 2018. SUBJECTS: All hospitalized patients who received a blood culture during hospitalization. INTERVENTIONS: The dataset was partitioned temporally into development and validation cohorts: the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%. MEASUREMENTS AND MAIN RESULTS: There were 252,569 blood culture days, defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78 [95% CI 0.77-0.78]) than the logistic regression model (0.73 [0.72-0.74]) (p < 0.001). The model identified a high-risk group with an occurrence rate of bacteremia over 30 times that of the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cut-off, the model identified bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve 0.88 [95% CI 0.86-0.90]). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than the low-risk group (28.2% vs 0.4%; p < 0.001). CONCLUSIONS: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
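
[Editor's note] A minimal sketch of the temporal partition described above, assuming scikit-learn: train on the earliest 80% of admissions and validate on the most recent 20% with no shuffling, so validation mimics deploying the model on future patients. The features and event rate are synthetic stand-ins:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 10000                                   # culture days, assumed sorted by date
    X = rng.normal(size=(n, 12))                # stand-in EHR features at order time
    y = (rng.random(n) < 0.075).astype(int)     # ~7.5% bacteremia rate, as in validation

    cut = int(0.8 * n)                          # earliest 80% train, latest 20% validate
    model = GradientBoostingClassifier().fit(X[:cut], y[:cut])
    print(roc_auc_score(y[cut:], model.predict_proba(X[cut:])[:, 1]))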


Bacteremia/diagnosis , Electronic Health Records/statistics & numerical data , Fungemia/diagnosis , Machine Learning , Aged , Bacteremia/blood , Bacteremia/etiology , Bacteremia/microbiology , Blood Culture , Female , Fungemia/blood , Fungemia/etiology , Fungemia/microbiology , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Models, Statistical , Reproducibility of Results , Retrospective Studies , Risk Factors
20.
JAMA Netw Open ; 3(8): e2012892, 2020 08 03.
Article En | MEDLINE | ID: mdl-32780123

Importance: Acute kidney injury (AKI) is associated with increased morbidity and mortality in hospitalized patients. Current methods to identify patients at high risk of AKI are limited, and few prediction models have been externally validated. Objective: To internally and externally validate a machine learning risk score to detect AKI in hospitalized patients. Design, Setting, and Participants: This diagnostic study included 495 971 adult hospital admissions at the University of Chicago (UC) from 2008 to 2016 (n = 48 463), at Loyola University Medical Center (LUMC) from 2007 to 2017 (n = 200 613), and at NorthShore University Health System (NUS) from 2006 to 2016 (n = 246 895) with serum creatinine (SCr) measurements. Patients with an SCr concentration at admission greater than 3.0 mg/dL, with a prior diagnostic code for chronic kidney disease stage 4 or higher, or who received kidney replacement therapy within 48 hours of admission were excluded. A simplified version of a previously published gradient boosted machine AKI prediction algorithm was used; it was validated internally among patients at UC and externally among patients at NUS and LUMC. Main Outcomes and Measures: Prediction of Kidney Disease: Improving Global Outcomes SCr-defined stage 2 AKI within a 48-hour interval was the primary outcome. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC). Results: The study included 495 971 adult admissions (mean [SD] age, 63 [18] years; 87 689 [17.7%] African American; and 266 866 [53.8%] women) across 3 health systems. The development of stage 2 or higher AKI occurred in 15 664 of 48 463 patients (3.4%) in the UC cohort, 5711 of 200 613 (2.8%) in the LUMC cohort, and 3499 of 246 895 (1.4%) in the NUS cohort. In the UC cohort, 332 patients (0.7%) required kidney replacement therapy compared with 672 patients (0.3%) in the LUMC cohort and 440 patients (0.2%) in the NUS cohort. The AUCs for predicting at least stage 2 AKI in the next 48 hours were 0.86 (95% CI, 0.86-0.86) in the UC cohort, 0.85 (95% CI, 0.84-0.85) in the LUMC cohort, and 0.86 (95% CI, 0.86-0.86) in the NUS cohort. The AUCs for receipt of kidney replacement therapy within 48 hours were 0.96 (95% CI, 0.96-0.96) in the UC cohort, 0.95 (95% CI, 0.94-0.95) in the LUMC cohort, and 0.95 (95% CI, 0.94-0.95) in the NUS cohort. In time-to-event analysis, a probability cutoff of at least 0.057 predicted the onset of stage 2 AKI a median (IQR) of 27 (6.5-93) hours before the eventual doubling in SCr concentrations in the UC cohort, 34.5 (19-85) hours in the NUS cohort, and 39 (19-108) hours in the LUMC cohort. Conclusions and Relevance: In this study, the machine learning algorithm demonstrated excellent discrimination in both internal and external validation, supporting its generalizability and potential as a clinical decision support tool to improve AKI detection and outcomes.
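
[Editor's note] As an illustration of how a probability cutoff like 0.057 translates into warning time, a minimal sketch that fires an alarm at the first threshold crossing in a patient's serial 48-hour predictions and reports the lead time before the event; all times and probabilities are hypothetical:

    import numpy as np

    def alarm_lead_time(times_h, probs, event_time_h, cutoff=0.057):
        # Index of the first prediction at or above the cutoff, if any.
        fired = np.where(np.asarray(probs) >= cutoff)[0]
        if fired.size == 0:
            return None                                  # model never alarmed
        return event_time_h - times_h[fired[0]]          # hours of advance warning

    times = np.array([0, 6, 12, 18, 24])                 # prediction times (hours)
    probs = [0.01, 0.03, 0.06, 0.10, 0.20]               # model output at each interval
    print(alarm_lead_time(times, probs, event_time_h=39))  # alarms at 12 h -> 27 h lead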


Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Machine Learning , Risk Assessment/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Models, Statistical , ROC Curve , Retrospective Studies , Risk Factors
...