1.
Crit Care Explor; 6(10): e1161, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39356139

ABSTRACT

IMPORTANCE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. OBJECTIVES: We aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN, SETTING, AND PARTICIPANTS: This was a multicenter retrospective observational study in inpatient medical-surgical wards at four health systems from 2006 to 2020. Randomly selected patients (1000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage, were included. MAIN OUTCOMES AND MEASURES: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. RESULTS: Of the 4000 included patients, 2484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n = 1021), followed by arrhythmia (19%; n = 473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest radiographs (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS AND RELEVANCE: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest radiographs were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration.


Subject(s)
Clinical Deterioration; Humans; Retrospective Studies; Male; Female; Aged; Middle Aged; Hospital Mortality; Sepsis/diagnosis; Sepsis/mortality; Sepsis/therapy; Early Warning Score; Diagnostic Tests, Routine; Aged, 80 and over
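
As a concrete illustration of the high-risk definition above, the sketch below flags encounters whose early warning score ever reaches the 95th percentile of all observed scores. The table, column names, and values are hypothetical stand-ins, not study data.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly ward scores; "ecart_score" stands in for any
# validated early warning score (column names are illustrative).
scores = pd.DataFrame({
    "encounter_id": [1, 1, 2, 2, 3, 3],
    "ecart_score": [4.0, 12.5, 3.2, 25.1, 7.8, 30.4],
})

# High-risk definition mirroring the study: reaching the 95th percentile
# of all observed scores.
threshold = np.percentile(scores["ecart_score"], 95)

# An encounter qualifies if any of its scores ever crosses the threshold.
high_risk = scores.groupby("encounter_id")["ecart_score"].max().ge(threshold)
print(f"threshold = {threshold:.1f}")
print(high_risk)
```
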
2.
medRxiv; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39281737

ABSTRACT

Background: Critical illness, or acute organ failure requiring life support, threatens over five million American lives annually. Electronic health record (EHR) data are a source of granular information that could generate crucial insights into the nature and optimal treatment of critical illness. However, data management, security, and standardization are barriers to large-scale critical illness EHR studies. Methods: A consortium of critical care physicians and data scientists from eight US healthcare systems developed the Common Longitudinal Intensive Care Unit (ICU) data Format (CLIF), an open-source database format that harmonizes a minimum set of ICU Data Elements for use in critical illness research. We created a pipeline to process adult ICU EHR data at each site. After development and iteration, we conducted two proof-of-concept studies with a federated research architecture: 1) an external validation of an in-hospital mortality prediction model for critically ill patients and 2) an assessment of 72-hour temperature trajectories and their association with mechanical ventilation and in-hospital mortality using group-based trajectory models. Results: We converted longitudinal data from 94,356 critically ill patients treated in 2020-2021 (mean age 60.6 years [standard deviation 17.2], 30% Black, 7% Hispanic, 45% female) across 8 health systems and 33 hospitals into the CLIF format. The in-hospital mortality prediction model performed well in the health system where it was derived (0.81 AUC, 0.06 Brier score). Performance across CLIF consortium sites varied (AUCs: 0.74-0.83; Brier scores: 0.01-0.06) and demonstrated some degradation in predictive capability. Temperature trajectories were similar across health systems. Hypothermic and hyperthermic-slow-resolver patients consistently had the highest mortality. Conclusions: CLIF facilitates efficient, rigorous, and reproducible critical care research. Our federated case studies showcase CLIF's potential for disease sub-phenotyping and clinical decision-support evaluation. Future applications include pragmatic EHR-based trials, target trial emulations, foundational multi-modal AI models of critical illness, and real-time critical care quality dashboards.
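
The federated proof-of-concept design rests on each site scoring its own CLIF-formatted cohort and sharing only aggregate metrics. Below is a minimal sketch of that per-site validation step, assuming simulated data and scikit-learn's AUC and Brier implementations as stand-ins for the consortium's actual tooling.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

def site_metrics(y_true, y_prob):
    """Aggregate metrics a site would share (no patient-level data leaves the site)."""
    return {
        "auc": roc_auc_score(y_true, y_prob),
        "brier": brier_score_loss(y_true, y_prob),
        "n": len(y_true),
    }

for site in ["site_A", "site_B"]:
    y = rng.integers(0, 2, size=1000)                              # in-hospital mortality (simulated)
    p = np.clip(y * 0.3 + rng.normal(0.3, 0.2, 1000), 0.01, 0.99)  # shared model's output (simulated)
    print(site, site_metrics(y, p))
```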

3.
Crit Care Explor; 6(7): e1116, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39028867

ABSTRACT

BACKGROUND AND OBJECTIVE: To develop the COVid Veteran (COVet) score for clinical deterioration in Veterans hospitalized with COVID-19 and further validate this model in both Veteran and non-Veteran samples. No such score has been derived and validated while incorporating a Veteran sample. DERIVATION COHORT: Adults (age ≥ 18 yr) admitted to the Veterans Health Administration (VHA) (n = 80 hospitals) and hospitalized outside the ICU with a diagnosis of COVID-19 were used for model development. VALIDATION COHORT: External validation occurred in a VHA cohort of 34 hospitals, as well as in six non-Veteran health systems (n = 21 hospitals), between 2020 and 2023. PREDICTION MODEL: eXtreme Gradient Boosting machine learning methods were used, and performance was assessed using the area under the receiver operating characteristic curve and compared with the National Early Warning Score (NEWS). The primary outcome was transfer to the ICU or death within 24 hours of each new variable observation. Model predictor variables included demographics, vital signs, structured flowsheet data, and laboratory values. RESULTS: A total of 96,908 admissions occurred during the study period, of which 59,897 were in the Veteran sample and 37,011 were in the non-Veteran sample. During external validation in the Veteran sample, the model demonstrated excellent discrimination, with an area under the receiver operating characteristic curve of 0.88. This was significantly higher than NEWS (0.79; p < 0.01). In the non-Veteran sample, the model also demonstrated excellent discrimination (0.86 vs. 0.79 for NEWS; p < 0.01). The top three variables of importance were eosinophil percentage, mean oxygen saturation in the prior 24-hour period, and worst mental status in the prior 24-hour period. CONCLUSIONS: We used machine learning methods to develop and validate a highly accurate early warning score in both Veterans and non-Veterans hospitalized with COVID-19. The model could lead to earlier identification and therapy, which may improve outcomes.


Subject(s)
COVID-19; Machine Learning; Veterans; Humans; COVID-19/diagnosis; COVID-19/epidemiology; Male; Female; Middle Aged; Veterans/statistics & numerical data; Aged; Risk Assessment/methods; United States/epidemiology; Hospitalization/statistics & numerical data; Adult; Intensive Care Units; ROC Curve; Cohort Studies
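
A hedged sketch of the general recipe the abstract describes: gradient boosting on tabular EHR features to predict deterioration within 24 hours, with AUROC compared against a cruder baseline score. The data are simulated and the features, hyperparameters, and baseline are illustrative assumptions, not the published COVet model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 6))            # stand-ins for SpO2, eosinophil %, mental status, ...
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)
y = (logit > 1.0).astype(int)          # ICU transfer/death label (simulated)
news_like = X[:, 0] + rng.normal(scale=1.0, size=n)  # cruder "NEWS-like" baseline

X_tr, X_te, y_tr, y_te, _, news_te = train_test_split(
    X, y, news_like, test_size=0.3, random_state=0
)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("model AUROC:   ", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("baseline AUROC:", roc_auc_score(y_te, news_te))
```
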
4.
medRxiv; 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

OBJECTIVE: Early detection of clinical deterioration using machine learning early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, only underwent retrospective validation, and were not tested in important subgroups. Our objective was to develop and prospectively validate a gradient boosted machine model (eCARTv5) for identifying clinical deterioration on the wards. DESIGN: Multicenter retrospective and prospective observational study. SETTING: Inpatient admissions to the medical-surgical wards at seven hospitals in three health systems for model development (2006-2022) and at 21 hospitals from three health systems for retrospective (2009-2023) and prospective (2023-2024) external validation. PATIENTS: All adult patients hospitalized at each participating health system during the study years. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Predictor variables (demographics, vital signs, documentation, and laboratory values) were used in a gradient boosted trees algorithm to predict intensive care unit transfer or death in the next 24 hours. The developed model (eCART) was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 205,946 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI 0.834-0.835), followed by NEWS (0.766; 95% CI 0.766-0.767) and MEWS (0.704; 95% CI 0.703-0.704). eCART's performance remained high (AUROC ≥0.80) across a range of patient demographics and clinical conditions, and during prospective validation. CONCLUSIONS: We developed eCART, which performed better than the NEWS and MEWS retrospectively, prospectively, and across a range of subgroups. These results served as the foundation for Food and Drug Administration clearance for its use in identifying deterioration in hospitalized ward patients.
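
The reported AUROCs with tight confidence intervals can be approximated in practice with a paired bootstrap over admissions. A minimal sketch with simulated scores; the "eCART-like" and "NEWS-like" labels below are placeholders, not the validated models.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20000
y = rng.integers(0, 2, size=n)
score_a = y + rng.normal(scale=1.0, size=n)   # "eCART-like" score (simulated)
score_b = y + rng.normal(scale=1.6, size=n)   # "NEWS-like" score (simulated)

def bootstrap_auc_ci(y, s, n_boot=200, seed=0):
    """95% CI for AUROC via resampling admissions with replacement."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():      # skip degenerate resamples
            continue
        aucs.append(roc_auc_score(y[idx], s[idx]))
    return np.percentile(aucs, [2.5, 97.5])

print("A:", roc_auc_score(y, score_a), bootstrap_auc_ci(y, score_a))
print("B:", roc_auc_score(y, score_b), bootstrap_auc_ci(y, score_b))
```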

5.
J Am Med Inform Assoc; 31(6): 1322-1330, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38679906

ABSTRACT

OBJECTIVES: To compare and externally validate popular deep learning model architectures and data transformation methods for variable-length time series data in 3 clinical tasks (clinical deterioration, severe acute kidney injury [AKI], and suspected infection). MATERIALS AND METHODS: This multicenter retrospective study included admissions at 2 medical centers that spanned 2007-2022. Distinct datasets were created for each clinical task, with 1 site used for training and the other for testing. Three feature engineering methods (normalization, standardization, and piece-wise linear encoding with decision trees [PLE-DTs]) and 3 architectures (long short-term memory/gated recurrent unit [LSTM/GRU], temporal convolutional network, and time-distributed wrapper with convolutional neural network [TDW-CNN]) were compared in each clinical task. Model discrimination was evaluated using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC). RESULTS: The study comprised 373 825 admissions for training and 256 128 admissions for testing. LSTM/GRU and TDW-CNN models tied, each obtaining the highest mean AUPRC in 2 tasks, and LSTM/GRU had the highest mean AUROC across all tasks (deterioration: 0.81, AKI: 0.92, infection: 0.87). PLE-DT with LSTM/GRU achieved the highest AUPRC in all tasks. DISCUSSION: When externally validated in 3 clinical tasks, the LSTM/GRU model architecture with PLE-DT transformed data demonstrated the highest AUPRC in all tasks. Multiple models achieved similar performance when evaluated using AUROC. CONCLUSION: The LSTM architecture performs as well as or better than some newer architectures, and PLE-DT may enhance the AUPRC in variable-length time series data for predicting clinical outcomes during external validation.


Subject(s)
Deep Learning; Female; Humans; Male; Middle Aged; Acute Kidney Injury; Datasets as Topic; Neural Networks, Computer; Retrospective Studies; ROC Curve
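
A rough sketch of piece-wise linear encoding with decision-tree bins (PLE-DT) for a single numeric feature. It assumes the bin edges come from a shallow tree fit against the task label, one common formulation of PLE; the paper's exact variant may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ple_dt_encode(x_train, y_train, x, max_bins=8):
    """Encode numeric values as piece-wise linear bin activations (illustrative)."""
    tree = DecisionTreeClassifier(max_leaf_nodes=max_bins, random_state=0)
    tree.fit(x_train.reshape(-1, 1), y_train)
    # Internal-node split thresholds become bin edges (-2 marks leaf nodes).
    edges = np.sort(tree.tree_.threshold[tree.tree_.threshold != -2])
    edges = np.concatenate([[x_train.min()], edges, [x_train.max()]])
    out = np.zeros((len(x), len(edges) - 1))
    for i, v in enumerate(x):
        for b in range(len(edges) - 1):
            lo, hi = edges[b], edges[b + 1]
            if v >= hi:
                out[i, b] = 1.0                   # fully past this bin
            elif v > lo:
                out[i, b] = (v - lo) / (hi - lo)  # partial: linear within the bin
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (x > 0.3).astype(int)
print(ple_dt_encode(x, y, np.array([-1.0, 0.0, 2.0]))[0])
```
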
6.
J Am Med Inform Assoc; 31(6): 1291-1302, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38587875

ABSTRACT

OBJECTIVE: The timely stratification of trauma injury severity can enhance the quality of trauma care, but it requires intense manual annotation from certified trauma coders. The objective of this study is to develop machine learning models for the stratification of trauma injury severity across various body regions using clinical text and structured electronic health record (EHR) data. MATERIALS AND METHODS: Our study utilized clinical documents and structured EHR variables linked with the trauma registry data to create 2 machine learning models with different approaches to representing text. The first one fuses concept unique identifiers (CUIs) extracted from free text with structured EHR variables, while the second one integrates free text with structured EHR variables. Temporal validation was undertaken to ensure the models' temporal generalizability. Additionally, analyses to assess the variable importance were conducted. RESULTS: Both models demonstrated impressive performance in categorizing leg injuries, achieving high accuracy with macro-F1 scores of over 0.8. Additionally, they showed considerable accuracy, with macro-F1 scores near or exceeding 0.7, in assessing injuries in the areas of the chest and head. We showed in our variable importance analysis that the most important features in the model have strong face validity in determining clinically relevant trauma injuries. DISCUSSION: The CUI-based model achieves comparable performance, if not higher, compared to the free-text-based model, with reduced complexity. Furthermore, integrating structured EHR data improves performance, particularly when the text modalities are insufficiently indicative. CONCLUSIONS: Our multi-modal, multiclass models can provide accurate stratification of trauma injury severity and clinically relevant interpretations.


Subject(s)
Electronic Health Records; Machine Learning; Wounds and Injuries; Humans; Wounds and Injuries/classification; Injury Severity Score; Registries; Trauma Severity Indices; Natural Language Processing
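
A minimal sketch of the CUI-fusion idea: notes reduced to strings of concept unique identifiers, vectorized with TF-IDF, then concatenated with structured variables for a multiclass severity model. The CUIs, structured columns, and the logistic-regression learner are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Per-admission "documents" of CUI tokens (hypothetical identifiers).
cui_docs = ["C0016658 C0003962", "C0018674 C0008031",
            "C0016658 C0018674", "C0003962 C0008031"]
X_struct = np.array([[72, 1], [55, 0], [80, 1], [40, 0]])  # e.g., age, intubated flag
y = np.array([2, 1, 2, 0])                                 # injury-severity class

# TF-IDF over CUI tokens, then fuse with structured features.
X_text = TfidfVectorizer(token_pattern=r"C\d+").fit_transform(cui_docs)
X = hstack([X_text, csr_matrix(X_struct.astype(float))])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("macro-F1 (fit set):", f1_score(y, clf.predict(X), average="macro"))
```
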
7.
Crit Care Explor; 6(3): e1066, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38505174

ABSTRACT

OBJECTIVES: Alcohol withdrawal syndrome (AWS) may progress to require high-intensity care. Approaches to identify hospitalized patients with AWS who received a higher level of care have not been previously examined. This study aimed to examine the utility of Clinical Institute Withdrawal Assessment for Alcohol, Revised (CIWA-Ar) scale scores and medication doses for alcohol withdrawal management in identifying patients who received high-intensity care. DESIGN: A multicenter observational cohort study of hospitalized adults with alcohol withdrawal. SETTING: University of Chicago Medical Center and University of Wisconsin Hospital. PATIENTS: Inpatient encounters between November 2008 and February 2022 with a CIWA-Ar score greater than 0 and benzodiazepine or barbiturate administered within the first 24 hours. The primary composite outcome was patients who progressed to high-intensity care (intermediate care or ICU). INTERVENTIONS: None. MAIN RESULTS: Among the 8742 patients included in the study, 37.5% (n = 3280) progressed to high-intensity care. The odds ratio (OR) for the composite outcome increased above 1.0 when the CIWA-Ar score was 24. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) at this threshold were 0.12 (95% CI, 0.11-0.13), 0.95 (95% CI, 0.94-0.95), 0.58 (95% CI, 0.54-0.61), and 0.64 (95% CI, 0.63-0.65), respectively. The OR increased above 1.0 at a 24-hour lorazepam milligram equivalent dose cutoff of 15 mg. The sensitivity, specificity, PPV, and NPV at this threshold were 0.16 (95% CI, 0.14-0.17), 0.96 (95% CI, 0.95-0.96), 0.68 (95% CI, 0.65-0.72), and 0.65 (95% CI, 0.64-0.66), respectively. CONCLUSIONS: Neither CIWA-Ar scores nor medication dose cutoff points were effective measures for identifying patients with alcohol withdrawal who received high-intensity care. Research studies for examining outcomes in patients who deteriorate with AWS will require better methods for cohort identification.
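
The threshold analysis reduces to a 2x2 table at each cutoff. A short sketch deriving sensitivity, specificity, PPV, and NPV at the CIWA-Ar ≥ 24 cutoff, using simulated scores and outcomes rather than study data:

```python
import numpy as np

def threshold_metrics(score, outcome, cutoff):
    """Derive the four accuracy measures from the 2x2 table at a cutoff."""
    pred = score >= cutoff
    tp = np.sum(pred & outcome)
    fp = np.sum(pred & ~outcome)
    fn = np.sum(~pred & outcome)
    tn = np.sum(~pred & ~outcome)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

rng = np.random.default_rng(3)
scores = rng.integers(0, 40, 1000)            # simulated CIWA-Ar scores
outcome = rng.random(1000) < (scores / 80)    # risk rises with score (simulated)
print(threshold_metrics(scores, outcome, cutoff=24))
```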

8.
medRxiv; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006 to 2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.

9.
Resusc Plus; 17: 100540, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38260119

ABSTRACT

Background and Objective: The Children's Early Warning Tool (CEWT), developed in Australia, is widely used in many countries to monitor the risk of deterioration in hospitalized children. Our objective was to compare CEWT prediction performance against a version of the Bedside Pediatric Early Warning Score (Bedside PEWS), Between the Flags (BTF), and the pediatric Calculated Assessment of Risk and Triage (pCART). Methods: We conducted a retrospective observational study of all patient admissions to the Comer Children's Hospital at the University of Chicago between 2009 and 2019. We compared performance for predicting the primary outcome of a direct ward-to-intensive care unit (ICU) transfer within the next 12 h using the area under the receiver operating characteristic curve (AUC). Alert rates at various score thresholds were also compared. Results: Of 50,815 ward admissions, 1,874 (3.7%) experienced the primary outcome. Among patients in Cohort 1 (years 2009-2017, on which the machine learning-based pCART was trained), CEWT performed slightly worse than Bedside PEWS but better than BTF (CEWT AUC 0.74 vs. Bedside PEWS 0.76, P < 0.001; vs. BTF 0.66, P < 0.001), while pCART performed best for patients in Cohort 2 (years 2018-2019, pCART AUC 0.84 vs. CEWT AUC 0.79, P < 0.001; vs. BTF AUC 0.67, P < 0.001; vs. Bedside PEWS 0.80, P < 0.001). Sensitivity, specificity, and positive predictive values varied across all four tools at the examined thresholds for alerts. Conclusion: CEWT has good discrimination for predicting which patients will likely be transferred to the ICU, while pCART performed the best.

10.
JAMIA Open; 6(4): ooad109, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38144168

ABSTRACT

Objectives: To develop and externally validate machine learning models using structured and unstructured electronic health record data to predict postoperative acute kidney injury (AKI) across inpatient settings. Materials and Methods: Data for adult postoperative admissions to the Loyola University Medical Center (2009-2017) were used for model development and admissions to the University of Wisconsin-Madison (2009-2020) were used for validation. Structured features included demographics, vital signs, laboratory results, and nurse-documented scores. Unstructured text from clinical notes was converted into concept unique identifiers (CUIs) using the clinical Text Analysis and Knowledge Extraction System. The primary outcome was the development of Kidney Disease: Improving Global Outcomes (KDIGO) stage 2 AKI within 7 days after leaving the operating room. We derived unimodal extreme gradient boosting machines (XGBoost) and elastic net logistic regression (GLMNET) models using structured-only data and multimodal models combining structured data with CUI features. Model comparison was performed using the area under the receiver operating characteristic curve (AUROC), with DeLong's test for statistical differences. Results: The study cohort included 138 389 adult patient admissions (mean [SD] age 58 [16] years; 11 506 [8%] African-American; and 70 826 [51%] female) across the 2 sites. Of those, 2959 (2.1%) developed stage 2 AKI or higher. Across all data types, XGBoost outperformed GLMNET (mean AUROC 0.81 [95% confidence interval (CI), 0.80-0.82] vs 0.78 [95% CI, 0.77-0.79]). The multimodal XGBoost model incorporating CUIs parameterized as term frequency-inverse document frequency (TF-IDF) showed the highest discrimination performance (AUROC 0.82 [95% CI, 0.81-0.83]) over unimodal models (AUROC 0.79 [95% CI, 0.78-0.80]). Discussion: A multimodality approach with structured data and TF-IDF weighting of CUIs increased model performance over structured data-only models. Conclusion: These findings highlight the predictive power of CUIs when merged with structured data for clinical prediction models, which may improve the detection of postoperative AKI.
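
A hedged sketch of the unimodal model comparison: XGBoost versus an elastic-net logistic regression (a GLMNET analogue in scikit-learn), compared by AUROC on simulated structured data; the real feature set, tuning, and DeLong testing are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(9)
n = 8000
X = rng.normal(size=(n, 10))   # stand-ins for demographics, vitals, labs
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 1, n) > 1.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

xgb = XGBClassifier(n_estimators=300, max_depth=4,
                    eval_metric="logloss").fit(X_tr, y_tr)
# Elastic-net logistic regression as a GLMNET analogue.
enet = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5,
                          C=1.0, max_iter=5000).fit(X_tr, y_tr)
for name, m in [("xgboost", xgb), ("elastic net", enet)]:
    print(name, roc_auc_score(y_te, m.predict_proba(X_te)[:, 1]))
```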

11.
Am J Respir Crit Care Med; 207(10): 1300-1309, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36449534

ABSTRACT

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.


Subject(s)
Neoplasms; Neutropenia; Sepsis; Adult; Humans; Retrospective Studies; Temperature; Neutropenia/complications; Sepsis/complications; Fever; Neoplasms/complications; Neoplasms/therapy
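
A sketch of the adjusted analysis pattern: subphenotype membership regressed against in-hospital mortality, with odds ratios read off the exponentiated coefficients. The simulated cohort and the covariates shown are stand-ins for the paper's adjustment set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 4000
df = pd.DataFrame({
    "mortality": rng.integers(0, 2, n),  # simulated outcome
    "subphenotype": rng.choice(
        ["normothermic", "hypothermic", "hyper_fast", "hyper_slow"], n),
    "age": rng.normal(60, 15, n),
    "comorbidity_index": rng.poisson(2, n),
})

# Normothermic as the reference category; covariates are illustrative.
model = smf.logit(
    "mortality ~ C(subphenotype, Treatment('normothermic')) + age + comorbidity_index",
    data=df,
).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs
```
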
12.
Front Pediatr; 11: 1284672, 2023.
Article in English | MEDLINE | ID: mdl-38188917

ABSTRACT

Introduction: Critical deterioration in hospitalized children, defined as ward to pediatric intensive care unit (PICU) transfer followed by mechanical ventilation (MV) or vasoactive infusion (VI) within 12 h, has been used as a primary metric to evaluate the effectiveness of clinical interventions or quality improvement initiatives. We explore the association between critical events (CEs), i.e., MV or VI events, within the first 48 h of PICU transfer from the ward or emergency department (ED) and in-hospital mortality. Methods: We conducted a retrospective study of a cohort of PICU transfers from the ward or the ED at two tertiary-care academic hospitals. We determined the association between mortality and occurrence of CEs within 48 h of PICU transfer after adjusting for age, gender, hospital, and prior comorbidities. Results: Experiencing a CE within 48 h of PICU transfer was associated with an increased risk of mortality [OR 12.40 (95% CI: 8.12-19.23, P < 0.05)]. The increased risk of mortality was highest in the first 12 h [OR 11.32 (95% CI: 7.51-17.15, P < 0.05)] but persisted in the 12-48 h time interval [OR 2.84 (95% CI: 1.40-5.22, P < 0.05)]. Varying levels of risk were observed when considering ED or ward transfers only, when considering different age groups, and when considering individual 12-h time intervals. Discussion: We demonstrate that occurrence of a CE within 48 h of PICU transfer was associated with mortality after adjusting for confounders. Studies focusing on the impact of quality improvement efforts may benefit from using CEs within 48 h of PICU transfer as an additional evaluation metric, provided these events could have been influenced by the initiative.

13.
Int J Chron Obstruct Pulmon Dis; 17: 2701-2709, 2022.
Article in English | MEDLINE | ID: mdl-36299799

ABSTRACT

Background: Chronic obstructive pulmonary disease (COPD) is a leading cause of hospital readmissions. Few existing tools use electronic health record (EHR) data to forecast patients' readmission risk during index hospitalizations. Objective: We used machine learning and in-hospital data to model 90-day risk for and cause of readmission among inpatients with acute exacerbations of COPD (AE-COPD). Design: Retrospective cohort study. Participants: Adult patients admitted to the University of Chicago Medicine between November 7, 2008 and December 31, 2018 who met International Classification of Diseases (ICD)-9 or -10 criteria consistent with AE-COPD were included. Methods: Random forest models were fit to predict readmission risk and respiratory-related readmission cause. Predictor variables included demographics, comorbidities, and EHR data from patients' index hospital stays. Models were derived on 70% of observations and validated on a 30% holdout set. Performance of the readmission risk model was compared to that of the HOSPITAL score. Results: Among 3238 patients admitted for AE-COPD, 1103 patients were readmitted within 90 days. Of the readmission causes, 61% (n = 672) were respiratory-related and COPD (n = 452) was the most common. Our readmission risk model had a significantly higher area under the receiver operating characteristic curve (AUROC) (0.69 [0.66, 0.73]) compared to the HOSPITAL score (0.63 [0.59, 0.67]; p = 0.002). The respiratory-related readmission cause model had an AUROC of 0.73 [0.68, 0.79]. Conclusion: Our models improve on current tools by predicting 90-day readmission risk and cause at the time of discharge from index admissions for AE-COPD. These models could be used to identify patients at higher risk of readmission and direct tailored post-discharge transition-of-care interventions that lower readmission risk.


Subject(s)
Patient Readmission; Pulmonary Disease, Chronic Obstructive; Adult; Humans; Pulmonary Disease, Chronic Obstructive/diagnosis; Pulmonary Disease, Chronic Obstructive/epidemiology; Pulmonary Disease, Chronic Obstructive/therapy; Retrospective Studies; Aftercare; Patient Discharge; Logistic Models; Risk Factors; Hospitalization; Machine Learning
14.
J Am Med Inform Assoc; 29(10): 1696-1704, 2022 Sep 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from nonchart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432 965 admissions, of which 2724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.


Subject(s)
Machine Learning; Sepsis; Humans; ROC Curve; Retrospective Studies; Sepsis/diagnosis
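
A hedged sketch of the self-learning step: fit on gold-standard (chart-reviewed) labels, pseudo-label the larger silver pool where the model is confident, and refit on the union. The confidence rule, learner, and data here are assumptions for illustration; the paper's exact procedure may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
X_gold = rng.normal(size=(500, 5))              # chart-reviewed admissions
y_gold = (X_gold[:, 0] > 0).astype(int)
X_silver = rng.normal(size=(5000, 5))           # non-chart-reviewed pool

# Step 1: fit on gold labels only.
base = GradientBoostingClassifier().fit(X_gold, y_gold)

# Step 2: pseudo-label silver admissions where the model is confident.
p = base.predict_proba(X_silver)[:, 1]
confident = (p < 0.1) | (p > 0.9)
X_aug = np.vstack([X_gold, X_silver[confident]])
y_aug = np.concatenate([y_gold, (p[confident] > 0.5).astype(int)])

# Step 3: refit on the augmented set.
final = GradientBoostingClassifier().fit(X_aug, y_aug)
print("pseudo-labeled:", confident.sum(), "of", len(X_silver))
```
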
15.
BMC Pregnancy Childbirth; 22(1): 295, 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35387624

ABSTRACT

BACKGROUND: Early warning scores are designed to identify hospitalized patients who are at high risk of clinical deterioration. Although many general scores have been developed for the medical-surgical wards, specific scores have also been developed for obstetric patients due to differences in normal vital sign ranges and potential complications in this unique population. The comparative performance of general and obstetric early warning scores for predicting deterioration and infection on the maternal wards is not known. METHODS: This was an observational cohort study at the University of Chicago that included patients hospitalized on obstetric wards from November 2008 to December 2018. Obstetric scores (modified early obstetric warning system (MEOWS), maternal early warning criteria (MEWC), and maternal early warning trigger (MEWT)), paper-based general scores (Modified Early Warning Score (MEWS) and National Early Warning Score (NEWS)), and a general score developed using machine learning (electronic Cardiac Arrest Risk Triage (eCART) score) were compared using the area under the receiver operating characteristic curve (AUC) for predicting ward to intensive care unit (ICU) transfer and/or death and new infection. RESULTS: A total of 19,611 patients were included, with 43 (0.2%) experiencing deterioration (ICU transfer and/or death) and 88 (0.4%) experiencing an infection. eCART had the highest discrimination for deterioration (p < 0.05 for all comparisons), with an AUC of 0.86, followed by MEOWS (0.74), NEWS (0.72), MEWC (0.71), MEWS (0.70), and MEWT (0.65). MEWC, MEWT, and MEOWS had higher accuracy than MEWS and NEWS but lower accuracy than eCART at specific cut-off thresholds. For predicting infection, eCART (AUC 0.77) had the highest discrimination. CONCLUSIONS: Within the limitations of our retrospective study, eCART had the highest accuracy for predicting deterioration and infection in our ante- and postpartum patient population. Maternal early warning scores were more accurate than MEWS and NEWS. While institutional choice of an early warning system is complex, our results have important implications for the risk stratification of maternal ward patients, especially since the low prevalence of events means that small improvements in accuracy can lead to large decreases in false alarms.


Subject(s)
Clinical Deterioration; Early Warning Score; Heart Arrest; Female; Heart Arrest/diagnosis; Humans; Intensive Care Units; Pregnancy; ROC Curve; Retrospective Studies; Risk Assessment/methods
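
For context, paper-based scores like MEWS are simple sums over vital-sign bands. The sketch below follows one commonly cited MEWS variant (Subbe et al., 2001); institutions often adjust the cut points, so treat these bands as illustrative rather than definitive.

```python
def mews(sbp, hr, rr, temp_c, avpu):
    """Modified Early Warning Score from vitals; bands follow one published variant."""
    score = 0
    # Systolic blood pressure (mmHg)
    score += 3 if sbp <= 70 else 2 if sbp <= 80 else 1 if sbp <= 100 else 0 if sbp < 200 else 2
    # Heart rate (beats/min)
    score += 2 if hr < 41 else 1 if hr <= 50 else 0 if hr <= 100 else 1 if hr <= 110 else 2 if hr < 130 else 3
    # Respiratory rate (breaths/min)
    score += 2 if rr < 9 else 0 if rr <= 14 else 1 if rr <= 20 else 2 if rr < 30 else 3
    # Temperature (deg C)
    score += 2 if temp_c < 35 else 0 if temp_c < 38.5 else 2
    # Level of consciousness (AVPU)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

# Example: 1 (SBP 95) + 2 (HR 115) + 2 (RR 22) + 2 (temp 38.7) + 0 (alert) = 7
print(mews(sbp=95, hr=115, rr=22, temp_c=38.7, avpu="alert"))
```
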
16.
Crit Care Med; 49(10): 1694-1705, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.


Asunto(s)
Antibacterianos/administración & dosificación , Fenotipo , Sepsis/genética , Tiempo de Tratamiento/estadística & datos numéricos , Anciano , Anciano de 80 o más Años , Antibacterianos/uso terapéutico , Servicio de Urgencia en Hospital/organización & administración , Servicio de Urgencia en Hospital/estadística & datos numéricos , Femenino , Hospitalización/estadística & datos numéricos , Humanos , Illinois/epidemiología , Masculino , Persona de Mediana Edad , Estudios Prospectivos , Estudios Retrospectivos , Sepsis/tratamiento farmacológico , Sepsis/fisiopatología , Factores de Tiempo
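
A sketch of the delay decomposition: order delay (arrival to antibiotic order) and delivery delay (order to administration) enter a mortality model as separate per-hour terms, so each can be read as an adjusted odds ratio. Timestamps, covariates, and effect sizes are simulated, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 5000
order_delay = rng.exponential(2.0, n)     # hours: ED arrival -> antibiotic order
delivery_delay = rng.exponential(1.0, n)  # hours: order -> administration
risk = -3 + 0.04 * order_delay + 0.05 * delivery_delay + rng.normal(0, 0.5, n)
df = pd.DataFrame({
    "died": (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int),
    "order_delay": order_delay,
    "delivery_delay": delivery_delay,
    "age": rng.normal(65, 15, n),
})

# Each delay gets its own coefficient; exp(beta) is the OR per added hour.
fit = smf.logit("died ~ order_delay + delivery_delay + age", data=df).fit(disp=False)
print(np.exp(fit.params[["order_delay", "delivery_delay"]]))
```
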
17.
Crit Care Med; 49(7): e673-e682, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al. and electronic health record criteria by Rhee et al. and Seymour et al. (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.


Subject(s)
Data Accuracy; Electronic Health Records/standards; Infections/epidemiology; Information Storage and Retrieval/methods; Adult; Aged; Anti-Bacterial Agents/therapeutic use; Antibiotic Prophylaxis/statistics & numerical data; Blood Culture; Chicago/epidemiology; False Positive Reactions; Female; Humans; Infections/diagnosis; International Classification of Diseases; Male; Middle Aged; Organ Dysfunction Scores; Patient Admission/statistics & numerical data; Prevalence; Retrospective Studies; Sensitivity and Specificity; Sepsis/diagnosis
18.
Front Med (Lausanne); 8: 611989, 2021.
Article in English | MEDLINE | ID: mdl-33898475

ABSTRACT

Rationale: Identifying patients hospitalized for acute exacerbations of COPD (AECOPD) who are at high risk for readmission is challenging. Traditional markers of disease severity such as pulmonary function have limited utility in predicting readmission. Handgrip strength, a component of the physical frailty phenotype, may be a simple tool to help predict readmission. Objective(s): To investigate if handgrip strength, a component of the physical frailty phenotype and surrogate for weakness, is a predictive biomarker of COPD readmission. Methods: This was a prospective, observational study of patients admitted to the inpatient general medicine unit at the University of Chicago Medicine, US. This study evaluated age, sex, ethnicity, degree of obstructive lung disease by spirometry (FEV1 percent predicted), and physical frailty phenotype (components include handgrip strength and walk speed). The primary outcome was all-cause hospital readmission within 30 days of discharge. Results: Of 381 eligible patients with AECOPD, 70 participants agreed to consent to participate in this study. Twelve participants (17%) were readmitted within 30 days of discharge. Weak grip at index hospitalization, defined as grip strength lower than previously established cut-points for sex and body mass index (BMI), was predictive of readmission (OR 11.2, 95% CI 1.3, 93.2, p = 0.03). Degree of airway obstruction (FEV1 percent predicted) did not predict readmission (OR 1.0, 95% CI 0.95, 1.1, p = 0.7). No non-frail patients were readmitted. Conclusions: At a single academic center, weak grip strength was associated with increased 30-day readmission. Future studies should investigate whether geriatric measures can help risk-stratify patients for likelihood of readmission after admission for AECOPD.
