Results 1 - 20 of 112
1.
Am J Respir Crit Care Med; 207(10): 1300-1309, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36449534

ABSTRACT

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.


Subject(s)
Neoplasms, Neutropenia, Sepsis, Adult, Humans, Retrospective Studies, Temperature, Neutropenia/complications, Sepsis/complications, Fever, Neoplasms/complications, Neoplasms/therapy
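
To illustrate the kind of multivariable model the abstract describes, here is a minimal Python sketch (not the authors' code) that estimates adjusted odds ratios for mortality by temperature subphenotype; the data are synthetic and the column names are hypothetical.

```python
# Sketch (not the authors' code): adjusted odds ratios for hospital mortality
# by temperature-trajectory subphenotype, with synthetic data and hypothetical
# column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "subphenotype": rng.choice(
        ["normothermic", "hyperthermic_fast", "hyperthermic_slow", "hypothermic"], n),
    "age": rng.normal(60, 12, n).round(),
    "male": rng.integers(0, 2, n),
})
# Synthetic outcome with excess risk in the slow-resolver group.
lin = -3 + 0.02 * (df.age - 60) + 0.7 * (df.subphenotype == "hyperthermic_slow")
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit(
    "died ~ C(subphenotype, Treatment('normothermic')) + age + male", data=df
).fit(disp=0)
print(np.exp(model.params).round(2))  # adjusted ORs vs. the normothermic reference
print(model.pvalues.round(3))
```
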
2.
Crit Care Med; 50(9): 1339-1347, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores greater than or equal to 95th percentile were designated high risk prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause inhospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.49% vs 0.26%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high- (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0 %; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, decrease in time to ICU transfer, and increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced inhospital mortality, likely driven by earlier and more frequent ICU transfer.


Subject(s)
Early Warning Score, Heart Arrest, Adult, Heart Arrest/diagnosis, Heart Arrest/therapy, Hospital Mortality, Humans, Intensive Care Units, Machine Learning, Vital Signs
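
A minimal sketch of how score percentiles could be mapped to the risk tiers described above (>= 95th percentile high risk, 89th to 95th intermediate); the scores are simulated and the variable names are placeholders.

```python
# Sketch: map a continuous early warning score to the risk tiers described in
# the abstract (>= 95th percentile high risk, 89th-95th intermediate).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
scores = pd.Series(rng.gamma(shape=2.0, scale=5.0, size=10_000), name="ecart_score")

hi_cut = scores.quantile(0.95)    # high-risk threshold
mid_cut = scores.quantile(0.89)   # intermediate-risk threshold
tier = pd.cut(scores, bins=[-np.inf, mid_cut, hi_cut, np.inf],
              labels=["average", "intermediate", "high"])
print(tier.value_counts())
# High risk would prompt physician assessment for ICU admission; intermediate
# risk would trigger the nurse-directed workflow (q2h vitals, plan review).
```
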
3.
BMC Pregnancy Childbirth; 22(1): 295, 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35387624

ABSTRACT

BACKGROUND: Early warning scores are designed to identify hospitalized patients who are at high risk of clinical deterioration. Although many general scores have been developed for the medical-surgical wards, specific scores have also been developed for obstetric patients due to differences in normal vital sign ranges and potential complications in this unique population. The comparative performance of general and obstetric early warning scores for predicting deterioration and infection on the maternal wards is not known. METHODS: This was an observational cohort study at the University of Chicago that included patients hospitalized on obstetric wards from November 2008 to December 2018. Obstetric scores (modified early obstetric warning system (MEOWS), maternal early warning criteria (MEWC), and maternal early warning trigger (MEWT)), paper-based general scores (Modified Early Warning Score (MEWS) and National Early Warning Score (NEWS)), and a general score developed using machine learning (electronic Cardiac Arrest Risk Triage (eCART) score) were compared using the area under the receiver operating characteristic curve (AUC) for predicting ward to intensive care unit (ICU) transfer and/or death and new infection. RESULTS: A total of 19,611 patients were included, with 43 (0.2%) experiencing deterioration (ICU transfer and/or death) and 88 (0.4%) experiencing an infection. eCART had the highest discrimination for deterioration (p < 0.05 for all comparisons), with an AUC of 0.86, followed by MEOWS (0.74), NEWS (0.72), MEWC (0.71), MEWS (0.70), and MEWT (0.65). MEWC, MEWT, and MEOWS had higher accuracy than MEWS and NEWS but lower accuracy than eCART at specific cut-off thresholds. For predicting infection, eCART (AUC 0.77) had the highest discrimination. CONCLUSIONS: Within the limitations of our retrospective study, eCART had the highest accuracy for predicting deterioration and infection in our ante- and postpartum patient population. Maternal early warning scores were more accurate than MEWS and NEWS. While institutional choice of an early warning system is complex, our results have important implications for the risk stratification of maternal ward patients, especially since the low prevalence of events means that small improvements in accuracy can lead to large decreases in false alarms.


Subject(s)
Clinical Deterioration, Early Warning Score, Heart Arrest, Female, Heart Arrest/diagnosis, Humans, Intensive Care Units, Pregnancy, ROC Curve, Retrospective Studies, Risk Assessment/methods
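
The core comparison above is discrimination measured by AUC. A minimal sketch of that calculation for several candidate scores, using synthetic data and placeholder column names:

```python
# Sketch: discrimination (AUC) of several warning scores for a rare
# deterioration outcome; the score columns are placeholders, data synthetic.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 20_000
outcome = rng.binomial(1, 0.002, n)          # ICU transfer and/or death, ~0.2%
scores = pd.DataFrame({
    "eCART": rng.normal(0, 1, n) + 1.5 * outcome,
    "MEOWS": rng.normal(0, 1, n) + 0.9 * outcome,
    "NEWS":  rng.normal(0, 1, n) + 0.8 * outcome,
    "MEWS":  rng.normal(0, 1, n) + 0.7 * outcome,
})
aucs = {name: roc_auc_score(outcome, scores[name]) for name in scores.columns}
for name, auc in sorted(aucs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC = {auc:.2f}")
```
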
4.
Crit Care Med; 49(7): e673-e682, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.


Subject(s)
Data Accuracy, Electronic Health Records/standards, Infections/epidemiology, Information Storage and Retrieval/methods, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Antibiotic Prophylaxis/statistics & numerical data, Blood Culture, Chicago/epidemiology, False Positive Reactions, Female, Humans, Infections/diagnosis, International Classification of Diseases, Male, Middle Aged, Organ Dysfunction Scores, Patient Admission/statistics & numerical data, Prevalence, Retrospective Studies, Sensitivity and Specificity, Sepsis/diagnosis
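
A minimal sketch of the accuracy calculation described above: sensitivity and specificity of each criterion against chart review as the gold standard, using toy data and hypothetical flags.

```python
# Sketch: sensitivity and specificity of infection-identification criteria
# against chart review as the gold standard (toy data, hypothetical flags).
import pandas as pd

def sens_spec(truth: pd.Series, flag: pd.Series) -> tuple[float, float]:
    tp = ((truth == 1) & (flag == 1)).sum()
    fn = ((truth == 1) & (flag == 0)).sum()
    tn = ((truth == 0) & (flag == 0)).sum()
    fp = ((truth == 0) & (flag == 1)).sum()
    return tp / (tp + fn), tn / (tn + fp)

df = pd.DataFrame({
    "chart_review": [1, 1, 1, 0, 0, 0, 0, 1, 0, 0],   # adjudicated infection
    "sepsis3":      [1, 1, 0, 1, 0, 0, 0, 1, 0, 0],   # criteria flags
    "rhee":         [1, 0, 0, 0, 0, 0, 0, 1, 0, 0],
})
for crit in ["sepsis3", "rhee"]:
    sens, spec = sens_spec(df["chart_review"], df[crit])
    print(f"{crit}: sensitivity={sens:.0%}, specificity={spec:.0%}")
```
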
5.
Ann Surg; 269(6): 1059-1063, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31082902

ABSTRACT

OBJECTIVE: Assess the accuracy of 3 early warning scores for predicting severe adverse events in postoperative inpatients. SUMMARY OF BACKGROUND DATA: Postoperative clinical deterioration on inpatient hospital services is associated with increased morbidity, mortality, and cost. Early warning scores have been developed to detect inpatient clinical deterioration and trigger rapid response activation, but knowledge regarding the application of early warning scores to postoperative inpatients is limited. METHODS: This was a retrospective cohort study of adult patients hospitalized on the wards after surgical procedures at an urban academic medical center from November, 2008 to January, 2016. The accuracies of the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS), and the electronic cardiac arrest risk triage (eCART) score were compared in predicting severe adverse events (ICU transfer, ward cardiac arrest, or ward death) in the postoperative period using the area under the receiver operating characteristic curve (AUC). RESULTS: Of the 32,537 patient admissions included in the study, 3.8% (n = 1243) experienced a severe adverse outcome after the procedure. The accuracy for predicting the composite outcome was highest for eCART [AUC 0.79 (95% CI: 0.78-0.81)], followed by NEWS [AUC 0.76 (95% CI: 0.75-0.78)], and MEWS [AUC 0.75 (95% CI: 0.73-0.76)]. Of the individual vital signs and labs, maximum respiratory rate was the most predictive (AUC 0.67) and maximum temperature was an inverse predictor (AUC 0.46). CONCLUSION: Early warning scores are predictive of severe adverse events in postoperative patients. eCART is significantly more accurate in this patient population than both NEWS and MEWS.


Subject(s)
Heart Arrest/diagnosis, Heart Arrest/etiology, Postoperative Complications/diagnosis, Postoperative Complications/etiology, Triage, Adult, Aged, Electronic Health Records, Female, Hospitalization, Humans, Male, Middle Aged, Predictive Value of Tests, ROC Curve, Retrospective Studies, Risk Assessment, Vital Signs
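
A short sketch of the per-variable AUC analysis mentioned above; note that an inverse predictor such as maximum temperature shows up as an AUC below 0.5. Data are synthetic.

```python
# Sketch: single-predictor AUCs; an inverse predictor (here, maximum
# temperature) yields an AUC below 0.5. Data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 5_000
event = rng.binomial(1, 0.04, n)
max_resp_rate = rng.normal(18, 3, n) + 4 * event     # higher before events
max_temp = rng.normal(37.0, 0.5, n) - 0.2 * event    # slightly lower before events

print("max respiratory rate AUC:", round(roc_auc_score(event, max_resp_rate), 2))
print("max temperature AUC:     ", round(roc_auc_score(event, max_temp), 2))
```
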
6.
Crit Care Med; 47(12): e962-e965, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31567342

ABSTRACT

OBJECTIVES: Early warning scores were developed to identify high-risk patients on the hospital wards. Research on early warning scores has focused on patients in short-term acute care hospitals, but there are other settings, such as long-term acute care hospitals, where these tools could be useful. However, the accuracy of early warning scores in long-term acute care hospitals is unknown. DESIGN: Observational cohort study. SETTING: Two long-term acute care hospitals in Illinois from January 2002 to September 2017. PATIENTS: Admitted adult long-term acute care hospital patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic characteristics, vital signs, laboratory values, nursing flowsheet data, and outcomes data were collected from the electronic health record. The accuracy of individual variables, the Modified Early Warning Score, the National Early Warning Score version 2, and our previously developed electronic Cardiac Arrest Risk Triage score were compared for predicting the need for acute hospital transfer or death using the area under the receiver operating characteristic curve. A total of 12,497 patient admissions were included, with 3,550 experiencing the composite outcome. The median age was 65 (interquartile range, 54-74), 46% were female, and the median length of stay in the long-term acute care hospital was 27 days (interquartile range, 17-40 d), with an 8% in-hospital mortality. Laboratory values were the best predictors, with blood urea nitrogen being the most accurate (area under the receiver operating characteristic curve, 0.63), followed by albumin, bilirubin, and WBC count (area under the receiver operating characteristic curve, 0.61). Systolic blood pressure was the most accurate vital sign (area under the receiver operating characteristic curve, 0.60). Electronic Cardiac Arrest Risk Triage (area under the receiver operating characteristic curve, 0.72) was significantly more accurate than National Early Warning Score version 2 (area under the receiver operating characteristic curve, 0.66) and Modified Early Warning Score (area under the receiver operating characteristic curve, 0.65; p < 0.01 for all pairwise comparisons). CONCLUSIONS: In this retrospective cohort study, we found that the electronic Cardiac Arrest Risk Triage score was significantly more accurate than Modified Early Warning Score and National Early Warning Score version 2 for predicting acute hospital transfer and mortality. Because laboratory values were more predictive than vital signs and the average length of stay in a long-term acute care hospital is much longer than in short-term acute care hospitals, developing a score specific to the long-term acute care hospital population would likely further improve accuracy, thus allowing earlier identification of high-risk patients for potentially life-saving interventions.


Subject(s)
Early Warning Score, Heart Arrest/diagnosis, Risk Assessment/methods, Acute Disease, Aged, Cohort Studies, Female, Hospitals, Humans, Long-Term Care, Male, Middle Aged, Retrospective Studies
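
A minimal sketch of one way to compare two scores' AUCs on the same patients, here via a bootstrap confidence interval for the AUC difference; the abstract does not specify the comparison method, and the data below are synthetic.

```python
# Sketch: bootstrap confidence interval for the difference in AUC between two
# scores evaluated on the same encounters; data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 4_000
y = rng.binomial(1, 0.25, n)              # acute hospital transfer or death
score_a = rng.normal(0, 1, n) + 0.9 * y   # stand-in for eCART
score_b = rng.normal(0, 1, n) + 0.6 * y   # stand-in for NEWS2

diffs = []
for _ in range(500):
    idx = rng.integers(0, n, n)           # resample encounters with replacement
    diffs.append(roc_auc_score(y[idx], score_a[idx])
                 - roc_auc_score(y[idx], score_b[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: ({lo:.3f}, {hi:.3f})")
```
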
7.
Crit Care Med; 47(10): 1283-1289, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31343475

ABSTRACT

OBJECTIVES: To characterize the rapid response team activations, and the patients receiving them, in the American Heart Association-sponsored Get With The Guidelines Resuscitation-Medical Emergency Team cohort between 2005 and 2015. DESIGN: Retrospective multicenter cohort study. SETTING: Three hundred sixty U.S. hospitals. PATIENTS: Consecutive adult patients experiencing rapid response team activation. INTERVENTIONS: Rapid response team activation. MEASUREMENTS AND MAIN RESULTS: The cohort included 402,023 rapid response team activations from 347,401 unique healthcare encounters. Respiratory triggers (38.0%) and cardiac triggers (37.4%) were most common. The most frequent interventions were noninvasive: pulse oximetry (66.5%), supplemental oxygen (62.0%), and other monitoring (59.6%). Fluids were the most common medication ordered (19.3%), but new antibiotic orders were rare (1.2%). More than 10% of rapid response team activations resulted in code status changes. Hospital mortality was over 14% and increased with subsequent rapid response team activations. CONCLUSIONS: Although patients requiring rapid response team activation have high inpatient mortality, most rapid response team activations involve relatively few interventions, which may limit these teams' ability to improve patient outcomes.


Subject(s)
Emergency Service, Hospital, Hospital Rapid Response Team/statistics & numerical data, Registries, Resuscitation/statistics & numerical data, Aged, Cohort Studies, Female, Humans, Male, Middle Aged, Practice Guidelines as Topic, Retrospective Studies, United States
8.
Crit Care Med; 46(7): 1041-1048, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29293147

ABSTRACT

OBJECTIVES: Despite wide adoption of rapid response teams across the United States, predictors of in-hospital mortality for patients receiving rapid response team calls are poorly characterized. Identification of patients at high risk of death during hospitalization could improve triage to intensive care units and prompt timely reevaluations of goals of care. We sought to identify predictors of in-hospital mortality in patients who are subjects of rapid response team calls and to develop and validate a predictive model for death after a rapid response team call. DESIGN: Analysis of data from the national Get With The Guidelines-Medical Emergency Team event registry. SETTING: Two hundred seventy-four hospitals participating in Get With The Guidelines-Medical Emergency Team from June 2005 to February 2015. PATIENTS: A total of 282,710 hospitalized adults on surgical or medical wards who were subjects of a rapid response team call. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The primary outcome was death during hospitalization; candidate predictors included patient demographic- and event-level characteristics. Patients who died after a rapid response team call were older (median age 72 vs 66 yr), were more likely to be admitted for noncardiac medical illness (70% vs 58%), and had a greater median length of stay prior to the rapid response team call (81 vs 47 hr) (p < 0.001 for all comparisons). The prediction model had an area under the receiver operating characteristic curve of 0.78 (95% CI, 0.78-0.79), with systolic blood pressure, time since admission, and respiratory rate being the most important variables. CONCLUSIONS: Patients who die following rapid response team calls differ significantly from surviving peers. Recognition of these factors could improve post-rapid response team triage decisions and prompt timely goals of care discussions.


Subject(s)
Hospital Mortality, Hospital Rapid Response Team, Aged, Aged, 80 and over, Area Under Curve, Female, Hospital Rapid Response Team/statistics & numerical data, Hospitals/statistics & numerical data, Humans, Machine Learning, Male, Middle Aged, Models, Statistical, ROC Curve, Risk Factors, Triage, United States/epidemiology
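
A minimal sketch of a machine learning mortality model with variable importance, in the spirit of the registry model described above; the learner, features, and data are illustrative assumptions, not the authors' specification.

```python
# Sketch: a gradient-boosted model for death after an RRT call, with
# permutation importance; features and data are illustrative only.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 20_000
X = np.column_stack([
    rng.normal(120, 25, n),   # systolic blood pressure
    rng.normal(20, 6, n),     # respiratory rate
    rng.gamma(2, 40, n),      # hours since admission
])
lin = -2 - 0.02 * (X[:, 0] - 120) + 0.08 * (X[:, 1] - 20) + 0.003 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
model = HistGradientBoostingClassifier().fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=5, random_state=0)
for name, val in zip(["systolic BP", "respiratory rate", "hours since admission"],
                     imp.importances_mean):
    print(f"{name}: importance = {val:.3f}")
```
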
9.
Crit Care Med; 46(7): 1070-1077, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29596073

ABSTRACT

OBJECTIVES: To develop an acute kidney injury risk prediction model using electronic health record data for longitudinal use in hospitalized patients. DESIGN: Observational cohort study. SETTING: Tertiary, urban, academic medical center from November 2008 to January 2016. PATIENTS: All adult inpatients without pre-existing renal failure at admission, defined as first serum creatinine greater than or equal to 3.0 mg/dL, International Classification of Diseases, 9th Edition, code for chronic kidney disease stage 4 or higher or having received renal replacement therapy within 48 hours of first serum creatinine measurement. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographics, vital signs, diagnostics, and interventions were used in a Gradient Boosting Machine algorithm to predict serum creatinine-based Kidney Disease Improving Global Outcomes stage 2 acute kidney injury, with 60% of the data used for derivation and 40% for validation. Area under the receiver operator characteristic curve (AUC) was calculated in the validation cohort, and subgroup analyses were conducted across admission serum creatinine, acute kidney injury severity, and hospital location. Among the 121,158 included patients, 17,482 (14.4%) developed any Kidney Disease Improving Global Outcomes acute kidney injury, with 4,251 (3.5%) developing stage 2. The AUC (95% CI) was 0.90 (0.90-0.90) for predicting stage 2 acute kidney injury within 24 hours and 0.87 (0.87-0.87) within 48 hours. The AUC was 0.96 (0.96-0.96) for receipt of renal replacement therapy (n = 821) in the next 48 hours. Accuracy was similar across hospital settings (ICU, wards, and emergency department) and admitting serum creatinine groupings. At a probability threshold of greater than or equal to 0.022, the algorithm had a sensitivity of 84% and a specificity of 85% for stage 2 acute kidney injury and predicted the development of stage 2 a median of 41 hours (interquartile range, 12-141 hr) prior to the development of stage 2 acute kidney injury. CONCLUSIONS: Readily available electronic health record data can be used to predict impending acute kidney injury prior to changes in serum creatinine with excellent accuracy across different patient locations and admission serum creatinine. Real-time use of this model would allow early interventions for those at high risk of acute kidney injury.


Subject(s)
Acute Kidney Injury/etiology, Machine Learning, Acute Kidney Injury/diagnosis, Algorithms, Area Under Curve, Creatinine/blood, Electronic Health Records, Female, Humans, Male, Middle Aged, Models, Statistical, ROC Curve, Renal Replacement Therapy/statistics & numerical data, Reproducibility of Results
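
A minimal sketch of the operating-point analysis described above: sensitivity and specificity of a continuous risk prediction at a fixed probability cutoff (the abstract reports a 0.022 threshold). Probabilities and labels are synthetic.

```python
# Sketch: sensitivity/specificity of a continuous risk output at a fixed
# probability cutoff (the abstract reports 0.022); values are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
y = rng.binomial(1, 0.035, n)                       # stage 2 AKI, ~3.5%
p = np.clip(rng.beta(1, 60, n) + 0.05 * y, 0, 1)    # model probabilities

threshold = 0.022
pred = p >= threshold
sens = (pred & (y == 1)).sum() / (y == 1).sum()
spec = (~pred & (y == 0)).sum() / (y == 0).sum()
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```
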
10.
Am J Respir Crit Care Med; 195(7): 906-911, 2017 Apr 01.
Article in English | MEDLINE | ID: mdl-27649072

ABSTRACT

RATIONALE: The 2016 definitions of sepsis included the quick Sepsis-related Organ Failure Assessment (qSOFA) score to identify high-risk patients outside the intensive care unit (ICU). OBJECTIVES: We sought to compare qSOFA with other commonly used early warning scores. METHODS: All admitted patients who first met the criteria for suspicion of infection in the emergency department (ED) or hospital wards from November 2008 until January 2016 were included. The qSOFA, Systemic Inflammatory Response Syndrome (SIRS), Modified Early Warning Score (MEWS), and the National Early Warning Score (NEWS) were compared for predicting death and ICU transfer. MEASUREMENTS AND MAIN RESULTS: Of the 30,677 included patients, 1,649 (5.4%) died and 7,385 (24%) experienced the composite outcome (death or ICU transfer). Sixty percent (n = 18,523) first met the suspicion criteria in the ED. Discrimination for in-hospital mortality was highest for NEWS (area under the curve [AUC], 0.77; 95% confidence interval [CI], 0.76-0.79), followed by MEWS (AUC, 0.73; 95% CI, 0.71-0.74), qSOFA (AUC, 0.69; 95% CI, 0.67-0.70), and SIRS (AUC, 0.65; 95% CI, 0.63-0.66) (P < 0.01 for all pairwise comparisons). Using the highest non-ICU score of patients, ≥2 SIRS had a sensitivity of 91% and specificity of 13% for the composite outcome compared with 54% and 67% for qSOFA ≥2, 59% and 70% for MEWS ≥5, and 67% and 66% for NEWS ≥8, respectively. Most patients met ≥2 SIRS criteria 17 hours before the combined outcome compared with 5 hours for ≥2 and 17 hours for ≥1 qSOFA criteria. CONCLUSIONS: Commonly used early warning scores are more accurate than the qSOFA score for predicting death and ICU transfer in non-ICU patients. These results suggest that the qSOFA score should not replace general early warning scores when risk-stratifying patients with suspected infection.


Subject(s)
Organ Dysfunction Scores, Sepsis/complications, Sepsis/diagnosis, Systemic Inflammatory Response Syndrome/complications, Systemic Inflammatory Response Syndrome/diagnosis, Emergency Service, Hospital, Female, Hospitalization, Humans, Male, Middle Aged, ROC Curve, Reproducibility of Results, Risk Assessment, Sensitivity and Specificity
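
For reference, a minimal sketch of the qSOFA calculation evaluated above (respiratory rate >= 22/min, systolic blood pressure <= 100 mm Hg, altered mentation), with the >= 2 cutoff; column names are hypothetical.

```python
# Sketch: qSOFA per Sepsis-3 (respiratory rate >= 22/min, systolic BP
# <= 100 mm Hg, altered mentation), with the >= 2 cutoff used above.
import pandas as pd

def qsofa(df: pd.DataFrame) -> pd.Series:
    return ((df["resp_rate"] >= 22).astype(int)
            + (df["sbp"] <= 100).astype(int)
            + df["altered_mentation"].astype(int))

vitals = pd.DataFrame({
    "resp_rate": [18, 24, 30],
    "sbp": [118, 95, 88],
    "altered_mentation": [False, False, True],
})
vitals["qsofa"] = qsofa(vitals)
vitals["high_risk"] = vitals["qsofa"] >= 2
print(vitals)
```
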
11.
Crit Care Med; 45(10): 1677-1682, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28742548

ABSTRACT

OBJECTIVES: Decreased staffing at nighttime is associated with worse outcomes in hospitalized patients. Rapid response teams were developed to decrease preventable harm by providing additional critical care resources to patients with clinical deterioration. We sought to determine whether rapid response team call frequency suffers from decreased utilization at night and how this is associated with patient outcomes. DESIGN: Retrospective analysis of a prospectively collected registry database. SETTING: National registry database of inpatient rapid response team calls. PATIENTS: Index rapid response team calls occurring on the general wards in the American Heart Association Get With The Guidelines-Medical Emergency Team database between 2005 and 2015 were analyzed. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The primary outcome was inhospital mortality. Patient and event characteristics between the hours with the highest and lowest mortality were compared, and multivariable models adjusting for patient characteristics were fit. A total of 282,710 rapid response team calls from 274 hospitals were included. The lowest frequency of calls occurred in the consecutive 1 AM to 6:59 AM period, with 266 of 274 (97%) hospitals having lower than expected call volumes during those hours. Mortality was highest during the 7 AM hour and lowest during the noon hour (18.8% vs 13.8%; adjusted odds ratio, 1.41 [1.31-1.52]; p < 0.001). Compared with calls at the noon hour, those during the 7 AM hour had more deranged vital signs, were more likely to have a respiratory trigger, and were more likely to have greater than two simultaneous triggers. CONCLUSIONS: Rapid response team activation is less frequent during the early morning and is followed by a spike in mortality in the 7 AM hour. These findings suggest that failure to rescue deteriorating patients is more common overnight. Strategies aimed at improving rapid response team utilization during these vulnerable hours may improve patient outcomes.


Subject(s)
Hospital Mortality, Hospital Rapid Response Team, Aged, Female, Heart Arrest/epidemiology, Hospitals/statistics & numerical data, Humans, Intensive Care Units, Male, Multivariate Analysis, Night Care, Quality Assurance, Health Care, Registries, Respiratory Insufficiency/epidemiology, Retrospective Studies, Time Factors, United States/epidemiology
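
A minimal sketch of the hour-of-day tabulation underlying the comparison above (call volume and mortality by activation hour); the data are synthetic.

```python
# Sketch: call volume and in-hospital mortality by hour of RRT activation,
# the descriptive comparison in the abstract; data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 100_000
hour = rng.integers(0, 24, n)
died = rng.binomial(1, np.where(hour == 7, 0.19, 0.14))   # synthetic 7 AM excess
calls = pd.DataFrame({"hour": hour, "died": died})

by_hour = calls.groupby("hour").agg(calls=("died", "size"),
                                    mortality=("died", "mean"))
print(by_hour.loc[[7, 12]])   # compare the 7 AM and noon hours
```
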
12.
Crit Care Med; 45(11): 1805-1812, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28737573

ABSTRACT

OBJECTIVE: Studies in sepsis are limited by heterogeneity regarding what constitutes suspicion of infection. We sought to compare potential suspicion criteria using antibiotic and culture order combinations in terms of patient characteristics and outcomes. We further sought to determine the impact of differing criteria on the accuracy of sepsis screening tools and early warning scores. DESIGN: Observational cohort study. SETTING: Academic center from November 2008 to January 2016. PATIENTS: Hospitalized patients outside the ICU. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Six criteria were investigated: 1) any culture, 2) blood culture, 3) any culture plus IV antibiotics, 4) blood culture plus IV antibiotics, 5) any culture plus IV antibiotics for at least 4 of 7 days, and 6) blood culture plus IV antibiotics for at least 4 of 7 days. Accuracy of the quick Sepsis-related Organ Failure Assessment score, Sepsis-related Organ Failure Assessment score, systemic inflammatory response syndrome criteria, the National and Modified Early Warning Score, and the electronic Cardiac Arrest Risk Triage score were calculated for predicting ICU transfer or death within 48 hours of meeting suspicion criteria. A total of 53,849 patients met at least one infection criteria. Mortality increased from 3% for group 1 to 9% for group 6 and percentage meeting Angus sepsis criteria increased from 20% to 40%. Across all criteria, score discrimination was lowest for systemic inflammatory response syndrome (median area under the receiver operating characteristic curve, 0.60) and Sepsis-related Organ Failure Assessment score (median area under the receiver operating characteristic curve, 0.62), intermediate for quick Sepsis-related Organ Failure Assessment (median area under the receiver operating characteristic curve, 0.65) and Modified Early Warning Score (median area under the receiver operating characteristic curve 0.67), and highest for National Early Warning Score (median area under the receiver operating characteristic curve 0.71) and electronic Cardiac Arrest Risk Triage (median area under the receiver operating characteristic curve 0.73). CONCLUSIONS: The choice of criteria to define a potentially infected population significantly impacts prevalence of mortality but has little impact on accuracy. Systemic inflammatory response syndrome was the least predictive and electronic Cardiac Arrest Risk Triage the most predictive regardless of how infection was defined.


Subject(s)
Intensive Care Units/statistics & numerical data, Organ Dysfunction Scores, Sepsis/mortality, Systemic Inflammatory Response Syndrome/mortality, Academic Medical Centers, Adult, Aged, Anti-Bacterial Agents/administration & dosage, Bacteriological Techniques, Blood Culture, Cohort Studies, Early Diagnosis, Female, Heart Arrest/mortality, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, ROC Curve, Retrospective Studies, Sepsis/diagnosis, Sepsis/drug therapy, Systemic Inflammatory Response Syndrome/diagnosis, Systemic Inflammatory Response Syndrome/drug therapy
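
A minimal sketch of how the six suspicion-of-infection definitions above could be encoded as boolean flags per encounter; column names and toy values are hypothetical.

```python
# Sketch: the six culture/antibiotic combinations encoded as boolean flags,
# one row per encounter; column names and toy values are hypothetical.
import pandas as pd

enc = pd.DataFrame({
    "any_culture":      [True, True, False, True],
    "blood_culture":    [True, False, False, True],
    "iv_abx":           [True, True, False, False],
    "iv_abx_4of7_days": [True, False, False, False],
})
criteria = pd.DataFrame({
    "c1_any_culture":            enc["any_culture"],
    "c2_blood_culture":          enc["blood_culture"],
    "c3_any_culture_iv_abx":     enc["any_culture"] & enc["iv_abx"],
    "c4_blood_culture_iv_abx":   enc["blood_culture"] & enc["iv_abx"],
    "c5_any_culture_abx_4of7":   enc["any_culture"] & enc["iv_abx_4of7_days"],
    "c6_blood_culture_abx_4of7": enc["blood_culture"] & enc["iv_abx_4of7_days"],
})
print(criteria.sum())   # encounters meeting each suspicion definition
```
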
15.
Crit Care Med; 44(8): 1468-73, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27075140

ABSTRACT

OBJECTIVE: Failure to detect clinical deterioration in the hospital is common and associated with poor patient outcomes and increased healthcare costs. Our objective was to evaluate the feasibility and accuracy of real-time risk stratification using the electronic Cardiac Arrest Risk Triage score, an electronic health record-based early warning score. DESIGN: We conducted a prospective black-box validation study. Data were transmitted via HL7 feed in real time to an integration engine and database server wherein the scores were calculated and stored without visualization for clinical providers. The high-risk threshold was set a priori. Timing and sensitivity of electronic Cardiac Arrest Risk Triage score activation were compared with standard-of-care Rapid Response Team activation for patients who experienced a ward cardiac arrest or ICU transfer. SETTING: Three general care wards at an academic medical center. PATIENTS: A total of 3,889 adult inpatients. MEASUREMENTS AND MAIN RESULTS: The system generated 5,925 segments during 5,751 admissions. The area under the receiver operating characteristic curve for electronic Cardiac Arrest Risk Triage score was 0.88 for cardiac arrest and 0.80 for ICU transfer, consistent with previously published derivation results. During the study period, eight of 10 patients with a cardiac arrest had high-risk electronic Cardiac Arrest Risk Triage scores, whereas the Rapid Response Team was activated on two of these patients (p < 0.05). Furthermore, electronic Cardiac Arrest Risk Triage score identified 52% (n = 201) of the ICU transfers compared with 34% (n = 129) by the current system (p < 0.001). Patients met the high-risk electronic Cardiac Arrest Risk Triage score threshold a median of 30 hours prior to cardiac arrest or ICU transfer versus 1.7 hours for standard Rapid Response Team activation. CONCLUSIONS: Electronic Cardiac Arrest Risk Triage score identified significantly more cardiac arrests and ICU transfers than standard Rapid Response Team activation and did so many hours in advance.


Subject(s)
Electronic Health Records, Heart Arrest/diagnosis, Hospital Rapid Response Team/statistics & numerical data, Intensive Care Units/statistics & numerical data, Severity of Illness Index, Academic Medical Centers/statistics & numerical data, Adult, Aged, Feasibility Studies, Female, Humans, Male, Middle Aged, Models, Statistical, Prospective Studies, ROC Curve, Reproducibility of Results, Risk Assessment, Time Factors, Vital Signs
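
A minimal sketch of the lead-time calculation implied above (time from the first high-risk score to cardiac arrest or ICU transfer); timestamps and column names are illustrative.

```python
# Sketch: lead time from the first high-risk score to the event (cardiac
# arrest or ICU transfer); timestamps and column names are illustrative.
import pandas as pd

scores = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2],
    "score_time": pd.to_datetime(["2016-01-01 02:00", "2016-01-01 08:00",
                                  "2016-01-02 10:00", "2016-01-03 04:00",
                                  "2016-01-03 20:00"]),
    "high_risk": [False, True, True, False, True],
})
events = pd.DataFrame({
    "encounter_id": [1, 2],
    "event_time": pd.to_datetime(["2016-01-02 14:00", "2016-01-03 22:00"]),
})

first_alert = (scores[scores.high_risk]
               .groupby("encounter_id")["score_time"].min().rename("alert_time"))
lead = events.join(first_alert, on="encounter_id")
lead["lead_hours"] = (lead.event_time - lead.alert_time).dt.total_seconds() / 3600
print(lead[["encounter_id", "lead_hours"]])
print("median lead time (h):", lead.lead_hours.median())
```
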
16.
Crit Care Med; 44(2): 368-74, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26771782

ABSTRACT

OBJECTIVE: Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. DESIGN: Observational cohort study. SETTING: Five hospitals, from November 2008 until January 2013. PATIENTS: Hospitalized ward patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). CONCLUSIONS: In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.


Subject(s)
Heart Arrest/mortality, Intensive Care Units/organization & administration, Machine Learning/statistics & numerical data, Models, Statistical, Age Factors, Cohort Studies, Diagnostic Techniques and Procedures, Humans, Logistic Models, Neural Networks, Computer, ROC Curve, Risk Assessment, Socioeconomic Factors, Support Vector Machine, Survival Analysis, Time Factors, Vital Signs
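
A minimal sketch of the discrete-time setup described above: each patient time window becomes one row labeled by whether the event occurs in the next window, and candidate models are compared by AUC. The window construction, features, and random split below are illustrative assumptions (the study split by date).

```python
# Sketch: discrete-time framing (each patient time window is one row labeled
# by whether the event occurs in the next window), then two candidate models
# compared by AUC. Window construction, features, and the random split are
# illustrative; the study itself split by date.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_rows = 30_000                          # patient x time-window rows
X = np.column_stack([
    rng.normal(80, 15, n_rows),          # heart rate
    rng.normal(1.0, 0.4, n_rows),        # a lab value (creatinine-like)
    rng.normal(65, 15, n_rows),          # age
])
lin = -5 + 0.03 * (X[:, 0] - 80) + 0.02 * (X[:, 2] - 65)
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))   # event in the next window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("random forest AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 2))
print("logistic AUC:     ", round(roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]), 2))
```
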
17.
Am J Respir Crit Care Med; 192(8): 958-64, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26158402

ABSTRACT

RATIONALE: Tools that screen inpatients for sepsis use the systemic inflammatory response syndrome (SIRS) criteria and organ dysfunctions, but most studies of these criteria were performed in intensive care unit or emergency room populations. OBJECTIVES: To determine the incidence and prognostic value of SIRS and organ dysfunctions in a multicenter dataset of hospitalized ward patients. METHODS: Hospitalized ward patients at five hospitals from November 2008 to January 2013 were included. SIRS and organ system dysfunctions were defined using 2001 International Consensus criteria. Patient characteristics and in-hospital mortality were compared among patients meeting two or more SIRS criteria and by the presence or absence of organ system dysfunction. MEASUREMENTS AND MAIN RESULTS: A total of 269,951 patients were included in the study, after excluding 48 patients with missing discharge status. Forty-seven percent (n = 125,841) of the included patients met two or more SIRS criteria at least once during their ward stay. On ward admission, 39,105 (14.5%) patients met two or more SIRS criteria, and patients presenting with SIRS had higher in-hospital mortality than those without SIRS (4.3% vs. 1.2%; P < 0.001). Fourteen percent of patients (n = 36,767) had at least one organ dysfunction at ward admission, and those presenting with organ dysfunction had increased mortality compared with those without organ dysfunction (5.3% vs. 1.1%; P < 0.001). CONCLUSIONS: Almost half of patients hospitalized on the wards developed SIRS at least once during their ward stay. Our findings suggest that screening ward patients using SIRS criteria for identifying those with sepsis would be impractical.


Subject(s)
Multiple Organ Failure/epidemiology, Sepsis/diagnosis, Systemic Inflammatory Response Syndrome/epidemiology, Adult, Aged, Aged, 80 and over, Body Temperature, Databases, Factual, Female, Heart Rate, Hospital Mortality, Hospitalization, Humans, Incidence, Length of Stay, Leukocyte Count, Male, Mass Screening, Middle Aged, Multiple Organ Failure/diagnosis, Patients' Rooms, Platelet Count, Prognosis, Respiratory Rate, Systemic Inflammatory Response Syndrome/diagnosis
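
For reference, a minimal sketch of the SIRS criteria count used above (temperature, heart rate, respiratory rate, white blood cell count) with the >= 2 flag; the PaCO2 and band-count criteria are omitted for brevity, and the column names are hypothetical.

```python
# Sketch: SIRS criteria count (temperature, heart rate, respiratory rate, WBC)
# with the >= 2 flag; PaCO2 and band criteria omitted, column names hypothetical.
import pandas as pd

def sirs_count(df: pd.DataFrame) -> pd.Series:
    return (((df["temp_c"] > 38) | (df["temp_c"] < 36)).astype(int)
            + (df["heart_rate"] > 90).astype(int)
            + (df["resp_rate"] > 20).astype(int)
            + ((df["wbc"] > 12) | (df["wbc"] < 4)).astype(int))

obs = pd.DataFrame({
    "temp_c": [37.0, 38.6, 35.5],
    "heart_rate": [82, 110, 95],
    "resp_rate": [16, 24, 22],
    "wbc": [8.0, 14.5, 3.2],   # x10^9/L
})
obs["sirs"] = sirs_count(obs)
obs["sirs_positive"] = obs["sirs"] >= 2
print(obs)
```
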
19.
Crit Care Med; 43(4): 816-22, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25559439

ABSTRACT

OBJECTIVES: Vital signs and composite scores, such as the Modified Early Warning Score, are used to identify high-risk ward patients and trigger rapid response teams. Although age-related vital sign changes are known to occur, little is known about the differences in vital signs between elderly and nonelderly patients prior to ward cardiac arrest. We aimed to compare the accuracy of vital signs for detecting cardiac arrest between elderly and nonelderly patients. DESIGN: Observational cohort study. SETTING: Five hospitals in the United States. PATIENTS: A total of 269,956 patient admissions to the wards with documented age, including 422 index ward cardiac arrests. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient characteristics and vital signs prior to cardiac arrest were compared between elderly (age, 65 yr or older) and nonelderly (age, <65 yr) patients. The area under the receiver operating characteristic curve for vital signs and the Modified Early Warning Score were also compared. Elderly patients had a higher cardiac arrest rate (2.2 vs 1.0 per 1,000 ward admissions; p<0.001) and in-hospital mortality (2.9% vs 0.7%; p<0.001) than nonelderly patients. Within 4 hours of cardiac arrest, elderly patients had significantly lower mean heart rate (88 vs 99 beats/min; p<0.001), diastolic blood pressure (60 vs 66 mm Hg; p=0.007), shock index (0.82 vs 0.93; p<0.001), and Modified Early Warning Score (2.6 vs 3.3; p<0.001) and higher pulse pressure index (0.45 vs 0.41; p<0.001) and temperature (36.4°C vs 36.3°C; p=0.047). The area under the receiver operating characteristic curves for all vital signs and the Modified Early Warning Score were higher for nonelderly patients than elderly patients (Modified Early Warning Score area under the receiver operating characteristic curve 0.85 [95% CI, 0.82-0.88] vs 0.71 [95% CI, 0.68-0.75]; p<0.001). CONCLUSIONS: Vital signs more accurately detect cardiac arrest in nonelderly patients compared with elderly patients, which has important implications for how they are used for identifying critically ill patients. More accurate methods for risk stratification of elderly patients are necessary to decrease the occurrence of this devastating event.


Subject(s)
Heart Arrest/physiopathology, Vital Signs, Age Factors, Aged, Blood Pressure, Cohort Studies, Female, Heart Rate, Humans, Male, Middle Aged, ROC Curve
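
A minimal sketch of the age-stratified discrimination analysis above, using shock index (heart rate divided by systolic blood pressure) as the example predictor; data and column names are synthetic placeholders.

```python
# Sketch: shock index (heart rate / systolic BP) and its AUC for cardiac
# arrest, stratified by age group; data and columns are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n = 30_000
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "heart_rate": rng.normal(85, 15, n),
    "sbp": rng.normal(125, 20, n),
})
df["arrest"] = rng.binomial(1, 0.0015 + 0.002 * (df.heart_rate > 110))
df["shock_index"] = df.heart_rate / df.sbp
df["elderly"] = df.age >= 65

for elderly, grp in df.groupby("elderly"):
    auc = roc_auc_score(grp.arrest, grp.shock_index)
    print(f"elderly={elderly}: shock index AUC = {auc:.2f}")
```
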
20.
Am J Respir Crit Care Med; 190(6): 649-55, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-25089847

ABSTRACT

RATIONALE: Most ward risk scores were created using subjective opinion in individual hospitals and only use vital signs. OBJECTIVES: To develop and validate a risk score using commonly collected electronic health record data. METHODS: All patients hospitalized on the wards in five hospitals were included in this observational cohort study. Discrete-time survival analysis was used to predict the combined outcome of cardiac arrest (CA), intensive care unit (ICU) transfer, or death on the wards. Laboratory results, vital signs, and demographics were used as predictor variables. The model was developed in the first 60% of the data at each hospital and then validated in the remaining 40%. The final model was compared with the Modified Early Warning Score (MEWS) using the area under the receiver operating characteristic curve and the net reclassification index (NRI). MEASUREMENTS AND MAIN RESULTS: A total of 269,999 patient admissions were included, with 424 CAs, 13,188 ICU transfers, and 2,840 deaths occurring during the study period. The derived model was more accurate than the MEWS in the validation dataset for all outcomes (area under the receiver operating characteristic curve, 0.83 vs. 0.71 for CA; 0.75 vs. 0.68 for ICU transfer; 0.93 vs. 0.88 for death; and 0.77 vs. 0.70 for the combined outcome; P value < 0.01 for all comparisons). This accuracy improvement was seen across all hospitals. The NRI for the electronic Cardiac Arrest Risk Triage compared with the MEWS was 0.28 (0.18-0.38), with a positive NRI of 0.19 (0.09-0.29) and a negative NRI of 0.09 (0.09-0.09). CONCLUSIONS: We developed an accurate ward risk stratification tool using commonly collected electronic health record variables in a large multicenter dataset. Further study is needed to determine whether implementation in real-time would improve patient outcomes.


Subject(s)
Electronic Health Records, Heart Arrest/mortality, Inpatients/statistics & numerical data, Intensive Care Units/statistics & numerical data, Patient Transfer/statistics & numerical data, Risk Assessment/methods, Risk Assessment/standards, Adult, Aged, Aged, 80 and over, Cohort Studies, Dimensional Measurement Accuracy, Early Diagnosis, Female, Hospital Rapid Response Team/statistics & numerical data, Humans, Male, Middle Aged, Models, Statistical, Survival Analysis
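
A minimal sketch of a net reclassification index (NRI) comparing a new risk model with an old one; this uses the category-free form with event and non-event components, analogous to the positive and negative NRI reported above but not necessarily the exact method used in the paper. Data and risk models are synthetic.

```python
# Sketch: category-free net reclassification index comparing a new risk model
# with an old one; event and non-event components mirror the positive and
# negative NRI reported above. Data and risk models are synthetic.
import numpy as np

rng = np.random.default_rng(10)
n = 20_000
x1 = rng.normal(0, 1, n)             # predictor available to both models
x2 = rng.normal(0, 1, n)             # predictor used only by the new model
y = rng.binomial(1, 1 / (1 + np.exp(-(-3.5 + 0.8 * x1 + 0.9 * x2))))

old_risk = 1 / (1 + np.exp(-(-3.5 + 0.8 * x1)))             # MEWS-like model
new_risk = 1 / (1 + np.exp(-(-3.5 + 0.8 * x1 + 0.9 * x2)))  # eCART-like model

up = new_risk > old_risk
ev, ne = y == 1, y == 0
nri_pos = up[ev].mean() - (~up[ev]).mean()        # among events ("positive NRI")
nri_neg = (~up[ne]).mean() - up[ne].mean()        # among non-events ("negative NRI")
print(f"positive NRI = {nri_pos:.2f}, negative NRI = {nri_neg:.2f}, "
      f"total NRI = {nri_pos + nri_neg:.2f}")
```
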