ABSTRACT
BACKGROUND: Multidrug-resistant organisms (MDROs) frequently contaminate hospital environments. We performed a multicenter, cluster-randomized, crossover trial of 2 methods for monitoring of terminal cleaning effectiveness. METHODS: Six intensive care units (ICUs) at 3 medical centers received both interventions sequentially, in randomized order. Ten surfaces each were surveyed weekly in 5 rooms, after terminal cleaning, with adenosine triphosphate (ATP) monitoring or an ultraviolet fluorescent marker (UV/F). Results were delivered to environmental services staff in real time, with failing surfaces recleaned. We measured monthly rates of MDRO infection or colonization, including methicillin-resistant Staphylococcus aureus, Clostridioides difficile, vancomycin-resistant Enterococcus, and MDR gram-negative bacilli (MDR-GNB), during a 12-month baseline period and sequential 6-month intervention periods separated by a 2-month washout. Primary analysis compared only the randomized intervention periods, whereas secondary analysis included the baseline. RESULTS: The ATP method was associated with a reduction in the incidence rate of MDRO infection or colonization compared with the UV/F period (incidence rate ratio [IRR], 0.876; 95% confidence interval [CI], 0.807-0.951; P = .002). Including the baseline period, the ATP method was associated with reduced MDRO infection (IRR, 0.924; 95% CI, 0.855-0.998; P = .04) and MDR-GNB infection or colonization (IRR, 0.856; 95% CI, 0.825-0.887; P < .001). The UV/F intervention was not associated with a statistically significant impact on these outcomes. Room turnaround time increased by a median of 1 minute with the ATP intervention and 4.5 minutes with UV/F compared with baseline. CONCLUSIONS: Intensive monitoring of ICU terminal room cleaning with an ATP modality is associated with a reduction of MDRO infection and colonization.
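As a side note on the statistics above: an incidence rate ratio compares event rates per unit of person-time between two periods. A minimal sketch of the crude calculation with a large-sample 95% CI, using hypothetical event counts rather than the trial's data (this ignores the cluster-randomized design, so it is illustrative only):

```python
import math

def incidence_rate_ratio(events_a, time_a, events_b, time_b):
    """IRR of group A vs group B, with a 95% CI from the
    standard log-scale normal approximation."""
    irr = (events_a / time_a) / (events_b / time_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - 1.96 * se_log)
    hi = math.exp(math.log(irr) + 1.96 * se_log)
    return irr, (lo, hi)

# Hypothetical: 70 MDRO events over 1000 patient-months (ATP period)
# vs 80 events over 1000 patient-months (UV/F period).
irr, ci = incidence_rate_ratio(70, 1000, 80, 1000)
print(round(irr, 3), tuple(round(x, 3) for x in ci))
```

A crude CI like this is wider than the trial's reported interval would suggest, which is expected: regression models that pool information across units and time typically gain precision.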
Subjects
Cross Infection, Methicillin-Resistant Staphylococcus aureus, Vancomycin-Resistant Enterococci, Adenosine Triphosphate, Cross Infection/epidemiology, Cross Infection/prevention & control, Multiple Bacterial Drug Resistance, Gram-Negative Bacteria, Humans, Intensive Care Units, Vancomycin
ABSTRACT
BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic continues to surge in the United States and globally. OBJECTIVE: To describe the epidemiology of COVID-19-related critical illness, including trends in outcomes and care delivery. DESIGN: Single-health system, multihospital retrospective cohort study. SETTING: 5 hospitals within the University of Pennsylvania Health System. PATIENTS: Adults with COVID-19-related critical illness who were admitted to an intensive care unit (ICU) with acute respiratory failure or shock during the initial surge of the pandemic. MEASUREMENTS: The primary exposure for outcomes and care delivery trend analyses was longitudinal time during the pandemic. The primary outcome was all-cause 28-day in-hospital mortality. Secondary outcomes were all-cause death at any time, receipt of mechanical ventilation (MV), and readmissions. RESULTS: Among 468 patients with COVID-19-related critical illness, 319 (68.2%) were treated with MV and 121 (25.9%) with vasopressors. Outcomes were notable for an all-cause 28-day in-hospital mortality rate of 29.9%, a median ICU stay of 8 days (interquartile range [IQR], 3 to 17 days), a median hospital stay of 13 days (IQR, 7 to 25 days), and an all-cause 30-day readmission rate (among nonhospice survivors) of 10.8%. Mortality decreased over time, from 43.5% (95% CI, 31.3% to 53.8%) to 19.2% (CI, 11.6% to 26.7%) between the first and last 15-day periods in the core adjusted model, whereas patient acuity and other factors did not change. LIMITATIONS: Single-health system study; use of, or highly dynamic trends in, other clinical interventions were not evaluated, nor were complications. CONCLUSION: Among patients with COVID-19-related critical illness admitted to ICUs of a learning health system in the United States, mortality seemed to decrease over time despite stable patient characteristics. Further studies are necessary to confirm this result and to investigate causal mechanisms. 
PRIMARY FUNDING SOURCE: Agency for Healthcare Research and Quality.
Subjects
COVID-19/mortality, COVID-19/therapy, Critical Illness/mortality, Critical Illness/therapy, Viral Pneumonia/mortality, Viral Pneumonia/therapy, Shock/mortality, Shock/therapy, APACHE, Academic Medical Centers, Aged, Female, Hospital Mortality, Humans, Intensive Care Units, Length of Stay/statistics & numerical data, Male, Middle Aged, Pandemics, Patient Readmission/statistics & numerical data, Pennsylvania/epidemiology, Viral Pneumonia/virology, Artificial Respiration/statistics & numerical data, Retrospective Studies, SARS-CoV-2, Shock/virology, Survival Rate
ABSTRACT
OBJECTIVE: To assess clinician perceptions of a machine learning-based early warning system to predict severe sepsis and septic shock (Early Warning System 2.0). DESIGN: Prospective observational study. SETTING: Tertiary teaching hospital in Philadelphia, PA. PATIENTS: Non-ICU admissions November-December 2016. INTERVENTIONS: During a 6-week study period conducted 5 months after Early Warning System 2.0 alert implementation, nurses and providers were surveyed twice about their perceptions of the alert's helpfulness and impact on care, first within 6 hours of the alert, and again 48 hours after the alert. MEASUREMENTS AND MAIN RESULTS: For the 362 alerts triggered, 180 nurses (50% response rate) and 107 providers (30% response rate) completed the first survey. Of these, 43 nurses (24% response rate) and 44 providers (41% response rate) completed the second survey. Few (24% nurses, 13% providers) identified new clinical findings after responding to the alert. Perceptions of the presence of sepsis at the time of alert were discrepant between nurses (13%) and providers (40%). The majority of clinicians reported no change in perception of the patient's risk for sepsis (55% nurses, 62% providers). A third of nurses (30%) but few providers (9%) reported the alert changed management. Almost half of nurses (42%) but less than a fifth of providers (16%) found the alert helpful at 6 hours. CONCLUSIONS: In general, clinical perceptions of Early Warning System 2.0 were poor. Nurses and providers differed in their perceptions of sepsis and alert benefits. These findings highlight the challenges of achieving acceptance of predictive and machine learning-based sepsis alerts.
Subjects
Algorithms, Attitude of Health Personnel, Clinical Decision Support Systems, Machine Learning, Sepsis/diagnosis, Septic Shock/diagnosis, Computer-Assisted Diagnosis, Electronic Health Records, Teaching Hospitals, Humans, Hospital Medical Staff, Hospital Nursing Staff, Nursing Practice Patterns/statistics & numerical data, Physicians' Practice Patterns/statistics & numerical data, Prospective Studies, Text Messaging
ABSTRACT
OBJECTIVES: Develop and implement a machine learning algorithm to predict severe sepsis and septic shock and evaluate the impact on clinical practice and patient outcomes. DESIGN: Retrospective cohort for algorithm derivation and validation, pre-post impact evaluation. SETTING: Tertiary teaching hospital system in Philadelphia, PA. PATIENTS: All non-ICU admissions; algorithm derivation July 2011 to June 2014 (n = 162,212); algorithm validation October to December 2015 (n = 10,448); silent versus alert comparison January 2016 to February 2017 (silent n = 22,280; alert n = 32,184). INTERVENTIONS: A random-forest classifier, derived and validated using electronic health record data, was deployed both silently and later with an alert to notify clinical teams of sepsis prediction. MEASUREMENTS AND MAIN RESULTS: Patients identified for training the algorithm were required to have International Classification of Diseases, 9th Revision codes for severe sepsis or septic shock and a positive blood culture during their hospital encounter with either a lactate greater than 2.2 mmol/L or a systolic blood pressure less than 90 mm Hg. The algorithm demonstrated a sensitivity of 26% and specificity of 98%, with a positive predictive value of 29% and positive likelihood ratio of 13. The alert resulted in a small, statistically significant increase in lactate testing and IV fluid administration. There was no significant difference in mortality, discharge disposition, or transfer to ICU, although there was a reduction in time-to-ICU transfer. CONCLUSIONS: Our machine learning algorithm can predict, with low sensitivity but high specificity, the impending occurrence of severe sepsis and septic shock. Algorithm-generated predictive alerts modestly impacted clinical measures. Next steps include describing clinical perception of this tool and optimizing algorithm design and delivery.
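The reported test characteristics are internally consistent: the positive likelihood ratio follows directly from sensitivity and specificity, while positive predictive value also depends on prevalence (the prevalence below is an assumed figure chosen for illustration, not reported by the study):

```python
def positive_likelihood_ratio(sens, spec):
    # LR+ = sensitivity / (1 - specificity)
    return sens / (1 - spec)

def ppv(sens, spec, prevalence):
    # Positive predictive value by Bayes' rule.
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reported operating point: sensitivity 26%, specificity 98%.
print(round(positive_likelihood_ratio(0.26, 0.98), 1))  # reproduces the reported LR+ of 13
print(round(ppv(0.26, 0.98, 0.03), 2))  # an assumed ~3% prevalence yields a PPV near the reported 29%
```

The second line illustrates why a highly specific alert can still generate many false positives: at low prevalence, even 98% specificity leaves most alerts unconfirmed.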
Subjects
Algorithms, Clinical Decision Support Systems, Computer-Assisted Diagnosis, Machine Learning, Sepsis/diagnosis, Septic Shock/diagnosis, Cohort Studies, Electronic Health Records, Teaching Hospitals, Humans, Retrospective Studies, Sensitivity and Specificity, Text Messaging
ABSTRACT
OBJECTIVES: Sepsis is associated with high early and total in-hospital mortality. Despite recent revisions in the diagnostic criteria for sepsis that sought to improve predictive validity for mortality, it remains difficult to identify patients at greatest risk of death. We compared the utility of nine biomarkers to predict mortality in subjects with clinically suspected bacterial sepsis. DESIGN: Cohort study. SETTING: The medical and surgical ICUs at an academic medical center. SUBJECTS: We enrolled 139 subjects who met two or more systemic inflammatory response syndrome (SIRS) criteria and received new broad-spectrum antibacterial therapy. INTERVENTIONS: We assayed nine biomarkers (α-2 macroglobulin, C-reactive protein, ferritin, fibrinogen, haptoglobin, procalcitonin, serum amyloid A, serum amyloid P, and tissue plasminogen activator) at onset of suspected sepsis and 24, 48, and 72 hours thereafter. We compared biomarkers between groups based on both 14-day and total in-hospital mortality and evaluated the predictive validity of single and paired biomarkers via area under the receiver operating characteristic curve. MEASUREMENTS AND MAIN RESULTS: Fourteen-day mortality was 12.9%, and total in-hospital mortality was 29.5%. Serum amyloid P was significantly lower (4/4 timepoints) and tissue plasminogen activator significantly higher (3/4 timepoints) in the 14-day mortality group, and the same pattern held for total in-hospital mortality (Wilcoxon p ≤ 0.046 for all timepoints). Serum amyloid P and tissue plasminogen activator demonstrated the best individual predictive performance for mortality, and combinations of biomarkers including serum amyloid P and tissue plasminogen activator achieved greater predictive performance (area under the receiver operating characteristic curve > 0.76 for 14-d and 0.74 for total mortality).
CONCLUSIONS: Combined biomarkers predict risk for 14-day and total mortality among subjects with suspected sepsis. Serum amyloid P and tissue plasminogen activator demonstrated the best discriminatory ability in this cohort.
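For reference, the area under the receiver operating characteristic curve cited above equals the probability that a randomly chosen decedent has a higher biomarker value than a randomly chosen survivor. A minimal rank-based sketch with made-up tissue plasminogen activator values (not study data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, with ties counting half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical biomarker levels in decedents vs survivors:
died = [95, 80, 60, 55]
survived = [50, 40, 62, 30, 20]
print(auc(died, survived))
```

An AUC of 0.5 means no discrimination; the reported values above 0.74 indicate that the combined biomarkers rank most decedents above most survivors.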
Subjects
Critical Illness/mortality, Sepsis/mortality, Aged, Biomarkers/blood, C-Reactive Protein/analysis, Cohort Studies, Ferritins/blood, Fibrinogen/analysis, Haptoglobins/analysis, Hospital Mortality, Humans, Male, Middle Aged, Predictive Value of Tests, Procalcitonin/blood, Sepsis/blood, Sepsis/diagnosis, Serum Amyloid A Protein/analysis, Serum Amyloid P Component/analysis, Tissue Plasminogen Activator/blood, alpha-Macroglobulins/analysis
ABSTRACT
BACKGROUND: Acute kidney injury often goes unrecognised in its early stages when effective treatment options might be available. We aimed to determine whether an automated electronic alert for acute kidney injury would reduce the severity of such injury and improve clinical outcomes in patients in hospital. METHODS: In this investigator-masked, parallel-group, randomised controlled trial, patients were recruited from the hospital of the University of Pennsylvania in Philadelphia, PA, USA. Eligible participants were adults aged 18 years or older who were in hospital with stage 1 or greater acute kidney injury as defined by Kidney Disease Improving Global Outcomes creatinine-based criteria. Exclusion criteria were initial hospital creatinine 4·0 mg/dL (to convert to µmol/L, multiply by 88·4) or greater, fewer than two creatinine values measured, inability to determine the covering provider, admission to hospice or the observation unit, previous randomisation, or end-stage renal disease. Patients were randomly assigned (1:1) via a computer-generated sequence to receive an acute kidney injury alert (a text-based alert sent to the covering provider and unit pharmacist indicating new acute kidney injury) or usual care, stratified by medical versus surgical admission and intensive care unit versus non-intensive care unit location in blocks of 4-8 participants. The primary outcome was a composite of relative maximum change in creatinine, dialysis, and death at 7 days after randomisation. All analyses were by intention to treat. This study is registered with ClinicalTrials.gov, number NCT01862419. FINDINGS: Between Sept 17, 2013, and April 14, 2014, 23,664 patients were screened. 1201 eligible participants were assigned to the acute kidney injury alert group and 1192 were assigned to the usual care group. 
Composite relative maximum change in creatinine, dialysis, and death at 7 days did not differ between the alert group and the usual care group (p=0·88), or within any of the four randomisation strata (all p>0·05). At 7 days after randomisation, median maximum relative change in creatinine concentrations was 0·0% (IQR 0·0-18·4) in the alert group and 0·6% (0·0-17·5) in the usual care group (p=0·81); 87 (7·2%) patients in the alert group and 70 (5·9%) patients in usual care group had received dialysis (odds ratio 1·25 [95% CI 0·90-1·74]; p=0·18); and 71 (5·9%) patients in the alert group and 61 (5·1%) patients in the usual care group had died (1·16 [0·81-1·68]; p=0·40). INTERPRETATION: An electronic alert system for acute kidney injury did not improve clinical outcomes among patients in hospital. FUNDING: Penn Center for Healthcare Improvement and Patient Safety.
Subjects
Acute Kidney Injury/diagnosis, Electronic Health Records, Adult, Aged, Automation, Biomarkers/metabolism, Cell Phone, Creatinine/metabolism, Early Diagnosis, Female, Hospitalization/statistics & numerical data, Humans, Male, Middle Aged, Prognosis, Single-Blind Method, Young Adult
ABSTRACT
BACKGROUND: Increasing numbers of intensive care units (ICUs) are adopting the practice of nighttime intensivist staffing despite the lack of experimental evidence of its effectiveness. METHODS: We conducted a 1-year randomized trial in an academic medical ICU of the effects of nighttime staffing with in-hospital intensivists (intervention) as compared with nighttime coverage by daytime intensivists who were available for consultation by telephone (control). We randomly assigned blocks of 7 consecutive nights to the intervention or the control strategy. The primary outcome was patients' length of stay in the ICU. Secondary outcomes were patients' length of stay in the hospital, ICU and in-hospital mortality, discharge disposition, and rates of readmission to the ICU. For length-of-stay outcomes, we performed time-to-event analyses, with data censored at the time of a patient's death or transfer to another ICU. RESULTS: A total of 1598 patients were included in the analyses. The median Acute Physiology and Chronic Health Evaluation (APACHE) III score (in which scores range from 0 to 299, with higher scores indicating more severe illness) was 67 (interquartile range, 47 to 91), the median length of stay in the ICU was 52.7 hours (interquartile range, 29.0 to 113.4), and mortality in the ICU was 18%. Patients who were admitted on intervention days were exposed to nighttime intensivists on more nights than were patients admitted on control days (median, 100% of nights [interquartile range, 67 to 100] vs. median, 0% [interquartile range, 0 to 33]; P<0.001). Nonetheless, intensivist staffing on the night of admission did not have a significant effect on the length of stay in the ICU (rate ratio for the time to ICU discharge, 0.98; 95% confidence interval [CI], 0.88 to 1.09; P=0.72), ICU mortality (relative risk, 1.07; 95% CI, 0.90 to 1.28), or any other end point. 
Analyses restricted to patients who were admitted at night showed similar results, as did sensitivity analyses that used different definitions of exposure and outcome. CONCLUSIONS: In an academic medical ICU in the United States, nighttime in-hospital intensivist staffing did not improve patient outcomes. (Funded by University of Pennsylvania Health System and others; ClinicalTrials.gov number, NCT01434823.).
Subjects
Hospital Mortality, Hospitalists, Intensive Care Units, Personnel Staffing and Scheduling, Aged, Female, University Hospitals, Humans, Kaplan-Meier Estimate, Length of Stay, Male, Middle Aged, Pennsylvania, Workforce
ABSTRACT
OBJECTIVES: Hospital readmission is common after sepsis, yet the relationship between the index admission and readmission remains poorly understood. We sought to examine the relationship between infection during the index acute care hospitalization and readmission and to identify potentially modifiable factors during the index sepsis hospitalization associated with readmission. DESIGN: In a retrospective cohort study, we evaluated 444 sepsis survivors at risk of an unplanned hospital readmission in 2012. The primary outcome was 30-day unplanned hospital readmission. SETTING: Three hospitals within an academic healthcare system. SUBJECTS: Four hundred forty-four sepsis survivors. MEASUREMENTS AND MAIN RESULTS: Of 444 sepsis survivors, 23.4% (95% CI, 19.6-27.6%) experienced an unplanned 30-day readmission compared with 10.1% (95% CI, 9.6-10.7%) among 11,364 nonsepsis survivors over the same time period. The most common cause for readmission after sepsis was infection (69.2%, 72 of 104). Among infection-related readmissions, 51.4% were categorized as recurrent/unresolved. Patients with sepsis present on their index admission who also developed a hospital-acquired infection ("second hit") were nearly twice as likely to have an unplanned 30-day readmission compared with those who presented with sepsis at admission and did not develop a hospital-acquired infection or those who presented without infection and then developed hospital-acquired sepsis (38.6% vs 22.2% vs 20.0%, p = 0.04). Infection-related hospital readmissions, specifically, were more likely in patients with a "second hit" and patients receiving a longer duration of antibiotics. The use of total parenteral nutrition (p = 0.03), longer duration of antibiotics (p = 0.047), prior hospitalizations, and lower discharge hemoglobin (p = 0.04) were independently associated with hospital readmission. CONCLUSIONS: We confirmed that the majority of unplanned hospital readmissions after sepsis are due to an infection. 
We found that patients with sepsis at admission who developed a hospital-acquired infection, and those who received a longer duration of antibiotics, appear to be high-risk groups for unplanned, all-cause 30-day readmissions and infection-related 30-day readmissions.
Subjects
Hospitalization, Patient Readmission/statistics & numerical data, Sepsis/therapy, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Drug Administration Schedule, Female, Humans, Iatrogenic Disease/prevention & control, Logistic Models, Male, Middle Aged, Pennsylvania, Retrospective Studies, Risk Factors, Time Factors
ABSTRACT
Sepsis remains a diagnostic challenge in the intensive care unit (ICU), and the use of biomarkers may help in differentiating bacterial sepsis from other causes of systemic inflammatory response syndrome (SIRS). The goal of this study was to assess test characteristics of a number of biomarkers for identifying ICU patients with a very low likelihood of bacterial sepsis. A prospective cohort study was conducted in a medical ICU of a university hospital. Immunocompetent patients with presumed bacterial sepsis were consecutively enrolled from January 2012 to May 2013. Concentrations of nine biomarkers (α-2 macroglobulin, C-reactive protein [CRP], ferritin, fibrinogen, haptoglobin, procalcitonin [PCT], serum amyloid A, serum amyloid P, and tissue plasminogen activator) were determined at baseline and at 24 h, 48 h, and 72 h after enrollment. Performance characteristics were calculated for various combinations of biomarkers for discrimination of bacterial sepsis from other causes of SIRS. Seventy patients were included during the study period; 31 (44%) had bacterial sepsis, and 39 (56%) had other causes of SIRS. PCT and CRP values were significantly higher at all measured time points in patients with bacterial sepsis. A number of combinations of PCT and CRP, using various cutoff values and measurement time points, demonstrated high negative predictive values (81.1% to 85.7%) and specificities (63.2% to 79.5%) for diagnosing bacterial sepsis. Combinations of PCT and CRP demonstrated a high ability to discriminate bacterial sepsis from other causes of SIRS in medical ICU patients. Future studies should focus on the use of these algorithms to improve antibiotic use in the ICU setting.
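Because the stated goal is identifying patients with a very low likelihood of bacterial sepsis, the key quantity is the negative predictive value, which depends on prevalence as well as on sensitivity and specificity. A sketch using this cohort's 44% prevalence but illustrative (assumed) sensitivity and specificity values for a PCT/CRP rule:

```python
def npv(sens, spec, prevalence):
    # Negative predictive value by Bayes' rule:
    # P(no bacterial sepsis | negative test).
    true_neg = spec * (1 - prevalence)
    false_neg = (1 - sens) * prevalence
    return true_neg / (true_neg + false_neg)

# 44% prevalence comes from the cohort above; the operating point
# (sensitivity 90%, specificity 70%) is an assumed example.
print(round(npv(0.90, 0.70, 0.44), 3))
```

Note that the same rule applied in a lower-prevalence population would yield a substantially higher NPV, which is why rule-out performance must always be read against the local pretest probability.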
Subjects
Bacterial Infections/diagnosis, C-Reactive Protein/metabolism, Calcitonin/blood, Protein Precursors/blood, Sepsis/diagnosis, Systemic Inflammatory Response Syndrome/diagnosis, Adult, Aged, Aged 80 and over, Algorithms, Bacterial Infections/blood, Bacterial Infections/immunology, Bacterial Infections/microbiology, Biomarkers/blood, Calcitonin Gene-Related Peptide, Differential Diagnosis, Female, Humans, Immunocompetence, Intensive Care Units, Male, Middle Aged, Predictive Value of Tests, Prospective Studies, Sepsis/blood, Sepsis/immunology, Sepsis/microbiology, Systemic Inflammatory Response Syndrome/blood, Systemic Inflammatory Response Syndrome/immunology
ABSTRACT
OBJECTIVES: Septic shock is associated with increased long-term morbidity and mortality. However, little is known about the use of hospital-based acute care in survivors after hospital discharge. The objectives of the study were to examine the frequency, timing, causes, and risk factors associated with emergency department visits and hospital readmissions within 30 days of discharge. DESIGN: Retrospective cohort study. SETTING: Tertiary, academic hospital in the United States. PATIENTS: Patients admitted with septic shock (serum lactate ≥ 4 mmol/L or refractory hypotension) and discharged alive to a nonhospice setting between 2007 and 2010. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The coprimary outcomes were all-cause hospital readmission and emergency department visits (treat-and-release encounters) within 30 days to any of the three health system hospitals. Of 269 at-risk survivors, 63 (23.4%; 95% CI, 18.2-28.5) were readmitted within 30 days of discharge and another 12 (4.5%; 95% CI, 2.3-7.7) returned to the emergency department for a treat-and-release visit. Readmissions occurred within 15 days of discharge in 75% of cases and were more likely in oncology patients (p=0.001) and patients with a longer hospital length of stay (p=0.04). Readmissions were frequently due to another life-threatening condition and resulted in death or discharge to hospice in 16% of cases. The reasons for readmission were deemed potentially related to the index septic shock hospitalization in 78% (49 of 63) of cases. The most common cause was infection related, accounting for 46% of all 30-day readmissions, followed by cardiovascular or thromboembolic events (18%). CONCLUSIONS: The use of hospital-based acute care appeared to be common in septic shock survivors. Encounters often led to readmission within 15 days of discharge, were frequently due to another acute condition, and appeared to result in substantial morbidity and mortality.
Given the potential public health implications of these findings, validation studies are needed.
Subjects
Hospital Emergency Service/statistics & numerical data, Patient Readmission/statistics & numerical data, Septic Shock/therapy, Cohort Studies, Female, Humans, Length of Stay, Male, Middle Aged, Retrospective Studies, United States
ABSTRACT
BACKGROUND: Acute kidney injury is common in hospitalized patients, increases morbidity and mortality, and is under-recognized. To improve provider recognition, we previously developed an electronic alert system for acute kidney injury. To test the hypothesis that this electronic acute kidney injury alert could improve patient outcome, we designed a randomized controlled trial to test the effectiveness of this alert in hospitalized patients. The study design presented several methodologic, ethical, and statistical challenges. PURPOSE: To highlight the challenges faced and the solutions employed in the design and implementation of a clinical trial to determine whether the provision of an early electronic alert for acute kidney injury would improve outcomes in hospitalized patients. Challenges included how to randomize the delivery of the alert system and the ethical framework for waiving informed consent. Other methodologic challenges included the selection and statistical evaluation of our study outcome, a ranked-composite of a continuous covariate (creatinine) and two dichotomous outcomes (dialysis and death), and the use of the medical record as a source of trial data. METHODS: We have designed a randomized trial to assess the effectiveness of an electronic alert system for acute kidney injury. With broad inclusion criteria, and a waiver of informed consent, we enroll and randomize virtually every patient with acute kidney injury in our hospital. RESULTS: As of 31 March 2014, we have enrolled 2373 patients of 2400 targeted. Pre-alert data demonstrated a strong association between severity of acute kidney injury and inpatient mortality with a range of 6.4% in those with mild, stage 1 acute kidney injury, to 29% among those with stage 3 acute kidney injury (p < 0.001). We judged that informed consent would undermine the scientific validity of the study and present harms that are out of proportion to the very low risk intervention. 
CONCLUSION: Our study demonstrates the feasibility of designing an ethical randomized controlled trial of an early electronic alert for acute kidney injury without obtaining informed consent from individual participants. Our study outcome may serve as a model for other studies of acute kidney injury, insofar as our paradigm accounts for the effect that early death and dialysis have on assessment of acute kidney injury severity as defined by maximum achieved serum creatinine.
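The ranked-composite outcome described above can be made concrete with a small sketch: order patients from best to worst, with dialysis-free survivors ranked by maximum relative creatinine change, then dialysis recipients, then deaths. This is one plausible rendering of the idea, not the trial's actual analytic code:

```python
def ranked_composite(patients):
    """Order patients from best to worst outcome: dialysis-free
    survivors ranked by max relative creatinine change (lower is
    better), then dialysis recipients, then deaths. Groups would
    then be compared with a rank-based test."""
    def key(p):
        if p["died"]:
            return (2, 0.0)       # worst stratum
        if p["dialysis"]:
            return (1, 0.0)       # middle stratum
        return (0, p["cr_change"])  # ranked within best stratum
    return sorted(patients, key=key)

# Hypothetical patients (values are illustrative only):
cohort = [
    {"id": "A", "died": False, "dialysis": False, "cr_change": 0.18},
    {"id": "B", "died": True,  "dialysis": False, "cr_change": 0.90},
    {"id": "C", "died": False, "dialysis": True,  "cr_change": 0.60},
    {"id": "D", "died": False, "dialysis": False, "cr_change": 0.02},
]
print([p["id"] for p in ranked_composite(cohort)])
```

The ranking is what lets early death or dialysis count as worse than any creatinine trajectory, addressing the censoring problem the authors describe.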
Subjects
Acute Kidney Injury/diagnosis, Creatinine/blood, Electronic Health Records, Hospitalization, Acute Kidney Injury/blood, Double-Blind Method, Electronic Data Processing, Humans, Outcome Assessment in Health Care
ABSTRACT
BACKGROUND: Endotracheal aspirates (ETAs) are widely used for microbiologic studies of the respiratory tract in intubated patients. However, they involve sampling through an established endotracheal tube using suction catheters, both of which can acquire biofilms that may confound results. RESEARCH QUESTION: Does standard clinical ETA in intubated patients accurately reflect the authentic lower airway bacterial microbiome? STUDY DESIGN AND METHODS: Comprehensive quantitative bacterial profiling using 16S rRNA V1-V2 gene sequencing was applied to compare bacterial populations captured by standard clinical ETA vs contemporaneous gold standard samples acquired directly from the lower airways through a freshly placed sterile tracheostomy tube. The study included 13 patients undergoing percutaneous tracheostomy following prolonged (median, 15 days) intubation. Metrics of bacterial composition, diversity, and relative quantification were applied to samples. RESULTS: Pre-tracheostomy ETAs closely resembled the gold standard immediate post-tracheostomy airway microbiomes in bacterial composition and community features of diversity and quantification. Endotracheal tube and suction catheter biofilms also resembled cognate ETA and fresh tracheostomy communities. INTERPRETATION: Unbiased molecular profiling shows that standard clinical ETA sampling has good concordance with the authentic lower airway microbiome in intubated patients.
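Community "diversity" in 16S rRNA studies like this one is typically summarized with alpha-diversity metrics. A minimal sketch of the Shannon index with hypothetical per-taxon read counts (the abstract does not specify which metrics the study used):

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over taxon relative
    abundances; higher values mean a richer, more even community."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

# Hypothetical read counts per taxon for two paired samples:
eta = [500, 300, 150, 50]    # endotracheal aspirate
trach = [480, 320, 140, 60]  # fresh tracheostomy aspirate
print(round(shannon_diversity(eta), 3), round(shannon_diversity(trach), 3))
```

Closely matching indices in paired samples, as in this toy pair, is the kind of evidence behind the concordance claim above; the real comparison would also examine composition, not diversity alone.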
Subjects
Intratracheal Intubation, Microbiota, 16S Ribosomal RNA, Tracheostomy, Humans, Male, Female, Tracheostomy/methods, Tracheostomy/instrumentation, Middle Aged, Aged, Biofilms, Bacteria/isolation & purification, Bacteria/genetics, Suction
ABSTRACT
BACKGROUND & AIMS: Despite advances in critical care medicine, the mortality rate is high among critically ill patients with cirrhosis. We aimed to identify factors that predict early (7 d) mortality among patients with cirrhosis admitted to the intensive care unit (ICU) and to develop a risk-stratification model. METHODS: We collected data from patients with cirrhosis admitted to the ICU at Indiana University (IU-ICU) from December 1, 2006, through December 31, 2009 (n = 185), or at the University of Pennsylvania (Penn-ICU) from May 1, 2005, through December 31, 2010 (n = 206). Factors associated with mortality within 7 days of admission (7-d mortality) were determined by logistic regression analyses. A model was constructed based on the predictive parameters available on the first day of ICU admission in the IU-ICU cohort and then validated in the Penn-ICU cohort. RESULTS: Median Model for End-stage Liver Disease (MELD) scores at ICU admission were 25 in the IU-ICU cohort (interquartile range, 23-34) and 32 in the Penn-ICU cohort (interquartile range, 26-41); corresponding 7-day mortalities were 28.3% and 53.6%, respectively. MELD score (odds ratio, 1.13; 95% confidence interval [CI], 1.07-1.2) and mechanical ventilation (odds ratio, 5.7; 95% CI, 2.3-14.1) were associated independently with 7-day mortality in the IU-ICU. A model based on these 2 variables separated IU-ICU patients into low-, medium-, and high-risk groups; these groups had 7-day mortalities of 9%, 27%, and 74%, respectively (concordance index, 0.80; 95% CI, 0.72-0.87; P < 10⁻⁸). The model was applied to the Penn-ICU cohort; the low-, medium-, and high-risk groups had 7-day mortalities of 33%, 56%, and 71%, respectively (concordance index, 0.67; 95% CI, 0.59-0.74; P < 10⁻⁴). CONCLUSIONS: A model based on MELD score and mechanical ventilation on day 1 can stratify risk of early mortality in patients with cirrhosis admitted to the ICU.
More studies are needed to validate this model and to enhance its clinical utility.
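To illustrate how a two-variable model like this produces risk strata, the published odds ratios can be placed in a logistic function. The intercept below is an arbitrary placeholder (not reported in the abstract), so the absolute probabilities are illustrative only; what carries over is the relative ordering of risk:

```python
import math

def early_mortality_prob(meld, ventilated, intercept=-5.0):
    """Logistic model in the spirit of the one described above, using
    the published odds ratios (1.13 per MELD point, 5.7 for mechanical
    ventilation). The intercept is an ASSUMED placeholder, so absolute
    probabilities here are not the study's calibrated estimates."""
    logit = intercept + math.log(1.13) * meld + math.log(5.7) * ventilated
    return 1 / (1 + math.exp(-logit))

low = early_mortality_prob(meld=20, ventilated=0)
high = early_mortality_prob(meld=35, ventilated=1)
print(low < high)  # higher MELD plus ventilation yields higher predicted risk
```

Cut points on the predicted probability (or on the linear score) would then define the low-, medium-, and high-risk groups reported above.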
Subjects
Liver Cirrhosis/mortality, Adult, Aged, Cohort Studies, Female, Humans, Indiana, Intensive Care Units, Liver Cirrhosis/pathology, Male, Middle Aged, Statistical Models, Pennsylvania, Prognosis, Artificial Respiration, Retrospective Studies, Severity of Illness Index, Survival Analysis
ABSTRACT
OBJECTIVE: The epidemiology of severe sepsis is derived from administrative databases that rely on International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes to select cases. We compared the sensitivity of two code abstraction methods in identifying severe sepsis cases using a severe sepsis registry. DESIGN: Single-center retrospective cohort study. SETTING: Tertiary care, academic university hospital. PATIENTS: One thousand seven hundred thirty-five patients with severe sepsis or septic shock. INTERVENTIONS: None. MEASUREMENTS: Proportion identified as severe sepsis using two code abstraction methods: 1) the new specific ICD-9 codes for severe sepsis and septic shock, and 2) a validated method requiring two ICD-9 codes for infection and end-organ dysfunction. Multivariable logistic regression was performed to determine sociodemographic and clinical characteristics associated with documentation and coding accuracy. MAIN RESULTS: The strategy combining a code for infection and end-organ dysfunction was more sensitive in identifying cases than the method requiring specific ICD-9 codes for severe sepsis or septic shock (47% vs. 21%). Elevated serum lactate level (p<0.001), ICU admission (p<0.001), presence of shock (p<0.001), bacteremia as the source of sepsis (p=0.02), and increased Acute Physiology and Chronic Health Evaluation II score (p<0.001) were independently associated with being appropriately documented and coded. The 28-day mortality was significantly higher in those who were accurately documented/coded (41%, compared with 14% in those who were not, p<0.001), reflective of a more severe presentation on admission. CONCLUSIONS: Patients admitted with severe sepsis and septic shock were incompletely documented and under-coded using either ICD-9 code abstracting method. Documentation and subsequent coding of severe sepsis were more common in more severely ill patients.
These findings are important when evaluating current national estimates and when interpreting epidemiologic studies of severe sepsis, as cohorts derived from claims-based strategies appear to be biased toward a more severely ill patient population.
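The headline comparison above is a simple sensitivity calculation against the registry gold standard. A minimal sketch; the patient counts are back-calculated from the reported 47% and 21% of the 1,735 registry cases and are therefore approximate:

```python
# Sensitivity of each ICD-9 abstraction strategy against the
# 1,735-patient severe sepsis registry (gold standard).
# Counts are approximations recovered from the reported percentages.

REGISTRY_CASES = 1735

def sensitivity(identified: int, gold_standard: int) -> float:
    """Fraction of gold-standard cases captured by a coding strategy."""
    return identified / gold_standard

combined = round(0.47 * REGISTRY_CASES)  # infection + organ-dysfunction codes
specific = round(0.21 * REGISTRY_CASES)  # explicit severe sepsis/septic shock codes

print(f"combined: {sensitivity(combined, REGISTRY_CASES):.0%}")  # combined: 47%
print(f"specific: {sensitivity(specific, REGISTRY_CASES):.0%}")  # specific: 21%
```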
Subjects
Abstracting and Indexing/methods , Critical Illness/classification , Critical Illness/epidemiology , Sepsis/classification , Sepsis/epidemiology , Severity of Illness Index , Adult , Aged , Cohort Studies , Critical Care , Female , Hospitals, University , Humans , International Classification of Diseases , Male , Medical Records/statistics & numerical data , Middle Aged , Retrospective Studies , Risk Assessment , Sensitivity and Specificity , Sepsis/diagnosis , Shock, Septic/classification , Shock, Septic/epidemiology
ABSTRACT
OBJECTIVES: Formal guidelines recommend that therapeutic hypothermia be considered after in-hospital cardiac arrest. The rate of therapeutic hypothermia use after in-hospital cardiac arrest and details about its implementation are unknown. We aimed to determine the use of therapeutic hypothermia for adult in-hospital cardiac arrest, whether use has increased over time, and to identify factors associated with its use. DESIGN: Multicenter, prospective cohort study. SETTING: A total of 538 hospitals participating in the Get With the Guidelines-Resuscitation database (2003-2009). PATIENTS: A total of 67,498 patients who had return of spontaneous circulation after in-hospital cardiac arrest. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The primary outcome was the initiation of therapeutic hypothermia. We measured the proportion of therapeutic hypothermia patients who achieved target temperature (32-34 °C) and the proportion who were overcooled. Of 67,498 patients, therapeutic hypothermia was initiated in 1,367 patients (2.0%). The target temperature (32-34 °C) was not achieved within 24 hours in 44.3% of therapeutic hypothermia patients, and 17.6% were overcooled. The use of therapeutic hypothermia increased from 0.7% in 2003 to 3.3% in 2009 (p < 0.001). We found that younger age (p < 0.001) and occurrence in a non-ICU location (p < 0.001), on a weekday (p = 0.005), and in a teaching hospital (p = 0.001) were associated with an increased likelihood of therapeutic hypothermia being initiated. CONCLUSIONS: After in-hospital cardiac arrest, therapeutic hypothermia was rarely used. Once initiated, the target temperature was commonly not achieved. The frequency of use increased over time but remained low. Factors associated with therapeutic hypothermia use included patient age, time and location of occurrence, and type of hospital.
Subjects
Heart Arrest/therapy , Hypothermia, Induced/statistics & numerical data , Academic Medical Centers/statistics & numerical data , Age Factors , Aged , Comorbidity , Diffusion of Innovation , Female , Guideline Adherence/statistics & numerical data , Guideline Adherence/trends , Humans , Male , Middle Aged , Practice Guidelines as Topic , Residence Characteristics/statistics & numerical data , Temperature , Time Factors
ABSTRACT
BACKGROUND: We report a novel approach to mortality review using a 360° survey and a multidisciplinary mortality committee (MMC) to optimize efforts to improve inpatient care. METHODS: In 2009, a 16-item, 360° compulsory quality improvement survey was implemented for mortality review. Descriptive statistics were performed to compare the responses by provider specialty, profession, and level of training using the Fisher exact and chi-square tests, as appropriate. We compared the agreement between the MMC review and provider-reported classification regarding the preventability of each death using the Cohen kappa coefficient. A qualitative review of the 360° information was performed to identify quality opportunities. RESULTS: Completed surveys (n = 3095) were submitted for 1683 patients. The possibility of a preventable death was suggested in the 360° survey for 42 patients (1.40%). We identified 502 patients (29.83%) with completed 360° surveys who underwent MMC review. The inter-rater reliability between the provider opinions regarding preventable death and the MMC review was poor (kappa = 0.10, P < 0.001). Of the 42 cases identified by the 360° survey as preventable deaths, 15 underwent MMC review; 3 were classified as preventable and 12 were deemed unavoidable. Qualitative analyses of the 12 discrepancies did reveal quality issues; however, these issues were not deemed responsible for the patients' deaths. CONCLUSIONS: The mortality survey yielded important information regarding inpatient deaths that historically was buried with the patient. The poor agreement between the 360° survey responses and an objective MMC review supports the need for a multipronged approach to evaluating inpatient mortality.
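The kappa coefficient used above to quantify provider-versus-committee agreement can be computed directly for two binary raters. A minimal sketch; the rating vectors below are made up for illustration and are not the study data:

```python
def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa for two raters making binary (0/1) judgments,
    e.g. preventable (1) vs. unavoidable (0) death."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n                        # rater A's rate of "1"
    p_b = sum(rater_b) / n                        # rater B's rate of "1"
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # chance agreement
    return (observed - expected) / (1 - expected)

# Perfect agreement gives kappa = 1; chance-level agreement gives kappa
# near 0 (the study's kappa of 0.10 is in that "poor" range).
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```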
Subjects
Hospital Mortality , Outcome Assessment, Health Care/standards , Professional Staff Committees/standards , Quality Assurance, Health Care/methods , Academic Medical Centers/standards , Female , Health Care Surveys , Humans , Internship and Residency/standards , Male , Medical Staff, Hospital/standards , Nurse Practitioners/standards , Patient Care Team/standards , Physician Assistants/standards , Respiratory Therapy/standards , Retrospective Studies , Tertiary Care Centers/standards
ABSTRACT
AIMS: Modification of the mortality risk associated with acute kidney injury (AKI) necessitates recognition of AKI when it occurs. We sought to determine whether formal documentation of AKI in the medical record, assessed by billing codes for AKI, would be associated with improved clinical outcomes. METHODS: Retrospective cohort study conducted at three hospitals within a single university health system. Adults without severe underlying kidney disease who suffered in-hospital AKI, as defined by a doubling of baseline creatinine (n = 5,438), were included. Those whose AKI was formally documented according to discharge billing codes were compared to those without such documentation in terms of 30-day mortality. RESULTS: Formal documentation of AKI occurred in 2,325 patients (43%). Higher baseline creatinine, higher peak creatinine, medical admission status, and higher Sequential Organ Failure Assessment (SOFA) score were strongly associated with documentation of AKI. After adjustment for severity of disease, formal AKI documentation was associated with reduced 30-day mortality (OR 0.81; CI, 0.68-0.96; p = 0.02). Patients with formal documentation were more likely to receive a nephrology consultation (31% vs. 6%, p < 0.001) and fluid boluses (64% vs. 45%, p < 0.001), and had more rapid discontinuation of angiotensin-converting enzyme inhibitor and angiotensin-receptor blocker medications (HR 2.04; CI, 1.69-2.46; p < 0.001). CONCLUSIONS: Formal documentation of AKI is associated with improved survival, after adjustment for illness severity, among patients with creatinine-defined AKI.
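The study's creatinine-based case definition (a doubling of baseline creatinine) is mechanical enough to express in code. A minimal sketch with hypothetical patient records, for illustration only:

```python
def creatinine_defined_aki(baseline_cr: float, peak_cr: float) -> bool:
    """In-hospital AKI per the study definition: peak serum creatinine
    at least double the baseline value (both in mg/dL)."""
    return peak_cr >= 2.0 * baseline_cr

# Hypothetical records, not study data
patients = [
    {"id": "A", "baseline": 0.9, "peak": 2.1},   # 2.33x baseline -> AKI
    {"id": "B", "baseline": 1.1, "peak": 1.6},   # 1.45x baseline -> no AKI
]
aki_cases = [p["id"] for p in patients
             if creatinine_defined_aki(p["baseline"], p["peak"])]
print(aki_cases)  # ['A']
```

Screening admissions with a rule like this identifies creatinine-defined AKI regardless of whether the event was ever captured in discharge billing codes, which is exactly the gap the study measures.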
Subjects
Acute Kidney Injury/mortality , Documentation , Acute Kidney Injury/blood , Acute Kidney Injury/drug therapy , Adult , Aged , Cohort Studies , Creatinine/blood , Female , Humans , Male , Middle Aged , Retrospective Studies
ABSTRACT
BACKGROUND: Despite the importance of reducing inpatient mortality, little has been reported about establishing a hospitalwide, systematic process to review and address inpatient deaths. In 2006 the University of Pennsylvania Health System's Mortality Review Committee was established and charged with reducing inpatient mortality as measured by the mortality index (observed/expected mortality). METHODS: Between 2006 and 2012, through interdisciplinary meetings and analysis of administrative data and chart reviews, the Mortality Review Committee identified a number of opportunities for improvement in the quality of patient care. Several programmatic interventions, such as those aimed at improving sepsis and delirium recognition and management, were initiated through the committee. RESULTS: During the committee's first six years of activity, the University HealthSystem Consortium (UHC) mortality index decreased from 1.08 to 0.53, with observed mortality decreasing from 2.45% to 1.62%. Interventions aimed at improving sepsis management, implemented between 2007 and 2008, were associated with increases in severe sepsis survival from 40% to 56% and septic shock survival from 42% to 54%. The mortality index for sepsis decreased from 2.45 to 0.88. Efforts aimed at improving delirium management, implemented between 2008 and 2009, were associated with an increase in the proportion of patients receiving a "timely" intervention from 18% to 57% and with a twofold increase in the percentage of patients discharged to home. DISCUSSION: The establishment of a mortality review committee was associated with a significant reduction in the mortality index. Keys to success include interdisciplinary membership, partnerships with local providers, and a multipronged approach to identifying important clinical opportunities and to implementing effective interventions.
Subjects
Advisory Committees/organization & administration , Hospital Mortality/trends , Hospitals, Teaching/organization & administration , Quality Improvement/organization & administration , Accidental Falls/mortality , Caregivers , Communication , Delirium/mortality , Hospice Care , Humans , Information Systems/organization & administration , Patient Satisfaction , Pennsylvania , Quality Indicators, Health Care , Sepsis/mortality
ABSTRACT
BACKGROUND: Clinical documentation is critical to health care quality and cost. The generally poor quality of such documentation has been well recognized, yet medical students, residents, and physicians receive little or no training in it. When clinical documentation quality (CDQ) training for residents and/or physicians is provided, it excludes key constructs of self-efficacy: vicarious learning (e.g., peer demonstration) and mastery (i.e., practice). CDQ training that incorporates these key self-efficacy constructs is more resource intensive. If such training could be shown to be more effective at enhancing clinician performance, it would support the investment of the additional resources required by health care systems and residency training programs. PURPOSES: The aim of this study was to test the impact of CDQ training on clinician self-efficacy and performance and the relative efficacy of intervention designs employing two versus all four self-efficacy constructs. METHODOLOGY/APPROACH: Ninety-one internal medicine residents at a major academic medical center in the northeastern United States were assigned to one of two self-efficacy-based training groups or a control group, with CDQ and clinical documentation self-efficacy measured before and after the interventions. A structural equation model (AMOS) allowed for testing the six hypotheses in the context of the whole study, and findings were cross-validated using traditional regression. FINDINGS: Although both interventions increased CDQ, the training designed to include all four self-efficacy constructs had a significantly greater impact on improving CDQ. It also increased self-efficacy. PRACTICE IMPLICATIONS: CDQ may be significantly improved and sustained by (a) training physicians in clinical documentation and (b) employing all four self-efficacy constructs in such training designs.