1.
JAMA Netw Open; 3(5): e205191, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32427324

ABSTRACT

Importance: Risk scores used in early warning systems exist for general inpatients and patients with suspected infection outside the intensive care unit (ICU), but their relative performance is incompletely characterized. Objective: To compare the performance of tools used to determine points-based risk scores among all hospitalized patients, including those with and without suspected infection, for identifying those at risk for death and/or ICU transfer. Design, Setting, and Participants: In a cohort design, a retrospective analysis of prospectively collected data was conducted in 21 California and 7 Illinois hospitals between 2006 and 2018 among adult inpatients outside the ICU using points-based scores from 5 commonly used tools: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), Between the Flags (BTF), Quick Sequential Sepsis-Related Organ Failure Assessment (qSOFA), and Systemic Inflammatory Response Syndrome (SIRS). Data analysis was conducted from February 2019 to January 2020. Main Outcomes and Measures: Risk model discrimination was assessed in each state for predicting in-hospital mortality and the combined outcome of ICU transfer or mortality using the area under the receiver operating characteristic curve (AUC). Stratified analyses were also conducted based on suspected infection. Results: The study included 773,477 hospitalized patients in California (mean [SD] age, 65.1 [17.6] years; 416,605 women [53.9%]) and 713,786 hospitalized patients in Illinois (mean [SD] age, 61.3 [19.9] years; 384,830 women [53.9%]). The NEWS exhibited the highest discrimination for mortality (AUC, 0.87; 95% CI, 0.87-0.87 in California vs AUC, 0.86; 95% CI, 0.85-0.86 in Illinois), followed by the MEWS (AUC, 0.83; 95% CI, 0.83-0.84 in California vs AUC, 0.84; 95% CI, 0.84-0.85 in Illinois), qSOFA (AUC, 0.78; 95% CI, 0.78-0.79 in California vs AUC, 0.78; 95% CI, 0.77-0.78 in Illinois), SIRS (AUC, 0.76; 95% CI, 0.76-0.76 in California vs AUC, 0.76; 95% CI, 0.75-0.76 in Illinois), and BTF (AUC, 0.73; 95% CI, 0.73-0.73 in California vs AUC, 0.74; 95% CI, 0.73-0.74 in Illinois). At specific decision thresholds, the NEWS outperformed the SIRS and qSOFA at all 28 hospitals, either by reducing the percentage of at-risk patients who needed to be screened by 5% to 20% or by increasing the percentage of adverse outcomes identified by 3% to 25%. Conclusions and Relevance: In all hospitalized patients evaluated in this study, including those meeting criteria for suspected infection, the NEWS appeared to display the highest discrimination. Our results suggest that, among commonly used points-based scoring systems, using the NEWS for inpatient risk stratification could identify patients with and without infection at high risk of mortality.
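The discrimination comparison in this abstract (a per-score AUC with a confidence interval) can be illustrated with a minimal sketch. The DataFrame layout, column names, and the percentile-bootstrap CI below are assumptions for illustration, not the paper's exact method.

```python
# A minimal sketch of comparing points-based early warning scores by AUC,
# assuming a DataFrame `df` with one row per admission, hypothetical columns
# for each score, and a binary in-hospital mortality flag.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

SCORES = ["news", "mews", "qsofa", "sirs", "btf"]  # assumed column names

def auc_with_bootstrap_ci(y, s, n_boot=1000, seed=0):
    """Point-estimate AUC plus a percentile bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y, s)
    n = len(y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if y[idx].min() == y[idx].max():  # resample lacks both classes
            continue
        boots.append(roc_auc_score(y[idx], s[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi

def compare_scores(df: pd.DataFrame):
    y = df["mortality"].to_numpy()
    for name in SCORES:
        auc, lo, hi = auc_with_bootstrap_ci(y, df[name].to_numpy())
        print(f"{name.upper():5s} AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```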

3.
Ann Am Thorac Soc; 2020 Mar 03.
Article in English | MEDLINE | ID: mdl-32125877

ABSTRACT

RATIONALE: Determining when an intensive care unit (ICU) patient is ready for discharge to the ward is a complex daily challenge for any ICU care team. Patients who experience unplanned ICU readmissions have increased mortality, length of stay, and cost compared with those not readmitted during their hospital stay. The accuracy of clinician prediction of ICU readmission is unknown. OBJECTIVE: To determine the accuracy of ICU physicians and nurses for predicting ICU readmissions. METHODS: We conducted a prospective study in the medical ICU of an academic hospital from October 2015 to September 2017. After daily rounding for patients being transferred to the ward, ICU clinicians (nurses, residents, fellows, and attendings) were asked to report the likelihood of readmission within 48 hours (on a 1-10 scale, with 10 being "extremely likely"). The accuracy of the clinician prediction score (1-10) was assessed for all clinicians and by clinician type using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) for predicting the primary outcome, ICU readmission within 48 hours of ICU discharge. RESULTS: A total of 2,833 surveys were collected for 938 ICU-to-ward transfers, of which 40 (4%) were readmitted to the ICU within 48 hours of transfer. The median clinician likelihood-of-readmission score was 3 (IQR 2-4). When physician and nurse likelihood scores were combined, the median clinician likelihood score had an AUC of 0.70 (95% CI, 0.62-0.78) for predicting ICU readmission within 48 hours. Nurses were significantly more accurate than interns at predicting 48-hour ICU readmission (AUC 0.73 [95% CI, 0.64-0.82] vs. 0.60 [95% CI, 0.49-0.71]; p = 0.03). All other pairwise comparisons for predicting ICU readmission within 48 hours were not significantly different (p > 0.05 for all comparisons). CONCLUSIONS: We found that all clinicians surveyed in our ICU, regardless of level of experience or clinician type, had only fair accuracy for predicting ICU readmission. Further research is needed to determine whether clinical decision support tools would provide prognostic value above and beyond clinical judgment for determining who is ready for ICU discharge.

5.
Crit Care Med; 48(2): e152-e153, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31939815
7.
JAMA; 322(18): 1789-1798, 2019 Nov 12.
Article in English | MEDLINE | ID: mdl-31714985

ABSTRACT

Importance: In the United States, the number of deceased donor hearts available for transplant is limited. As a proxy for medical urgency, the US heart allocation system ranks heart transplant candidates largely according to the supportive therapy prescribed by transplant centers. Objective: To determine whether there is a significant association between transplant center and survival benefit in the US heart allocation system. Design, Setting, and Participants: Observational study of 29,199 adult candidates for heart transplant listed on the national transplant registry from January 2006 through December 2015, with follow-up complete through August 2018. Exposures: Transplant center. Main Outcomes and Measures: The survival benefit associated with heart transplant, defined as the difference between survival after heart transplant and waiting list survival without transplant at 5 years. Each transplant center's mean survival benefit was estimated using a mixed-effects proportional hazards model with transplant as a time-dependent covariate, adjusted for year of transplant, donor quality, ischemic time, and candidate status. Results: Of 29,199 candidates (mean age, 52 years; 26% women) on the transplant waiting list at 113 centers, 19,815 (68%) underwent heart transplant. Among heart transplant recipients, 5,389 (27%) died or underwent another transplant operation during the study period. Of the 9,384 candidates who did not undergo heart transplant, 5,669 (60%) died (2,644 while on the waiting list and 3,025 after being delisted). Estimated 5-year survival was 77% (interquartile range [IQR], 74% to 80%) among transplant recipients and 33% (IQR, 17% to 51%) among those who did not undergo heart transplant, a survival benefit of 44% (IQR, 27% to 59%). Survival benefit ranged from 30% to 55% across centers; 31 centers (27%) had significantly higher survival benefit than the mean, and 30 centers (27%) had significantly lower survival benefit than the mean. Compared with low survival benefit centers, high survival benefit centers performed heart transplant for patients with lower estimated expected waiting list survival without transplant (29% at high survival benefit centers vs 39% at low survival benefit centers; survival difference, -10% [95% CI, -12% to -8.1%]), although the adjusted 5-year survival after transplant was not significantly different between high and low survival benefit centers (77.6% vs 77.1%, respectively; survival difference, 0.5% [95% CI, -1.3% to 2.3%]). Overall, for every 10% decrease in estimated transplant candidate waiting list survival at a given center, there was an increase of 6.2% (95% CI, 5.2% to 7.3%) in the 5-year survival benefit associated with heart transplant. Conclusions and Relevance: In this registry-based study of US heart transplant candidates, transplant center was associated with the survival benefit of transplant. Although the adjusted 5-year survival after transplant was not significantly different between high and low survival benefit centers, centers with survival benefit significantly above the mean performed heart transplant for recipients who had significantly lower estimated expected 5-year waiting list survival without transplant than centers with survival benefit significantly below the mean.
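The survival-benefit model above is a proportional hazards regression with transplant as a time-dependent covariate. The sketch below shows the time-varying part with lifelines' CoxTimeVaryingFitter; the center-level random effect of the paper's mixed-effects model is not reproduced here (that would need a frailty-capable package such as R's coxme), and all column names are assumptions.

```python
# A rough sketch of a Cox model with transplant as a time-dependent
# covariate. Data must be in long (start/stop) format, with `transplant`
# switching from 0 to 1 at the transplant date for transplanted candidates.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

def fit_time_varying_cox(long_df: pd.DataFrame) -> CoxTimeVaryingFitter:
    ctv = CoxTimeVaryingFitter()
    ctv.fit(
        long_df[["id", "start", "stop", "event",
                 "transplant", "list_year", "donor_quality",
                 "ischemic_time", "candidate_status"]],
        id_col="id",
        start_col="start",
        stop_col="stop",
        event_col="event",
    )
    return ctv  # ctv.print_summary() shows the transplant hazard ratio
```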


Subjects
Heart Transplantation/mortality, Adult, Female, Humans, Male, Middle Aged, Patient Acuity, Quality of Health Care, Registries, Resource Allocation, Survival Analysis, United States/epidemiology, Waiting Lists
8.
Crit Care Med; 47(12): 1735-1742, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31599813

ABSTRACT

OBJECTIVES: The immune response during sepsis remains poorly understood and is likely influenced by the host's preexisting immunologic comorbidities. Although more than 20% of the U.S. population has an allergic-atopic disease, the type 2 immune response that is overactive in these diseases can also mediate beneficial pro-resolving, tissue-repair functions. Thus, the presence of allergic immunologic comorbidities may be advantageous for patients suffering from sepsis. The objective of this study was to test the hypothesis that comorbid type 2 immune diseases confer protection against morbidity and mortality due to acute infection. DESIGN: Retrospective cohort study of patients hospitalized with an acute infection between November 2008 and January 2016 using electronic health record data. SETTING: Single tertiary-care academic medical center. PATIENTS: Adults admitted to the hospital through the emergency department with likely infection at the time of admission, with or without a type 2 immune-mediated disease, defined as asthma, allergic rhinitis, atopic dermatitis, or food allergy, as determined by International Classification of Diseases, 9th Revision, Clinical Modification codes. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Of 10,789 admissions for infection, 2,578 (24%) had a type 2 disease; these patients were more likely to be female, black, and younger than patients without type 2 diseases. In unadjusted analyses, type 2 patients had decreased odds of dying during the hospitalization (odds ratio, 0.47; 95% CI, 0.38-0.59; p < 0.001), while having more than one type 2 disease conferred a dose-dependent reduction in the risk of mortality (p < 0.001). When adjusting for demographics, medications, types of infection, and illness severity, the presence of a type 2 disease remained protective (odds ratio, 0.55; 95% CI, 0.43-0.70; p < 0.001). Similar results were found using a propensity score analysis (odds ratio, 0.57; 95% CI, 0.45-0.71; p < 0.001). CONCLUSIONS: Patients with type 2 diseases admitted with acute infections have reduced mortality, implying that the type 2 immune response is protective in sepsis.

9.
Crit Care Med; 47(12): e962-e965, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31567342

ABSTRACT

OBJECTIVES: Early warning scores were developed to identify high-risk patients on the hospital wards. Research on early warning scores has focused on patients in short-term acute care hospitals, but there are other settings, such as long-term acute care hospitals, where these tools could be useful. However, the accuracy of early warning scores in long-term acute care hospitals is unknown. DESIGN: Observational cohort study. SETTING: Two long-term acute care hospitals in Illinois from January 2002 to September 2017. PATIENTS: Admitted adult long-term acute care hospital patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic characteristics, vital signs, laboratory values, nursing flowsheet data, and outcomes data were collected from the electronic health record. The accuracy of individual variables, the Modified Early Warning Score, the National Early Warning Score version 2, and our previously developed electronic Cardiac Arrest Risk Triage score were compared for predicting the need for acute hospital transfer or death using the area under the receiver operating characteristic curve. A total of 12,497 patient admissions were included, with 3,550 experiencing the composite outcome. The median age was 65 years (interquartile range, 54-74), 46% were female, and the median length of stay in the long-term acute care hospital was 27 days (interquartile range, 17-40 d), with an 8% in-hospital mortality. Laboratory values were the best predictors, with blood urea nitrogen being the most accurate (area under the receiver operating characteristic curve, 0.63), followed by albumin, bilirubin, and WBC count (area under the receiver operating characteristic curve, 0.61). Systolic blood pressure was the most accurate vital sign (area under the receiver operating characteristic curve, 0.60). Electronic Cardiac Arrest Risk Triage (area under the receiver operating characteristic curve, 0.72) was significantly more accurate than National Early Warning Score version 2 (area under the receiver operating characteristic curve, 0.66) and Modified Early Warning Score (area under the receiver operating characteristic curve, 0.65; p < 0.01 for all pairwise comparisons). CONCLUSIONS: In this retrospective cohort study, we found that the electronic Cardiac Arrest Risk Triage score was significantly more accurate than the Modified Early Warning Score and National Early Warning Score version 2 for predicting acute hospital transfer and mortality. Because laboratory values were more predictive than vital signs and the average length of stay in a long-term acute care hospital is much longer than in short-term acute care hospitals, developing a score specific to the long-term acute care hospital population would likely further improve accuracy, allowing earlier identification of high-risk patients for potentially life-saving interventions.

10.
PLoS One; 14(7): e0220640, 2019.
Article in English | MEDLINE | ID: mdl-31365580

ABSTRACT

BACKGROUND: Deep learning algorithms have achieved human-equivalent performance in image recognition. However, the majority of clinical data within electronic health records is inherently in a non-image format. Therefore, creating visual representations of clinical data could facilitate using cutting-edge deep learning models for predicting outcomes such as in-hospital mortality, while enabling clinician interpretability. The objective of this study was to develop a framework that first transforms longitudinal patient data into visual timelines and then utilizes deep learning to predict in-hospital mortality. METHODS AND FINDINGS: All consecutive adult patient admissions from 2008-2016 at a tertiary care center were included in this retrospective study. Two-dimensional visual representations for each patient were created with clinical variables on one dimension and time on the other. Predictors included vital signs, laboratory results, medications, interventions, nurse examinations, and diagnostic tests collected over the first 48 hours of the hospital stay. These visual timelines were fed to a convolutional neural network with a recurrent layer to predict in-hospital mortality. Seventy percent of the cohort was used for model derivation and 30% for independent validation. Of 115,825 hospital admissions, 2,926 (2.5%) suffered in-hospital mortality. Our model predicted in-hospital mortality significantly better than the Modified Early Warning Score (area under the receiver operating characteristic curve [AUC]: 0.91 vs. 0.76, P < 0.001) and the Sequential Organ Failure Assessment score (AUC: 0.91 vs. 0.57, P < 0.001) in the independent validation set. Class-activation heatmaps were utilized to highlight areas of the picture that were most important for making the prediction, thereby providing clinicians with insight into each individual patient's prediction. CONCLUSIONS: We converted longitudinal patient data into visual timelines and applied a deep neural network for predicting in-hospital mortality more accurately than current standard clinical models, while allowing for interpretation. Our framework holds promise for predicting several important outcomes in clinical medicine.
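A minimal Keras sketch of the kind of architecture the abstract describes: a 2D patient timeline (variables by time) passed through convolutional layers, reshaped into a sequence along the time axis, and fed to a recurrent layer for the mortality prediction. Input dimensions, layer sizes, and names are illustrative assumptions, not the authors' published model.

```python
# CNN over a (variables x time) patient "image", then an LSTM over the
# remaining time axis, ending in a sigmoid mortality output.
import tensorflow as tf
from tensorflow.keras import layers, models

N_VARS, N_STEPS = 100, 48  # assumed: 100 variables, hourly bins over 48 h

def build_cnn_rnn() -> tf.keras.Model:
    inp = layers.Input(shape=(N_VARS, N_STEPS, 1))
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # move time first, then flatten variables/channels into features
    x = layers.Permute((2, 1, 3))(x)   # -> (time, vars, channels)
    t = x.shape[1]
    x = layers.Reshape((t, -1))(x)     # -> (time, features)
    x = layers.LSTM(64)(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```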

11.
Crit Care Med; 47(10): 1371-1379, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31306176

ABSTRACT

OBJECTIVES: To assess outcomes in patients with suspected infection and the cost-effectiveness of implementing a quality improvement program. DESIGN, SETTING, AND PARTICIPANTS: We conducted an observational single-center study of 13,877 adults with suspected infection between March 1, 2014, and July 31, 2017. The 18-month periods before and after the effective date for mandated reporting of the sepsis bundle were examined. The Sequential Organ Failure Assessment score and culture and antibiotic orders were used to identify patients meeting Sepsis-3 criteria from the electronic health record. INTERVENTIONS: The following interventions were performed: 1) a multidisciplinary sepsis committee with a sepsis coordinator and data abstractor; 2) an education campaign; 3) electronic health record tools; and 4) a Modified Early Warning System. MAIN OUTCOMES AND MEASURES: The primary health outcomes were in-hospital death and length of stay. The incremental cost-effectiveness ratio was calculated, and the empirical 95% CI for the incremental cost-effectiveness ratio was estimated from 5,000 bootstrap samples. RESULTS: In multivariable analysis, the odds ratio for in-hospital death in the post- versus pre-implementation periods was 0.70 (95% CI, 0.57-0.86) in those with suspected infection, and the hazard ratio for time to discharge was 1.25 (95% CI, 1.20-1.29). Similarly, a decrease in the odds of in-hospital death and an increase in the speed to discharge were observed for the subset that met Sepsis-3 criteria. The program was cost saving in patients with suspected infection (-$272,645.7; 95% CI, -$757,970.3 to -$79,667.7). Cost savings were also observed in the Sepsis-3 group. CONCLUSIONS AND RELEVANCE: Our health system's program designed to adhere to the sepsis bundle metrics led to decreased mortality and length of stay in a cost-effective manner in a much larger catchment than just the cohort meeting the Centers for Medicare and Medicaid Services measures. Our single-center model of interventions may serve as a practice-based benchmark for hospitalized patients with suspected infection.
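The abstract's empirical 95% CI for the incremental cost-effectiveness ratio comes from 5,000 bootstrap samples; below is a compact sketch of that resampling procedure, with assumed cost and effect arrays for the pre- and post-implementation groups.

```python
# Percentile-bootstrap CI for an ICER: resample each group with
# replacement, recompute the cost and effect differences, and take the
# 2.5th/97.5th percentiles over the replicates. (The ratio is unstable
# when the effect difference is near zero; fine for illustration.)
import numpy as np

def bootstrap_icer(cost_pre, eff_pre, cost_post, eff_post,
                   n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    icers = np.empty(n_boot)
    for i in range(n_boot):
        pre = rng.integers(0, len(cost_pre), len(cost_pre))
        post = rng.integers(0, len(cost_post), len(cost_post))
        d_cost = cost_post[post].mean() - cost_pre[pre].mean()
        d_eff = eff_post[post].mean() - eff_pre[pre].mean()
        icers[i] = d_cost / d_eff
    point = (cost_post.mean() - cost_pre.mean()) / \
            (eff_post.mean() - eff_pre.mean())
    lo, hi = np.percentile(icers, [2.5, 97.5])
    return point, lo, hi
```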

12.
Crit Care Med; 47(10): 1283-1289, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31343475

ABSTRACT

OBJECTIVES: To characterize rapid response team activations, and the patients receiving them, in the American Heart Association-sponsored Get With The Guidelines Resuscitation-Medical Emergency Team cohort between 2005 and 2015. DESIGN: Retrospective multicenter cohort study. SETTING: Three hundred sixty U.S. hospitals. PATIENTS: Consecutive adult patients experiencing rapid response team activation. INTERVENTIONS: Rapid response team activation. MEASUREMENTS AND MAIN RESULTS: The cohort included 402,023 rapid response team activations from 347,401 unique healthcare encounters. Respiratory triggers (38.0%) and cardiac triggers (37.4%) were most common. The most frequent interventions, pulse oximetry (66.5%), other monitoring (59.6%), and supplemental oxygen (62.0%), were noninvasive. Fluids were the most common medication ordered (19.3%), but new antibiotic orders were rare (1.2%). More than 10% of rapid response team activations resulted in code status changes. Hospital mortality was over 14% and increased with subsequent rapid response team activations. CONCLUSIONS: Although patients requiring rapid response team activation have high inpatient mortality, most rapid response team activations involve relatively few interventions, which may limit these teams' ability to improve patient outcomes.

13.
Ann Surg; 269(6): 1059-1063, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31082902

ABSTRACT

OBJECTIVE: To assess the accuracy of 3 early warning scores for predicting severe adverse events in postoperative inpatients. SUMMARY OF BACKGROUND DATA: Postoperative clinical deterioration on inpatient hospital services is associated with increased morbidity, mortality, and cost. Early warning scores have been developed to detect inpatient clinical deterioration and trigger rapid response activation, but knowledge regarding the application of early warning scores to postoperative inpatients is limited. METHODS: This was a retrospective cohort study of adult patients hospitalized on the wards after surgical procedures at an urban academic medical center from November 2008 to January 2016. The accuracies of the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS), and electronic Cardiac Arrest Risk Triage (eCART) score were compared in predicting severe adverse events (ICU transfer, ward cardiac arrest, or ward death) in the postoperative period using the area under the receiver operating characteristic curve (AUC). RESULTS: Of the 32,537 patient admissions included in the study, 3.8% (n = 1,243) experienced a severe adverse outcome after the procedure. The accuracy for predicting the composite outcome was highest for eCART (AUC 0.79 [95% CI: 0.78-0.81]), followed by NEWS (AUC 0.76 [95% CI: 0.75-0.78]) and MEWS (AUC 0.75 [95% CI: 0.73-0.76]). Of the individual vital signs and labs, maximum respiratory rate was the most predictive (AUC 0.67), and maximum temperature was an inverse predictor (AUC 0.46). CONCLUSION: Early warning scores are predictive of severe adverse events in postoperative patients. eCART is significantly more accurate in this patient population than both NEWS and MEWS.


Subjects
Heart Arrest/diagnosis, Heart Arrest/etiology, Postoperative Complications/diagnosis, Postoperative Complications/etiology, Triage, Adult, Aged, Electronic Health Records, Female, Hospitalization, Humans, Male, Middle Aged, Predictive Value of Tests, ROC Curve, Retrospective Studies, Risk Assessment, Vital Signs
15.
Am J Surg; 218(5): 851-857, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30885453

ABSTRACT

BACKGROUND: We aimed to examine the risk factors associated with infection in trauma patients and to evaluate the Sepsis-3 definition in this population. METHODS: This was a retrospective cohort study of adult trauma patients admitted to a Level I trauma center between January 2014 and January 2016. RESULTS: A total of 1,499 trauma patients met inclusion criteria, and 15% (n = 232) had an infection. Only 19.8% (n = 46) of infected patients met criteria for Sepsis-3, with a plurality (43%) of infected cases having a higher Sequential Organ Failure Assessment (SOFA) score on admission than at the time of suspected infection. In-hospital mortality was 7% in Sepsis-3 patients vs 9% in infected patients (p = 0.65). Risk factors associated with infection were female sex, admission SOFA score, Elixhauser score, and severe injury (p < 0.05). CONCLUSION: Trauma patients often arrive with organ dysfunction, which adds complexity and inaccuracy to the operational definition of Sepsis-3 based on changes in SOFA scores. Injury Severity Score, comorbidities, SOFA score, and sex are risk factors associated with developing an infection after trauma.
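The conclusion's point hinges on the Sepsis-3 operational rule: suspected infection plus an acute SOFA increase of at least 2 points from baseline. A short sketch of that rule (with an assumed one-row-per-patient layout) makes the problem concrete: trauma patients whose admission SOFA is already elevated may never show the required delta.

```python
# Sepsis-3 rule as a pandas expression; column names are assumptions.
import pandas as pd

def meets_sepsis3(df: pd.DataFrame) -> pd.Series:
    """df: one row per patient, with a boolean suspected_infection flag
    and baseline / infection-time SOFA scores."""
    delta_sofa = df["sofa_at_infection"] - df["sofa_baseline"]
    return df["suspected_infection"] & (delta_sofa >= 2)
```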

16.
Am J Respir Crit Care Med; 200(3): 327-335, 2019 Aug 01.
Article in English | MEDLINE | ID: mdl-30789749

ABSTRACT

Rationale: Sepsis is a heterogeneous syndrome, and identifying clinically relevant subphenotypes is essential. Objectives: To identify novel subphenotypes in hospitalized patients with infection using longitudinal temperature trajectories. Methods: In the model development cohort, inpatient admissions meeting criteria for infection in the emergency department and receiving antibiotics within 24 hours of presentation were included. Temperature measurements within the first 72 hours were compared between survivors and nonsurvivors. Group-based trajectory modeling was performed to identify temperature trajectory groups, and patient characteristics and outcomes were compared between the groups. The model was then externally validated at a second hospital using the same inclusion criteria. Measurements and Main Results: A total of 12,413 admissions were included in the development cohort, and 19,053 were included in the validation cohort. In the development cohort, four temperature trajectory groups were identified: "hyperthermic, slow resolvers" (n = 1,855; 14.9% of the cohort); "hyperthermic, fast resolvers" (n = 2,877; 23.2%); "normothermic" (n = 4,067; 32.8%); and "hypothermic" (n = 3,614; 29.1%). The hypothermic subjects were the oldest and had the most comorbidities, the lowest levels of inflammatory markers, and the highest in-hospital mortality rate (9.5%). The hyperthermic, slow resolvers were the youngest and had the fewest comorbidities, the highest levels of inflammatory markers, and a mortality rate of 5.1%. The hyperthermic, fast resolvers had the lowest mortality rate (2.9%). Similar trajectory groups, patient characteristics, and outcomes were found in the validation cohort. Conclusions: We identified and validated four novel subphenotypes of patients with infection, with significant variability in inflammatory markers and outcomes.
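Group-based trajectory modeling, as used above, is typically run in dedicated packages (e.g., the traj plugin for Stata or SAS); the sketch below is a simplified stand-in, not the authors' method: each patient's 72-hour temperature series is summarized by quadratic-fit coefficients, which are then clustered with a Gaussian mixture. The names and the choice of summary are assumptions.

```python
# Approximate trajectory grouping: per-patient quadratic trend
# coefficients clustered into 4 groups.
import numpy as np
from sklearn.mixture import GaussianMixture

def trajectory_groups(times_list, temps_list, n_groups=4, seed=0):
    """times_list/temps_list: per-patient arrays of measurement hours
    (0-72) and temperatures (>= 3 points each); returns a group label
    per patient."""
    coefs = np.array([
        np.polyfit(t, y, deg=2)  # quadratic trend per patient
        for t, y in zip(times_list, temps_list)
    ])
    gmm = GaussianMixture(n_components=n_groups, random_state=seed)
    return gmm.fit_predict(coefs)
```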

17.
Ann Am Thorac Soc; 16(5): 580-588, 2019 May.
Article in English | MEDLINE | ID: mdl-30653927

ABSTRACT

Rationale: Honeycombing on chest computed tomography (CT) has been described in diverse forms of interstitial lung disease (ILD); however, its prevalence and association with mortality across the spectrum of ILD remain unclear. Objective: To determine the prevalence and prognostic value of CT honeycombing and characterize associated mortality patterns across diverse ILD subtypes in a multicenter cohort. Methods: This was an observational cohort study of adult participants with a multidisciplinary or adjudicated ILD diagnosis and documentation of chest CT imaging at index diagnosis across five U.S. hospitals (one tertiary and four nontertiary medical centers). Participants were stratified based on the presence or absence of CT honeycombing. Vital status was determined from review of medical records and the Social Security Death Index. Transplant-free survival was analyzed using univariate and multivariable Cox regression. Results: The sample comprised 1,330 participants (mean age, 66.8 yr; 50% men) with 4,831 person-years of follow-up. The prevalences of CT honeycombing were 42.0%, 41.9%, 37.6%, and 28.6% in chronic hypersensitivity pneumonitis, connective tissue disease-related ILD (CTD-ILD), idiopathic pulmonary fibrosis (IPF), and unclassifiable/other ILDs, respectively. Among those with CT honeycombing, cumulative mortality hazards were similar across ILD subtypes, except for CTD-ILD, which had a lower mortality hazard. Overall, the mean survival time was shorter among those with CT honeycombing (107 mo; 95% confidence interval [CI], 92-122 mo) than those without CT honeycombing (161 mo; 95% CI, 147-174 mo). CT honeycombing was associated with an increased mortality rate (hazard ratio, 1.72; 95% CI, 1.38-2.14) even after adjustment for center, sex, age, forced vital capacity, diffusing capacity, ILD subtype, and use of immunosuppressive therapy (hazard ratio, 1.62; 95% CI, 1.29-2.02). CT honeycombing was associated with an increased mortality rate within non-IPF ILD subgroups (chronic hypersensitivity pneumonitis, CTD-ILD, and unclassifiable/other ILD). In IPF, however, mortality rates were similar between those with and without CT honeycombing. Conclusions: CT honeycombing is prevalent in diverse forms of ILD and uniquely identifies a progressive fibrotic ILD phenotype with a high mortality rate similar to IPF. CT honeycombing did not confer additional risk in IPF, which is already known to be a progressive fibrotic ILD phenotype regardless of the presence of CT honeycombing.

18.
HERD; 12(2): 21-29, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30380918

ABSTRACT

OBJECTIVE: To investigate whether a patient's proximity to the nurse's station or ward entrance at the time of admission was associated with increased risk of adverse outcomes. METHOD: We conducted a retrospective cohort study of consecutive adult admissions to 13 medical-surgical wards at an academic hospital from 2009 to 2013. Proximity of the admission room to the nurse's station and to the ward entrance was measured using Euclidean distances. Outcomes of interest included development of critical illness (defined as cardiac arrest or transfer to an intensive care unit), in-hospital mortality, and increase in length of stay (LOS). RESULTS: Of the 83,635 admissions, 4,129 developed critical illness and 1,316 died. The median LOS was 3 days. After adjusting for admission severity of illness, ward, shift, and year, we found no relationship between proximity to the nurse's station at admission and our outcomes. However, patients admitted to the end of the ward had a higher risk of developing critical illness (odds ratio [OR] = 1.15, 95% confidence interval [CI] = [1.08, 1.23]), higher mortality (OR = 1.16, 95% CI [1.03, 1.33]), and a longer LOS (13-hr increase, 95% CI [10, 15] hours) compared with patients admitted closer to the ward entrance. Similar results were observed in sensitivity analyses adjusting for isolation room patients and considering only patients without room transfers in the first 48 hr. CONCLUSIONS: Our study suggests that being away from the nurse's station did not increase the risk of these adverse events in ward patients, but being farther from the ward entrance was associated with an increased risk of adverse outcomes. Patient safety can be improved by recognizing this additional risk factor.
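A compact sketch of the proximity analysis described above, assuming room coordinates on a floor plan: Euclidean distance to a reference point, then a logistic regression for critical illness with an adjustment covariate. Column names and the single-covariate adjustment are illustrative assumptions (the study adjusted for severity, ward, shift, and year).

```python
# Distance feature plus an adjusted logistic regression; odds ratios come
# from exponentiating the fitted coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def add_distance(df: pd.DataFrame, x_ref: float, y_ref: float):
    df = df.copy()
    df["dist_entrance"] = np.hypot(df["room_x"] - x_ref,
                                   df["room_y"] - y_ref)
    return df

def fit_outcome_model(df: pd.DataFrame):
    model = smf.logit("critical_illness ~ dist_entrance + severity",
                      data=df)
    return model.fit()
```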

19.
Chest; 154(6): 1462, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30526977
20.
Chronic Obstr Pulm Dis; 5(3): 208-220, 2018 Jun 20.
Article in English | MEDLINE | ID: mdl-30584584

ABSTRACT

Rationale: Although chronic obstructive pulmonary disease (COPD) exacerbation frequency is stable in research cohorts, whether severe COPD exacerbation frequency can be used to identify patients at high risk for future severe COPD exacerbations and/or mortality is unknown. Methods: Severe COPD exacerbation frequency stability was determined in 3 distinct clinical cohorts. A total of 17,450 patients with COPD in Intermountain Healthcare were categorized based on the number of severe COPD exacerbations per year. We determined whether exacerbation frequency was stable and whether it predicted mortality. These findings were validated in 83,134 patients from the U.S. Veterans Affairs (VA) nationwide health care system and 3,326 patients from the University of Chicago Medicine health system. Results: In the Intermountain Healthcare cohort, the majority (84%, 14,706 patients) had no exacerbations in 2009 and were likely to remain non-exacerbators, with a significantly lower 6-year mortality compared with frequent exacerbators (2 or more exacerbations per year) (25% versus 57%, p < 0.001). Similar findings were noted in the VA health system and the University of Chicago Medicine health system. Non-exacerbators were likely to remain non-exacerbators and had the lowest overall mortality. In all cohorts, the frequent exacerbator phenotype was not stable until patients had at least 2 consecutive years of frequent exacerbations. COPD exacerbation frequency predicted all-cause mortality. Conclusions: In clinical datasets across different organizations, severe COPD exacerbation frequency was stable only after at least 2 consecutive years of frequent exacerbations. Thus, severe COPD exacerbation frequency identifies patients across a health care system at high risk for future COPD-related health care utilization and overall mortality.
