ABSTRACT
BACKGROUND: Guidelines recommend active fever prevention for 72 hours after cardiac arrest. Data from randomized clinical trials of this intervention have been lacking. METHODS: We randomly assigned comatose patients who had been resuscitated after an out-of-hospital cardiac arrest of presumed cardiac cause to device-based temperature control targeting 36°C for 24 hours followed by targeting of 37°C for either 12 or 48 hours (for total intervention times of 36 and 72 hours, respectively) or until the patient regained consciousness. The primary outcome was a composite of death from any cause or hospital discharge with a Cerebral Performance Category of 3 or 4 (range, 1 to 5, with higher scores indicating more severe disability; a category of 3 or 4 indicates severe cerebral disability or coma) within 90 days after randomization. Secondary outcomes included death from any cause and the Montreal Cognitive Assessment score (range, 0 to 30, with higher scores indicating better cognitive ability) at 3 months. RESULTS: A total of 393 patients were randomly assigned to temperature control for 36 hours, and 396 patients were assigned to temperature control for 72 hours. At 90 days after randomization, a primary end-point event had occurred in 127 of 393 patients (32.3%) in the 36-hour group and in 133 of 396 patients (33.6%) in the 72-hour group (hazard ratio, 0.99; 95% confidence interval, 0.77 to 1.26; P = 0.70); mortality was 29.5% in the 36-hour group and 30.3% in the 72-hour group. At 3 months, the median Montreal Cognitive Assessment score was 26 (interquartile range, 24 to 29) in the 36-hour group and 27 (interquartile range, 24 to 28) in the 72-hour group. There was no significant between-group difference in the incidence of adverse events. CONCLUSIONS: Active device-based fever prevention for 36 or 72 hours after cardiac arrest did not result in significantly different percentages of patients dying or having severe disability or coma. (Funded by the Novo Nordisk Foundation; BOX ClinicalTrials.gov number, NCT03141099.).
Subject(s)
Body Temperature , Cardiopulmonary Resuscitation , Coma , Fever , Hypothermia, Induced , Out-of-Hospital Cardiac Arrest , Humans , Coma/etiology , Fever/etiology , Fever/prevention & control , Hypothermia, Induced/adverse effects , Hypothermia, Induced/instrumentation , Hypothermia, Induced/methods , Out-of-Hospital Cardiac Arrest/complications , Out-of-Hospital Cardiac Arrest/therapy , Treatment Outcome , Consciousness
ABSTRACT
BACKGROUND: Evidence to support the choice of blood-pressure targets for the treatment of comatose survivors of out-of-hospital cardiac arrest who are receiving intensive care is limited. METHODS: In a double-blind, randomized trial with a 2-by-2 factorial design, we evaluated a mean arterial blood-pressure target of 63 mm Hg as compared with 77 mm Hg in comatose adults who had been resuscitated after an out-of-hospital cardiac arrest of presumed cardiac cause; patients were also assigned to one of two oxygen targets (reported separately). The primary outcome was a composite of death from any cause or hospital discharge with a Cerebral Performance Category (CPC) of 3 or 4 within 90 days (range, 1 to 5, with higher categories indicating more severe disability; a category of 3 or 4 indicates severe disability or coma). Secondary outcomes included neuron-specific enolase levels at 48 hours, death from any cause, scores on the Montreal Cognitive Assessment (range, 0 to 30, with higher scores indicating better cognitive ability) and the modified Rankin scale (range, 0 to 6, with higher scores indicating greater disability) at 3 months, and the CPC at 3 months. RESULTS: A total of 789 patients were included in the analysis (393 in the high-target group and 396 in the low-target group). A primary-outcome event occurred in 133 patients (34%) in the high-target group and in 127 patients (32%) in the low-target group (hazard ratio, 1.08; 95% confidence interval [CI], 0.84 to 1.37; P = 0.56). At 90 days, 122 patients (31%) in the high-target group and 114 patients (29%) in the low-target group had died (hazard ratio, 1.13; 95% CI, 0.88 to 1.46). The median CPC was 1 (interquartile range, 1 to 5) in both the high-target group and the low-target group; the corresponding median modified Rankin scale scores were 1 (interquartile range, 0 to 6) and 1 (interquartile range, 0 to 6), and the corresponding median Montreal Cognitive Assessment scores were 27 (interquartile range, 24 to 29) and 26 (interquartile range, 24 to 29). The median neuron-specific enolase level at 48 hours was also similar in the two groups. The percentages of patients with adverse events did not differ significantly between the groups. CONCLUSIONS: Targeting a mean arterial blood pressure of 77 mm Hg or 63 mm Hg in patients who had been resuscitated from cardiac arrest did not result in significantly different percentages of patients dying or having severe disability or coma. (Funded by the Novo Nordisk Foundation; BOX ClinicalTrials.gov number, NCT03141099.).
Subject(s)
Arterial Pressure , Coma , Out-of-Hospital Cardiac Arrest , Adult , Humans , Arterial Pressure/physiology , Biomarkers/analysis , Cardiopulmonary Resuscitation , Coma/diagnosis , Coma/etiology , Coma/mortality , Coma/physiopathology , Double-Blind Method , Health Status Indicators , Out-of-Hospital Cardiac Arrest/complications , Out-of-Hospital Cardiac Arrest/therapy , Oxygen , Phosphopyruvate Hydratase/analysis , Survivors , Critical Care
ABSTRACT
BACKGROUND: The appropriate oxygenation target for mechanical ventilation in comatose survivors of out-of-hospital cardiac arrest is unknown. METHODS: In this randomized trial with a 2-by-2 factorial design, we randomly assigned comatose adults with out-of-hospital cardiac arrest in a 1:1 ratio to either a restrictive oxygen target of a partial pressure of arterial oxygen (Pao2) of 9 to 10 kPa (68 to 75 mm Hg) or a liberal oxygen target of a Pao2 of 13 to 14 kPa (98 to 105 mm Hg); patients were also assigned to one of two blood-pressure targets (reported separately). The primary outcome was a composite of death from any cause or hospital discharge with severe disability or coma (Cerebral Performance Category [CPC] of 3 or 4; categories range from 1 to 5, with higher values indicating more severe disability), whichever occurred first within 90 days after randomization. Secondary outcomes were neuron-specific enolase levels at 48 hours, death from any cause, the score on the Montreal Cognitive Assessment (ranging from 0 to 30, with higher scores indicating better cognitive ability), the score on the modified Rankin scale (ranging from 0 to 6, with higher scores indicating greater disability), and the CPC at 90 days. RESULTS: A total of 789 patients underwent randomization. A primary-outcome event occurred in 126 of 394 patients (32.0%) in the restrictive-target group and in 134 of 395 patients (33.9%) in the liberal-target group (hazard ratio, 0.95; 95% confidence interval, 0.75 to 1.21; P = 0.69). At 90 days, death had occurred in 113 patients (28.7%) in the restrictive-target group and in 123 (31.1%) in the liberal-target group. On the CPC, the median category was 1 in the two groups; on the modified Rankin scale, the median score was 2 in the restrictive-target group and 1 in the liberal-target group; and on the Montreal Cognitive Assessment, the median score was 27 in the two groups. At 48 hours, the median neuron-specific enolase level was 17 µg per liter in the restrictive-target group and 18 µg per liter in the liberal-target group. The incidence of adverse events was similar in the two groups. CONCLUSIONS: Targeting of a restrictive or liberal oxygenation strategy in comatose patients after resuscitation for cardiac arrest resulted in a similar incidence of death or severe disability or coma. (Funded by the Novo Nordisk Foundation; BOX ClinicalTrials.gov number, NCT03141099.).
Subject(s)
Coma , Out-of-Hospital Cardiac Arrest , Oxygen , Respiration, Artificial , Respiratory Insufficiency , Adult , Humans , Coma/etiology , Coma/mortality , Coma/therapy , Out-of-Hospital Cardiac Arrest/complications , Out-of-Hospital Cardiac Arrest/therapy , Oxygen/administration & dosage , Phosphopyruvate Hydratase/analysis , Survivors , Respiration, Artificial/methods , Respiratory Insufficiency/etiology , Respiratory Insufficiency/therapy , Biomarkers/analysis
ABSTRACT
BACKGROUND: Acute kidney injury (AKI) represents a common and serious complication of out-of-hospital cardiac arrest. The importance of post-resuscitation blood pressure and oxygenation targets for the development of AKI is unknown. METHODS: This is a substudy of a randomized 2-by-2 factorial trial, in which 789 comatose adult patients who had out-of-hospital cardiac arrest with presumed cardiac cause and sustained return of spontaneous circulation were randomly assigned to a target mean arterial blood pressure of either 63 or 77 mm Hg. Patients were simultaneously randomly assigned to either a restrictive oxygen target of a partial pressure of arterial oxygen (Pao2) of 9 to 10 kPa or a liberal oxygenation target of a Pao2 of 13 to 14 kPa. The primary outcome for this study was AKI according to KDIGO (Kidney Disease: Improving Global Outcomes) classification in patients surviving at least 48 hours (N=759). Adjusted logistic regression was performed with patients allocated to the high blood pressure and liberal oxygen targets as the reference group. RESULTS: The main population characteristics at admission were: age, 64 (54-73) years; 80% male; 90% shockable rhythm; and time to return of spontaneous circulation, 18 (12-26) minutes. Patients allocated to a low blood pressure and liberal oxygen target had an increased risk of developing AKI compared with patients with high blood pressure and liberal oxygen target (84/193 [44%] versus 56/187 [30%]; adjusted odds ratio, 1.87 [95% CI, 1.21-2.89]). Multinomial logistic regression revealed that the increased risk of AKI was only related to mild-stage AKI (KDIGO stage 1). There was no difference in risk of AKI in the other groups. Plasma creatinine remained high during hospitalization in the low blood pressure and liberal oxygen target group but did not differ between groups at 6- and 12-month follow-up. CONCLUSIONS: In comatose patients who had been resuscitated after out-of-hospital cardiac arrest, patients allocated to a combination of a low mean arterial blood pressure and a liberal oxygen target had a significantly increased risk of mild-stage AKI. No difference was found in terms of more severe AKI stages or other kidney-related adverse outcomes, and creatinine had normalized at 1 year after discharge. REGISTRATION: URL: https://www.clinicaltrials.gov; Unique identifier: NCT03141099.
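As a rough illustration of the kind of adjusted logistic regression described above (with the high blood pressure and liberal oxygen arm as reference), a minimal sketch on simulated data is shown below. The variable names, adjustment covariates and simulated values are assumptions for demonstration only, not the trial's actual analysis code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 759  # patients surviving at least 48 hours, as in the substudy

# Simulated stand-in data; 'arm' mimics the four factorial allocation combinations.
df = pd.DataFrame({
    "aki": rng.integers(0, 2, n),
    "arm": rng.choice(["highBP_liberalO2", "lowBP_liberalO2",
                       "highBP_restrictiveO2", "lowBP_restrictiveO2"], n),
    "age": rng.normal(64, 10, n),
    "male": rng.integers(0, 2, n),
    "time_to_rosc": rng.normal(18, 7, n),
})

# Logistic regression with the high-BP / liberal-O2 arm as the reference level.
model = smf.logit(
    "aki ~ C(arm, Treatment(reference='highBP_liberalO2')) + age + male + time_to_rosc",
    data=df,
).fit(disp=False)

adjusted_or = np.exp(model.params)   # adjusted odds ratios
ci = np.exp(model.conf_int())        # 95% confidence intervals
print(pd.concat([adjusted_or.rename("OR"), ci], axis=1))
```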
Subject(s)
Acute Kidney Injury , Hypertension , Hypotension , Out-of-Hospital Cardiac Arrest , Adult , Humans , Male , Middle Aged , Female , Blood Pressure , Out-of-Hospital Cardiac Arrest/therapy , Out-of-Hospital Cardiac Arrest/complications , Oxygen , Coma , Creatinine , Hypertension/complications , Acute Kidney Injury/etiology , Acute Kidney Injury/therapy , Kidney , Hypotension/complications
ABSTRACT
INTRODUCTION: Bile acid diarrhea (BAD) is an underrecognized and socially debilitating disease caused by high concentrations of bile acids in the colon. Bile acids directly and indirectly promote carcinogenesis. In this article, we investigated whether individuals with BAD have an increased risk of gastrointestinal (GI) cancers. METHODS: By using the Danish health registries, adult individuals with BAD were identified by International Classification of Diseases 10th revision code K90.8 or referral to the diagnostic 75-selenium homotaurocholic acid (SeHCAT) test followed by prescription of a bile acid sequestrant within 365 days (n = 5,245). Age- and sex-matched individuals without BAD were included for comparison (n = 52,450). We analyzed the cumulative incidence of GI cancers after BAD diagnosis and the odds ratios (ORs) of GI cancer 8 and 15 years before BAD diagnosis/matching. RESULTS: Cumulative incidence of GI cancer 6 years after BAD diagnosis/matching was 1.6% in the BAD group and 1.1% in controls (P = 0.01). The ORs of total GI cancer 8 and 15 years before BAD diagnosis were 6.16 (5.08-7.48) and 5.19 (4.28-6.29), respectively. Furthermore, 47 individuals with BAD (0.9%) and 250 controls (0.5%) died of GI cancer. DISCUSSION: This nationwide cohort study indicates an association between BAD and GI cancers. We found both a higher incidence of GI cancer after BAD diagnosis compared with controls and an increased OR of GI cancer before BAD diagnosis. Bearing in mind the underdiagnosis of BAD, the delay of BAD diagnosis, and the carcinogenic effect of bile acids, these findings warrant further investigations of the risk of GI cancer in individuals with BAD.
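For readers unfamiliar with the odds-ratio estimates reported above, a small sketch of how an OR with a 95% confidence interval can be computed from a 2x2 table is given below. The cancer counts are made-up placeholders (only the group totals match the cohort sizes), so this is purely illustrative and not the registry analysis.

```python
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# Illustrative 2x2 table: rows = BAD group vs matched controls,
# columns = GI cancer vs no GI cancer. Cancer counts are placeholders.
table = np.array([[150, 5095],      # BAD group (n = 5,245)
                  [250, 52200]])    # controls  (n = 52,450)

t = Table2x2(table)
lo, hi = t.oddsratio_confint()
print(f"OR = {t.oddsratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```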
Subject(s)
Bile Acids and Salts , Diarrhea , Gastrointestinal Neoplasms , Humans , Male , Female , Diarrhea/epidemiology , Incidence , Middle Aged , Denmark/epidemiology , Bile Acids and Salts/metabolism , Aged , Adult , Gastrointestinal Neoplasms/epidemiology , Case-Control Studies , Registries , Risk Factors , Odds Ratio
ABSTRACT
AIM: To explore the impact of type 2 diabetes (T2D), glycaemic control and use of glucose-lowering medication on clinical outcomes in hospitalized patients with COVID-19. MATERIALS AND METHODS: For all patients admitted to a hospital in the Capital Region of Denmark (1 March 2020 to 1 December 2021) with confirmed COVID-19, we extracted data on mortality, admission to intensive care unit (ICU), demographics, comorbidities, medication use and laboratory tests from the electronic health record system. We compared patients with T2D to patients without diabetes using Cox proportional hazards models adjusted for available confounding variables. Outcomes were 30-day mortality and admission to an ICU. For patients with T2D, we also analysed the association of baseline haemoglobin A1c (HbA1c) levels and use of specific glucose-lowering medications with the outcomes. RESULTS: In total, 4430 patients were analysed, 1236 with T2D and 2194 without diabetes. The overall 30-day mortality was 19% (n = 850) and 10% (n = 421) were admitted to an ICU. Crude analyses showed that patients with T2D had both increased mortality [hazard ratio (HR) 1.37; 95% CI 1.19-1.58] and increased risk of ICU admission (HR 1.28; 95% CI 1.04-1.57). When adjusted for available confounders, this discrepancy was attenuated for both mortality (adjusted HR 1.13; 95% CI 0.95-1.33) and risk of ICU admission (adjusted HR 1.01; 95% CI 0.79-1.29). Neither baseline HbA1c nor specific glucose-lowering medication use was significantly associated with the outcomes. CONCLUSION: Among those hospitalized for COVID-19, patients with T2D did not have a higher risk of death or ICU admission when adjusting for confounders.
Subject(s)
COVID-19 , Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/drug therapy , COVID-19/complications , Glycated Hemoglobin , Glycemic Control , Glucose/therapeutic use , Denmark/epidemiology , Retrospective Studies
ABSTRACT
BACKGROUND: Acute kidney injury (AKI) is a significant risk factor associated with reduced survival following out-of-hospital cardiac arrest (OHCA). Whether the severity of AKI simply serves as a surrogate measure of worse peri-arrest conditions, or represents an additional risk to long-term survival, remains unclear. METHODS: This is a sub-study derived from a randomized trial in which 789 comatose adult OHCA patients with presumed cardiac cause and sustained return of spontaneous circulation (ROSC) were enrolled. Patients without prior dialysis-dependent kidney disease and surviving at least 48 h were included (N = 759). AKI was defined by the Kidney Disease: Improving Global Outcomes (KDIGO) classification, and patients were divided into groups based on the development of AKI and the need for continuous kidney replacement therapy (CKRT), thus establishing three groups of patients: No AKI, AKI no CKRT, and AKI CKRT. Primary outcome was overall survival within 365 days after OHCA according to AKI group. Adjusted Cox proportional hazard models were used to assess overall survival within 365 days according to the three groups. RESULTS: In the whole population, median age was 64 (54-73) years, 80% were male, 90% of patients presented with a shockable rhythm, and median time to ROSC was 18 (12-26) min. A total of 254 (33.5%) patients developed AKI according to the KDIGO definition, with 77 requiring CKRT and 177 without need for CKRT. AKI CKRT patients had longer time-to-ROSC and worse metabolic derangement at hospital admission. Overall survival within 365 days from OHCA decreased with the severity of kidney injury. Adjusted Cox regression analysis found that AKI, both with and without CKRT, was significantly associated with reduced overall survival up to 365 days, with comparable hazard ratios relative to no AKI (HR 1.75, 95% CI 1.13-2.70 vs. HR 1.76, 95% CI 1.30-2.39). CONCLUSIONS: In comatose patients who had been resuscitated after OHCA, patients developing AKI, with or without initiation of CKRT, had a worse 1-year overall survival compared to non-AKI patients. This association remained statistically significant after adjusting for other peri-arrest risk factors. TRIAL REGISTRATION: The BOX trial is registered at ClinicalTrials.gov: NCT03141099.
Subject(s)
Acute Kidney Injury , Out-of-Hospital Cardiac Arrest , Aged , Female , Humans , Male , Middle Aged , Acute Kidney Injury/therapy , Acute Kidney Injury/etiology , Acute Kidney Injury/physiopathology , Out-of-Hospital Cardiac Arrest/therapy , Out-of-Hospital Cardiac Arrest/mortality , Out-of-Hospital Cardiac Arrest/complications , Proportional Hazards Models
ABSTRACT
BACKGROUND: The "Blood Pressure and Oxygenation Targets in Post Resuscitation Care" (BOX) trial investigated whether a low versus high blood pressure target, a restrictive versus liberal oxygenation target, and a shorter versus longer duration of device-based fever prevention in comatose patients could improve outcomes. No differences in rates of discharge from hospital with severe disability or 90-day mortality were found. However, long-term effects and potential interaction of the interventions are unknown. Accordingly, the objective of this study is to investigate both individual and combined effects of the interventions on 1-year mortality rates. METHODS: The BOX trial was a randomized controlled two-center trial that assigned comatose resuscitated out-of-hospital cardiac arrest patients to the following three interventions at admission: A blood pressure target of either 63 mmHg or 77 mmHg; An arterial oxygenation target of 9-10 kPa or 13-14 kPa; Device-based fever prevention administered as an initial 24 h at 36 °C and then either 12 or 48 h at 37 °C; totaling 36 or 72 h of temperature control. Randomization occurred in parallel and simultaneously to all interventions. Patients were followed for the occurrence of death from all causes for 1 year. Analyzes were performed by Cox proportional models, and assessment of interactions was performed with the interventions stated as an interaction term. RESULTS: Analysis for all three interventions included 789 patients. For the intervention of low compared to high blood pressure targets, 1-year mortality rates were 35% (138 of 396) and 36% (143 of 393), respectively, hazard ratio (HR) 0.92 (0.73-1.16) p = 0.47. For the restrictive compared to liberal oxygenation targets, 1-year mortality rates were 34% (135 of 394) and 37% (146 of 395), respectively, HR 0.92 (0.73-1.16) p = 0.46. For device-based fever prevention for a total of 36 compared to 72 h, 1-year mortality rates were 35% (139 of 393) and 36% (142 of 396), respectively, HR 0.98 (0.78-1.24) p = 0.89. There was no sign of interaction between the interventions, and accordingly, no combination of randomizations indicated differentiated treatment effects. CONCLUSIONS: There was no difference in 1-year mortality rates for a low compared to high blood pressure target, a liberal compared to restrictive oxygenation target, or a longer compared to shorter duration of device-based fever prevention after cardiac arrest. No combination of the interventions affected these findings. Trial registration ClinicalTrials.gov NCT03141099, Registered 30 April 2017.
Subject(s)
Hypertension , Out-of-Hospital Cardiac Arrest , Humans , Blood Pressure , Out-of-Hospital Cardiac Arrest/therapy , Coma , Resuscitation
ABSTRACT
BACKGROUND: Studying complete hospital care episodes from register data, for instance when assessing length of stay, discharges and readmissions, can cause methodological difficulties due to the lack of a contact linkage identifier. We aimed to develop an algorithm combining sequential attendance contacts in the Danish National Patient Register (DNPR) into hospital care episodes, spanning the entire duration and all contacts from hospital arrival to departure. METHODS: The algorithm was developed under the consensus of experts from research institutions across Denmark. It reads in second- and third-version DNPR data, deletes contacts without attendance, duplicates elective outpatient contacts corresponding to attendance dates and modifies contact types (e.g. repeated acute contacts), among other steps. Thereafter, sequential contacts within 4 h are marked as the same hospital care episode, consisting of one or more DNPR contacts. We tested the algorithm in a data set of adults living in Denmark during 2013-2021 and compared different hourly cut-offs. RESULTS: For the demonstration, we included 120.2 million contacts from 5.7 million persons, combined into 105.9 million hospital care episodes. Of the hospital care episodes, 6.4% were acute inpatients, 8.3% were acute outpatients, 2.0% were elective inpatients and 83.3% were elective outpatients. Compared with our recommended 4-h cut-off, 3-h, 5-h and 6-h cut-offs for contact combining revealed only minor differences in the number of hospital care episodes (<0.4%), whereas 12-h (<1.7%) and 24-h (<43.1%) cut-offs had a larger impact. CONCLUSIONS: The algorithm automates data reading, modification and linkage of sequential attendance contacts. The algorithm can be initiated as a SAS macro and is available from an online repository.
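The central linkage step, i.e. marking sequential contacts within 4 h of each other as the same hospital care episode, can be sketched as follows. The published algorithm is a SAS macro with additional cleaning and contact-modification steps; the pandas version and column names below are illustrative assumptions only.

```python
import pandas as pd

# Toy contacts; person_id, contact_start and contact_end are assumed column names.
contacts = pd.DataFrame({
    "person_id":     [1, 1, 1, 2],
    "contact_start": pd.to_datetime(["2021-01-01 08:00", "2021-01-01 13:30",
                                     "2021-02-10 09:00", "2021-01-05 10:00"]),
    "contact_end":   pd.to_datetime(["2021-01-01 12:00", "2021-01-02 07:00",
                                     "2021-02-10 10:00", "2021-01-05 11:00"]),
})

def combine_contacts(df: pd.DataFrame, gap_hours: float = 4.0) -> pd.DataFrame:
    """Give contacts an episode_id: a contact starting within `gap_hours` of the
    previous contact's end (same person) joins that contact's episode."""
    df = df.sort_values(["person_id", "contact_start"]).copy()
    prev_end = df.groupby("person_id")["contact_end"].shift()
    gap_h = (df["contact_start"] - prev_end).dt.total_seconds() / 3600
    starts_new_episode = gap_h.isna() | (gap_h > gap_hours)
    df["episode_id"] = starts_new_episode.cumsum()
    return df

print(combine_contacts(contacts, gap_hours=4))
```

With the toy data above, the two contacts on 1 January (1.5 h apart) fall into one episode while the February contact opens a new one; rerunning with gap_hours set to 3, 5, 6, 12 or 24 reproduces the kind of cut-off sensitivity check reported in the abstract.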
ABSTRACT
BACKGROUND AND PURPOSE: Dislocation is a severe complication following total hip arthroplasty (THA). Hip precautions have been recommended in the initial postoperative period, but evidence supporting this practice is limited. We therefore conducted a population-based study to evaluate the association between discontinuing the recommendation of postoperative hip precautions and the risk of early dislocation. METHODS: This is a cohort study with data from the Danish Hip Arthroplasty Register and the Danish National Patient Register. We included patients who underwent primary THA for osteoarthritis in 2004-2019 in public hospitals in the Capital Region of Denmark. The cohort was divided into the hip precautions group, comprising patients operated on between 2004 and 2009, and the no-precautions group operated on between 2014 and 2019. The primary outcome was the difference in the absolute risk of dislocation within 3 months post-surgery. The secondary outcome assessed the same risk within 2 years. We evaluated the difference in absolute risk using absolute risk regression (ARR). RESULTS: The cumulative incidence of dislocation within 3 months was 2.9% (confidence interval [CI] 2.5-3.3) in the hip precautions group and 3.5% (CI 3.1-3.9) in the no-precautions group. The risk of dislocation was higher in the no-precautions group but did not reach statistical significance in either the crude (ARR 1.2, CI 0.9-1.6) or the multivariable model (ARR 1.4, CI 0.9-2.2). CONCLUSION: We found a higher but statistically nonsignificant risk of early dislocation in the no-precautions group. The lack of significance in the association may be explained by the increased use of 36-mm femoral heads after the guideline revision.
Subject(s)
Arthroplasty, Replacement, Hip , Hip Dislocation , Hip Prosthesis , Postoperative Complications , Registries , Humans , Arthroplasty, Replacement, Hip/adverse effects , Male , Denmark/epidemiology , Female , Aged , Hip Dislocation/prevention & control , Hip Dislocation/etiology , Hip Dislocation/epidemiology , Middle Aged , Hip Prosthesis/adverse effects , Postoperative Complications/prevention & control , Postoperative Complications/epidemiology , Postoperative Complications/etiology , Cohort Studies , Osteoarthritis, Hip/surgery , Risk Factors , Incidence , Aged, 80 and over
ABSTRACT
BACKGROUND: Troponin concentrations above the upper reference limit are associated with increased mortality in patients with pulmonary embolism (PE). We aimed to assess whether risk of 30-day mortality increases in a dose-response relationship with the concentration of troponin. METHODS: Using Danish national registries, we identified patients ≥ 18 years of age hospitalized with first-time PE between 2013 and 2018 with an available troponin measurement within -1/+1 day of admission. Patients were stratified into quintiles by increasing troponin concentration. Risk of 30-day mortality was assessed using cumulative mortality curves and a Cox regression model comparing the troponin quintiles. RESULTS: We identified 5,639 PE patients, of whom 3,278 (58%) had a troponin concentration above the upper reference limit. These patients were older (74 years), 50% were male, and they had a heavier comorbidity burden compared with patients with non-elevated troponin. We found increasing 30-day mortality with increasing troponin concentration (1% in the 1st quintile (95% CI 0.5-1.5%), 2% in the 2nd quintile (95% CI 1-2.5%), 8% in the 3rd quintile (95% CI 5-9%), 11% in the 4th quintile (95% CI 9-13%) and 15% in the 5th quintile (95% CI 13-16%)). This was confirmed in a Cox model comparing the 1st quintile with the 2nd quintile (HR 1.09; 95% CI 0.58-2.02), 3rd quintile (HR 3.68; 95% CI 2.20-6.15), 4th quintile (HR 5.51; 95% CI 3.34-9.10) and 5th quintile (HR 8.09; 95% CI 4.95-13.23). CONCLUSION: 30-day mortality was strongly associated with troponin concentration, which may be useful for improving risk stratification, treatment strategies and outcomes in PE patients.
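A hedged sketch of the quintile stratification and the Cox comparison against the lowest quintile might look like the code below; the simulated troponin values, follow-up times and column names are assumptions for illustration, not registry data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 5639  # first-time PE patients with an available troponin measurement

# Simulated stand-in data: troponin concentration and 30-day follow-up.
raw_time = rng.exponential(300, n)
df = pd.DataFrame({
    "troponin": rng.lognormal(mean=2.5, sigma=1.2, size=n),
    "time_days": np.minimum(raw_time, 30),
    "died_30d": (raw_time < 30).astype(int),
})

# Stratify into quintiles of troponin concentration (Q1 = lowest = reference).
df["quintile"] = pd.qcut(df["troponin"], 5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
dummies = pd.get_dummies(df["quintile"], drop_first=True).astype(int)  # Q2-Q5 vs Q1

cox_df = pd.concat([df[["time_days", "died_30d"]], dummies], axis=1)
cph = CoxPHFitter().fit(cox_df, duration_col="time_days", event_col="died_30d")
print(np.exp(cph.params_))   # hazard ratios for Q2-Q5 relative to Q1
```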
Subject(s)
Pulmonary Embolism , Troponin , Humans , Male , Female , Pulmonary Embolism/diagnosis , Comorbidity , Acute Disease , Proportional Hazards Models , Prognosis , Risk Assessment
ABSTRACT
OBJECTIVE: To identify PaCO2 trajectories and assess their associations with mortality in critically ill patients with coronavirus disease 2019 (COVID-19) during the first and second waves of the pandemic in Denmark. DESIGN: A population-based cohort study with retrospective data collection. PATIENTS: All COVID-19 patients treated in eight intensive care units (ICUs) in the Capital Region of Copenhagen, Denmark, between March 1, 2020 and March 31, 2021. MEASUREMENTS: Data from the electronic health records were extracted, and latent class analyses were computed based on up to the first 3 weeks of mechanical ventilation to depict trajectories of PaCO2 levels. Multivariable Cox regression analyses, adjusted for Simplified Acute Physiology Score 3, sex and age, were used to calculate adjusted hazard ratios (aHRs) with 95% confidence intervals (CIs) for death according to PaCO2 trajectories. MAIN RESULTS: In latent class trajectory models, including 25,318 PaCO2 measurements from 244 patients, three PaCO2 latent class trajectories were identified: a low isocapnic (Class I; n = 130), a high isocapnic (Class II; n = 80), and a progressively hypercapnic (Class III; n = 34) trajectory. Mortality was higher in Class II (aHR 2.16 [1.26-3.68]) and Class III (aHR 2.97 [1.63-5.40]) compared to Class I (reference). CONCLUSION: Latent class analysis of arterial blood gases in mechanically ventilated COVID-19 patients identified distinct PaCO2 trajectories, which were independently associated with mortality.
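Latent class trajectory modelling is usually done with dedicated mixed-model software (e.g. lcmm in R); the sketch below is a deliberately simplified stand-in that reduces each patient's PaCO2 series to a mean and a slope, clusters these with a three-component Gaussian mixture, and then relates class membership to mortality in a Cox model. All variable names and simulated values are assumptions, and the approach only approximates the latent class analysis used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 244  # mechanically ventilated COVID-19 patients

# Simulated per-patient PaCO2 summaries (kPa): mean level and daily slope.
features = pd.DataFrame({
    "paco2_mean": rng.normal(5.5, 1.0, n),
    "paco2_slope": rng.normal(0.0, 0.1, n),
})

# Approximate three latent classes with a Gaussian mixture model.
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
latent_class = gmm.predict(features)

# Relate class membership to mortality, adjusting for age (illustrative only).
raw_time = rng.exponential(120, n)
surv = pd.DataFrame({
    "time_days": np.minimum(raw_time, 90),
    "died": (raw_time < 90).astype(int),
    "age": rng.normal(65, 12, n),
})
class_dummies = pd.get_dummies(pd.Series(latent_class), prefix="class",
                               drop_first=True).astype(int)
surv = pd.concat([surv, class_dummies], axis=1)

CoxPHFitter().fit(surv, duration_col="time_days", event_col="died").print_summary()
```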
Subject(s)
COVID-19 , Respiration, Artificial , Humans , Cohort Studies , Retrospective Studies , COVID-19/therapy , COVID-19/complications , Hypercapnia , Intensive Care Units
ABSTRACT
PURPOSE: It is well-known that revision rates after primary knee arthroplasty vary widely. However, it is uncertain whether hospital revision rates are reliable indicators of general surgical quality as defined by patients. The SPARK study compared primary knee arthroplasty surgery at three high-volume hospitals whose revision rates differed for unknown reasons. METHODS: This prospective observational study included primary knee arthroplasty patients (total, medial/lateral unicompartmental and patellofemoral) in two low-revision hospitals (Aarhus University Hospital and Aalborg University Hospital Farsø) and one high-revision hospital (Copenhagen University Hospital Herlev-Gentofte). Patients were followed from preoperative assessment (2016-17) to 1 year postoperatively with patient-reported outcome measures including Oxford Knee Score (OKS), EQ-5D-5L and Copenhagen Knee ROM (range of motion) Scale. The surgical outcomes were compared across hospitals for patients with comparable grades of radiographic knee osteoarthritis and preoperative OKS. Statistical comparisons (parametric and non-parametric) included all three hospitals. RESULTS: 97% of the 1452 patients who provided baseline data (89% of those included and 56% of those operated on) responded postoperatively (90% at 1 year). Hospitals' utilization of unicompartmental knee arthroplasties differed (Aarhus 49%, Aalborg 14%, and Copenhagen 22%, p < 0.001). 28 patients had revision surgery during the first year (independent of hospital, p = 0.1) and were subsequently excluded. 1-year OKS (39 ± 7) was independent of hospital (p = 0.1), even when adjusted for age, sex, Body Mass Index, baseline OKS and osteoarthritis grading. 15% of patients improved by less than the minimal important change (8 OKS points) (Aarhus 19%, Aalborg 13% and Copenhagen 14%; p = 0.051, unadjusted). Patients with comparable preoperative OKS or osteoarthritis grading had similar 1-year results across hospitals (OKS and willingness to repeat surgery, p ≥ 0.087) except for the 64 patients with Kellgren-Lawrence grade-4 (Aarhus 4-6 OKS points lower). 86% of patients were satisfied, and 92% were "willing to repeat surgery", independent of hospital (p ≥ 0.1). Differences in hospital revision rates diminished during the study period. CONCLUSIONS: Patients in hospitals with a history of differing revision rates had comparable patient-reported outcomes 1 year after primary knee arthroplasty, supporting that surgical quality should not be evaluated by revision rates alone. Future studies should explore if revision rate variations may depend as much on revision thresholds and indications as on outcomes of primary surgery. LEVEL OF EVIDENCE: Level II (Prospective cohort study).
Subject(s)
Arthroplasty, Replacement, Knee , Osteoarthritis, Knee , Humans , Arthroplasty, Replacement, Knee/methods , Prospective Studies , Treatment Outcome , Osteoarthritis, Knee/surgery , Hospitals, University , Denmark
ABSTRACT
PURPOSE: Revision rates following primary knee arthroplasty vary by country, region and hospital. The SPARK study was initiated to compare primary surgery across three Danish regions with consistently different revision rates. The present study investigated whether the variations were associated with differences in the primary patient selection. METHODS: This prospective observational cohort study included patients scheduled between September 2016 and December 2017 for primary knee arthroplasty (total, medial/lateral unicompartmental or patellofemoral) at three high-volume hospitals, representing regions with 2-year cumulative revision rates of 1%, 2% and 5%, respectively. Hospitals were compared with respect to patient demographics, preoperative patient-reported outcome measures, motivations for surgery, implant selection, radiological osteoarthritis and the regional incidence of primary surgery. Statistical tests (parametric and non-parametric) comprised all three hospitals. RESULTS: Baseline data were provided by 1452 patients (89% of included patients, 56% of available patients). Patients in Copenhagen (Herlev-Gentofte Hospital, high-revision) were older (68.6 ± 9 years) than those in low-revision hospitals (Aarhus 66.6 ± 10 years and Aalborg (Farsø) 67.3 ± 9 years, p = 0.002). In Aalborg, patients had a higher Body Mass Index (mean 30.2 kg/m2 versus 28.2 (Aarhus) and 28.7 kg/m2 (Copenhagen), p < 0.001), were more likely to be male (56% versus 45% and 43%, respectively, p = 0.002), and exhibited fewer anxiety and depression symptoms (EQ-5D-5L) (24% versus 34% and 38%, p = 0.01). The preoperative Oxford Knee Score (23.3 ± 7), UCLA Activity Scale (4.7 ± 2), range of motion (Copenhagen Knee ROM Scale) and patient motivations were comparable across hospitals but varied with implant type. Radiological classification ≥ 2 was observed in 94% (Kellgren-Lawrence) and 67% (Ahlbäck) and was more frequent in Aarhus (low-revision) (p ≤ 0.02), where unicompartmental implants were utilized most (49% versus 14% (Aalborg) and 23% (Copenhagen), p < 0.001). In the Capital Region (Copenhagen), the incidence of surgery was 15-28% higher (p < 0.001). CONCLUSION: Patient-reported outcome measures prior to primary knee arthroplasty were comparable across hospitals with differing revision rates. While radiographic classifications and surgical incidence indicated higher thresholds for primary surgery in one low-revision hospital, most variations in patient and implant selection were contrary to well-known revision risk factors, suggesting that patient selection differences alone were unlikely to be responsible for the observed variation in revision rates across Danish hospitals. LEVEL OF EVIDENCE: II, Prospective cohort study.
Subject(s)
Knee Prosthesis , Osteoarthritis, Knee , Humans , Male , Female , Prospective Studies , Treatment Outcome , Osteoarthritis, Knee/surgery , Osteoarthritis, Knee/etiology , Reoperation , Knee Prosthesis/adverse effects , Hospitals, High-Volume , Denmark
ABSTRACT
BACKGROUND AND AIMS: Patients with inflammatory bowel disease (IBD) have been suggested to be at increased risk of urolithiasis, but the magnitude of risk and the impact of medical and surgical treatment on this risk remain unknown. We therefore aimed to determine the overall and treatment-related risk of urolithiasis in patients with IBD in a nationwide population-based cohort study. METHODS: Using national registers, we identified all patients with IBD and all cases of urolithiasis in Denmark during 1977-2018. We obtained information on all IBD medications and surgical procedures during 1995-2018. IBD cases were matched 1:10 on age and sex to non-IBD individuals. RESULTS: In total, 2,549 (3%) of 75,236 IBD patients and 11,258 (2%) of 767,403 non-IBD individuals developed urolithiasis, resulting in a 2-fold increased risk of urolithiasis (HR, 2.27; 95% CI, 2.17-2.38) in patients with IBD. The patients were also at increased risk of recurrent urolithiasis events (RR, 1.09; 95% CI: 1.04-1.15) and had increased risk of urolithiasis prior to IBD diagnosis (OR, 1.42; 95% CI: 1.34-1.50). After IBD diagnosis, risk of urolithiasis was associated with anti-TNF therapy and surgery. CONCLUSION: Patients with IBD had a 2-fold increased risk of urolithiasis after IBD diagnosis and a 42% increased risk prior to IBD diagnosis. Risk was increased in anti-TNF-exposed patients and after surgery, suggesting that IBD severity per se and surgery, with altered intestinal absorption, increase the risk of urolithiasis. Since stone formation is associated with adverse outcomes including sepsis, subpopulations of IBD patients, especially those undergoing strong immunosuppression, might benefit from additional urolithiasis screening.
Subject(s)
Colitis, Ulcerative , Crohn Disease , Inflammatory Bowel Diseases , Urolithiasis , Cohort Studies , Denmark/epidemiology , Humans , Inflammatory Bowel Diseases/complications , Inflammatory Bowel Diseases/epidemiology , Tumor Necrosis Factor Inhibitors , Urolithiasis/epidemiology
ABSTRACT
Objectives. Implantable cardioverter defibrillator (ICD) implantation in patients resuscitated from out-of-hospital cardiac arrest (OHCA) due to acute myocardial infarction (AMI) is controversial. Design. Consecutive survivors of OHCA due to AMI from two Danish tertiary heart centers from 2007 to 2011 were included. Predictors of ICD implantation, ICD therapy and long-term survival (5 years) were investigated. Patients with and without ICD implantation during the index hospital admission were included (implantation during the index admission is later described as early ICD implantation). Patients who received an ICD after hospital discharge were censored from further analyses at the time of implantation. Results. We identified 1,457 consecutive OHCA patients, and 292 (20%) of the cohort met the inclusion criteria. An ICD was implanted during hospital admission in 78 patients (27%). STEMI and successful revascularization were inversely and independently associated with ICD implantation (OR for STEMI = 0.37, 95% CI: 0.14-0.94; OR for revascularization = 0.11, 95% CI: 0.03-0.36), whereas age, sex, LVEF <35%, comorbidity burden and shockable first OHCA rhythm were not associated with ICD implantation. Appropriate ICD shock therapy during the follow-up period was noted in 15% of patients (n = 12). The five-year mortality rate was significantly lower in ICD patients (18% vs. 28%, log-rank p = 0.02), and this persisted after adjustment for prognostic factors (HR = 0.44, 95% CI: 0.23-0.88). This association was no longer found when using first event (death or appropriate shock, whichever came first) as the outcome variable (log-rank p = 0.9). Conclusions. Mortality after OHCA due to AMI was significantly lower in patients with early ICD implantation after adjustment for prognostic factors. When using appropriate shock and death as events, ICD patients had similar outcomes to patients without an ICD, which may suggest a survival benefit due to appropriate device therapy.
Subject(s)
Acute Coronary Syndrome , Defibrillators, Implantable , Heart Arrest , Survivors , Acute Coronary Syndrome/surgery , Defibrillators, Implantable/statistics & numerical data , Heart Arrest/epidemiology , Humans , Survival Analysis , Survivors/statistics & numerical data
ABSTRACT
OBJECTIVES: To investigate the effects of the glucagon-like peptide-1 analog exenatide on blood glucose, lactate clearance, and hemodynamic variables in comatose, resuscitated out-of-hospital cardiac arrest patients. DESIGN: Predefined post hoc analyses from a double-blind, randomized clinical trial. SETTING: The ICU of a tertiary heart center. PATIENTS: Consecutive sample of adult, comatose patients undergoing targeted temperature management after out-of-hospital cardiac arrest from a presumed cardiac cause, irrespective of the initial cardiac rhythm. INTERVENTIONS: Patients were randomized 1:1 to receive 6 hours and 15 minutes of infusion of either 17.4 µg of the glucagon-like peptide-1 analog exenatide (Byetta; Lilly) or placebo within 4 hours from sustained return of spontaneous circulation. The effects of exenatide were examined on the following prespecified covariates within the first 6 hours from study drug initiation: lactate level, blood glucose level, heart rate, mean arterial pressure, and combined dosage of norepinephrine and dopamine. MEASUREMENTS AND MAIN RESULTS: The population consisted of 106 patients receiving either exenatide or placebo. During the first 6 hours from study drug initiation, the levels of blood glucose and lactate decreased 17% (95% CI, 8.9-25%; p = 0.0004) and 21% (95% CI, 6.0-33%; p = 0.02) faster in patients receiving exenatide versus placebo, respectively. Exenatide increased heart rate by approximately 10 beats per minute compared to placebo (p < 0.0001). There was no effect of exenatide on other hemodynamic variables. CONCLUSIONS: In comatose out-of-hospital cardiac arrest patients, infusion with exenatide lowered blood glucose and resulted in increased clearance of lactate as well as increased heart rate. The clinical importance of these physiologic effects remains to be investigated.
Subject(s)
Blood Glucose/drug effects , Coma/metabolism , Coma/physiopathology , Exenatide/pharmacology , Glucagon-Like Peptide 1/analogs & derivatives , Heart Rate/drug effects , Lactic Acid/metabolism , Coma/blood , Coma/etiology , Double-Blind Method , Female , Hemodynamics/drug effects , Humans , Male , Middle Aged , Out-of-Hospital Cardiac Arrest/complications , Out-of-Hospital Cardiac Arrest/drug therapy
ABSTRACT
Aims: For patients surviving out-of-hospital cardiac arrest (OHCA) with a shockable rhythm, an implantable cardioverter defibrillator (ICD) is recommended for non-reversible causes of arrest. We aimed to determine factors associated with implantation of an ICD and survival in patients surviving non-AMI OHCA in a nationwide register covering all OHCAs in Denmark. Methods and results: We identified 36 950 OHCAs between 2001 and 2012; 1700 of these patients were ICD naïve, ≥18 years of age, had an arrest of non-AMI cardiac aetiology and survived until discharge. Six hundred and fifty-eight patients had an ICD implanted during the index admission. Associations with ICD implantation during the index admission were analysed with logistic regression; survival was assessed using Cox regression. Implantable cardioverter defibrillator implantation increased during the study period [odds ratio (OR) per 1-year increase: 1.04, 95% confidence interval (95% CI): 1.00-1.08, P = 0.03]. Non-shockable rhythm and age ≥70 years were associated with lower odds of ICD implantation (OR for non-shockable rhythm: 0.27, 95% CI: 0.19-0.37, P < 0.001; OR for age 70-79 years: 0.71, 95% CI: 0.52-0.98, P = 0.04; OR for age ≥80 years: 0.13, 95% CI: 0.07-0.22, P < 0.001). Non-AMI ischaemic heart disease, highest income tertile and chronic heart failure were associated with higher odds (OR for ischaemic heart disease: 2.51, 95% CI: 1.77-3.60, P < 0.001; OR for highest income tertile: 1.58, 95% CI: 1.06-2.23, P = 0.02; OR for heart failure: 1.77, 95% CI: 1.35-2.32, P < 0.001). Implantable cardioverter defibrillator implantation was associated with a lower risk of mortality (HR: 0.70, 95% CI: 0.53-0.92, P = 0.01). Conclusion: Implantable cardioverter defibrillator implantation rates increased over the study period. Chronic heart failure, previous ischaemic heart disease and high income were associated with ICD implantation, while older age and non-shockable rhythm were associated with lower odds of ICD implantation. Implantable cardioverter defibrillator implantation was associated with higher survival rates.
Subject(s)
Defibrillators, Implantable , Delivery of Health Care , Electric Countershock/instrumentation , Out-of-Hospital Cardiac Arrest/therapy , Public Sector , Socioeconomic Factors , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Comorbidity , Defibrillators, Implantable/adverse effects , Defibrillators, Implantable/economics , Delivery of Health Care/economics , Denmark/epidemiology , Electric Countershock/adverse effects , Electric Countershock/economics , Electric Countershock/mortality , Female , Financing, Government , Health Care Costs , Health Status , Humans , Income , Male , Middle Aged , Out-of-Hospital Cardiac Arrest/diagnosis , Out-of-Hospital Cardiac Arrest/economics , Out-of-Hospital Cardiac Arrest/mortality , Public Sector/economics , Registries , Retrospective Studies , Risk Factors , Time Factors , Treatment Outcome , Young Adult
ABSTRACT
BACKGROUND: Estimation of cardiac output (CO) is essential in the treatment of circulatory unstable patients. CO measured by pulmonary artery catheter thermodilution is considered the gold standard but carries a small risk of severe complications. Stroke volume and CO can be measured by transesophageal echocardiography (TEE), which is widely used during cardiac surgery. We hypothesized that Doppler-derived CO by 3-dimensional (3D) TEE would agree well with CO measured with pulmonary artery catheter thermodilution as a reference method, based on accurate measurements of the cross-sectional area of the left ventricular outflow tract. METHODS: The primary aim was a systematic comparison of Doppler-derived CO by 3D TEE with CO by thermodilution in a broad population of patients undergoing cardiac surgery. A subanalysis was performed comparing cross-sectional area by TEE with cardiac computed tomography (CT) angiography. Sixty-two patients, scheduled for elective heart surgery, were included; 1 was subsequently excluded for logistic reasons. Inclusion criteria were coronary artery bypass surgery (N = 42) and aortic valve replacement (N = 19). Exclusion criteria were chronic atrial fibrillation, left ventricular ejection fraction below 0.40 and intracardiac shunts. Nineteen randomly selected patients had a cardiac CT the day before surgery. All images were stored for blinded post hoc analyses, and Bland-Altman plots were used to assess agreement between measurement methods, defined as the bias (mean difference between methods), limits of agreement (equal to bias ± 2 standard deviations of the bias), and percentage error (limits of agreement divided by the mean of the 2 methods). Precision was determined for the individual methods (equal to 2 standard deviations of the bias between replicate measurements) to establish the acceptable limits of agreement. RESULTS: We found good precision for Doppler-derived CO measured by 3D TEE, but although the bias for Doppler-derived CO by 3D TEE compared to thermodilution was only 0.3 L/min (confidence interval, 0.04-0.58 L/min), there were wide limits of agreement (-1.8 to 2.5 L/min) with a percentage error of 55%. Measurements of cross-sectional area by 3D TEE had a low bias of -0.27 cm2 (confidence interval, -0.45 to -0.08) and a percentage error of 18% compared to cardiac CT angiography. CONCLUSIONS: Despite the low bias, the wide limits of agreement between Doppler-derived CO by 3D TEE and CO by thermodilution will limit clinical application, and the two methods can therefore not be considered interchangeable. The lack of agreement is not explained by a lack of agreement in the 3D measurement of the cross-sectional area.
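The agreement statistics defined in the methods (bias as the mean difference, limits of agreement as bias ± 2 standard deviations of the differences, and percentage error as the limits-of-agreement width relative to mean CO) can be computed directly; the sketch below uses simulated paired measurements and assumed variable names, not the study data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 61  # patients analysed

# Simulated paired cardiac-output measurements (L/min); values are illustrative only.
co_thermodilution = rng.normal(4.5, 1.0, n)
co_3d_tee = co_thermodilution + rng.normal(0.3, 1.1, n)

diff = co_3d_tee - co_thermodilution
bias = diff.mean()                                  # mean difference between methods
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 2 * sd, bias + 2 * sd    # limits of agreement
mean_co = (co_3d_tee.mean() + co_thermodilution.mean()) / 2
percentage_error = 100 * 2 * sd / mean_co           # LoA half-width relative to mean CO

print(f"bias = {bias:.2f} L/min, LoA = ({loa_low:.2f} to {loa_high:.2f}) L/min, "
      f"percentage error = {percentage_error:.0f}%")
```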
Subject(s)
Cardiac Output , Echocardiography, Doppler , Echocardiography, Transesophageal , Heart Ventricles/diagnostic imaging , Stroke Volume , Aged , Cardiac Surgical Procedures , Catheterization, Swan-Ganz , Catheters , Coronary Artery Bypass , Echocardiography, Three-Dimensional , Elective Surgical Procedures , Female , Humans , Male , Middle Aged , Monitoring, Intraoperative , Reproducibility of Results , Sample Size , Software , Thermodilution , Tomography, X-Ray Computed , Ventricular Function, Left
ABSTRACT
OBJECTIVES: Renal replacement therapy (RRT) is used to treat acute kidney injury as part of multi-organ failure. Its use and prognostic implications after out-of-hospital cardiac arrest (OHCA) are not well known. This study aimed to assess the incidence and use of RRT, and whether post-arrest RRT was associated with 30-day mortality, in Denmark in the years 2005-2013. METHODS: The Danish Cardiac Arrest Registry holds information on all OHCA patients in Denmark from 2005 to 2013. We identified 3,012 one-day survivors of OHCA, aged ≥18 years, with presumed cardiac aetiology of arrest, admitted to an ICU without previous RRT. Change in use of RRT during the study period was assessed using competing risk analysis. Mortality was assessed with Cox regression. RESULTS: On average, RRT was performed in 6% of the patient population, with an average annual increase of 1% (HR: 1.01, CI: 0.95-1.07, p = .69). The hazard of RRT was lower in patients receiving bystander cardiopulmonary resuscitation (CPR) (p < .001), in patients with a shockable primary rhythm (p = .009) and in elderly patients (p = .03). Socioeconomic factors did not influence the hazard of RRT, but patients admitted to tertiary centres had a higher hazard of RRT (p = .009). Use of RRT was associated with increased mortality in multivariate Cox regression (HR: 1.28, CI: 1.06-1.55, p = .01). CONCLUSION: Use of RRT as part of post-resuscitation care following OHCA did not increase from 2005 to 2013; use was more common in tertiary centres and in patients with negative prehospital predictors (no bystander CPR, non-shockable rhythm). RRT was associated with increased mortality.
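As a hedged illustration of the competing-risk analysis mentioned in the methods, the sketch below estimates the cumulative incidence of RRT with death treated as a competing event, using the Aalen-Johansen estimator in lifelines on simulated data; event codes, times and proportions are assumptions, not registry figures.

```python
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(5)
n = 3012  # one-day survivors of OHCA admitted to an ICU

# Simulated event data: 0 = censored, 1 = RRT initiated, 2 = died without RRT.
time_days = rng.exponential(15, n)
event = rng.choice([0, 1, 2], size=n, p=[0.55, 0.06, 0.39])

# Cumulative incidence of RRT, treating death as a competing risk.
ajf = AalenJohansenFitter()
ajf.fit(time_days, event, event_of_interest=1)
print(ajf.cumulative_density_.tail())
```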