ABSTRACT
BACKGROUND: A novel hydrophobically modified chitosan (hm-chitosan) polymer has previously been shown to improve survival in a non-compressible intra-abdominal bleeding model in swine. We performed a 28-day survival study to evaluate the safety of the hm-chitosan polymer in swine. METHODS: Female Yorkshire swine (40-50 kg) were used. A mild, non-compressible, closed-cavity bleeding model was created by splenic transection. The hm-chitosan polymer was applied intra-abdominally through an umbilical nozzle in the same composition and dose previously shown to improve survival. Animals were monitored intraoperatively and followed for 28 days postoperatively for survival, signs of pain, and end-organ function. Gross pathological and microscopic evaluations were performed at the conclusion of the experiment. RESULTS: A total of 10 animals were included (hm-chitosan = 8; control = 2). The 2 control animals survived through 28 days, and 7 of the 8 animals in the hm-chitosan group survived without any adverse events. One animal in the hm-chitosan group required early termination of the study for signs of pain, and superficial colonic ulcers were found on autopsy. Laboratory tests showed no signs of end-organ dysfunction 28 days after exposure to hm-chitosan. On gross pathological examination, small (<0.5 cm) peritoneal nodules were noted in the hm-chitosan group; on microscopy, these were consistent with a giant-cell foreign body reaction, presumably related to polymer remnants. Microscopically, no signs of systemic polymer embolization or thrombosis were noted. CONCLUSION: Prolonged intraperitoneal exposure to the hm-chitosan polymer was tolerated without any adverse event in the majority of animals; even in the single animal that required early termination, the material did not appear to be associated with end-organ dysfunction. Superficial colonic ulcers that would require surgical repair were identified in 1 of the 8 animals exposed to hm-chitosan.
Subject(s)
Chitosan, Female, Animals, Swine, Chitosan/adverse effects, Multiple Organ Failure, Ulcer, Hemorrhage/etiology, Hemorrhage/therapy, Biopolymers, Pain
ABSTRACT
BACKGROUND: Existing methodologies for benchmarking the quality of surgical care are linear and fail to capture the complex interactions of preoperative variables. We sought to leverage novel nonlinear artificial intelligence methodologies to benchmark emergency surgical care. METHODS: Using a nonlinear but interpretable artificial intelligence methodology called optimal classification trees, first, the overall observed mortality rate of the index hospital's emergency surgery population (index cohort) was compared to the risk-adjusted expected mortality rate calculated by the optimal classification trees from the American College of Surgeons National Surgical Quality Improvement Program database (benchmark cohort). Second, the optimal classification trees created different "nodes" of care representing specific patient phenotypes, defined automatically and without human interference to optimize prediction. These nodes capture multiple iterative risk-adjusted comparisons, permitting the identification of specific areas of excellence and areas for improvement. RESULTS: The index and benchmark cohorts included 1,600 and 637,086 patients, respectively. The observed and risk-adjusted expected mortality rates of the index cohort calculated by optimal classification trees were similar (8.06% [95% confidence interval: 6.8-9.5] vs 7.53%, respectively, P = .42). Two areas of excellence and 4 areas for improvement were identified. For example, the index cohort had lower-than-expected mortality when patients were older than 75 years and in respiratory failure and septic shock preoperatively, but higher-than-expected mortality when patients had respiratory failure preoperatively and were thrombocytopenic, with an international normalized ratio ≤1.7. CONCLUSION: We used artificial intelligence methodology to benchmark the quality of emergency surgical care. Such nonlinear and interpretable methods promise a more comprehensive evaluation and a deeper dive into areas of excellence versus suboptimal care.
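As an illustrative sketch of this observed-versus-expected benchmarking logic (not the study's actual code): the paper used proprietary optimal classification trees, so a standard scikit-learn decision tree stands in below, and `benchmark_X`, `benchmark_y`, `index_X`, and `index_y` are hypothetical NumPy feature matrices and 0/1 mortality labels.

```python
# Hypothetical sketch of observed-vs-expected benchmarking with an
# interpretable tree; a CART tree stands in for the proprietary OCTs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=100)
tree.fit(benchmark_X, benchmark_y)  # learn risk strata on the national cohort

observed = index_y.mean()                            # index-hospital mortality
expected = tree.predict_proba(index_X)[:, 1].mean()  # risk-adjusted expectation
print(f"observed {observed:.4f} vs expected {expected:.4f}")

# Each leaf is a patient phenotype ("node" of care); an observed-vs-expected
# gap within a leaf flags an area of excellence or an area for improvement.
leaves = tree.apply(index_X)
for leaf in np.unique(leaves):
    mask = leaves == leaf
    print(leaf, mask.sum(), index_y[mask].mean(),
          tree.predict_proba(index_X[mask])[:, 1].mean())
```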
Subject(s)
Emergency Medical Services, Respiratory Insufficiency, Humans, Artificial Intelligence, Benchmarking, Factual Databases
ABSTRACT
Importance: The use of artificial intelligence (AI) in clinical medicine risks perpetuating existing bias in care, such as disparities in access to postinjury rehabilitation services. Objective: To leverage a novel, interpretable AI-based technology to uncover racial disparities in access to postinjury rehabilitation care and create an AI-based prescriptive tool to address these disparities. Design, Setting, and Participants: This cohort study used data from the 2010-2016 American College of Surgeons Trauma Quality Improvement Program database for Black and White patients with a penetrating mechanism of injury. An interpretable AI methodology called optimal classification trees (OCTs) was applied in an 80:20 derivation/validation split to predict discharge disposition (home vs postacute care [PAC]). The interpretable nature of OCTs allowed for examination of the AI logic to identify racial disparities. A prescriptive mixed-integer optimization model using age, injury, and gender data was allowed to "fairness-flip" the recommended discharge destination for a subset of patients while minimizing the ratio of imbalance between Black and White patients. Three OCTs were developed to predict discharge disposition: the first 2 trees used unadjusted data (one without and one with the race variable), and the third tree used fairness-adjusted data. Main Outcomes and Measures: Disparities and the discriminative performance (C statistic) were compared among fairness-adjusted and unadjusted OCTs. Results: A total of 52,468 patients were included; the median (IQR) age was 29 (22-40) years, 46,189 patients (88.0%) were male, 31,470 (60.0%) were Black, and 20,998 (40.0%) were White. A total of 3800 Black patients (12.1%) were discharged to PAC, compared with 4504 White patients (21.5%; P < .001). Examining the AI logic uncovered significant disparities in PAC discharge destination access, with race playing the second most important role. The prescriptive fairness adjustment recommended flipping the discharge destination of 4.5% of the patients, with the performance of the adjusted model increasing from a C statistic of 0.79 to 0.87. After fairness adjustment, disparities disappeared, and a similar percentage of Black and White patients (15.8% vs 15.8%; P = .87) had a recommended discharge to PAC. Conclusions and Relevance: In this study, we developed an accurate, machine learning-based, fairness-adjusted model that can identify barriers to discharge to postacute care. Instead of accidentally encoding bias, interpretable AI methodologies are powerful tools to diagnose and remedy system-related bias in care, such as disparities in access to postinjury rehabilitation care.
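The "fairness-flip" step can be sketched in reduced form. The study solves a mixed-integer optimization; the greedy loop below is a deliberately simplified, hypothetical stand-in that only conveys the idea of equalizing postacute care (PAC) recommendation rates between groups. The DataFrame `df` and its columns `race`, `pac_recommended`, and `pac_score` are assumptions.

```python
# Hypothetical, simplified stand-in for the paper's mixed-integer
# "fairness-flip": raise the disadvantaged group's PAC recommendation rate
# to the other group's rate by flipping its highest-scoring candidates.
import pandas as pd

def fairness_flip(df: pd.DataFrame) -> pd.DataFrame:
    rates = df.groupby('race')['pac_recommended'].mean()
    low, high = rates.idxmin(), rates.idxmax()
    candidates = (df[(df['race'] == low) & (df['pac_recommended'] == 0)]
                  .sort_values('pac_score', ascending=False))
    for idx in candidates.index:
        if df.loc[df['race'] == low, 'pac_recommended'].mean() >= rates[high]:
            break  # recommendation rates are now balanced
        df.loc[idx, 'pac_recommended'] = 1  # flip home -> PAC
    return df
```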
ABSTRACT
We sought to study the role of circulating cellular clusters (CCC), such as circulating leukocyte clusters (CLCs), platelet-leukocyte aggregates (PLAs), and platelet-erythrocyte aggregates (PEAs), in the immunothrombotic state induced by COVID-19. Forty-six blood samples from 37 COVID-19 patients and 12 samples from healthy controls were analyzed with imaging flow cytometry. Patients with COVID-19 had significantly higher levels of PEAs (p value < 0.001) and PLAs (p value = 0.015) compared to healthy controls. Among COVID-19 patients, CLCs were correlated with thrombotic complications (p value = 0.016), vasopressor need (p value = 0.033), acute kidney injury (p value = 0.027), and pneumonia (p value = 0.036), whereas PEAs were associated with positive bacterial cultures (p value = 0.033). In predictive in silico simulations, CLCs were more likely to result in microcirculatory obstruction at low flow velocities (≤1 mm/s) and at higher branching angles. Further studies on the cellular component of hyperinflammatory prothrombotic states may lead to the identification of novel biomarkers and drug targets for inflammation-related thrombosis.
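For group comparisons like those above (small samples, skewed cytometry counts), a nonparametric test is a natural fit. The snippet below is a hypothetical illustration, not the study's code; `pea_covid` and `pea_control` are assumed lists of per-sample PEA counts.

```python
# Hypothetical sketch: nonparametric comparison of per-sample aggregate counts.
from scipy.stats import mannwhitneyu

stat, p = mannwhitneyu(pea_covid, pea_control, alternative='two-sided')
print(f"PEA, COVID-19 vs controls: U = {stat:.0f}, p = {p:.4f}")
```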
ABSTRACT
BACKGROUND: Artificial intelligence (AI) risk prediction algorithms such as the smartphone-available Predictive OpTimal Trees in Emergency Surgery Risk (POTTER) for emergency general surgery (EGS) are superior to traditional risk calculators because they account for complex nonlinear interactions between variables, but how they compare to surgeons' gestalt remains unknown. Herein, we sought to: (1) compare POTTER to surgeons' surgical risk estimation and (2) assess how POTTER influences surgeons' risk estimation. STUDY DESIGN: A total of 150 patients who underwent EGS at a large quaternary care center between May 2018 and May 2019 were prospectively followed up for 30-day postoperative outcomes (mortality, septic shock, ventilator dependence, bleeding requiring transfusion, pneumonia), and clinical cases were systematically created representing their initial presentation. POTTER's outcome predictions for each case were also recorded. Thirty acute care surgeons with diverse practice settings and levels of experience were then randomized into two groups: 15 surgeons (SURG) were asked to predict the outcomes without access to POTTER's predictions while the remaining 15 (SURG-POTTER) were asked to predict the same outcomes after interacting with POTTER. Compared with actual patient outcomes, the area under the curve (AUC) methodology was used to assess the predictive performance of (1) POTTER versus SURG, and (2) SURG versus SURG-POTTER. RESULTS: POTTER outperformed SURG in predicting all outcomes (mortality-AUC: 0.880 vs. 0.841; ventilator dependence-AUC: 0.928 vs. 0.833; bleeding-AUC: 0.832 vs. 0.735; pneumonia-AUC: 0.837 vs. 0.753) except septic shock (AUC: 0.816 vs. 0.820). SURG-POTTER outperformed SURG in predicting mortality (AUC: 0.870 vs. 0.841), bleeding (AUC: 0.811 vs. 0.735), and pneumonia (AUC: 0.803 vs. 0.753), but not septic shock (AUC: 0.712 vs. 0.820) or ventilator dependence (AUC: 0.834 vs. 0.833). CONCLUSION: The AI risk calculator POTTER outperformed surgeons' gestalt in predicting the postoperative mortality and outcomes of EGS patients, and when used, improved the individual surgeons' risk prediction. Artificial intelligence algorithms, such as POTTER, could prove useful as a bedside adjunct to surgeons when preoperatively counseling patients. LEVEL OF EVIDENCE: Prognostic and Epidemiological; Level II.
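The head-to-head AUC comparison reduces to a few lines. This is a hypothetical sketch: `y_true` is the observed 0/1 outcome per case, and the three risk vectors (POTTER's predicted probabilities and each surgeon group's estimates) are assumed names, not the study's variables.

```python
# Hypothetical sketch of the AUC comparisons against observed outcomes.
from sklearn.metrics import roc_auc_score

auc_potter      = roc_auc_score(y_true, potter_risk)       # POTTER alone
auc_surg        = roc_auc_score(y_true, surg_risk)         # surgeons alone
auc_surg_potter = roc_auc_score(y_true, surg_potter_risk)  # surgeons + POTTER
print(auc_potter, auc_surg, auc_surg_potter)
```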
Subject(s)
Artificial Intelligence, Surgeons, Humans, Postoperative Complications/epidemiology, Postoperative Complications/etiology, Risk Assessment/methods, Prognosis
ABSTRACT
INTRODUCTION: Preperitoneal pelvic packing (PPP) is an important intervention for control of severe pelvic hemorrhage in blunt trauma patients. We hypothesized that PPP is associated with an increased incidence of deep vein thrombosis (DVT) and pulmonary embolism (PE). METHODS: A retrospective cohort analysis of blunt trauma patients with severe pelvic fractures (Abbreviated Injury Scale [AIS] ≥4) using the 2015-2017 American College of Surgeons-Trauma Quality Improvement Program database was performed. Patients who underwent PPP within four hours of admission were matched to patients who did not using propensity score matching. Matching was performed based on demographics, comorbidities, injury- and resuscitation-related parameters, vital signs at presentation, and initiation and type of prophylactic anticoagulation. The rates of DVT and PE were compared between the matched groups. RESULTS: Out of 5129 patients with severe pelvic fractures, 157 (3.1%) underwent PPP within four hours of presentation and were matched with 157 who did not. No significant differences were detected between the two matched groups in any of the examined baseline variables. Similarly, mortality and end-organ failure rates were not different. However, PPP patients were significantly more likely to develop DVT (12.7% versus 5.1%, P = 0.028) and PE (5.7% versus 0.0%, P = 0.003). CONCLUSIONS: PPP in severe pelvic fractures secondary to blunt trauma is associated with an increased risk of DVT and PE. A high index of suspicion and a low threshold for screening for these conditions should be maintained in patients who undergo PPP.
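Propensity score matching of this kind can be sketched as follows: a minimal, hypothetical illustration of 1:1 nearest-neighbor matching without replacement. The DataFrame `df`, its covariate columns, and the `ppp` treatment flag are assumed names, and the study matched on a far richer covariate set.

```python
# Minimal propensity score matching sketch (hypothetical column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression

covariates = ['age', 'iss', 'sbp', 'hr']  # assumed numeric covariate columns
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df['ppp'])
df['ps'] = ps.predict_proba(df[covariates])[:, 1]  # propensity of PPP

treated = df[df['ppp'] == 1]
controls = df[df['ppp'] == 0].copy()
matched_ids = []
for _, row in treated.iterrows():
    j = (controls['ps'] - row['ps']).abs().idxmin()  # nearest control by score
    matched_ids.append(j)
    controls = controls.drop(j)                      # each control used once
matched = pd.concat([treated, df.loc[matched_ids]])  # matched cohort for outcome comparison
```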
Subject(s)
Bone Fractures, Pelvic Bones, Pulmonary Embolism, Venous Thromboembolism, Nonpenetrating Wounds, Humans, Venous Thromboembolism/epidemiology, Venous Thromboembolism/etiology, Venous Thromboembolism/prevention & control, Retrospective Studies, Pelvic Bones/injuries, Pulmonary Embolism/epidemiology, Pulmonary Embolism/etiology, Pulmonary Embolism/prevention & control, Bone Fractures/etiology, Bone Fractures/complications, Nonpenetrating Wounds/complications, Nonpenetrating Wounds/diagnosis, Nonpenetrating Wounds/epidemiology, Anticoagulants
ABSTRACT
Objective: To determine whether the outcomes of postoperative patients admitted directly to an intensive care unit (ICU) differ based on the academic status of the institution and the total operative volume of the unit. Methods: This was a retrospective analysis using the eICU Collaborative Research Database v2.0, a national database from participating ICUs in the United States. All patients admitted directly to the ICU from the operating room were included. Transfer patients and patients readmitted to the ICU were excluded. Patients were stratified based on admission to an ICU in an academic medical center (AMC) versus non-AMC, and to ICUs with different operative volume experience, after stratification into quartiles (high, medium-high, medium-low, and low volume). Primary outcomes were ICU and hospital mortality. Secondary outcomes included the need for continuous renal replacement therapy (CRRT) during ICU stay, ICU length of stay (LOS), and 30-day ventilator-free days. Results: Our analysis included 22,180 unique patients, the majority of whom (15,085 [68%]) were admitted to ICUs in non-AMCs. Cardiac and vascular procedures were the most common types of procedures performed. Patients admitted to AMCs were more likely to be younger and less likely to be Hispanic or Asian. Multivariable logistic regression indicated no meaningful association between academic status and ICU mortality, hospital mortality, initiation of CRRT, ICU LOS, or 30-day ventilator-free days. In contrast, medium-high operative volume units had higher ICU mortality (OR = 1.45, 95%CI = 1.10-1.91, p-value = 0.040), higher hospital mortality (OR = 1.33, 95%CI = 1.07-1.66, p-value = 0.033), longer ICU LOS (Coefficient = 0.23, 95%CI = 0.07-0.39, p-value = 0.038), and fewer 30-day ventilator-free days (Coefficient = -0.30, 95%CI = -0.48 to -0.13, p-value = 0.015) compared to their high operative volume counterparts. Conclusions: This study found that a volume-outcome association may exist in the management of postoperative patients requiring ICU-level care immediately after a surgical procedure. The academic status of the institution did not affect the outcomes of these patients.
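The quartile-based, risk-adjusted comparison can be sketched with standard tooling. Hypothetical illustration only: `df`, the `icu_volume` column, and the confounders in the formula are assumed names, not the study's variable set.

```python
# Hypothetical sketch: volume quartiles plus multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per admission; bin the admitting unit's operative volume in quartiles.
df['volume_q'] = pd.qcut(df['icu_volume'], 4,
                         labels=['low', 'medium_low', 'medium_high', 'high'])
m = smf.logit("icu_mortality ~ C(volume_q, Treatment(reference='high'))"
              " + age + apache + academic", data=df).fit()
print(np.exp(m.params))      # odds ratios relative to high-volume units
print(np.exp(m.conf_int()))  # 95% confidence intervals
```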
Subject(s)
Critical Care, Intensive Care Units, Humans, United States/epidemiology, Retrospective Studies, Hospital Mortality, Length of Stay, Hospitals
ABSTRACT
BACKGROUND: Delays in admitting high-risk emergency surgery patients to the intensive care unit result in worse outcomes and increased health care costs. We aimed to use interpretable artificial intelligence technology to create a preoperative predictor for postoperative intensive care unit need in emergency surgery patients. METHODS: A novel, interpretable artificial intelligence technology called optimal classification trees was leveraged in an 80:20 train:test split of adult emergency surgery patients in the 2007-2017 American College of Surgeons National Surgical Quality Improvement Program database. Demographics, comorbidities, and laboratory values were used to develop, train, and then validate optimal classification tree algorithms to predict the need for postoperative intensive care unit admission. The latter was defined as postoperative death or the development of 1 or more postoperative complications warranting critical care (eg, unplanned intubation, ventilator requirement ≥48 hours, cardiac arrest requiring cardiopulmonary resuscitation, or septic shock). An interactive and user-friendly application was created. C statistics were used to measure performance. RESULTS: A total of 464,861 patients were included. The mean age was 55 years, 48% were male, and 11% developed severe postoperative complications warranting critical care. The Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application was created as the user-friendly interface of the complex optimal classification tree algorithms. The number of questions (ie, tree depth) needed to predict intensive care unit admission ranged from 2 to 11. The Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application had excellent discrimination for predicting the need for intensive care unit admission (C statistics: 0.89 train, 0.88 test). CONCLUSION: We recommend the Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application as an accurate, artificial intelligence-based tool for predicting severe complications warranting intensive care unit admission after emergency surgery. The Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application can prove useful to triage patients to the intensive care unit and to potentially decrease failure to rescue in emergency surgery patients.
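The composite outcome and the 80:20 split translate directly into code. A minimal sketch, assuming NSQIP-style 0/1 complication flags in a pandas DataFrame `df`; the flag names are illustrative, not the paper's variable names.

```python
# Hypothetical sketch of the composite "ICU need" label and the 80:20 split.
from sklearn.model_selection import train_test_split

flags = ['death_30d', 'unplanned_intubation', 'vent_ge_48h',
         'arrest_cpr', 'septic_shock']                    # assumed 0/1 columns
df['icu_need'] = (df[flags].sum(axis=1) > 0).astype(int)  # any critical event
train, test = train_test_split(df, test_size=0.2, random_state=0)  # 80:20
```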
Subject(s)
Artificial Intelligence, Smartphone, Adult, Critical Care, Female, Humans, Intensive Care Units, Male, Middle Aged, Postoperative Complications/epidemiology, Postoperative Complications/etiology, Retrospective Studies
ABSTRACT
BACKGROUND: Balanced blood component administration during massive transfusion is standard of care. Most literature focuses on the impact of the red blood cell (RBC)/fresh frozen plasma (FFP) ratio, while the value of balanced RBC:platelet (PLT) administration is less established. The aim of this study was to evaluate and quantify the independent impact of the RBC:PLT ratio on 24-hour mortality in trauma patients receiving massive transfusion. METHODS: Using the 2013 to 2018 American College of Surgeons Trauma Quality Improvement Program database, adult patients who received massive transfusion (≥10 U of RBC/24 hours) and ≥1 U of RBC, FFP, and PLT within 4 hours of arrival were retrospectively included. To mitigate survival bias, only patients with consistent RBC:PLT and RBC:FFP ratios between 4 and 24 hours were analyzed. Balanced FFP or PLT transfusion was defined as an RBC:FFP or RBC:PLT ratio of ≤2, respectively. Multivariable logistic regression was used to evaluate the independent associations of RBC:FFP ratio, RBC:PLT ratio, and balanced transfusion with 24-hour mortality. RESULTS: A total of 9,215 massive transfusion patients were included. The number of patients who received transfusion with RBC:PLT >2 (1,942 [21.1%]) was significantly higher than those with RBC:FFP >2 (1,160 [12.6%]) (p < 0.001). Compared with an RBC:PLT ratio of 1:1, a gradual and consistent risk increase was observed for 24-hour mortality as the RBC:PLT ratio increased (p < 0.001). Patients with both FFP- and PLT-balanced transfusion had the lowest adjusted risk for 24-hour mortality. Mortality increased as resuscitation became more unbalanced, with higher odds of death for unbalanced PLT (odds ratio, 2.48 [2.18-2.83]) than unbalanced FFP (odds ratio, 1.66 [1.37-1.98]), while patients who received both FFP- and PLT-unbalanced transfusion had the highest risk of 24-hour mortality (odds ratio, 3.41 [2.74-4.24]). CONCLUSION: Trauma patients receiving massive transfusion significantly more often have unbalanced PLT rather than unbalanced FFP transfusion. The impact of unbalanced PLT transfusion on 24-hour mortality is independent and potentially more pronounced than that of unbalanced FFP transfusion, warranting serious system-level efforts for improvement. LEVEL OF EVIDENCE: Therapeutic/Care Management; Level IV.
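The ratio definitions map directly onto a few lines of pandas. A hypothetical sketch, assuming `df` holds per-patient unit counts within the qualifying window and a 24-hour mortality flag; all column names are assumptions.

```python
# Hypothetical sketch of the balanced/unbalanced transfusion definitions.
df['ffp_balanced'] = (df['rbc_units'] / df['ffp_units']) <= 2  # RBC:FFP <= 2
df['plt_balanced'] = (df['rbc_units'] / df['plt_units']) <= 2  # RBC:PLT <= 2

# Four resuscitation patterns, from fully balanced to fully unbalanced:
df['pattern'] = (df['ffp_balanced'].map({True: 'FFP<=2', False: 'FFP>2'})
                 + '/' + df['plt_balanced'].map({True: 'PLT<=2', False: 'PLT>2'}))
mortality_by_pattern = df.groupby('pattern')['death_24h'].mean()  # crude rates
```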
Subject(s)
Blood Platelets, Erythrocyte Transfusion, Adult, Blood Component Transfusion, Erythrocytes, Humans, Retrospective Studies
ABSTRACT
INTRODUCTION: Emergency physicians and trauma surgeons are increasingly confronted with pre-injury direct oral anticoagulants (DOACs). The objective of this study was to assess whether pre-injury DOAC use, compared with vitamin K antagonists (VKAs) or no oral anticoagulants, is independently associated with differences in treatment, mortality, and inpatient rehabilitation requirement. METHODS: We performed a review of the prospectively maintained institutional trauma registry at an urban academic level 1 trauma center. We included all geriatric patients (aged ≥65 years) with traumatic intracranial hemorrhage (tICH) after a fall, admitted between January 2011 and December 2018. Multivariable logistic regression analyses controlling for demographics, comorbidities, vital signs, and tICH types were performed to identify the association between pre-injury anticoagulants and reversal agent use, neurosurgical interventions, in-hospital mortality, 3-day mortality, and discharge to inpatient rehabilitation. RESULTS: A total of 1453 tICH patients were included (52 DOAC, 376 VKA, 1025 control). DOAC use was independently associated with lower odds of receiving specific reversal agents [odds ratio (OR) 0.28, 95% confidence interval (CI) 0.15-0.54] than VKA use. DOAC use was independently associated with higher odds of requiring neurosurgical intervention (OR 3.14, 95% CI 1.36-7.28). VKA use, but not DOAC use, was independently associated with in-hospital mortality or discharge to hospice care (OR 1.62, 95% CI 1.15-2.27) compared to controls. VKA use was independently associated with higher odds of discharge to inpatient rehabilitation (OR 1.41, 95% CI 1.06-1.87) compared to controls. CONCLUSION: Despite higher neurosurgical intervention rates, pre-injury DOAC use was associated with rates of mortality and discharge to inpatient rehabilitation comparable to those of patients without anticoagulation exposure. Future research should focus on risk assessment and stratification of DOAC-exposed trauma patients.
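The adjusted comparisons above follow the usual multivariable logistic pattern. A hypothetical sketch with statsmodels, assuming a three-level exposure column `anticoag` ('none', 'vka', 'doac') and illustrative confounders; the study's actual adjustment set was broader.

```python
# Hypothetical sketch of adjusted odds ratios per anticoagulant class.
import numpy as np
import statsmodels.formula.api as smf

m = smf.logit("neurosurgery ~ C(anticoag, Treatment(reference='none'))"
              " + age + gcs + charlson", data=df).fit()
print(np.exp(m.params))      # adjusted ORs for VKA and DOAC vs no anticoagulant
print(np.exp(m.conf_int()))  # 95% confidence intervals
```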
Subject(s)
Traumatic Intracranial Hemorrhage, Oral Administration, Aged, Anticoagulants, Fibrinolytic Agents, Humans, Treatment Outcome, Vitamin K
ABSTRACT
BACKGROUND: In military combat settings, noncompressible closed-cavity exsanguination is the leading cause of potentially survivable deaths, with no effective treatment available at the point of injury. The aim of this study was to assess whether an expanding foam based on hydrophobically modified chitosan (hm-chitosan) may be used as a locally injectable hemostatic agent for the treatment of noncompressible bleeding in a swine model. METHODS: A closed-cavity, grade V hepatoportal injury was created in all animals, resulting in massive noncoagulopathic, noncompressible bleeding. Animals received either fluid resuscitation alone (control, n = 8) or fluid resuscitation plus intraperitoneal hm-chitosan agent through an umbilical port (experimental, n = 18). The experiment was terminated at 180 minutes or death (defined as end-tidal CO2 <8 mmHg or mean arterial pressure [MAP] <15 mmHg), whichever came first. RESULTS: All animals had profound hypotension and experienced a near-arrest from hypovolemic shock (mean MAP = 24 mmHg at 10 minutes). Mean survival time exceeded 150 minutes in the experimental arm versus 27 minutes in the control arm (P < .001). Three-hour survival was 72% in the experimental group and 0% in the control group (P = .002). Hm-chitosan stabilized rising lactate, preventing acute lethal acidosis. MAP improved drastically after deployment of the hm-chitosan foam and was preserved at 60 mmHg throughout the 3 hours. Postmortem examination was performed in all animals, and the hepatoportal injuries were anatomically similar. CONCLUSION: Intraperitoneal administration of hm-chitosan-based foam for massive, noncompressible abdominal bleeding improves survival in a lethal, closed-cavity swine model. Chronic safety and toxicity studies are required.
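The reported 3-hour survival difference can be checked from the group sizes alone. The sketch below applies Fisher's exact test to the 2x2 table reconstructed from the reported percentages (72% of 18 experimental animals alive at 180 minutes, i.e., 13 of 18, versus 0 of 8 controls).

```python
# Fisher's exact test on the 2x2 survival table reconstructed from the
# reported percentages (72% of 18 vs 0% of 8 alive at 180 minutes).
from scipy.stats import fisher_exact

table = [[13, 5],  # experimental: alive, dead
         [0, 8]]   # control: alive, dead
odds_ratio, p = fisher_exact(table)
print(f"p = {p:.3f}")  # ~0.002, matching the reported P = .002
```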
Subject(s)
Chitosan, Hemostatics, Animals, Animal Disease Models, Fluid Therapy/adverse effects, Hemorrhage/etiology, Hemorrhage/therapy, Hemostatic Techniques, Hemostatics/therapeutic use, Humans, Swine
ABSTRACT
INTRODUCTION: Intraoperative deaths (IODs) are rare but catastrophic. We systematically analyzed IODs to identify clinical and patient safety patterns. METHODS: IODs in a large academic center between 2015 and 2019 were included. Perioperative details were systematically reviewed, focusing on (1) identifying phenotypes of IOD, (2) describing emerging themes immediately preceding cardiac arrest, and (3) suggesting interventions to mitigate IOD in each phenotype. RESULTS: Forty-one patients were included. Three IOD phenotypes were identified: trauma (T), nontrauma emergency (NT), and elective (EL) surgery patients, each with 2 sub-phenotypes (e.g., ELm and ELv for elective surgery with medical arrests or vascular injury and bleeding, respectively). In phenotype T, cardiopulmonary resuscitation was initiated before incision in 42%, resuscitative thoracotomy was performed in 33%, and transient return of spontaneous circulation was achieved in 30% of patients. In phenotype NT, ruptured aortic aneurysms accounted for half the cases, and median blood product utilization was 2,694 mL. In phenotype ELm, preoperative evaluation did not include an electrocardiogram in 12%, a cardiac consultation in 62%, a stress test in 87%, or a chest x-ray in 37% of patients. In phenotype ELv, 83% had a single peripheral intravenous line, and vascular injury was almost always followed by escalation in monitoring (e.g., central/arterial line), an alert to the blood bank, and a call for surgical backup. CONCLUSIONS: We have created a framework for IOD that can help with intraoperative safety and quality analysis. Focusing on interventions that address appropriateness versus futility of care in phenotypes T and NT, and on prevention and mitigation of intraoperative vessel injury (e.g., an intraoperative rescue team) or preoperative optimization in phenotype EL, may help prevent IODs.
Subject(s)
Cardiopulmonary Resuscitation, Heart Arrest, Vascular System Injuries, Heart Arrest/etiology, Heart Arrest/prevention & control, Hemorrhage, Humans, Thoracotomy
ABSTRACT
BACKGROUND: Ischemic gastrointestinal complications (IGIC) following cardiac surgery are associated with high morbidity and mortality and remain difficult to predict. We evaluated perioperative risk factors for IGIC in patients undergoing open cardiac surgery. METHODS: All patients who underwent an open cardiac surgical procedure at a tertiary academic center between 2011 and 2017 were included. The primary outcome was IGIC, defined as acute mesenteric ischemia necessitating a surgical intervention or postoperative gastrointestinal bleeding that was proven to be of ischemic etiology and necessitated blood product transfusion. A backward stepwise regression model was constructed to identify perioperative predictors of IGIC. RESULTS: Of 6862 patients who underwent cardiac surgery during the study period, 52 (0.8%) developed IGIC. The highest incidence of IGIC (1.9%) was noted in patients undergoing concomitant coronary artery, valvular, and aortic procedures. The multivariable regression identified hypertension (odds ratio [OR] = 5.74), preoperative renal failure requiring dialysis (OR = 3.62), immunocompromised status (OR = 2.64), chronic lung disease (OR = 2.61), and history of heart failure (OR = 2.03) as independent predictors of postoperative IGIC. Pre- or intraoperative utilization of an intra-aortic balloon pump or catheter-based assist devices (OR = 4.54), intraoperative transfusion requirement of >4 red blood cell (RBC) units (OR = 2.47), and cardiopulmonary bypass >180 min (OR = 2.28) were also identified as independent predictors of the development of IGIC. CONCLUSIONS: We identified preoperative and intraoperative risk factors that independently increase the risk of developing postoperative IGIC after cardiac surgery. A high index of suspicion must be maintained, and any deviation from the expected recovery course in patients with the above-identified risk factors should trigger an immediate evaluation with the involvement of the acute care surgical team.
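Backward stepwise selection of the kind described can be sketched in a short loop. Hypothetical illustration only: statsmodels has no built-in stepwise procedure, so the helper below repeatedly drops the least significant predictor; `df` and the predictor names are assumptions, and real analyses typically also weigh clinical plausibility, not p-values alone.

```python
# Hypothetical sketch of backward stepwise elimination for a logistic model.
import statsmodels.formula.api as smf

def backward_stepwise(df, outcome, predictors, alpha=0.05):
    predictors = list(predictors)  # assumed simple numeric/0-1 columns
    while predictors:
        model = smf.logit(f"{outcome} ~ {' + '.join(predictors)}",
                          data=df).fit(disp=0)
        pvals = model.pvalues.drop('Intercept')
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model            # all remaining predictors significant
        predictors.remove(worst)    # drop the least significant and refit
    return None
```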
Subject(s)
Cardiac Surgical Procedures, Gastrointestinal Diseases, Cardiac Surgical Procedures/adverse effects, Gastrointestinal Diseases/etiology, Humans, Postoperative Complications/epidemiology, Postoperative Complications/etiology, Retrospective Studies, Risk Factors
ABSTRACT
PURPOSE: Damage control laparotomy (DCL) is used for both traumatic and non-traumatic indications. Failure to achieve primary fascial closure (PFC) in a timely fashion has been associated with complications including sepsis, fistula, and mortality. We sought to identify factors associated with time to PFC in a multicenter retrospective cohort. METHODS: We reviewed retrospective data from 15 centers in the EAST SLEEP-TIME registry, including age, comorbidities (Charlson Comorbidity Index [CCI]), small and large bowel resection, bowel discontinuity, vascular procedures, retained packs, number of re-laparotomies, net fluid balance after 24 h, trauma, and time to first takeback in 12-h increments to identify key factors associated with time to PFC. RESULTS: In total, 368 patients were included (71.2% trauma, of which 50.6% were penetrating; median Injury Severity Score [ISS] 25 [16, 34]; median APACHE II score 15 [11, 22] in non-trauma). Of these, 92.9% of patients achieved PFC at 60.8 ± 72.0 h after 1.6 ± 1.2 re-laparotomies. Each additional re-laparotomy reduced the odds of PFC by 91.5% (95%CI 88.2-93.9%, p < 0.001). Time to first re-laparotomy was highly significant (p < 0.001) in terms of odds of achieving PFC, with no difference between 12 and 24 h to first re-laparotomy (ref), and decreases in odds of PFC of 78.4% (65.8-86.4%, p < 0.001) for first re-laparotomy after 24.1-36 h, 90.8% (84.7-94.4%, p < 0.001) for 36.1-48 h, and 98.1% (96.4-99.0%, p < 0.001) for >48 h. Trauma patients had an increased likelihood of PFC in two separate analyses (p = 0.022 and 0.002). CONCLUSION: Time to first re-laparotomy ≤24 h and minimizing the number of re-laparotomies are highly predictive of rapid achievement of PFC in patients after trauma- and non-trauma DCL. LEVEL OF EVIDENCE: 2B.
Subject(s)
Abdominal Injuries, Laparotomy, Abdominal Injuries/surgery, Fasciotomy, Humans, Laparotomy/methods, Multicenter Studies as Topic, Registries, Retrospective Studies, Sleep, Treatment Outcome
ABSTRACT
BACKGROUND: Outcomes of early enteral nutrition (EEN) in critically ill patients on vasoactive medications remain unclear. We aimed to compare in-hospital outcomes for EEN vs late EN (LEN) in mechanically ventilated patients receiving vasopressor support. METHODS: This was a retrospective study using the national eICU Collaborative Research Database. Adult patients requiring vasopressor support and mechanical ventilation within 24 h of admission and for ≥2 days were included. Patients with an admission diagnosis that could constitute a contraindication for EEN (eg, gastrointestinal [GI] perforation, GI surgery) and patients with an intensive care unit (ICU) length of stay (LOS) <72 h were excluded. EEN and LEN were defined as tube feeding within 48 h and between 48 h and 1 week (nothing by mouth during the first 48 h) of admission, respectively. Propensity score matching was performed to derive two cohorts receiving EEN and LEN that were comparable for baseline patient characteristics. RESULTS: Among 1701 patients who met the inclusion criteria (EEN: 1001, LEN: 700), 1148 were included in propensity score-matched cohorts (EEN: 574, LEN: 574). Median time to EN was 29 vs 79 h from admission in the EEN and LEN groups, respectively. There was no significant difference in mortality or hospital LOS between the two nutrition strategies. EEN was associated with shorter ICU LOS, lower need for renal replacement therapy, and lower incidence of electrolyte abnormalities. CONCLUSION: This study showed no difference in 28-day mortality between EEN and LEN in critically ill patients receiving vasopressor support.
Subject(s)
Critical Illness, Enteral Nutrition, Adult, Critical Illness/therapy, Humans, Intensive Care Units, Length of Stay, Artificial Respiration, Retrospective Studies
ABSTRACT
PURPOSE: To evaluate factors associated with ICU delirium in patients who underwent damage control laparotomy (DCL), with the hypothesis that benzodiazepine and paralytic infusions would be associated with increased delirium risk. We also sought to evaluate differences in sedation practices between trauma (T) and non-trauma (NT) patients. METHODS: We reviewed retrospective data from 15 centers in the EAST SLEEP-TIME registry for patients admitted from January 1, 2017 to December 31, 2018. We included all adults undergoing DCL, regardless of diagnosis, who had completed daily Richmond Agitation-Sedation Scale (RASS) and Confusion Assessment Method for the ICU (CAM-ICU) assessments. We excluded patients younger than 18 years, pregnant women, prisoners, and patients who died before the first re-laparotomy. Data collected included age, number of re-laparotomies after DCL, duration of paralytic infusion, duration and type of sedative and opioid infusions, and daily CAM-ICU and RASS scores, to analyze risk factors associated with the proportion of delirium-free/coma-free ICU days during the first 30 days (DF/CF-ICU-30) using multivariable linear regression. RESULTS: A 353-patient subset (73.2% trauma) of the overall 567-patient cohort had complete daily RASS and CAM-ICU data. NT patients were older (58.9 ± 16.0 years vs 40.5 ± 17.0 years [p < 0.001]). Mean DF/CF-ICU-30 was 73.7 ± 96.4% for NT and 51.3 ± 38.7% for T patients (p = 0.030). More T patients were exposed to midazolam, 41.3% vs 20.3% (p = 0.002). More T patients were exposed to propofol, 91.0% vs 71.9% (p < 0.001), with longer infusion times in T compared to NT (71.2 ± 85.9 vs 48.9 ± 69.8 h [p = 0.017]). Paralytic infusions were also used more in T compared to NT, 34.8% vs 18.2% (p < 0.001). Using linear regression, dexmedetomidine and paralytic infusions were associated with decreases in DF/CF-ICU-30 (-2.78 [95%CI -5.54 to -0.024], p = 0.040, and -7.08 [95%CI -13.0 to -1.10], p = 0.020, respectively). CONCLUSIONS: Although the relationship between paralytic use and delirium is well established, the observation that dexmedetomidine exposure is independently associated with increased delirium and coma is novel and bears further study.
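The DF/CF-ICU-30 endpoint is easy to make concrete. A minimal sketch, assuming a pandas DataFrame `daily` with one row per patient-day and columns `patient_id`, `icu_day`, `cam_icu_positive` (boolean), and `rass`; coma is taken here as RASS -4 or -5, the conventional cut-off.

```python
# Hypothetical sketch of the DF/CF-ICU-30 endpoint: the proportion of each
# patient's first 30 ICU days that are both delirium-free (CAM-ICU negative)
# and coma-free (RASS above -4).
def df_cf_icu_30(daily):
    first30 = daily[daily['icu_day'] <= 30]
    good = ~first30['cam_icu_positive'] & (first30['rass'] > -4)
    return good.groupby(first30['patient_id']).mean()  # one proportion per patient
```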
Subject(s)
Delirium, Dexmedetomidine, Adult, Delirium/chemically induced, Delirium/epidemiology, Dexmedetomidine/adverse effects, Female, Humans, Intensive Care Units, Laparotomy, Multicenter Studies as Topic, Pregnancy, Artificial Respiration, Retrospective Studies, Risk Factors, Sleep
ABSTRACT
BACKGROUND: Optimal use of interventional procedures and diagnostic tests for patients with suspected choledocholithiasis depends on accurate pretest risk estimation. We sought to define the sensitivity/specificity of transaminases in identifying choledocholithiasis and to incorporate them into a biochemical marker composite score that could accurately predict choledocholithiasis. METHODS: All adult patients who underwent laparoscopic cholecystectomy by our Emergency Surgery Service between 2010 and 2018 were reviewed. Admission total bilirubin (TB), aspartate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP) were captured. Choledocholithiasis was confirmed via intraoperative cholangiogram, endoscopic retrograde cholangiopancreatography, or magnetic resonance cholangiopancreatography. The area under the receiver operating characteristic curve (AUC, or C-statistic) of AST, ALT, ALP, and TB for detecting choledocholithiasis was calculated. For score development, our database was randomly dichotomized into derivation and validation cohorts, and a score was derived. The score was validated by calculating its C-statistic. RESULTS: 1089 patients were included; 210 (20.3%) had confirmed choledocholithiasis. The AUC was .78 for TB, .77 for ALP and AST, and .76 for ALT. 545 and 544 patients were included in the derivation and validation cohorts, respectively. The elements of the derived score were TB, AST, and ALP. The score ranged from 0 to 4. The AUC was .82 in the derivation and .77 in the validation cohort. The probability of choledocholithiasis increased from 8% at a score of 0 to 89% at a score of 4. CONCLUSIONS: Aspartate aminotransferase predicted choledocholithiasis adequately and should be featured in choledocholithiasis screening algorithms. We developed a biochemical composite score, shown to be accurate for preoperative choledocholithiasis risk assessment in an emergency surgery setting.
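Applying such a composite score is straightforward; deriving it is where the work lies. The sketch below is hypothetical: the point assignments and cut-offs are placeholders, not the paper's derived values, and `df` with columns `tb`, `ast`, `alp`, and `choledocholithiasis` is an assumed DataFrame.

```python
# Hypothetical sketch of a 0-4 composite score from TB, AST, and ALP, with a
# validation C-statistic; thresholds and point weights are placeholders only.
from sklearn.metrics import roc_auc_score

def choledocho_score(tb, ast, alp):
    score = 0
    score += 2 if tb > 1.8 else (1 if tb > 1.0 else 0)  # placeholder cut-offs
    score += 1 if ast > 100 else 0
    score += 1 if alp > 150 else 0
    return score                                        # 0 (low) to 4 (high risk)

df['score'] = [choledocho_score(t, a, p)
               for t, a, p in zip(df['tb'], df['ast'], df['alp'])]
print(roc_auc_score(df['choledocholithiasis'], df['score']))  # C-statistic
```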
Subject(s)
Laparoscopic Cholecystectomy, Choledocholithiasis, Adult, Alkaline Phosphatase, Aspartate Aminotransferases, Bilirubin, Endoscopic Retrograde Cholangiopancreatography/methods, Laparoscopic Cholecystectomy/methods, Choledocholithiasis/diagnostic imaging, Choledocholithiasis/surgery, Humans, Predictive Value of Tests, Retrospective Studies
ABSTRACT
BACKGROUND: There is little research evaluating outcomes from sepsis in intensive care units (ICUs) with lower sepsis patient volumes compared to ICUs with higher sepsis patient volumes. Our objective was to compare the outcomes of septic patients admitted to ICUs with different sepsis patient volumes. MATERIALS AND METHODS: We included all patients from the eICU-CRD database admitted for the management of sepsis with blood lactate ≥2 mmol/L within 24 hours of admission. Our primary outcome was ICU mortality. Secondary outcomes included hospital mortality, 30-day ventilator-free days, and initiation of renal replacement therapy (RRT). ICUs were grouped into quartiles based on the number of septic patients treated at each unit. RESULTS: 10,716 patients were included in our analysis; 272 (2.5%) in low sepsis volume ICUs, 1,078 (10.1%) in medium-low sepsis volume ICUs, 2,608 (24.3%) in medium-high sepsis volume ICUs, and 6,758 (63.1%) in high sepsis volume ICUs. On multivariable analyses, no significant differences were documented in ICU mortality, hospital mortality, or ventilator-free days between patients treated in lower versus higher sepsis volume ICUs. Patients treated at lower sepsis volume ICUs had lower rates of RRT initiation compared to high volume units (medium-high vs. high: OR = 0.78, 95%CI = 0.66-0.91, P-value = 0.002; medium-low vs. high: OR = 0.57, 95%CI = 0.44-0.73, P-value < 0.001). CONCLUSION: The previously described volume-outcome association in septic patients was not identified in the intensive care setting.