Results 1 - 20 of 40
1.
Transplantation ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38831488

ABSTRACT

BACKGROUND: This study compares selection criteria for liver transplant (LT) for hepatocellular carcinoma (HCC) in terms of inclusivity and predictive ability, to identify the most permissive criteria that maintain patient outcomes. METHODS: The Scientific Registry of Transplant Recipients (SRTR) database was queried for deceased donor LTs for HCC (2003-2020) with 3-y follow-up; these data were compared with a 2-center experience. Milan, University of California, San Francisco (UCSF), 5-5-500, Up-to-seven (U7), HALT-HCC, and Metroticket 2.0 scores were calculated. RESULTS: Nationally, 26,409 patients were included, and 547 at the 2 institutions. Median SRTR follow-up was 6.8 y (interquartile range 3.9-10.1). Three criteria allowed expansion of candidacy versus Milan: UCSF (7.7%, n = 1898), Metroticket 2.0 (4.2%, n = 1037), and U7 (3.5%, n = 828). The absolute difference in 3-y overall survival (OS) between scores was 1.5%. HALT-HCC (area under the curve [AUC] = 0.559, 0.551-0.567) best predicted 3-y OS, although AUC was notably similar between criteria (0.506 < AUC < 0.527; Milan = 0.513, UCSF = 0.506, 5-5-500 = 0.522, U7 = 0.511, HALT-HCC = 0.559, and Metroticket 2.0 = 0.520), as was Harrell's c-statistic (0.507 < c-statistic < 0.532). All scores predicted survival at P < 0.001 on competing risk analysis. Median follow-up in our enterprise was 9.8 y (interquartile range 7.1-13.3). U7 (13.0%, n = 58), UCSF (11.1%, n = 50), HALT-HCC (6.4%, n = 29), and Metroticket 2.0 (6.3%, n = 28) allowed candidate expansion. HALT-HCC (AUC = 0.768, 0.713-0.823) and Metroticket 2.0 (AUC = 0.739, 0.677-0.801) were the most predictive of recurrence. All scores predicted recurrence and survival at P < 0.001 on competing risk analysis. CONCLUSIONS: Less restrictive criteria such as Metroticket 2.0, UCSF, or U7 allow broader application of transplantation for HCC without sacrificing outcomes. Thus, the criteria for Model for End-stage Liver Disease exception points for HCC should be expanded to allow more patients to receive life-saving transplantation.
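For readers who want to see how discrimination metrics of this kind are typically computed, the sketch below uses generic tools (scikit-learn, lifelines) and hypothetical column names; it is not the authors' code, and censoring is handled only crudely.

```python
# Minimal sketch of an AUC for 3-year overall survival and a Harrell's
# c-statistic from a continuous selection-criteria risk score.
# Hypothetical columns: 'score' (risk score), 'os_months', 'death' (1/0).
import pandas as pd
from sklearn.metrics import roc_auc_score
from lifelines.utils import concordance_index

df = pd.read_csv("hcc_lt_cohort.csv")          # hypothetical cohort file

# Binary 3-year endpoint: death within 36 months of transplant.
# (Censoring before 36 months is ignored in this simplified sketch.)
died_by_3y = ((df["death"] == 1) & (df["os_months"] <= 36)).astype(int)
auc = roc_auc_score(died_by_3y, df["score"])   # higher score = higher predicted risk

# Harrell's c uses the full time-to-event data; concordance_index expects
# higher values to mean longer survival, so the risk score is negated.
c_stat = concordance_index(df["os_months"], -df["score"], df["death"])

print(f"3-year OS AUC = {auc:.3f}, Harrell's c = {c_stat:.3f}")
```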

2.
Liver Transpl ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38833290

ABSTRACT

BACKGROUND: Ex-situ normothermic machine perfusion (NMP) helps increase the use of extended criteria donor livers. However, the impact of an NMP program on waitlist times and mortality has not been evaluated. METHODS: Adult patients listed for liver transplant (LT) at two academic centers from 1/1/2015 to 9/1/2023 were included (n=2,773), allowing all patients >6 months of follow-up from listing. Routine NMP was implemented on 10/14/2022. Waitlist outcomes were compared across three eras: pre-NMP before acuity circles (n=1,460), pre-NMP with acuity circles (n=842), and with NMP (n=381). RESULTS: Median waitlist time was 79 days (IQR 20-232) at baseline, 49 days (7-182) with acuity circles, and 14 days (5-56) with NMP (p<0.001). The transplant rate improved from 61 per 100 person-years at baseline to 99 per 100 person-years with acuity circles and 194 per 100 person-years with NMP (p<0.001). Crude mortality without transplant decreased from 18.3% (n=268/1460) to 13.3% (n=112/843) to 6.3% (n=24/381) with NMP (p<0.001). The incidence of mortality without LT was 15 per 100 person-years before acuity circles, 19 per 100 person-years with acuity circles, and 9 per 100 person-years after NMP (p<0.001). Median MELD at LT was lowest with NMP, but MELD at listing was highest in this era (p<0.0001). Median DRI of transplanted livers was 1.54 (1.27-1.82) at baseline, 1.66 (1.42-2.16) with acuity circles, and 2.06 (1.63-2.46) with NMP (p<0.001). Six-month post-LT survival was not different between eras (p=0.322). The total cost of healthcare while waitlisted was lowest in the NMP era ($53,683 vs. $32,687 vs. $23,688, p<0.001); cost per day did not differ between eras (p=0.152). CONCLUSION: Implementation of a routine NMP program was associated with reduced waitlist time and mortality without compromising short-term survival after liver transplant, despite increased use of riskier grafts. Routine NMP use enables better waitlist management with reduced healthcare costs.
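As a point of reference for the rate metric used above, the following sketch (hypothetical column names, not the study's code) computes events per 100 person-years of waitlist time for each era.

```python
# Sketch of a "per 100 person-years" rate. Hypothetical columns:
# 'era' (pre-AC, AC, NMP), 'waitlist_days' (listing to transplant/death/censoring),
# 'transplanted' (1/0).
import pandas as pd

wl = pd.read_csv("waitlist.csv")                      # hypothetical file

def rate_per_100py(group: pd.DataFrame) -> float:
    person_years = group["waitlist_days"].sum() / 365.25
    return 100 * group["transplanted"].sum() / person_years

print(wl.groupby("era").apply(rate_per_100py))        # one rate per era
```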

3.
Liver Transpl ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38833301

ABSTRACT

BACKGROUND: We describe a novel pre-liver-transplant (LT) approach in colorectal liver metastasis (CRLM), allowing for improved monitoring of tumor biology and reduction of disease burden before committing a patient to transplantation. METHODS: Patients undergoing LT for CRLM at Cleveland Clinic were included. The described protocol involves intensive locoregional therapy with systemic chemotherapy, aiming to reach minimal disease burden as assessed by PET scan and carcinoembryonic antigen (CEA). Patients with no detectable disease or irreversible treatment-induced liver injury undergo transplant. RESULTS: Nine of 27 evaluated patients (33.3%) received a liver transplant. Median follow-up was 700 days. Seven patients (77.8%) received a living donor LT. Five had no detectable disease and four had treatment-induced cirrhosis. Pre-transplant management included chemotherapy (n=9) with or without bevacizumab (n=6) and/or anti-EGFR therapy (n=6). The median number of pre-LT chemotherapy cycles was 16 (range 10-40). Liver-directed therapy included yttrium-90 (n=5), ablation (n=4), resection (n=4), and hepatic artery infusion (HAI) pump (n=3). Three patients recurred after LT. Actuarial 1- and 2-year recurrence-free survival (RFS) rates were 75% (n=6/8) and 60% (n=3/5). Recurrence occurred in the lungs (n=1), the liver graft (n=1), and the lungs plus para-aortic nodes (n=1). Patients with pre-LT detectable disease had reduced RFS (p=0.04). All patients with recurrence had histologically viable tumor in the liver explant. Patients treated in our protocol (n=16) demonstrated improved survival versus those who were not candidates (n=11), regardless of transplant status (p=0.01). CONCLUSION: A protocol defined by aggressive pre-transplant liver-directed treatment and transplant for patients with undetectable disease or treatment-induced liver injury may help prevent tumor recurrence.

4.
Ann Surg ; 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557793

ABSTRACT

OBJECTIVE: To assess cost and complication outcomes after liver transplantation (LT) using normothermic machine perfusion (NMP). SUMMARY BACKGROUND DATA: End-ischemic NMP is often used to aid logistics, yet its impact on outcomes after LT remains unclear, as does its true impact on the costs associated with transplantation. METHODS: Deceased donor liver recipients at two centers (1/1/2019-6/30/2023) were included. Retransplants, splits, and combined grafts were excluded. End-ischemic NMP (OrganOx-Metra®) was implemented in 10/2022 for extended-criteria DBD grafts, all DCD grafts, and logistical reasons. NMP cases were matched 1:2 with static cold storage (SCS) controls using the Balance-of-Risk score (DBD grafts) and the UK-DCD Score (DCD grafts). RESULTS: Overall, 803 transplantations were included, 174 (21.7%) receiving NMP. Matching was achieved for 118 NMP DBD grafts with 236 SCS controls, and 37 NMP DCD grafts with 74 corresponding SCS controls. For both graft types, median inpatient comprehensive complication index (CCI) values were comparable between groups. DCD-NMP grafts experienced a reduced cumulative 90-day CCI (27.6 vs. 41.9, P=0.028). NMP also reduced the need for early relaparotomy and renal replacement therapy, with subsequently less frequent major complications (Clavien-Dindo >IVa). This effect was more pronounced in DCD transplants. NMP had no protective effect on early biliary complications. Organ acquisition/preservation costs were higher with NMP, yet NMP-treated grafts had lower 90-day pre-transplant costs in the context of shorter waitlist times. Overall costs were comparable for both cohorts. CONCLUSIONS: This is the first risk-adjusted outcome and cost analysis comparing NMP and SCS. In addition to logistical benefits, NMP was associated with a reduction in relaparotomy and bleeding in DBD grafts, and in overall complications and post-LT renal replacement therapy for DCDs. While organ acquisition/preservation was more costly with NMP, overall 90-day healthcare costs per transplantation were comparable.
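The 1:2 risk-score matching described above can be illustrated with a simple greedy nearest-neighbour routine; the sketch below is not the authors' matching code, and the file and column names ('bar_score', 'nmp') are hypothetical.

```python
# Greedy 1:2 nearest-neighbour matching on a scalar risk score
# (e.g. a Balance-of-Risk-type score for DBD grafts), without replacement.
import pandas as pd

df = pd.read_csv("dbd_transplants.csv")        # hypothetical file
treated = df[df["nmp"] == 1].copy()            # NMP grafts
controls = df[df["nmp"] == 0].copy()           # SCS grafts

matched_control_idx = []
available = controls.copy()
for _, row in treated.iterrows():
    # pick the two remaining SCS controls closest in score
    dist = (available["bar_score"] - row["bar_score"]).abs()
    picks = dist.nsmallest(2).index
    matched_control_idx.extend(picks)
    available = available.drop(picks)          # matching without replacement

matched = pd.concat([treated, controls.loc[matched_control_idx]])
```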

5.
Cancers (Basel) ; 16(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38672535

ABSTRACT

Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related death and the sixth most commonly diagnosed malignancy worldwide. Serum alpha-fetoprotein (AFP) is the traditional, ubiquitous biomarker for HCC. However, there has been an increasing call for the use of multiple biomarkers to optimize care for these patients. AFP, AFP-L3, and des-gamma-carboxy prothrombin (DCP, also known as PIVKA-II) have described clinical utility in HCC, but unfortunately they also have well-established and significant limitations. Circulating tumor DNA (ctDNA), genomic and glycosylation markers, and even totally non-invasive salivary metabolomics and/or microRNAs demonstrate great promise for early detection and long-term surveillance, but still require large-scale prospective studies to definitively establish their clinical validity. This review aims to provide an update on clinically available and emerging biomarkers for HCC, focusing on their respective clinical strengths and weaknesses.

6.
Cancers (Basel) ; 16(5)2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38473290

ABSTRACT

INTRODUCTION: Circulating tumor DNA (ctDNA) is emerging as a promising, non-invasive diagnostic and surveillance biomarker in solid organ malignancy. However, its utility before and after liver transplant (LT) for patients with primary and secondary liver cancers is still underexplored. METHODS: Patients undergoing LT for hepatocellular carcinoma (HCC), cholangiocarcinoma (CCA), and colorectal liver metastases (CRLM) with ctDNA testing were included. ctDNA testing was conducted pre-transplant, post-transplant, or both (sequential) from 11/2019 to 09/2023 using Guardant360, Guardant Reveal, and Guardant360 CDx. RESULTS: 21 patients with HCC (n = 9, 43%), CRLM (n = 8, 38%), CCA (n = 3, 14%), and mixed HCC/CCA (n = 1, 5%) were included in the study. The median follow-up time was 15 months (range: 1-124). The median time from pre-operative testing to surgery was 3 months (IQR: 1-4; range: 0-5), and from surgery to post-operative testing it was 9 months (IQR: 2-22; range: 0.4-112). A total of 13 (62%) patients had pre-transplant testing, with 8 (62%) having ctDNA detected (ctDNA+) and 5 (32%) not having ctDNA detected (ctDNA-). A total of 18 (86%) patients had post-transplant testing, 11 (61%) of whom were ctDNA+ and 7 (33%) of whom were ctDNA-. The absolute recurrence rates were 50% (n = 5) in those who were ctDNA+ vs. 25% (n = 1) in those who were ctDNA- in the post-transplant setting, though this difference was not statistically significant (p = 0.367). Six (29%) patients (HCC = 3, CCA = 1, CRLM = 2) experienced recurrence, with a median recurrence-free survival of 14 (IQR: 6-40) months. Four of these patients had positive post-transplant ctDNA collected following the diagnosis of recurrence, while one patient had positive post-transplant ctDNA collected preceding recurrence. A total of 10 (48%) patients had sequential ctDNA testing, of whom n = 5 (50%) achieved ctDNA clearance (+/-). The remainder were ctDNA+/+ (n = 3, 30%), ctDNA-/- (n = 1, 10%), and ctDNA-/+ (n = 1, 10%). Three (30%) patients showed the acquisition of new genomic alterations following transplant, all without recurrence. Overall, the median tumor mutation burden (TMB) decreased from 1.23 mut/Mb pre-transplant to 0.00 mut/Mb post-transplant. CONCLUSIONS: Patients with ctDNA positivity experienced recurrence at a higher rate than ctDNA- patients, indicating a potential role for ctDNA in predicting recurrence after curative-intent transplant. Based on sequential testing, LT has the potential to clear ctDNA, demonstrating the capability of LT in the treatment of systemic disease. Transplant providers should be aware of the potential confounding effect of donor-derived cell-free DNA, and improved approaches are necessary to address this concern.
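The sequential-status patterns reported above (+/+, +/-, -/+, -/-) are straightforward to tabulate; a minimal sketch with hypothetical column names (not the study's analysis code) is shown below.

```python
# Classify sequential ctDNA results into +/+, +/-, -/+, -/- patterns.
# Hypothetical columns: 'pre_tx_detected', 'post_tx_detected' (booleans,
# NaN if that time point was not tested).
import pandas as pd

ctdna = pd.read_csv("ctdna_results.csv")       # hypothetical file
seq = ctdna.dropna(subset=["pre_tx_detected", "post_tx_detected"])

def pattern(row) -> str:
    pre = "+" if row["pre_tx_detected"] else "-"
    post = "+" if row["post_tx_detected"] else "-"
    return f"{pre}/{post}"                     # "+/-" indicates ctDNA clearance

print(seq.apply(pattern, axis=1).value_counts())
```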

7.
Int J Surg ; 110(5): 2818-2831, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38241354

ABSTRACT

BACKGROUND: Liver transplantation (LT) is a well-established treatment for hepatocellular carcinoma (HCC), but there are ongoing debates regarding outcomes and selection. This study examines the experience of LT for HCC at a high-volume centre. METHODS: A prospectively maintained database was used to identify HCC patients undergoing LT from 2000 to 2020 with at least 3 years of follow-up. Data were obtained from the centre database and electronic medical records. The Metroticket 2.0 HCC-specific 5-year survival scale was calculated for each patient. Kaplan-Meier and Cox regression analyses were employed to assess survival between groups based on Metroticket score and individual donor and recipient risk factors. RESULTS: Five hundred sixty-nine patients met inclusion criteria. Median follow-up was 96.2 months (8.12 years; interquartile range 59.9-147.8). Three-year recurrence-free survival (RFS) and overall survival (OS) were 88.6% (n=504) and 86.6% (n=493). Five-year RFS and OS were 78.9% (n=449) and 79.1% (n=450). Median Metroticket 2.0 score was 0.9 (interquartile range 0.9-0.95). Tumour size greater than 3 cm (P=0.012), increasing tumour number on imaging (P=0.001), and explant pathology (P<0.001) were associated with recurrence. Transplantation within Milan (P<0.001) or UCSF (P<0.001) criteria was associated with lower recurrence rates. Increasing alpha-fetoprotein (AFP) values were associated with more HCC recurrence (P<0.001) and reduced OS (P=0.008). Chemoembolization was predictive of recurrence in the overall population (P=0.043) and in those outside Milan criteria (P=0.038). A receiver operating characteristic curve using Metroticket 2.0 identified an optimal cut-off of projected survival greater than or equal to 87.5% for predicting recurrence. This cut-off predicted RFS (P<0.001) in the total cohort and predicted both RFS (P=0.007) and OS (P=0.016) outside Milan. Recipients of donation after brain death (DBD) grafts (55/478, 13%) or living-donor grafts (3/22, 13.6%) experienced better survival rates than recipients of donation after cardiac death (DCD) grafts (15/58, 25.6%; P=0.009). Donor age was associated with higher HCC recurrence (P=0.006). Both total ischaemia time (TIT) greater than 6 hours (P=0.016) and increasing TIT correlated with higher HCC recurrence (P=0.027). The use of DCD grafts for outside-Milan candidates was associated with increased recurrence (P=0.039) and reduced survival (P=0.033). CONCLUSION: This large two-centre analysis confirms favourable outcomes after LT for HCC. Tumour size and number, pre-transplant AFP, and Milan criteria remain important recipient HCC risk factors. Higher donor risk (i.e. donor age, DCD grafts, ischaemia time) was associated with poorer outcomes.
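A cut-off such as the ≥87.5% projected-survival threshold is commonly derived from an ROC curve, for example via the Youden index; the sketch below illustrates that generic approach with hypothetical column names and may differ from the authors' exact derivation.

```python
# Choose a score cut-off from an ROC curve via the Youden index (tpr - fpr).
# Hypothetical columns: 'metroticket_surv' (predicted 5-year survival, 0-1)
# and 'recurrence' (1/0).
import pandas as pd
from sklearn.metrics import roc_curve

df = pd.read_csv("hcc_cohort.csv")             # hypothetical file

# Lower predicted survival should indicate higher recurrence risk,
# so score recurrence by (1 - predicted survival).
fpr, tpr, thresholds = roc_curve(df["recurrence"], 1 - df["metroticket_surv"])
youden = tpr - fpr
best = thresholds[youden.argmax()]
print(f"optimal cut-off: predicted survival >= {1 - best:.3f}")
```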


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Liver Transplantation , Humans , Carcinoma, Hepatocellular/surgery , Carcinoma, Hepatocellular/mortality , Carcinoma, Hepatocellular/pathology , Liver Transplantation/mortality , Liver Neoplasms/surgery , Liver Neoplasms/mortality , Liver Neoplasms/pathology , Male , Female , Middle Aged , Risk Assessment , Follow-Up Studies , Aged , Retrospective Studies , Adult , Risk Factors , Neoplasm Recurrence, Local , Kaplan-Meier Estimate
9.
Ann Surg ; 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38050733

ABSTRACT

OBJECTIVE: We aim to report our institutional outcomes of single-staged combined liver transplantation (LT) and cardiac surgery (CS). SUMMARY BACKGROUND DATA: Concurrent LT and CS is a potential treatment for combined cardiac dysfunction and end-stage liver disease, yet only 54 cases have been previously reported in the literature. Thus, the outcomes of this approach are relatively unknown, and it has previously been regarded as extremely risky. METHODS: Thirty-one patients at our institution underwent combined cardiac surgery and liver transplantation. Patients with at least one year of follow-up were included. The leave-one-out cross-validation (LOOCV) machine-learning approach was used to generate a model for mortality. RESULTS: Median follow-up was 8.2 years (IQR 4.6-13.6 y). One- and five-year survival was 74.2% (n=23) and 55% (n=17), respectively. Negative predictive factors of survival included recipient age >60 years (P=0.036), NASH cirrhosis (P=0.031), coronary artery bypass graft (CABG)-based CS (P=0.046), and pre-operative renal dysfunction (P=0.024). The final model demonstrated that renal dysfunction had a relative weighted impact of 3.2 versus CABG (1.7), age ≥60 y (1.7), or NASH (1.3). An elevated LT+CS risk score was associated with increased five-year mortality after surgery (AUC=0.731, P<0.001). Conversely, the widely accepted STS-PROM calculator was unable to successfully stratify patients according to 1-year (P>0.99) or 5-year (P=0.695) survival rates. CONCLUSIONS: This is the largest series describing combined LT+CS, with joint surgical management appearing feasible in highly selected patients. CABG and pre-operative renal dysfunction are important negative predictors of mortality. The four-variable LT+CS score may help predict patients at high risk for post-operative mortality.
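Leave-one-out cross-validation of a small mortality model can be sketched as follows; this is an illustration with hypothetical variable names, not the authors' modelling code.

```python
# LOOCV for a four-variable mortality model (age >= 60, NASH cirrhosis,
# CABG-based cardiac surgery, renal dysfunction), all hypothetical 0/1 columns.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

df = pd.read_csv("lt_cs_cohort.csv")           # hypothetical file
X = df[["age_ge_60", "nash", "cabg", "renal_dysfunction"]]
y = df["died_5y"]

model = LogisticRegression()
# One out-of-sample probability per patient, each predicted by a model
# trained on all remaining patients.
probs = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print(f"LOOCV AUC = {roc_auc_score(y, probs):.3f}")
```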

10.
J Emerg Trauma Shock ; 14(3): 128-135, 2021.
Article in English | MEDLINE | ID: mdl-34759630

ABSTRACT

INTRODUCTION: Pediatric trauma centers (PTCs) were created to address the unique needs of injured children, with the expectation that outcomes would be improved. However, prior studies evaluating the impact of PTCs have had conflicting results. Our study was conducted to further clarify this question. We hypothesize that severely injured children ≤14 years of age have better outcomes at PTCs and that better survival may be due to higher emergency department (ED) survival rates than at adult trauma centers (ATCs). METHODS: A retrospective analysis of severely injured children (Injury Severity Score [ISS] >15) ≤18 years of age entered into the National Trauma Data Bank (NTDB) between 2011 and 2012 was performed. Subjects were stratified into 2 age cohorts: young children (0-14 years) and adolescents (15-18 years). Primary outcomes were ED and inpatient (IP) mortality. Secondary outcomes included in-hospital complications, hospital and ICU length of stay, and ventilator days. Outcome differences were assessed using multilevel logistic and negative binomial regression analyses. RESULTS: A total of 10,028 children were included. Median ISS was 22 (interquartile range 17-29). Adjusting for confounders on multivariate analysis, children ≤14 years had lower odds of ED mortality (OR 0.42 [CI 0.25-0.71], p=0.001) and IP mortality (OR 0.73 [CI 0.5-0.9], p=0.02) at PTCs. There were no differences in the odds of ED mortality (0.81 [CI 0.5-1.3], p=0.4) or IP mortality (1.01 [CI 0.8-1.2], p=0.88) for adolescents between centers. There were no differences in complication rates between PTCs and ATCs (OR 0.86 [CI 0.69-1.06], p=1.7), but children were more likely to be discharged home and had more ICU-free and ventilator-free days if treated at a PTC. CONCLUSION: Young children, but not adolescents, have better ED survival at PTCs compared to ATCs. LEVEL OF EVIDENCE: Level IV, therapeutic.
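For illustration, an adjusted odds ratio of the kind reported above can be obtained from a logistic regression; the sketch below fits an ordinary (single-level) logit with statsmodels and hypothetical covariate names, whereas the study used multilevel models.

```python
# Simplified stand-in for an adjusted odds ratio: ordinary logistic regression
# of ED mortality on PTC care (0/1) plus hypothetical confounders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

ntdb = pd.read_csv("ntdb_peds.csv")            # hypothetical NTDB extract
fit = smf.logit("ed_mortality ~ ptc + iss + age + gcs", data=ntdb).fit()

odds_ratio = np.exp(fit.params["ptc"])         # adjusted OR for PTC care
ci_low, ci_high = np.exp(fit.conf_int().loc["ptc"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```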

11.
J Surg Res ; 213: 131-137, 2017 06 01.
Article in English | MEDLINE | ID: mdl-28601305

ABSTRACT

BACKGROUND: Sepsis remains the leading cause of death in the surgical intensive care unit. Prior studies have demonstrated a survival benefit of remote ischemic conditioning (RIC) in many disease states. The aim of this study was to determine the effect of RIC on survival in sepsis in an animal model and to assess alterations in inflammatory biochemical profiles. We hypothesized that RIC alters inflammatory biochemical profiles, resulting in decreased mortality in a septic mouse model. MATERIALS AND METHODS: Eight- to 12-week-old C57BL/6 mice received an intraperitoneal injection of 12.5 mg/kg lipopolysaccharide (LPS). Septic animals in the experimental group underwent RIC at 0, 2, or 6 h after LPS by surgical exploration and alternate clamping of the femoral artery. Six 4-min cycles of ischemia-reperfusion were performed. The primary outcome was survival at 5 days after LPS injection. The secondary outcome was serum levels of the following cytokines: interferon-γ (IFN-γ), interleukin (IL)-10, IL-1β, and tumor necrosis factor-α (TNF-α), measured at baseline before LPS injection, at 0 h after LPS injection, and at 2, 4, and 24 h after induction of sepsis (RIC was performed at 2 h after LPS injection). Kaplan-Meier survival analysis and the log-rank test were used. ANOVA was used to compare cytokine measurements. RESULTS: We performed experiments on 44 mice: 14 sham and 30 RIC mice (10 at each time point). Overall survival was higher in the experimental group compared with the sham group (57% versus 21%; P = 0.02), with the highest survival rate observed in the 2-h post-RIC group (70%). On Kaplan-Meier analysis, the 2-h post-RIC group had increased survival at 5 days after LPS (P = 0.04), with a hazard ratio of 0.3 (95% confidence interval = 0.09-0.98). In the RIC group, serum concentrations of IFN-γ, IL-10, IL-1β, and TNF-α peaked at 2 h after LPS and then decreased significantly over 24 h (P < 0.0001) compared with baseline. CONCLUSIONS: RIC improves survival in sepsis and has potential for implementation in clinical practice. Early implementation of RIC may play an immune-modulatory role in sepsis. Further studies are necessary to refine understanding of the observed survival benefit and its implications for sepsis management.
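A Kaplan-Meier and log-rank comparison of the kind described above can be run with lifelines; the sketch below uses a hypothetical data layout and is not the study's analysis code.

```python
# Kaplan-Meier curve and log-rank test for a two-group survival comparison.
# Hypothetical columns: 'group', 'days' (days survived, up to 5), 'died' (1/0).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

mice = pd.read_csv("sepsis_mice.csv")          # hypothetical file
ric = mice[mice["group"] == "RIC_2h"]
sham = mice[mice["group"] == "sham"]

kmf = KaplanMeierFitter()
kmf.fit(ric["days"], event_observed=ric["died"], label="RIC 2 h post-LPS")
print(kmf.median_survival_time_)

result = logrank_test(ric["days"], sham["days"],
                      event_observed_A=ric["died"],
                      event_observed_B=sham["died"])
print(f"log-rank p = {result.p_value:.3f}")
```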


Subject(s)
Ischemia , Lower Extremity/blood supply , Reperfusion/methods , Sepsis/therapy , Animals , Biomarkers/metabolism , Femoral Artery , Kaplan-Meier Estimate , Male , Mice , Mice, Inbred C57BL , Random Allocation , Sepsis/immunology , Sepsis/mortality , Treatment Outcome
12.
J Orthop Trauma ; 30(12): 653-658, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27875491

ABSTRACT

OBJECTIVES: Prothrombin complex concentrate (PCC) is increasingly being used for reversal of the induced coagulopathy of trauma. However, the use of PCC for reversing coagulopathy in multiply injured patients with pelvic and/or lower extremity fractures remains unclear. The aim of our study was to assess the efficacy of PCC for reversing coagulopathy in this group of patients. DESIGN: Two-year retrospective analysis. SETTING: Our Level I trauma center. PATIENTS/PARTICIPANTS: All coagulopathic (international normalized ratio [INR] ≥1.5) trauma patients with femur, tibia, or pelvic fractures were included. Patients were divided into 2 groups: PCC (single dose) and fresh frozen plasma (FFP). Patients in the 2 groups were matched using propensity score matching. MAIN OUTCOME MEASUREMENTS: Time to correction of INR, time to intervention, development of thromboembolic complications, mortality, and cost of therapy. RESULTS: A total of 81 patients (PCC: 27, FFP: 54) were included. Patients who received PCC had faster correction of INR and a shorter time to surgical intervention than patients who received FFP. PCC therapy was also associated with a lower overall blood product requirement (P = 0.02) and lower transfusion costs (P = 0.0001). CONCLUSIONS: In a matched cohort of multiply injured patients with pelvic and/or lower extremity fractures, administration of a single dose of PCC significantly reduced the time to correction of INR and the time to intervention compared with FFP therapy. This may allow orthopaedic surgeons to more safely proceed with early, definitive fixation strategies. LEVEL OF EVIDENCE: Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.
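Propensity score matching, used here and in several of the studies below, can be sketched as a logistic regression for the treatment followed by nearest-neighbour pairing; the example below uses hypothetical covariate names and is not the study's matching code.

```python
# Illustrative 1:1 propensity-score matching of PCC versus FFP patients
# (nearest neighbour on the estimated propensity, with replacement).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("coagulopathy_cohort.csv")    # hypothetical file
covariates = ["age", "iss", "admit_inr", "sbp", "gcs"]

# Propensity = probability of receiving PCC given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["pcc"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

pcc = df[df["pcc"] == 1]
ffp = df[df["pcc"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(ffp[["ps"]])
_, idx = nn.kneighbors(pcc[["ps"]])            # closest FFP match per PCC patient
matched = pd.concat([pcc, ffp.iloc[idx.ravel()]])
```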


Subject(s)
Blood Coagulation Disorders/mortality , Blood Coagulation Disorders/prevention & control , Blood Coagulation Factors/therapeutic use , Fractures, Bone/mortality , Leg Injuries/mortality , Multiple Trauma/mortality , Premedication/statistics & numerical data , Arizona/epidemiology , Causality , Comorbidity , Female , Fractures, Bone/therapy , Humans , Leg Injuries/therapy , Longitudinal Studies , Male , Middle Aged , Multiple Trauma/therapy , Pelvic Bones/drug effects , Pelvic Bones/injuries , Prevalence , Retrospective Studies , Risk Factors , Survival Rate , Treatment Outcome
13.
Cancer Treat Commun ; 8: 1-4, 2016.
Article in English | MEDLINE | ID: mdl-27774410

ABSTRACT

INTRODUCTION: Optimization of surgical outcomes after colectomy continues to be actively studied, but most studies group right-sided and left-sided colectomies together. The aim of our study was to determine whether the complication rate differs between right-sided and left-sided colectomies for cancer. METHODS: We identified patients who underwent laparoscopic colectomy for colon cancer between 2005 and 2010 in the American College of Surgeons National Surgical Quality Improvement Program database and stratified cases by right and left side. The two groups were matched using propensity score matching for demographics, previous abdominal surgery, pre-operative chemotherapy and radiotherapy, and preoperative laboratory data. Outcome measures were 30-day mortality and morbidity. RESULTS: We identified 2512 patients who underwent elective laparoscopic colectomy for right-sided or left-sided colon cancer. The two groups were similar in demographics and pre-operative characteristics. There was no difference in overall morbidity (15% vs. 17.7%; p < 0.08) or 30-day mortality (1.5% vs. 1.5%; p < 0.9) between the two groups. Sub-analysis revealed higher surgical site infection rates (9% vs. 6%; p < 0.04), a higher incidence of ureteral injury (0.6% vs. 0.4%; p < 0.04), a higher conversion rate to open colectomy (51% vs. 30%; p < 0.01), and a longer hospital length of stay (10.5 ± 4 vs. 7.1 ± 1.3 days; p < 0.02) in patients undergoing laparoscopic left colectomy. CONCLUSION: Our study highlights the difference in complications between right-sided and left-sided colectomies for cancer. Further research on outcomes after colectomy should incorporate right- vs. left-sided colon resection as a potential pre-operative risk factor.

14.
World J Surg ; 40(11): 2667-2672, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27307089

ABSTRACT

INTRODUCTION: Early seizures after severe traumatic brain injury (TBI) have a reported incidence of up to 15%. One week of phenytoin is considered the standard of care for early seizure prophylaxis. However, many centers have substituted the anticonvulsant levetiracetam without good data on the efficacy of this approach. Our hypothesis was that treatment with levetiracetam is not effective in preventing early post-traumatic seizures. METHODS: All trauma patients sustaining a TBI from January 2007 to December 2009 at an urban Level I trauma center were retrospectively analyzed. Seizures were identified from a prospectively gathered morbidity database and anticonvulsant use from the pharmacy database. Statistical comparisons were made by chi-square, t tests, and logistic regression modeling. Patients who received levetiracetam prophylaxis were matched 1:1, using propensity score matching, with those who did not receive the drug. RESULTS: 5551 trauma patients suffered a TBI during the study period, with an overall seizure rate of 0.7% (39/5551). Of the total population, 1795 were diagnosed with severe TBI (head AIS score 3-5). Seizures were 25 times more likely in the severe TBI group than in the non-severe group [2.0% (36/1795) vs. 0.08% (3/3756); OR 25.6; 95% CI 7.8-83.2; p < 0.0001]. Of the patients who had seizures after severe TBI, 25% (9/36) received pharmacologic prophylaxis with levetiracetam, phenytoin, or fosphenytoin. In a cohort matched by propensity scores, no difference was seen in seizure rates between the levetiracetam group and the no-prophylaxis group (1.9% vs. 3.4%, p = 0.50). CONCLUSIONS: In this propensity score-matched cohort analysis, levetiracetam prophylaxis was ineffective in preventing seizures, as the rate of seizures was similar whether or not patients received the drug. The incidence of post-traumatic seizures in severe TBI patients was only 2.0% in this study; we therefore question the benefit of routine prophylactic anticonvulsant therapy in patients with TBI.
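As a worked check of where the quoted odds ratio comes from, the counts given in the abstract reproduce it directly (a quick arithmetic illustration, not the authors' code):

```python
# Odds ratio and Woolf (log) 95% CI for seizures after severe vs. non-severe TBI,
# from the counts given in the abstract.
import math

a, b = 36, 1795 - 36      # severe TBI: seizure / no seizure
c, d = 3, 3756 - 3        # non-severe TBI: seizure / no seizure

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
# ≈ 25.6 (7.9-83.3), matching the reported OR 25.6, 95% CI 7.8-83.2 within rounding
```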


Subject(s)
Anticonvulsants/therapeutic use , Brain Injuries, Traumatic/complications , Piracetam/analogs & derivatives , Seizures/prevention & control , Adolescent , Adult , Chemoprevention , Databases, Factual , Female , Humans , Levetiracetam , Male , Middle Aged , Phenytoin/analogs & derivatives , Phenytoin/therapeutic use , Piracetam/therapeutic use , Propensity Score , Retrospective Studies , Seizures/etiology , Treatment Failure , Young Adult
15.
Am Surg ; 82(3): 271-7, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27099065

ABSTRACT

The use of a 1:1:1 (packed red blood cells:fresh frozen plasma:platelets) transfusion ratio has been shown to improve survival in severely injured trauma patients. The aim of this study was to assess the outcomes in patients with traumatic brain injury (TBI) receiving 1:1:1 ratio-based blood product transfusion (RBT). We hypothesized that RBT improves survival in patients with TBI as the only major injury. We performed a 3-year retrospective analysis of all patients with TBI as the only major injury presenting to our Level I trauma center. Patients receiving blood transfusion were included. Patients were stratified into two groups: those who received RBT and those who did not receive RBT (No-RBT). The outcome measure was in-hospital mortality. Multivariate logistic regression analysis was performed. A total of 189 patients were included, of whom 29 per cent (n = 55) received RBT. The mean age was 48 ± 24 years, the median (range) Glasgow Coma Scale score was 12 (3-15), and the median head Abbreviated Injury Scale score was 3 (3-5). The overall mortality rate was 28.5 per cent. Patients in the RBT group had a higher survival rate compared with patients in the No-RBT group (83.6% vs 66.5%, P = 0.02). In conclusion, the survival benefit of RBT exists even in patients with TBI as the only major injury. Guidelines for the initial management of TBI patients should focus on the use of RBT. The beneficial effect of platelets in RBT among TBI patients requires further evaluation.


Subject(s)
Blood Transfusion , Brain Injuries/therapy , Platelet Transfusion , Resuscitation , Adult , Aged , Blood Transfusion/statistics & numerical data , Female , Humans , Male , Middle Aged , Platelet Transfusion/statistics & numerical data , Retrospective Studies , Treatment Outcome
16.
J Trauma Acute Care Surg ; 80(6): 923-32, 2016 06.
Article in English | MEDLINE | ID: mdl-26958796

ABSTRACT

BACKGROUND: Emerging literature on acute appendicitis favors nonoperative management. However, the actual use of this practice at a national level has not been assessed. The aim of this study was to assess changing trends in the nonoperative management of acute appendicitis and their effects on patient outcomes. METHODS: We performed an 8-year (2004-2011) retrospective analysis of the National Inpatient Sample database. We included all inpatients with a diagnosis of acute appendicitis. Patients with a diagnosis of appendiceal abscess or patients who underwent surgery for any other pathology were excluded from the analysis. Jonckheere-Terpstra trend analysis was performed for operative versus nonoperative management and outcomes. RESULTS: A total of 436,400 cases of acute appendicitis were identified. Mean age of the population was 33 ± 19.5 years, and 54.5% were male. There was no significant change in the number of acute appendicitis cases diagnosed over the study period (p = 0.2). During the study period, nonoperative management of acute appendicitis increased significantly, from 4.5% in 2004 to 6% in 2011 (p < 0.001). When compared with operatively managed patients, conservatively managed patients had a significantly longer hospital length of stay (3 [2-6] vs. 2 [1-3] days, p < 0.001) and more in-hospital complications (27.8% vs. 7%, p < 0.001). Both open and laparoscopic appendectomy were associated with shorter hospital length of stay and lower rates of in-hospital complications. Overall hospital charges were lower in patients managed conservatively (15,441 [8,070-31,688] vs. 20,062 [13,672-29,928] USD, p < 0.001). CONCLUSIONS: Nonoperative management of appendicitis has increased over time; however, the outcomes of nonoperative management did not improve over the study period. A more in-depth analysis of patient and system demographics may explain this disparity in trends. LEVEL OF EVIDENCE: Epidemiologic/prognostic study, level III.


Subject(s)
Anti-Bacterial Agents/therapeutic use , Appendicitis/drug therapy , Practice Patterns, Physicians'/statistics & numerical data , Adolescent , Adult , Aged , Appendectomy/methods , Appendicitis/surgery , Child , Female , Hospital Charges , Humans , Laparoscopy , Length of Stay/statistics & numerical data , Male , Middle Aged , Postoperative Complications/epidemiology , Retrospective Studies , Treatment Outcome , United States/epidemiology
17.
Am J Surg ; 211(6): 982-8, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26879418

ABSTRACT

BACKGROUND: Blunt cardiac injury (BCI) is an infrequent but potentially fatal finding in thoracic trauma. Its clinical presentation is highly variable, and the associated patient characteristics and injury patterns have not been well described in trauma patients. The aim of this study was to identify predictors of mortality in BCI patients. METHODS: We performed an 8-year retrospective analysis of all trauma patients diagnosed with BCI at our Level 1 trauma center. Patients older than 18 years with blunt chest trauma and a suspected diagnosis of BCI were included. BCI was diagnosed based on electrocardiography (EKG), echocardiography, biochemical cardiac markers, and/or radionuclide imaging studies. Elevated troponin I was defined as more than 2 recordings of ≥0.2. Abnormal EKG findings were defined as the presence of bundle branch block, ST-segment changes, or T-wave abnormalities. Univariate and multivariate regression analyses were performed. RESULTS: A total of 117 patients with BCI were identified. The mean age was 51 ± 22 years, 65% were male, mean systolic blood pressure was 93 ± 65, and the overall mortality rate was 44%. Patients who died were more likely to have a lactate greater than 2.5 (68% vs 31%, P = .02), hypotension (systolic blood pressure <90) (86% vs 14%, P = .001), and elevated troponin I (86% vs 11%, P = .01). There was no difference in rates of rib fracture (58% vs 56%, P = .8), sternal fracture (11% vs 21%, P = .2), or abnormal EKG findings (89% vs 90%, P = .6). Hypotension and lactate greater than 2.5 were the strongest predictors of mortality in BCI. CONCLUSIONS: BCI remains an important diagnostic and management challenge. However, once BCI is diagnosed, resuscitative therapy focused on correction of hypotension and elevated lactate may prove beneficial. Although the role of troponin in diagnosing BCI remains controversial, elevated troponin may have prognostic significance.


Subject(s)
Cause of Death , Myocardial Contusions/diagnosis , Myocardial Contusions/mortality , Wounds, Nonpenetrating/diagnosis , Wounds, Nonpenetrating/mortality , Academic Medical Centers , Adult , Aged , Cohort Studies , Echocardiography/methods , Female , Humans , Injury Severity Score , Male , Middle Aged , Morbidity , Multimodal Imaging/methods , Positron-Emission Tomography/methods , Predictive Value of Tests , Retrospective Studies , Risk Assessment , Survival Rate , Trauma Centers , Troponin I/analysis
18.
J Pediatr Surg ; 51(4): 649-53, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26778841

ABSTRACT

INTRODUCTION: Whole body CT (WBCT) scanning is known to be associated with significant radiation risk, especially in pediatric trauma patients. The aim of this study was to assess the use of WBCT across trauma centers in the management of pediatric trauma patients. METHODS: We performed a two-year (2011-2012) retrospective analysis of the National Trauma Data Bank. Pediatric (age ≤18 years) trauma patients managed in level I or II adult or pediatric trauma centers with a head, neck, thoracic, or abdominal CT scan were included. WBCT was defined as CT of the head, neck, thorax, and abdomen. Patients were stratified into two groups: patients managed in adult centers and patients managed in designated pediatric centers. The outcome measure was the use of WBCT. Multivariate logistic regression analysis was performed. RESULTS: A total of 30,667 pediatric trauma patients were included, of whom 38.3% (n=11,748) were managed in designated pediatric centers; 26.1% (n=8,013) received a WBCT. The use of WBCT was significantly higher in adult trauma centers than in pediatric centers (31.4% vs. 17.6%, p=0.001). There was no difference in mortality rate between the two groups (2.2% vs. 2.1%, p=0.37). After adjusting for all confounding factors, pediatric patients managed in adult centers were 1.8 times more likely to receive a WBCT than patients managed in pediatric centers (OR [95% CI]: 1.8 [1.3-2.1], p=0.001). CONCLUSIONS: Variability exists in the use of WBCT across trauma centers, with no difference in patient outcomes. Pediatric patients managed in adult trauma centers were more likely to be managed with WBCT, increasing their radiation exposure without a difference in outcomes. Establishing guidelines to minimize the use of WBCT across centers is warranted.


Subject(s)
Healthcare Disparities/statistics & numerical data , Practice Patterns, Physicians'/statistics & numerical data , Tomography, X-Ray Computed/statistics & numerical data , Trauma Centers , Whole Body Imaging/statistics & numerical data , Wounds and Injuries/diagnostic imaging , Adolescent , Adult , Child , Child, Preschool , Databases, Factual , Female , Humans , Infant , Infant, Newborn , Logistic Models , Male , Multivariate Analysis , Retrospective Studies , Tomography, X-Ray Computed/methods , United States
19.
Traffic Inj Prev ; 17(5): 460-4, 2016 Jul 03.
Article in English | MEDLINE | ID: mdl-26760495

ABSTRACT

INTRODUCTION: Distracted driving (talking and/or texting) is a growing public safety problem, with increasing incidence among adult drivers. The aim of this study was to identify the incidence of distracted driving (DD) among health care providers and to create awareness against DD. We hypothesized that distracted driving is prevalent among health care providers and that a preventive campaign against distracted driving would effectively decrease DD among health care providers. METHODS: We performed a 4-phase prospective interventional study of all health care providers at our Level 1 trauma center. Phase 1: one week of pre-intervention observation; phase 2: one week of intervention; phase 3: one week of postintervention observation; and phase 4: one week of observation at 6 months postintervention. Observations were performed outside the employee parking garage at the following time intervals: 6:30-8:30 a.m., 4:40-5:30 p.m., and 6:30-7:30 p.m. The intervention included an e-mail survey, pamphlets and banners in the hospital cafeteria, and a postintervention survey. Hospital employees were identified by badges and scrubs, exit through the employee gate, and parking passes on their cars. The outcome measure was the incidence of DD pre-intervention, postintervention, and 6 months postintervention. RESULTS: A total of 15,416 observations (pre: 6,639; post: 4,220; 6 months post: 4,557) and 520 survey responses were collected. The incidence of DD was 11.8% among health care providers. There was a significant reduction in DD in each time interval of observation between pre- and postintervention. On subanalysis, there was a significant decrease in talking (P = .0001) and texting (P = .01) while driving postintervention compared with pre-intervention. In the survey, 35.5% of respondents admitted to DD, and 4.5% of respondents had been involved in an accident due to DD. We found that 77% of respondents felt more informed after the survey, and 91% of respondents supported state legislation against DD. The reduction in the incidence of DD postintervention was sustained even at 6-month follow-up. CONCLUSION: There was a 32% reduction in the incidence of distracted driving postintervention, which remained low even at 6-month follow-up. Implementation of an effective injury prevention campaign could reduce the incidence of distracted driving nationally.
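A pre- versus post-intervention comparison of observed proportions like this one can be tested with a two-proportion z-test; the counts below are placeholders chosen only to be roughly consistent with the reported ~32% relative reduction, not the study's raw data.

```python
# Two-proportion z-test comparing pre- vs. post-intervention distracted-driving
# rates. Distracted counts are hypothetical placeholders; the denominators
# (6,639 pre and 4,220 post observations) come from the abstract.
from statsmodels.stats.proportion import proportions_ztest

distracted = [870, 376]       # hypothetical numbers of distracted drivers observed
observed = [6639, 4220]       # total observations, pre and post intervention
stat, p = proportions_ztest(count=distracted, nobs=observed)
print(f"z = {stat:.2f}, p = {p:.4f}")
```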


Subject(s)
Accidents, Traffic/statistics & numerical data , Automobile Driving/psychology , Distracted Driving/prevention & control , Health Personnel/psychology , Wounds and Injuries/prevention & control , Adult , Automobile Driving/statistics & numerical data , Awareness , Communication , Distracted Driving/statistics & numerical data , Female , Follow-Up Studies , Health Personnel/statistics & numerical data , Humans , Male , Prevalence , Program Evaluation , Prospective Studies , Surveys and Questionnaires , Text Messaging/statistics & numerical data
20.
J Surg Res ; 200(2): 586-92, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26365164

ABSTRACT

BACKGROUND: Multiple prior studies have suggested an association between survival and beta-blocker administration in patients with severe traumatic brain injury (TBI). However, it is unknown whether this benefit of beta-blockers depends on heart rate control. The aim of this study was to assess whether rate control affects survival in patients with severe TBI receiving metoprolol. Our hypothesis was that improved survival from beta-blockade would be associated with a reduction in heart rate. METHODS: We performed a 7-year retrospective analysis of all blunt TBI patients at a Level 1 trauma center. Patients aged >16 years with a head Abbreviated Injury Scale (AIS) score of 4 or 5, admitted to the intensive care unit (ICU) from the operating room or emergency room, were included. Patients were stratified into two groups: metoprolol and no beta-blockers. Using propensity score matching, we matched patients in the two groups in a 1:1 ratio, controlling for age, gender, race, admission vital signs, Glasgow coma scale, injury severity score, mean heart rate during the ICU admission, and the standard deviation of heart rate during the ICU admission. Our primary outcome measure was mortality. RESULTS: A total of 914 patients met our inclusion criteria, of whom 189 received beta-blockers. A propensity-matched cohort of 356 patients (178 metoprolol and 178 no beta-blockers) was created. Patients receiving metoprolol had higher survival than patients who did not receive beta-blockers (78% versus 68%; P = 0.04); however, there was no difference in mean heart rate (89.9 ± 13.9 versus 89.9 ± 15; P = 0.99), nor in the mean standard deviation of heart rate (14.7 ± 6.3 versus 14.4 ± 6.5; P = 0.65), between the two groups. In Kaplan-Meier survival analysis, patients who received metoprolol had a survival advantage (P = 0.011) compared with patients who did not receive any beta-blockers. CONCLUSIONS: Our study shows an association with improved survival in patients with severe TBI receiving metoprolol, and this effect appears to be independent of any reduction in heart rate. We suggest that beta-blockers should be administered to all severe TBI patients regardless of any perceived beta-blockade effect on heart rate.


Subject(s)
Adrenergic beta-Antagonists/pharmacology , Brain Injuries/drug therapy , Heart Rate/drug effects , Metoprolol/pharmacology , Adolescent , Adrenergic beta-Antagonists/therapeutic use , Adult , Aged , Aged, 80 and over , Brain Injuries/mortality , Female , Glasgow Coma Scale , Humans , Injury Severity Score , Male , Metoprolol/therapeutic use , Middle Aged , Propensity Score , Retrospective Studies , Treatment Outcome , Young Adult