1.
Article in English | MEDLINE | ID: mdl-39225798

ABSTRACT

BACKGROUND: Prediction models for survival in trauma rely on arrival vital signs to generate survival probabilities. Hospitals are benchmarked on expected and observed outcomes. Prehospital blood (PB) transfusion has been shown to reduce mortality, which may affect survival prediction modeling. We hypothesize that the use of PB increases the predicted survival derived from probability models compared with non-blood-based resuscitation. METHODS: All adult trauma patients presenting to a level 1 trauma center requiring emergency release blood transfusion from January 2017 to December 2021 were reviewed. Patients were grouped into those receiving PB or those who did not (no PB). Prehospital Trauma and Injury Severity Score (TRISS) and shock index were compared with those at presentation to hospital. Univariate and multivariate regressions were performed to identify factors associated with changes in survival probability at presentation. RESULTS: In total, 2,117 patients were reviewed (PB, 1,011; no PB, 1,106). Patients receiving PB were younger (35 vs. 40 years, p < 0.001), more likely to have blunt mechanism (71% vs. 65%, p = 0.002), and more severely injured (Injury Severity Score, 27 vs. 25; p < 0.001) and had higher rates of prehospital hypotension (44% vs. 19%, p < 0.001) and a higher prehospital shock index (1.10 vs. 0.87, p < 0.001). Upon arrival, PB patients had lower rates of ED hypotension (34% vs. 39%, p = 0.01) and significant improvements in arrival TRISS scores (+0.09 vs. -0.02, p < 0.001) and shock index (+0.10 vs. -0.07, p < 0.001) compared with prehospital values. On multivariate analysis, PB was associated with a threefold increase in unexpected survivors (odds ratio, 3.28; 95% confidence interval, 2.23-4.60). CONCLUSION: The use of PB was associated with improved probability of survival and an increase in unexpected survivors. Applying TRISS and shock index at hospital arrival does not account for en route hemostatic resuscitation, causing patients to arrive with improved vitals despite severity of injury. Caution should be used when implementing survival probability calculations using arrival vitals in centers with prehospital transfusion capability. LEVEL OF EVIDENCE: Retrospective Comparative Study Without Negative Criteria; Level III.
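For readers who want to see how arrival physiology feeds these scores, the sketch below computes shock index and a TRISS-style survival probability. It is a minimal illustration assuming the classic logistic form of TRISS; the coefficients are placeholders patterned on commonly cited MTOS-era values, and the RTS inputs in the example are made up rather than taken from the study.

```python
import math

def shock_index(heart_rate, systolic_bp):
    """Shock index = heart rate / systolic blood pressure (unitless);
    values near or above 1.0 are commonly read as physiologic shock."""
    return heart_rate / systolic_bp

def triss_probability(rts, iss, age_years, blunt=True):
    """TRISS probability of survival: Ps = 1 / (1 + exp(-b)), where
    b = b0 + b1*RTS + b2*ISS + b3*AgeIndex and AgeIndex is 0 for age <= 54,
    else 1.  The coefficients below are illustrative placeholders patterned
    on the published MTOS values; substitute the set your registry uses."""
    b0, b1, b2, b3 = ((-0.4499, 0.8085, -0.0835, -1.7430) if blunt
                      else (-2.5355, 0.9934, -0.0651, -1.1360))
    age_index = 0 if age_years <= 54 else 1
    b = b0 + b1 * rts + b2 * iss + b3 * age_index
    return 1.0 / (1.0 + math.exp(-b))

# The study's point in miniature: identical anatomic injury (ISS), but better
# arrival physiology after prehospital transfusion yields a higher predicted
# survival than the prehospital physiology would have.
print(shock_index(120, 109))                              # ~1.10
print(triss_probability(rts=5.03, iss=27, age_years=35))  # prehospital physiology
print(triss_probability(rts=7.84, iss=27, age_years=35))  # arrival physiology
```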

2.
Article in English | MEDLINE | ID: mdl-38689402

ABSTRACT

INTRODUCTION: Non-narcotic intravenous medications may be a beneficial adjunct to oral multimodal pain regimens (MMPRs), which reduce but do not eliminate opioid exposure and prescribing after trauma. We hypothesized that the addition of a sub-dissociative ketamine infusion (KI) to a standardized oral MMPR reduces inpatient opioid exposure. METHODS: Eligible adult trauma patients admitted to the intermediate or intensive care unit were randomized upon admission to our institutional MMPR per usual care (UC) or UC plus sub-dissociative KI for 24 to 72 hours after arrival. The primary outcome was morphine milligram equivalents per day (MME/d), and secondary outcomes included total MME, discharge with an opioid prescription (OP%), and rates of ketamine side effects. Bayesian posterior probabilities (pp) were calculated using neutral priors. RESULTS: A total of 300 patients were included in the final analysis, with 144 randomized to KI and 156 to UC. Baseline characteristics were similar between groups. Injury Severity Scores were 19 [14, 29] for KI versus 22 [14, 29] for UC. The KI group had lower rates of long-bone fracture (37% versus 49%) and laparotomy (16% versus 24%). Patients receiving KI had an absolute reduction of 7 MME/d, 96 total MME, and 5% in OP%. Additionally, KI had a relative risk (RR) reduction of 19% in MME/d (RR 0.81 [0.69, 0.95], pp = 99%), 20% in total MME (RR 0.80 [0.64, 0.99], pp = 98%), and 8% in OP% (RR 0.92 [0.76, 1.11], pp = 81%). The KI group had a higher rate of delirium (11% versus 6%); however, rates of other side effects such as arrhythmias and unplanned intubations were similar between groups. CONCLUSION: Addition of a sub-dissociative ketamine infusion to an oral MMPR resulted in a decrease in opioid exposure in severely injured patients. Sub-dissociative ketamine infusions can be used as a safe adjunct to decrease opioid exposure in monitored settings. LEVEL OF EVIDENCE: I; Therapeutic/Care Management.
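To make the two headline quantities concrete, the sketch below computes MME per day from charted doses and a posterior probability that a relative risk lies below 1. The conversion factors and the conjugate-normal model are illustrative assumptions rather than the trial's actual analysis, and the drug names and numbers in the example are hypothetical.

```python
import math
from statistics import NormalDist

# Commonly used oral MME conversion factors (illustrative; confirm against
# the conversion table the trial actually used).
MME_FACTORS = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0,
               "hydromorphone": 4.0, "tramadol": 0.1, "codeine": 0.15}

def mme_per_day(doses_mg, los_days):
    """doses_mg: iterable of (drug, oral_mg) pairs; returns total MME per day."""
    total = sum(MME_FACTORS[drug] * mg for drug, mg in doses_mg)
    return total / los_days

def posterior_prob_rr_below_1(log_rr_hat, se, prior_mean=0.0, prior_sd=10.0):
    """P(RR < 1) under a normal approximation on log(RR) with a neutral
    (weakly informative) normal prior centred on no effect; a generic
    conjugate-normal sketch, not the trial's actual model."""
    prior_prec, like_prec = 1.0 / prior_sd ** 2, 1.0 / se ** 2
    post_mean = (prior_prec * prior_mean + like_prec * log_rr_hat) / (prior_prec + like_prec)
    post_sd = math.sqrt(1.0 / (prior_prec + like_prec))
    return NormalDist(post_mean, post_sd).cdf(0.0)

# Reproduce the flavour of the reported result: RR 0.81 with 95% CI 0.69-0.95.
log_rr = math.log(0.81)
se = (math.log(0.95) - math.log(0.69)) / (2 * 1.96)
print(mme_per_day([("oxycodone", 20), ("morphine", 4)], 2))  # 17.0 MME/d
print(round(posterior_prob_rr_below_1(log_rr, se), 2))       # ~0.99
```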

3.
Surg Infect (Larchmt) ; 25(1): 19-25, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38170174

ABSTRACT

Background: Patients undergoing trauma laparotomy experience high rates of surgical site infection (SSI). Although intra-operative shock is a likely contributor to SSI risk, little is known about the relation between shock, intra-operative restoration of physiologic normalcy, and SSI development. Patients and Methods: A retrospective review of trauma patients who underwent emergent definitive laparotomy was performed. Using shock index and base excess at the beginning and end of laparotomy, patients were classified as normal, persistent shock, resuscitated, or new shock. Univariable and multivariable analyses were performed to identify predictors of organ/space SSI, superficial/deep SSI, and any SSI. Results: Of 1,191 included patients, 600 (50%) were categorized as no shock, 248 (21%) as resuscitated, 109 (9%) as new shock, and 236 (20%) as persistent shock, with any SSI occurring in 51 (9%), 28 (11%), 26 (24%), and 32 (14%), respectively. Similar patterns were observed for organ/space and superficial/deep SSIs. On multivariable analysis, shock at any point (resuscitated, new shock, or persistent shock) was associated with increased odds of organ/space SSI (odds ratio [OR], 2.2; 95% confidence interval [CI], 1.3-3.5; p < 0.001) and any SSI (OR, 2.0; 95% CI, 1.4-3.2; p < 0.001), but not with increased odds of superficial/deep SSI (OR, 1.4; 95% CI, 0.8-2.6; p = 0.331). Conclusions: Although the trajectory of physiologic status influenced SSI, the presence of shock at any time during trauma laparotomy, regardless of restoration of physiologic normalcy, was associated with increased odds of SSI. Further investigation is warranted to determine the relation between peri-operative shock and SSI in trauma patients.
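The four study groups are defined purely by shock status at the start versus the end of laparotomy. A small sketch of that classification logic follows; the shock index and base excess cutoffs are assumptions chosen for illustration, since the abstract does not report the thresholds used.

```python
def in_shock(shock_index, base_excess, si_cutoff=0.9, be_cutoff=-4.0):
    """Flag physiologic shock from shock index and base excess.
    The cutoffs here are assumptions for illustration only."""
    return shock_index >= si_cutoff or base_excess <= be_cutoff

def shock_trajectory(si_start, be_start, si_end, be_end):
    """Classify the intra-operative trajectory the way the four study groups
    are defined: by shock status at the start vs end of laparotomy."""
    start = in_shock(si_start, be_start)
    end = in_shock(si_end, be_end)
    if not start and not end:
        return "normal"
    if start and end:
        return "persistent shock"
    if start and not end:
        return "resuscitated"
    return "new shock"

print(shock_trajectory(1.2, -6.0, 0.7, -1.0))  # "resuscitated"
```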


Subject(s)
Laparotomy; Surgical Wound Infection; Humans; Surgical Wound Infection/epidemiology; Surgical Wound Infection/etiology; Laparotomy/adverse effects; Risk Factors; Retrospective Studies; Incidence
4.
Ann Surg ; 278(3): 357-365, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37317861

ABSTRACT

OBJECTIVE: To compare the effectiveness of surgical stabilization of rib fractures (SSRF) with nonoperative management in severe chest wall injury. BACKGROUND: SSRF has been shown to improve outcomes in patients with clinical flail chest and respiratory failure. However, the effect of SSRF on outcomes in severe chest wall injury without clinical flail chest is unknown. METHODS: Randomized controlled trial comparing SSRF to nonoperative management in severe chest wall injury, defined as (1) a radiographic flail segment without clinical flail, (2) ≥5 consecutive rib fractures, or (3) any rib fracture with bicortical displacement. Randomization was stratified by the unit of admission as a proxy for injury severity. Primary outcome was hospital length of stay (LOS). Secondary outcomes included intensive care unit (ICU) LOS, ventilator days, opioid exposure, mortality, and incidences of pneumonia and tracheostomy. Quality of life at 1, 3, and 6 months was measured using the EQ-5D-5L survey. RESULTS: Eighty-four patients were randomized in an intention-to-treat analysis (usual care = 42, SSRF = 42). Baseline characteristics were similar between groups. The numbers of total fractures, displaced fractures, and segmental fractures per patient were also similar, as were the incidences of displaced fractures and radiographic flail segments. Hospital LOS was greater in the SSRF group. ICU LOS and ventilator days were similar. After adjusting for the stratification variable, hospital LOS remained greater in the SSRF group (RR: 1.48, 95% CI: 1.17-1.88). ICU LOS (RR: 1.65, 95% CI: 0.94-2.92) and ventilator days (RR: 1.49, 95% CI: 0.61-3.69) remained similar. Subgroup analysis showed that patients with displaced fractures were more likely to have LOS outcomes similar to their usual care counterparts. At 1 month, SSRF patients had greater impairment in mobility [3 (2-3) vs 2 (1-2), P = 0.012] and self-care [2 (1-2) vs 2 (2-3), P = 0.034] dimensions of the EQ-5D-5L. CONCLUSIONS: In severe chest wall injury, even in the absence of clinical flail chest, the majority of patients still reported moderate to extreme pain and impairment of usual physical activity at 1 month. SSRF increased hospital LOS and did not provide any quality of life benefit for up to 6 months.


Subject(s)
Flail Chest; Rib Fractures; Thoracic Wall; Humans; Rib Fractures/surgery; Rib Fractures/complications; Flail Chest/surgery; Flail Chest/complications; Thoracic Wall/surgery; Quality of Life; Length of Stay; Ribs; Retrospective Studies
5.
J Trauma Acute Care Surg ; 95(5): 685-690, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37125814

ABSTRACT

BACKGROUND: Following the COVID-19 pandemic and the subsequent blood shortage, several investigators evaluated futility cut points in massive transfusion. We hypothesized that early aggressive use of damage-control resuscitation, including whole blood (WB), would demonstrate that these cut points of futility were significantly underestimating potential survival among patients receiving >50 U of blood in the first 4 hours. METHODS: Adult trauma patients admitted from November 2017 to October 2021 who received emergency-release blood products in the prehospital or emergency department setting were included. Deaths within 30 minutes of arrival were excluded. Total blood products were defined as total red blood cell, plasma, and WB units in the field and in the first 4 hours after arrival. Patients were first divided into those receiving ≤50 or >50 U of blood in the first 4 hours. We then evaluated patients by whether they received any WB or received only component therapy. Thirty-day survival was evaluated for all included patients. RESULTS: A total of 2,299 patients met the inclusion criteria (2,043 in the ≤50 U and 256 in the >50 U group). While there were no differences in age or sex, the >50 U group was more likely to sustain penetrating injury (47% vs. 30%, p < 0.05). Patients receiving >50 U of blood had lower field and arrival blood pressure and larger prehospital and emergency department resuscitation volumes (p < 0.05). Patients in the >50 U group had lower survival than those in the ≤50 U group (31% vs. 79%; p < 0.05). Patients who received WB (n = 1,291) had 43% increased odds of survival compared with those who received only component therapy (n = 1,008) (95% confidence interval, 1.09-1.87; p = 0.009) and higher 30-day survival at transfusion volumes >50 U. CONCLUSION: Survival rates in patients receiving >50 U of blood in the first 4 hours of care are as high as 50% to 60%, with survival still at 15% to 25% after 100 U. While responsible blood stewardship is critical, futility should not be declared based on high transfusion volumes alone. LEVEL OF EVIDENCE: Therapeutic/Care Management; Level III.
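The bookkeeping behind a survival-by-volume analysis like this is straightforward; the sketch below groups patients by total products given in the field plus the first 4 hours and tabulates 30-day survival per stratum. The field names and bin edges are assumptions for illustration, and exclusion of deaths within 30 minutes of arrival is assumed to have happened upstream, as in the study.

```python
from collections import defaultdict

def four_hour_units(patient):
    """Total products counted by the study: RBC + plasma + whole blood,
    prehospital plus the first 4 hours after arrival (field names assumed)."""
    return (patient["rbc_4h"] + patient["plasma_4h"] + patient["wb_4h"]
            + patient["prehospital_units"])

def survival_by_volume(patients, bins=(10, 20, 30, 40, 50, 75, 100)):
    """Return 30-day survival proportion per transfusion-volume stratum."""
    groups = defaultdict(lambda: [0, 0])  # bin label -> [survivors, total]
    for p in patients:
        units = four_hour_units(p)
        label = next((f"<= {b} U" for b in bins if units <= b),
                     f"> {bins[-1]} U")
        groups[label][0] += int(p["alive_30d"])
        groups[label][1] += 1
    return {label: alive / total for label, (alive, total) in groups.items()}
```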


Subject(s)
Medical Futility; Wounds and Injuries; Adult; Humans; Blood Transfusion; Emergency Service, Hospital; Plasma; Resuscitation; Wounds and Injuries/therapy; Retrospective Studies; Injury Severity Score; Blood Component Transfusion
6.
Trauma Surg Acute Care Open ; 7(1): e001043, 2022.
Article in English | MEDLINE | ID: mdl-36483590

ABSTRACT

Introduction: Dysphagia is associated with increased morbidity, mortality, and resource utilization in hospitalized patients, but studies on outcomes in geriatric trauma patients with dysphagia are limited. We hypothesized that geriatric trauma patients with dysphagia would have worse clinical outcomes compared with those without dysphagia. Methods: Patients with and without dysphagia were compared in a single-center retrospective cohort study of trauma patients aged ≥65 years admitted in 2019. The primary outcome was mortality. Secondary outcomes included intensive care unit (ICU) length of stay (LOS), hospital LOS, discharge destination, and unplanned ICU admission. Multivariable regression analyses and Bayesian analyses adjusted for age, Injury Severity Score, mechanism of injury, and gender were performed to determine the association between dysphagia and clinical outcomes. Results: Of 1706 geriatric patients, 69 patients (4%) were diagnosed with dysphagia. Patients with dysphagia were older with a higher Injury Severity Score. Increased odds of mortality did not reach statistical significance (OR 1.6, 95% CI 0.6 to 3.4, p=0.30). Dysphagia was associated with increased odds of unplanned ICU admission (OR 4.6, 95% CI 2.0 to 9.6, p≤0.001) and non-home discharge (OR 5.2, 95% CI 2.4 to 13.9, p≤0.001), as well as increased ICU LOS (OR 4.9, 95% CI 3.1 to 8.1, p≤0.001), and hospital LOS (OR 2.1, 95% CI 1.7 to 2.6, p≤0.001). On Bayesian analysis, dysphagia was associated with an increased probability of longer hospital and ICU LOS, unplanned ICU admission, and non-home discharge. Conclusions: Clinically apparent dysphagia is associated with poor outcomes, but it remains unclear if dysphagia represents a modifiable risk factor or a marker of underlying frailty, leading to poor outcomes. This study highlights the importance of screening protocols for dysphagia in geriatric trauma patients to possibly mitigate adverse outcomes. Level of evidence: Level III.
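The adjusted odds ratios reported here come from multivariable models controlling for age, Injury Severity Score, mechanism, and gender. Below is a minimal frequentist sketch of that kind of model using statsmodels; the file name and column names are assumptions, and the study additionally reports Bayesian analyses not shown here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; 0/1 outcome and exposure columns plus the covariates
# named in the abstract (hypothetical file and column names).
df = pd.read_csv("geriatric_trauma_2019.csv")

model = smf.logit(
    "unplanned_icu ~ dysphagia + age + iss + C(mechanism) + C(gender)",
    data=df,
).fit()

print(np.exp(model.params["dysphagia"]))          # adjusted odds ratio
print(np.exp(model.conf_int().loc["dysphagia"]))  # 95% confidence interval
```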

7.
Trials ; 23(1): 599, 2022 Jul 27.
Article in English | MEDLINE | ID: mdl-35897081

ABSTRACT

BACKGROUND: Evidence for effective pain management and opioid minimization with intravenous ketamine in elective surgery has been extrapolated to acutely injured patients, despite limited supporting evidence in this population. This trial seeks to determine the effectiveness of the addition of sub-dissociative ketamine to a pill-based, opioid-minimizing multi-modal pain regimen (MMPR) for posttraumatic pain. METHODS: This is a single-center, parallel-group, randomized, controlled comparative effectiveness trial comparing an MMPR to an MMPR plus a sub-dissociative ketamine infusion. All trauma patients 16 years and older admitted following trauma who require intermediate (IMU) or intensive care unit (ICU) level of care are eligible. Prisoners, patients who are pregnant, patients not expected to survive, and those with contraindications to ketamine are excluded from this study. The primary outcome is opioid use, measured by morphine milligram equivalents (MME) per patient per day (MME/patient/day). The secondary outcomes include total MME, pain scores, morbidity, lengths of stay, opioid prescriptions at discharge, and patient-centered outcomes at discharge and 6 months. DISCUSSION: This trial will determine the effectiveness of sub-dissociative ketamine infusion as part of an MMPR in reducing in-hospital opioid exposure in adult trauma patients. Furthermore, it will inform decisions regarding acute pain strategies and their effect on patient-centered outcomes. TRIAL REGISTRATION: The Ketamine for Acute Pain Management After Trauma (KAPT) trial (registration NCT04129086) was registered on October 16, 2019.


Subject(s)
Acute Pain; Ketamine; Acute Pain/diagnosis; Acute Pain/drug therapy; Acute Pain/etiology; Adult; Analgesics/adverse effects; Analgesics, Opioid/adverse effects; Humans; Ketamine/adverse effects; Pain Measurement; Pain, Postoperative
8.
J Trauma Acute Care Surg ; 93(1): e30-e39, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35393377

ABSTRACT

ABSTRACT: The prior article in this series delved into measuring cost in acute care surgery, and this subsequent work explains in detail how quality is measured. Specifically, objective quality is based on outcome measures drawn from both administrative and clinical registry databases from a multitude of sources. Risk stratification is key in comparing similar populations across diseases and procedures. Importantly, a move toward subjective outcomes such as patient-reported outcome measures and financial well-being is vital to evolving surgical quality measures for the 21st century.


Subject(s)
Outcome Assessment, Health Care; Patient Reported Outcome Measures; Databases, Factual; Humans; Registries
9.
J Burn Care Res ; 42(6): 1146-1151, 2021 11 24.
Article in English | MEDLINE | ID: mdl-34302482

ABSTRACT

In 2019, we implemented a pill-based, opioid-minimizing pain protocol and protocolized moderate sedation for dressing changes in order to decrease opioid exposure in burn patients. We hypothesized that these interventions would reduce inpatient opioid exposure without increasing acute pain scores. Two groups of consecutive patients admitted to the burn service were compared: a Pre-group (January 1, 2018 to July 31, 2019) and a Post-group (January 1, 2020 to June 30, 2020), admitted before and after implementation of the protocols (August 1, 2019 to December 31, 2019). We abstracted patient demographics and burn injury characteristics from the burn registry. We obtained opioid exposure and pain scale scores from the electronic medical record. The primary outcome was total morphine milligram equivalents (MMEs). Secondary outcomes included MMEs/day, pain domain-specific MMEs, and pain scores. Pain was estimated by creating a normalized pain score (range 0-1), which incorporated three different pain scales (Numeric Rating Scale, Behavioral Pain Scale, and Behavioral Pain Assessment Scale). Groups were compared using Wilcoxon rank-sum and chi-square tests. Treatment effects were estimated using Bayesian generalized linear models. There were no differences in demographics or burn characteristics between the Pre-group (n = 495) and Post-group (n = 174). The Post-group had significantly lower total MMEs (Post-group 110 MMEs [32, 325] vs Pre-group 230 [60, 840], P < .001), MMEs/day (Post-group 33 MMEs/day [15, 54] vs Pre-group 52 [27, 80], P < .001), and domain-specific total MMEs. No difference in average normalized pain scores was seen. Implementation of opioid-minimizing protocols for acute burn pain was associated with a significant reduction in inpatient opioid exposure without an increase in pain scores.
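The abstract does not spell out how the 0-1 normalized pain score was constructed; below is a minimal sketch of one plausible approach, min-max scaling each instrument to its own range and averaging a patient's assessments. The scale ranges are assumptions (NRS 0-10 is standard; the BPS and BPAS ranges should be confirmed against the instruments actually used), as are the example scores.

```python
# Assumed instrument ranges (min, max); verify before reuse.
SCALE_RANGES = {"NRS": (0, 10), "BPS": (3, 12), "BPAS": (0, 10)}

def normalized_pain(scale, raw_score):
    """Min-max scale one assessment to 0-1 using its instrument's range."""
    lo, hi = SCALE_RANGES[scale]
    return (raw_score - lo) / (hi - lo)

def mean_normalized_pain(assessments):
    """assessments: iterable of (scale_name, raw_score) pairs for one patient."""
    values = [normalized_pain(scale, score) for scale, score in assessments]
    return sum(values) / len(values)

print(round(mean_normalized_pain([("NRS", 6), ("BPS", 6), ("NRS", 4)]), 2))  # 0.44
```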


Subject(s)
Acute Pain/drug therapy; Analgesics, Opioid/therapeutic use; Burns/drug therapy; Pain Management/statistics & numerical data; Practice Patterns, Physicians'/statistics & numerical data; Acute Pain/etiology; Adult; Bayes Theorem; Burns/complications; Drug Prescriptions/statistics & numerical data; Female; Humans; Male; Middle Aged; Retrospective Studies
13.
JAMA Surg ; 153(2): 107-113, 2018 Feb 01.
Article in English | MEDLINE | ID: mdl-28975247

ABSTRACT

IMPORTANCE: Time to definitive care following injury is important to the outcomes of trauma patients. Prehospital trauma care is provided based on policies developed by individual trauma systems and is an important component of the care of injured patients. Given a paucity of systems-level trauma research, considerable variability exists in prehospital care policies across trauma systems, potentially affecting patient outcomes. OBJECTIVE: To evaluate whether private vehicle prehospital transport confers a survival advantage vs ground emergency medical services (EMS) transport following penetrating injuries in urban trauma systems. DESIGN, SETTING, AND PARTICIPANTS: Retrospective cohort study of data included in the National Trauma Data Bank from January 1, 2010, through December 31, 2012, comprising 298 level 1 and level 2 trauma centers that contribute data to the National Trauma Data Bank that are located within the 100 most populous metropolitan areas in the United States. Of 2 329 446 patients assessed for eligibility, 103 029 were included in this study. All patients were 16 years or older, had a gunshot wound or stab wound, and were transported by ground EMS or private vehicle. MAIN OUTCOME AND MEASURE: In-hospital mortality. RESULTS: Of the 2 329 446 records assessed for eligibility, 103 029 individuals at 298 urban level 1 and level 2 trauma centers were included in the analysis. The study population was predominantly male (87.6%), with a mean age of 32.3 years. Among those included, 47.9% were black, 26.3% were white, and 18.4% were Hispanic. Following risk adjustment, individuals with penetrating injuries transported by private vehicle were less likely to die than patients transported by ground EMS (odds ratio [OR], 0.38; 95% CI, 0.31-0.47). This association remained statistically significant on stratified analysis of the gunshot wound (OR, 0.45; 95% CI, 0.36-0.56) and stab wound (OR, 0.32; 95% CI, 0.20-0.52) subgroups. CONCLUSIONS AND RELEVANCE: Private vehicle transport is associated with a significantly lower likelihood of death when compared with ground EMS transport for individuals with gunshot wounds and stab wounds in urban US trauma systems. System-level evidence such as this can be a valuable tool for those responsible for developing and implementing policies at the trauma system level.


Subject(s)
Ambulances/statistics & numerical data; Automobiles/statistics & numerical data; Urban Health Services/statistics & numerical data; Wounds, Gunshot/mortality; Wounds, Stab/mortality; Adolescent; Adult; Databases, Factual; Female; Humans; Male; Middle Aged; Retrospective Studies; Survival Rate; Time-to-Treatment; Trauma Centers/statistics & numerical data; United States/epidemiology; Young Adult
14.
J Trauma Acute Care Surg ; 83(5): 837-845, 2017 11.
Article in English | MEDLINE | ID: mdl-29068873

ABSTRACT

BACKGROUND: Patients managed nonoperatively have been excluded from risk-adjusted benchmarking programs, including the American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP). Consequently, optimal performance evaluation is not possible for specialties like emergency general surgery (EGS) where nonoperative management is common. We developed a multi-institutional EGS clinical data registry within ACS NSQIP that includes patients managed nonoperatively to evaluate variability in nonoperative care across hospitals and identify gaps in performance assessment that occur when only operative cases are considered. METHODS: Using ACS NSQIP infrastructure and methodology, surgical consultations for acute appendicitis, acute cholecystitis, and small bowel obstruction (SBO) were sampled at 13 hospitals that volunteered to participate in the EGS clinical data registry. Standard NSQIP variables and 16 EGS-specific variables were abstracted with 30-day follow-up. To determine the influence of complications in nonoperative patients, rates of adverse outcomes were identified, and hospitals were ranked by performance with and then without including nonoperative cases. RESULTS: Two thousand ninety-one patients with EGS diagnoses were included, 46.6% with appendicitis, 24.3% with cholecystitis, and 29.1% with SBO. The overall rate of nonoperative management was 27.4%, 6.6% for appendicitis, 16.5% for cholecystitis, and 69.9% for SBO. Despite comprising only 27.4% of patients in the EGS pilot, nonoperative management accounted for 67.7% of deaths, 34.3% of serious morbidities, and 41.8% of hospital readmissions. After adjusting for patient characteristics and hospital diagnosis mix, addition of nonoperative management to hospital performance assessment resulted in 12 of 13 hospitals changing performance rank, with four hospitals changing by three or more positions. CONCLUSION: This study identifies a gap in performance evaluation when nonoperative patients are excluded from surgical quality assessment and demonstrates the feasibility of incorporating nonoperative care into existing surgical quality initiatives. Broadening the scope of hospital performance assessment to include nonoperative management creates an opportunity to improve the care of all surgical patients, not just those who have an operation. LEVEL OF EVIDENCE: Care management, level IV; Epidemiologic, level III.
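The performance-rank comparison at the heart of this study amounts to ranking hospitals twice, once over all cases and once over operative cases only, and seeing who moves. The sketch below shows that bookkeeping using raw adverse-outcome rates; the actual study used risk-adjusted rates, and the record field names are assumptions.

```python
from collections import defaultdict

def hospital_ranks(cases, include_nonoperative=True):
    """Rank hospitals (1 = lowest rate) by adverse-outcome rate.
    Raw rates are used purely to show the re-ranking mechanics; the study
    ranked hospitals on risk-adjusted rates."""
    events, totals = defaultdict(int), defaultdict(int)
    for case in cases:
        if not include_nonoperative and not case["operative"]:
            continue
        events[case["hospital"]] += int(case["adverse_outcome"])
        totals[case["hospital"]] += 1
    rates = {h: events[h] / totals[h] for h in totals}
    ordered = sorted(rates, key=rates.get)
    return {h: i + 1 for i, h in enumerate(ordered)}

def rank_shift(cases):
    """Change in rank for each hospital when nonoperative care is included."""
    with_all = hospital_ranks(cases, include_nonoperative=True)
    op_only = hospital_ranks(cases, include_nonoperative=False)
    return {h: with_all[h] - op_only.get(h, with_all[h]) for h in with_all}
```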


Subject(s)
Benchmarking; Emergency Medicine/standards; General Surgery/standards; Quality Improvement; Appendicitis/therapy; Cholecystitis/therapy; Female; Humans; Intestinal Obstruction/therapy; Intestine, Small; Male; Pilot Projects
15.
Am J Surg ; 214(5): 773-779, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28637590

ABSTRACT

INTRODUCTION: Readmissions have become a focus of pay-for-performance programs. Surgical site infections (SSI) are the most common reason for readmission. Readmissions for SSI could be a unique target for quality improvement. METHODS: Readmission risk for SSI was evaluated for patients undergoing colectomies from 2013 to 2014. Hazard models were developed to examine factors associated with, and hospital-level variation in, risk-adjusted rates of readmission for SSI. RESULTS: Among 59,088 patients at 525 hospitals, the rate of readmissions for SSI ranged from 1.45% to 6.34%. Characteristics associated with a greater likelihood of SSI readmission included male gender, smoking, open surgery, and treatment at hospitals with a higher proportion of socioeconomically disadvantaged patients. After risk adjustment, there was little correlation between hospital performance on SSI readmission rate and performance on overall SSI or total readmission rate (r² = 0.29 and r² = 0.14, respectively). CONCLUSIONS: Readmission for SSI represents a unique aspect of quality beyond that captured by measuring SSI or readmission rates alone and may provide an actionable target for quality improvement.


Subject(s)
Patient Readmission/statistics & numerical data; Quality Improvement; Surgical Wound Infection/epidemiology; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged
18.
J Trauma Acute Care Surg ; 81(5): 931-935, 2016 11.
Article in English | MEDLINE | ID: mdl-27537514

ABSTRACT

BACKGROUND: Rapid transport to definitive care ("scoop and run") versus field stabilization in trauma remains a topic of debate and has resulted in variability in prehospital policy. We aimed to identify trauma systems frequently using a true "scoop and run" police transport approach and to compare mortality rates between police and ground emergency medical services (EMS) transport. METHODS: Using the National Trauma Databank (NTDB), we identified adult gunshot and stab wound patients presenting to Level 1 or 2 trauma centers from 2010 to 2012. Hospitals were grouped into their respective cities and regional trauma systems. Patients directly transported by police or ground EMS to trauma centers in the 100 most populous US trauma systems were included. Frequency of police transport was evaluated, identifying trauma systems with high utilization. Mortality rates and risk-adjusted odds ratio for mortality for police versus EMS transport were derived. RESULTS: Of 88,564 total patients, 86,097 (97.2%) were transported by EMS and 2,467 (2.8%) by police. Unadjusted mortality was 17.7% for police transport and 11.6% for ground EMS. After risk adjustment, patients transported by police were no more likely to die than those transported by EMS (OR = 1.00, 95% CI: 0.69-1.45). Among all police transports, 87.8% occurred in three locations (Philadelphia, Sacramento, and Detroit). Within these trauma systems, unadjusted mortality was 19.9% for police transport and 13.5% for ground EMS. Risk-adjusted mortality was no different (OR = 1.01, 95% CI: 0.68-1.50). CONCLUSIONS: Using trauma system-level analyses, patients with penetrating injuries in urban trauma systems were found to have similar mortality for police and EMS transport. The majority of prehospital police transport in penetrating trauma occurs in three trauma systems. These cities represent ideal sites for additional system-level evaluation of prehospital transport policies. LEVEL OF EVIDENCE: Prognostic/epidemiologic study, level III.


Subject(s)
Emergency Medical Services/organization & administration; Transportation of Patients/methods; Wounds, Gunshot/mortality; Wounds, Stab/mortality; Adult; Databases, Factual; Hospitals, Urban; Humans; Organizational Policy; Police; Trauma Centers; United States/epidemiology; Wounds, Gunshot/therapy; Wounds, Penetrating/mortality; Wounds, Penetrating/therapy; Wounds, Stab/therapy
19.
JAMA Surg ; 151(12): 1125-1130, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27556900

ABSTRACT

Importance: There are currently 2 widely accepted treatment strategies for patients presenting to the hospital with choledocholithiasis. However, the rate of use for each strategy in the United States has not been evaluated, and their trends over time have not been described. Furthermore, an optimal management strategy for choledocholithiasis has yet to be defined. Objective: To evaluate secular trends in the management of choledocholithiasis in the United States and to compare hospital length of stay between patients with choledocholithiasis treated with endoscopic retrograde cholangiopancreatography with laparoscopic cholecystectomy (ERCP+LC) vs laparoscopic common bile duct exploration with laparoscopic cholecystectomy (LCBDE+LC). Design, Setting, and Participants: In this cohort study, we studied patients with a primary diagnosis of choledocholithiasis who were included in the National Inpatient Sample between 1998 and 2013 from a representative sample of acute care hospitals in the United States. Patients with cholangitis or pancreatitis were excluded. Main Outcomes and Measures: Unadjusted and risk-adjusted median hospital length of stay. Results: Of the 37 207 patients included in our analysis, 36 048 (96.9%) were treated with ERCP+LC and 1159 (3.1%) were treated with LCBDE+LC. The mean (SD) age of patients treated with ERCP+LC was 50.7 (21.1) years and was 51.9 (20.9) years for those treated with LCBDE+LC; 25 788 (69.3%) were female. Analysis of the National Inpatient Sample data indicates that there are an average of 26 158 patients with choledocholithiasis admitted in the United States each year. The overall use of common bile duct exploration (CBDE) for patients with choledocholithiasis decreased from 39.8% of admissions in 1998 to 8.5% in 2013 (P < .001). A decrease was also seen for open CBDE (30.6% vs 5.5%; P < .001) and laparoscopic CBDE (9.2% vs 3.0%; P < .001) independently. Rates of management with LCBDE+LC decreased from 5.3% to 1.5% (P < .001), while rates of ERCP+LC increased from 52.8% to 85.7% (P < .001). The unadjusted median hospital length of stay was shorter for patients treated with LCBDE+LC than for those treated with ERCP+LC (3.0 vs 4.0 days; P < .001). After risk adjustment, the median length of stay remained 0.5 days shorter for patients treated with LCBDE+LC than with ERCP+LC (3.5 vs 4.0 days; P < .001). Conclusions and Relevance: This study highlights the marked decline in the use of both open and laparoscopic CBDE in the United States as well as the length-of-stay benefit of LCBDE+LC over ERCP+LC. Despite a persistent need for CBDE and the potential benefits of LCBDE+LC over ERCP+LC for managing choledocholithiasis, if current trends continue, CBDE may be at risk of disappearing from the surgical armamentarium.
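A risk-adjusted median length-of-stay comparison like the one reported here can be approximated with median (quantile) regression. The sketch below shows that approach on an NIS-style analytic file; the file name, column names, and covariates are assumptions, since the abstract does not list the exact adjustment variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per admission; hypothetical file and column names.
df = pd.read_csv("nis_choledocholithiasis.csv")

# Median (q = 0.5) regression of LOS on treatment strategy, adjusting for a
# few illustrative covariates as a stand-in for the study's risk adjustment.
fit = smf.quantreg("los_days ~ C(strategy) + age + C(female)", df).fit(q=0.5)
print(fit.params)   # coefficient on strategy ~ adjusted difference in median LOS
```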


Subject(s)
Cholangiopancreatography, Endoscopic Retrograde/trends; Cholecystectomy, Laparoscopic/trends; Choledocholithiasis/surgery; Common Bile Duct/surgery; Length of Stay/statistics & numerical data; Adult; Aged; Female; Humans; Male; Middle Aged; Retrospective Studies; Risk Adjustment