ABSTRACT
Aims: The aims were to assess whether joint-specific outcome after total knee arthroplasty (TKA) was influenced by implant design over a 12-year follow-up period, and whether patient-related factors were associated with loss to follow-up and mortality risk. Methods: Long-term follow-up of a randomized controlled trial was undertaken. A total of 212 patients were allocated to a Triathlon or a Kinemax TKA. Patients were assessed preoperatively, and one, three, eight, and 12 years postoperatively using the Oxford Knee Score (OKS). Reasons for loss to follow-up, mortality, and revision were recorded. Results: A total of 94 patients completed 12-year functional follow-up (62 females, mean age 66 years (43 to 82) at index surgery). There was a clinically significantly greater improvement in the OKS at one year (mean difference (MD) 3.0 (95% CI 0.4 to 5.7); p = 0.027) and three years (MD 4.7 (95% CI 1.9 to 7.5); p = 0.001) for the Triathlon group, but no differences were observed at eight (p = 0.331) or 12 years' (p = 0.181) follow-up. When assessing the OKS in the patients surviving to 12 years, the Triathlon group had a clinically significantly greater improvement in the OKS (marginal mean 3.8 (95% CI 0.2 to 7.4); p = 0.040). Loss to functional follow-up (53%, n = 109/204) was independently associated with older age (p = 0.001). Patient mortality was the major reason (56.4%, n = 62/110) for loss to follow-up. Older age (p < 0.001) and worse preoperative OKS (p = 0.043) were independently associated with increased mortality risk. An age at the time of surgery of ≥ 72 years was 75% sensitive and 74% specific for predicting mortality, with an area under the curve of 78.1% (95% CI 70.9 to 85.3; p < 0.001). Conclusion: The Triathlon TKA was associated with a clinically meaningful greater improvement in knee-specific outcome when compared to the Kinemax TKA. Loss to follow-up at 12 years was a limitation, and studies planning longer-term functional assessment could limit their cohort to patients aged under 72 years.
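The age threshold reported above (≥ 72 years, 75% sensitive, 74% specific, AUC 78.1%) is the kind of cutoff typically read off an ROC curve. The following is a minimal sketch of that procedure using synthetic data and scikit-learn rather than the trial cohort; the age-mortality relationship below is assumed purely for illustration.

```python
# Sketch: deriving an age cutoff for predicting mortality from an ROC curve.
# Data are synthetic placeholders, not the trial cohort.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 204
age = rng.uniform(43, 82, n)                 # age at index surgery
# Assumed toy relationship: older patients die more often during follow-up.
p_death = 1 / (1 + np.exp(-(age - 72) / 4))
died = rng.binomial(1, p_death)

fpr, tpr, thresholds = roc_curve(died, age)  # age itself is the "score"
auc = roc_auc_score(died, age)
youden = tpr - fpr                           # Youden's J = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"AUC = {auc:.3f}")
print(f"Optimal cutoff ~ {thresholds[best]:.1f} years "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```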
ABSTRACT
OBJECTIVE: Because there is no reliable method on admission to predict whether a patient will require neurosurgical intervention in the future, the general approach remains to treat each patient with mild traumatic brain injury (mTBI) and subdural hematoma (SDH) as if they will require such an intervention. Consequently, there is a growing population of patients with mTBI and SDH that is overtriaged despite having a low probability of needing neurosurgical intervention. This study aimed to train and validate a predictive rule-out tool for neurosurgical intervention in patients with mTBI and SDH. METHODS: This was a retrospective cohort study of all trauma patients admitted to six level I trauma centers in three states. Patients were included if they met the following criteria: admitted between 2016 and 2020, ≥ 18 years of age, ICD-10 diagnosis of isolated SDH, initial head imaging available, initial Glasgow Coma Scale score of 13-15, and arrived within 48 hours of injury. Exclusion criteria included skull fracture, intracranial hemorrhage other than an SDH, and no neurosurgical consultation. Prediction variables included 34 demographic, clinical, and radiographic variables. The study outcome was neurosurgical intervention within 48 hours of hospital admission. Seventy-five percent of the data were used for training, and 25% for testing. Multivariable logistic regression with fivefold cross-validation was used on the training set to identify covariates with the highest specificity while holding sensitivity at 100%. Results were validated on the testing set. RESULTS: In total, 1000 patients were in the training set and 333 in the testing set. The overall neurosurgical intervention rate was 8.8%. For the fivefold cross-validation process, three variables were selected that maximized specificity while holding sensitivity at 100%: maximum hematoma thickness, initial Glasgow Coma Scale score, and preinjury antithrombotic use (sensitivity 100%, specificity 56%, area under the receiver operating characteristic curve 0.94). With a cutoff probability of neurosurgical intervention set at 1.88%, the final model was validated to predict neurosurgical intervention with a sensitivity of 100% (95% CI 88.4%-100%) and specificity of 55.1% (95% CI 49.3%-60.8%). CONCLUSIONS: In this study, the largest of its kind to date, the authors successfully developed and validated a new tool for ruling out the necessity of neurosurgical intervention in patients with mTBI and isolated SDH. By successfully identifying more than half of patients who are unlikely to require neurosurgery within the first 2 days of admission, this tool can be used to improve treatment efficiency and provide patients and clinicians with valuable prognostic information.
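As a rough illustration of the threshold-selection step described (fivefold cross-validation, holding sensitivity at 100% while maximizing specificity), the sketch below fits a logistic model on three stand-in predictors and takes, within each fold, the largest probability cutoff below every true intervention case's predicted risk. All variable names, coefficients, and data here are assumptions, not the study's model.

```python
# Sketch of a rule-out threshold: keep sensitivity at 100%, maximize specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.gamma(2.0, 4.0, n),          # max hematoma thickness (mm), assumed
    rng.integers(13, 16, n),         # initial GCS score (13-15)
    rng.binomial(1, 0.4, n),         # pre-injury antithrombotic use
])
logit = -6 + 0.35 * X[:, 0] - 0.3 * (X[:, 1] - 13) + 0.8 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # neurosurgery within 48 h (simulated)

cutoffs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    p = model.predict_proba(X[te])[:, 1]
    # Largest threshold that sits below every true case's predicted risk.
    cutoffs.append(p[y[te] == 1].min())

cutoff = min(cutoffs)                            # conservative across folds
model = LogisticRegression(max_iter=1000).fit(X, y)
p_all = model.predict_proba(X)[:, 1]
sens = (p_all[y == 1] >= cutoff).mean()
spec = (p_all[y == 0] < cutoff).mean()
print(f"cutoff = {cutoff:.4f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```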
ABSTRACT
Decision makers are often confronted with inadequate information to predict nutrient loads and yields in freshwater ecosystems at large spatial scales. We evaluate the potential of using data mapped at large spatial scales (regional to global), and often at coarse resolution, to predict nitrogen yields at varying smaller scales (e.g., at the catchment and stream reach level). We applied the SPAtially Referenced Regression On Watershed attributes (SPARROW) model in three regions: the Upper Midwest part of the United States, New Zealand, and the Grande River Basin in southeastern Brazil. For each region, we compared predictions of nitrogen delivery between models developed using novel large-scale datasets and those developed using local-scale datasets. Large-scale models tended to underperform the local-scale models in poorly monitored areas. Despite this, large-scale models are well suited to generate hypotheses about the relative effects of different nutrient source categories (point and urban, agricultural, native vegetation) and to identify knowledge gaps across spatial scales when data are scarce. Regardless of the spatial resolution of the predictors used in the models, a representative network of water quality monitoring stations is key to improving the performance of large-scale models used to estimate loads and yields. We discuss avenues of research to understand how this large-scale modelling approach can improve decision making for managing catchments at local scales, particularly in data-poor regions.
ABSTRACT
BACKGROUND: Coronary artery disease remains the largest contributor to cardiac arrests worldwide; yet, long-term outcomes are often driven by neurological status after resuscitation. We examined the association between pre-percutaneous coronary intervention (PCI) level of consciousness (LOC) and outcomes among patients with cardiac arrest who underwent PCI. METHODS: The study cohort included patients undergoing PCI after cardiac arrest between April 2018 and March 2022 at 48 hospitals in the state of Michigan. Pre-PCI LOC was categorized as mentally alert, partially responsive, unresponsive, and unable to assess. In-hospital outcomes included mortality, bleeding, and acute kidney injury. RESULTS: Among 3021 patients who underwent PCI after cardiac arrest, 1394 (49%) were mentally alert, 132 (5%) were partially responsive, 698 (24%) were unresponsive, and 631 (22%) were unable to assess. The mentally alert cohort had lower mortality (4.59%) compared with the partially responsive (17.42%), unresponsive (50.14%), and unable to assess cohorts (38.03%; P<0.001). After adjusting for baseline differences, compared with mentally alert patients, the odds of mortality were markedly elevated in patients who were partially responsive (adjusted odds ratio, 4.63 [95% CI, 2.67-8.04]; P<0.001), unable to assess (adjusted odds ratio, 13.95 [95% CI, 9.97-19.51]; P<0.001), and unresponsive (adjusted odds ratio, 24.36 [17.34-34.23]; P<0.001). After adjustment, patients with impaired LOC also had higher risks of acute kidney injury and bleeding compared with mentally alert patients. CONCLUSIONS: Pre-PCI LOC is a strong predictor of in-hospital outcomes after PCI among cardiac arrest patients. A patient's pre-PCI LOC should be considered an important factor when weighing treatment options, designing clinical trials, and counseling patients and their families regarding prognosis after PCI.
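The adjusted odds ratios quoted above are the usual output of a multivariable logistic regression: exponentiated coefficients with confidence intervals. A minimal sketch follows, assuming a simulated registry-style data frame and an arbitrary pair of adjustment covariates; this is not the study's risk-adjustment model.

```python
# Sketch: adjusted odds ratios for level of consciousness vs. mentally alert.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "pre_pci_loc": rng.choice(["alert", "partial", "unresponsive", "unable"], n,
                              p=[0.49, 0.05, 0.24, 0.22]),
    "age": rng.normal(64, 12, n),
    "shock": rng.binomial(1, 0.2, n),
})
base = {"alert": -3.0, "partial": -1.4, "unresponsive": 0.0, "unable": -0.5}
logit = df["pre_pci_loc"].map(base) + 0.02 * (df["age"] - 64) + 0.8 * df["shock"]
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("died ~ C(pre_pci_loc, Treatment('alert')) + age + shock",
                data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)        # adjusted ORs
conf_int = np.exp(fit.conf_int())       # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```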
Subjects
Heart Arrest, Hospital Mortality, Percutaneous Coronary Intervention, Humans, Percutaneous Coronary Intervention/adverse effects, Percutaneous Coronary Intervention/mortality, Male, Female, Aged, Middle Aged, Michigan, Treatment Outcome, Risk Factors, Risk Assessment, Heart Arrest/mortality, Heart Arrest/therapy, Heart Arrest/diagnosis, Heart Arrest/physiopathology, Time Factors, Acute Kidney Injury/mortality, Consciousness, Coronary Artery Disease/mortality, Coronary Artery Disease/therapy, Coronary Artery Disease/complications, Hemorrhage, Aged, 80 and over, Registries
ABSTRACT
The lessons learned from reviewing national risk assessments to modernise the Australian Standard for the post-mortem inspection and disposition judgement of beef, sheep, goat, and pig carcases are discussed. The initial risk profiles identified priorities for quantitative assessments. Broadly, the main difficulty encountered was the paucity of quantified performance for the current inspection. Resolving this involved acquiring gross abnormality data representing regional production/proportional abattoir volumes, the range of gross abnormalities appearing nationally, proportional occurrence at carcase sites, and seasonality to enable the comparison of procedures. The methodologies followed the Codex Alimentarius Commission's risk assessment guidelines and are fully documented in the associated publications. The evidence and discussion are provided for the associated challenges experienced, including preventing contamination, the use of food chain information to support amendment, inspection as a part of industry Quality Assurance programmes, and opportunities to improve inspector training. The criteria considered by the Competent Authority for the determination of the equivalence of alternative post-mortem inspection techniques included comparisons of public health risk, non-detection rates for gross abnormalities, and microbial contamination resulting from inspection activities, as appropriate. Most of the gross abnormalities detected arose from animal health and welfare conditions affecting wholesomeness and did not present as food safety hazards. The non-detection rates between the current and alternative inspection (observation) were negligible. A quantitative risk assessment for Cysticercus bovis was conducted. Carcases with multiple gross abnormalities predominantly reflected historic infections (prior septicaemia), where trimming achieved wholesomeness unless they were cachectic.
ABSTRACT
INTRODUCTION: Rates of subtotal cholecystectomy (STC) are increasing in response to challenging cases of laparoscopic cholecystectomy (LC) to avoid bile duct injury, yet STC is associated with significant morbidity. The present study identifies risk factors for STC and both derives and validates a risk model for STC. METHODS: All LCs performed for biliary pathology across three general surgical units (2015-2020) were included. Clinicopathological, intraoperative and post-operative details were reported. Backward stepwise multivariable regression was performed to derive the most parsimonious predictive model for STC. Bootstrapping was performed for internal validation, and patients were categorised into risk groups. RESULTS: Overall, 2768 patients underwent LC (median age, 53 years; median ASA, 2; median BMI, 29.7 kg/m²), including 99 cases (3.6%) of STC. Post-operatively following STC, there were bile leaks in 29.3%, collections in 19.2% and retained stones in 10.1% of patients. Post-operative intervention was performed in 29.3%, including ERCP (22.2%), laparoscopy (5.0%) and laparotomy (3.0%). The following variables were positive predictors of STC and were included in the final model: age > 60 years, male sex, diabetes mellitus, acute cholecystitis (AC), increased severity of AC (CRP > 90 mg/L), ≥ 3 biliary admissions, pre-operative ERCP with/without stent, pre-operative cholecystostomy and emergency LC (AUC = 0.84). Low, medium and high-risk groups had an STC rate of 0.8%, 3.9% and 24.5%, respectively. DISCUSSION: The present study determines the morbidity of STC and identifies high-risk features associated with STC. A risk model for STC is derived and internally validated to help surgeons identify high-risk patients and both improve pre-operative decision-making and patient counselling.
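The internal validation step mentioned (bootstrapping the derived model) is commonly done by estimating optimism: refit the model on bootstrap resamples and compare in-sample to original-sample discrimination. A minimal sketch under assumed synthetic predictors, not the study's variables:

```python
# Sketch: bootstrap optimism correction of a logistic risk model's AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n, k = 2768, 5
X = rng.normal(size=(n, k))
true_beta = np.array([0.8, 0.5, 0.4, 0.3, 0.2])
y = rng.binomial(1, 1 / (1 + np.exp(-(-3.3 + X @ true_beta))))   # ~3-4% event rate

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):                                  # bootstrap resamples
    idx = rng.integers(0, n, n)
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - np.mean(optimism)
print(f"apparent AUC {apparent_auc:.3f}, optimism-corrected AUC {corrected_auc:.3f}")
```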
Subjects
Anesthetics, Pharynx, Humans, Retrospective Studies, Combined Modality Therapy, Pharynx/pathology, Pharynx/surgery
ABSTRACT
OBJECTIVE: The objective was to identify demographic, clinical, and radiographic risk factors for neurosurgical intervention within 48 hours of admission in patients with mild traumatic brain injury and isolated subdural hematoma. METHODS: The authors conducted a multicenter retrospective cohort study of all trauma patients admitted to 6 level I/II trauma centers who met the following criteria: admitted between January 1, 2016, and December 31, 2020, age ≥ 18 years, ICD-10 diagnosis code for isolated subdural hematoma, available initial head imaging, initial Glasgow Coma Scale score of 13-15, and arrival at the hospital within 48 hours of injury. Patients were excluded for skull fracture, non-subdural hematoma, and absence of neurosurgical consultation. The study outcome was neurosurgical intervention within 48 hours of hospital admission. Multivariable logistic regression with backward selection examined 30 demographic, clinical, and radiographic risk factors for neurosurgery. RESULTS: In total, 1333 patients were included, of whom 117 (8.8%) received a neurosurgical intervention. When only demographic and clinical variables were considered, sex, mechanism of injury, and hours from injury to initial head imaging were significant covariates (area under the receiver operating characteristic curve [AUROC] [95% CI] 0.70 [0.65-0.75]). When only radiographic risk factors were considered, only maximum hemorrhage thickness (in mm) and midline shift (in mm) were independent risk factors for the outcome (AUROC 0.95 [0.92-0.97]). When all demographic, clinical, and radiographic variables were considered together, advanced directive, Injury Severity Score, midline shift, and maximum hemorrhage thickness were identified as significant risk factors for neurosurgical intervention within 48 hours of hospital admission (AUROC 0.95 [0.94-0.97]). CONCLUSIONS: In the setting of mild traumatic brain injury with isolated subdural hematoma, radiographic risk factors were shown to be stronger than demographic and clinical variables in understanding future risk of neurosurgical intervention. These final radiographic risk factors should be considered in the creation of future prediction models and used to increase the efficiency of existing management guidelines.
ABSTRACT
Objectives Expandable transforaminal lumbar interbody fusion (TLIF) devices have been developed to introduce more segmental lordosis through a narrow operative corridor, but there are concerns about the degree of achievable correction with a small graft footprint. In this report, we describe the technical nuances associated with placing bilateral expandable cages for correction of iatrogenic deformity. Materials and Methods A 60-year-old female with symptomatic global sagittal malalignment and a severe lumbar kyphotic deformity after five prior lumbar surgeries presented to our institution. We performed multilevel posterior column osteotomies, an L3-4 intradiscal osteotomy, and placed bilateral lordotic expandable TLIF cages at the level of maximum segmental kyphosis. Results We achieved a 21-degree correction of the patient's focal kyphotic deformity and restoration of the patient's global sagittal alignment. Conclusion This case demonstrates both the feasibility and utility of placing bilateral expandable TLIF cages at a single disc space in the setting of severe focal sagittal malalignment. This technique expands the implant footprint and, when coupled with an intradiscal osteotomy, allows for a significant restoration of segmental lordosis.
ABSTRACT
PURPOSE: The aim of this study was to investigate the cost-effectiveness of revision total knee arthroplasty compared to primary total knee arthroplasty in terms of cost per quality-adjusted life year (QALY). METHODS: Data were retrieved for all primary and revision total knee arthroplasty (TKA) procedures performed at a tertiary Swiss hospital between 2006 and 2019. A Markov model was created to evaluate revision risk, and we calculated lifetime QALY gain and lifetime procedure costs through individual EuroQol 5 dimension (EQ-5D) scores, hospital costs, national life expectancy tables and standard discounting processes. Cost-per-QALY gain was calculated for primary and revision procedures. RESULTS: EQ-5D data were available for 1343 primary and 103 revision procedures. Significant QALY gains were seen following surgery in all cases. Similar, but significantly more, QALYs were gained following primary TKA (PTKA) (5.67 ± 3.98) than following revision TKA (RTKA) (4.67 ± 4.20). Cost-per-QALY was 4686 for PTKA and 10,364 for RTKA. The highest average cost-per-QALY was seen in two-stage RTKA (12,292), followed by one-stage RTKA (8982). CONCLUSION: RTKA results in a similar QALY gain as PTKA. The costs of achieving health gain are two to three times higher in RTKA, but both procedures are highly cost-effective. LEVEL OF EVIDENCE: Economic level II.
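The cost-per-QALY figures above rest on a simple calculation: discount each year's EQ-5D utility gain over remaining life expectancy, sum, and divide the procedure cost by that total. A minimal sketch with invented inputs; the 3% discount rate, utility gain, and cost are assumptions, not the study's parameters.

```python
# Sketch: discounted lifetime QALY gain and cost per QALY.
def discounted_qalys(utility_gain, life_expectancy_years, discount_rate=0.03):
    """Sum of the annual utility gain over remaining life years, discounted."""
    return sum(utility_gain / (1 + discount_rate) ** t
               for t in range(int(life_expectancy_years)))

def cost_per_qaly(procedure_cost, utility_gain, life_expectancy_years,
                  discount_rate=0.03):
    qalys = discounted_qalys(utility_gain, life_expectancy_years, discount_rate)
    return procedure_cost / qalys

# Example: assumed 0.30 EQ-5D gain maintained over 18 remaining life years.
cpq = cost_per_qaly(procedure_cost=20000, utility_gain=0.30, life_expectancy_years=18)
print(f"cost per QALY ~ {cpq:.0f} (same currency units as the procedure cost)")
```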
ABSTRACT
BACKGROUND: Spinal cord hypoperfusion undermines clinical recovery in acute traumatic spinal cord injuries. New guidelines suggest cerebrospinal fluid (CSF) drainage is an important strategy for preventing spinal cord hypoperfusion in the acute post-injury phase. METHODS: This study included participants presenting to a single level 1 trauma center between 2018 and 2022 with cervical or thoracic traumatic spinal cord injury of severity grade A-C, as evaluated by the American Spinal Injury Association Impairment Scale (AIS). The primary objective of this study was to compare the efficacy of two CSF drainage protocols in preventing spinal cord hypoperfusion: 1) draining CSF only when spinal cord perfusion pressure (SCPP) drops below 65 mmHg (i.e., reactive) versus 2) empiric CSF drainage of 5-10 mL every hour. Intrathecal pressure, SCPP, mean arterial pressure (MAP), and vasopressor utilization were compared using univariate t-tests. RESULTS: While there was no difference in the incidence of sub-optimal SCPP (<65 mmHg; p = 0.1658), reactively drained participants were more likely to exhibit critical hypoperfusion (<50 mmHg; p = 0.0030) despite also having lower average intrathecal pressures (p < 0.001). There were no differences in average SCPP, MAP, or vasopressor utilization between the two groups (p > 0.05). CONCLUSIONS: Empiric (vs reactive) CSF drainage resulted in fewer incidences of critical spinal cord hypoperfusion for patients with acute traumatic spinal cord injuries.
Subjects
Drainage, Spinal Cord Injuries, Humans, Spinal Cord Injuries/therapy, Female, Male, Adult, Middle Aged, Drainage/methods, Retrospective Studies, Cerebrospinal Fluid Pressure/physiology, Aged, Young Adult
ABSTRACT
BACKGROUND: Although a common injury, there is a lack of published primary data to inform the clinical management of sports-related brachial plexus injuries. METHODS: A systematic search was completed in the Medline, CINAHL, PubMed, SPORTDiscus and Web of Science databases and Google Scholar from inception to August 2023 according to the PRISMA-ScR guidelines. Methodological quality assessment of included articles was performed with the Joanna Briggs Institute tool. Studies providing primary data on the rehabilitative management of diagnosed or suspected brachial plexus injuries sustained when playing contact sports were included. RESULTS: Sixty-five studies were identified and screened, of which 8 case reports were included, incorporating 10 participants with a mean age of 19.8 (±4.09) years. There was wide heterogeneity in injury severity, injury reporting, physical examination and imaging approaches documented. Nine of 10 participants returned to competitive sports, though follow-up periods also varied widely. Whilst return-to-play criteria varied between studies, the most consistent indicator was pain-free shoulder range of motion and strength. CONCLUSIONS: There is a distinct lack of data available to inform evidence-based rehabilitation management of sports-related brachial plexus injury. Only 8 individual case reports contain published data, reporting on 10 athletes. Further reporting is critical to inform clinical management.
Subjects
Athletic Injuries, Brachial Plexus, Adolescent, Adult, Female, Humans, Male, Young Adult, Athletic Injuries/rehabilitation, Brachial Plexus/injuries, Brachial Plexus Neuropathies/rehabilitation, Brachial Plexus Neuropathies/etiology, Range of Motion, Articular, Return to Sport
ABSTRACT
BACKGROUND: Limited research has explored the effect of Circle of Willis (CoW) anatomy on outcomes among blunt cerebrovascular injuries (BCVI). It remains unclear if current BCVI screening and scanning practices are sufficient to identify concomitant CoW anomalies and how these anomalies affect outcomes. METHODS: This retrospective cohort study included adult traumatic BCVIs at 17 level I-IV trauma centers (08/01/2017-07/31/2021). The objectives were to compare screening criteria, scanning practices, and outcomes among those with and without CoW anomalies. RESULTS: Of 561 BCVIs, 65% were male and the median age was 48 years; 17% (n = 93) had a CoW anomaly. Compared to those with normal CoW anatomy, those with CoW anomalies had significantly higher rates of any stroke (10% vs. 4%, p = 0.04), ICH (38% vs. 21%, p = 0.001), and clinically significant bleed (CSB) before antithrombotic initiation (14% vs. 3%, p < 0.0001), respectively. Compared to patients with a normal CoW, those with a CoW anomaly also had ischemic strokes more often after antithrombotic interruption (13% vs. 2%, p = 0.02). Patients with CoW anomalies were screened significantly more often because of some other head/neck indication not outlined in BCVI screening criteria than patients with normal CoW anatomy (27% vs. 18%, p = 0.04), respectively. Scans identifying CoW anomalies included both the head and neck significantly more often (53% vs. 29%, p = 0.0001) than scans identifying normal CoW anatomy, respectively. CONCLUSIONS: While previous studies suggested universal scanning for BCVI detection, this study found that patients with BCVI and CoW anomalies had some other head/neck injury not captured by BCVI screening criteria significantly more often than patients with a normal CoW, which may suggest that BCVI screening across all patients with a head/neck injury may improve the simultaneous detection of CoW anomalies and BCVIs. When screening for BCVI, scans including both the head and neck are superior to a single region in detecting concomitant CoW anomalies. Worsened outcomes (strokes, ICH, and clinically significant bleeding before antithrombotic initiation) were observed for patients with CoW anomalies when compared to those with a normal CoW. Those with a CoW anomaly experienced strokes at a higher rate than patients with normal CoW anatomy specifically when antithrombotic therapy was interrupted. This emphasizes the need for stringent antithrombotic therapy regimens among patients with CoW anomalies and may suggest that patients with CoW anomalies would benefit from more varied treatment, highlighting the need to include the CoW anatomy when scanning for BCVI. LEVEL OF EVIDENCE: Level III, Prognostic/Epidemiological.
Subjects
Cerebrovascular Trauma, Circle of Willis, Wounds, Nonpenetrating, Adult, Female, Humans, Male, Middle Aged, Cerebrovascular Trauma/diagnostic imaging, Circle of Willis/abnormalities, Circle of Willis/anatomy & histology, Circle of Willis/diagnostic imaging, Retrospective Studies, Trauma Centers, Wounds, Nonpenetrating/complications
ABSTRACT
Spinal serotonin enables neuro-motor recovery (i.e., plasticity) in patients with debilitating paralysis. While time-of-day fluctuations exist in serotonin-dependent spinal plasticity, it is unknown, in humans, whether this is due to dynamic changes in spinal serotonin levels or to downstream signaling processes. The primary objective of this study was to determine if time-of-day variations in spinal serotonin levels exist in humans. To assess this, intrathecal drains were placed in seven adults, with cerebrospinal fluid (CSF) collected at diurnal (05:00 to 07:00) and nocturnal (17:00 to 19:00) intervals. High-performance liquid chromatography with mass spectrometry was used to quantify CSF serotonin levels, with comparisons made using univariate analysis. From the 7 adult patients, 21 distinct CSF samples were collected: 9 during the diurnal interval and 12 during the nocturnal interval. Diurnal CSF samples demonstrated an average serotonin level of 216.6 ± 67.7 nM. Nocturnal CSF samples demonstrated an average serotonin level of 206.7 ± 75.8 nM. There was no significant difference between diurnal and nocturnal CSF serotonin levels (p = .762). Within this small cohort of spine-healthy adults, there were no differences in diurnal versus nocturnal spinal serotonin levels. These observations exclude spinal serotonin levels as the etiology for time-of-day fluctuations in serotonin-dependent spinal plasticity expression.
Subjects
Circadian Rhythm, Serotonin, Humans, Serotonin/cerebrospinal fluid, Male, Adult, Female, Circadian Rhythm/physiology, Middle Aged, Spinal Cord/metabolism, Chromatography, High Pressure Liquid, Aged
ABSTRACT
BACKGROUND: There has been an epidemic of firearm injuries in the United States since the mid-2000s. Thus, we sought to examine whether hospitalizations from firearm injuries have increased over time, and to examine temporal changes in patient demographics, firearm injury intent, and injury severity. METHODS: This was a multicenter, retrospective, observational cohort study of patients hospitalized with a traumatic injury at six US level I trauma centers between 1/1/2016 and 6/30/2022. ICD-10-CM cause codes were used to identify and describe firearm injuries. Temporal trends were compared for demographics (age, sex, race, insured status), intent (assault, unintentional, self-harm, legal intervention, and undetermined), and severity (death, ICU admission, severe injury (Injury Severity Score ≥ 16), receipt of blood transfusion, mechanical ventilation, and hospital and ICU LOS (days)). Temporal trends were examined over 13 six-month intervals (H1, January-June; H2, July-December) using joinpoint regression and reported as semi-annual percent change (SPC); significance was p < 0.05. RESULTS: Firearm injuries accounted for 2.6% (1908 of 72,474) of trauma hospitalizations. The rate of firearm injuries initially declined from 2016-H1 to 2018-H2 (SPC = -4.0%, p = 0.002), followed by increased rates from 2018-H2 to 2020-H1 (SPC = 9.0%, p = 0.005), before stabilizing from 2020-H1 to 2022-H1 (0.5%, p = 0.73). Non-Hispanic (NH) Black patients had the greatest hospitalization rate from firearm injuries (14.0%) and were the only group to demonstrate a temporal increase (SPC = 6.3%, p < 0.001). The proportion of uninsured patients increased (SPC = 2.3%, p = 0.02), but there were no temporal changes by age or sex. ICU admission rates declined (SPC = -2.2%, p < 0.001), but ICU LOS increased (SPC = 2.8%, p = 0.04). There were no significant changes over time in rates of death (SPC = 0.3%), severe injury (SPC = 1.6%), blood transfusion (SPC = 0.6%), and mechanical ventilation (SPC = 0.6%). When examined by intent, self-harm injuries declined over time (SPC = -4.1%, p < 0.001), assaults declined through 2019-H2 (SPC = -5.6%, p = 0.01) before increasing through 2022-H1 (SPC = 6.5%, p = 0.01), while undetermined injuries increased through 2019-H1 (SPC = 24.1%, p = 0.01) then stabilized (SPC = -4.5%, p = 0.39); there were no temporal changes in unintentional injuries or legal intervention. CONCLUSIONS: Hospitalizations from firearm injuries are increasing following a period of declines, driven by increases among NH Black patients. Trauma systems need to consider these changing trends to best address the needs of the injured population.
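The semi-annual percent change (SPC) values reported come from joinpoint regression, which within each segment amounts to a log-linear fit of the rate against the period index. A minimal sketch of that arithmetic with made-up rates (not the study data):

```python
# Sketch: segment-wise SPC = (exp(slope of log-rate regression) - 1) * 100.
import numpy as np

periods = np.arange(13)                       # 2016-H1 ... 2022-H1
rates = np.array([2.9, 2.8, 2.7, 2.6, 2.5, 2.4,        # declining segment
                  2.6, 2.8, 3.1, 3.1, 3.1, 3.1, 3.2])  # rising, then flat

def spc(segment_periods, segment_rates):
    slope, _ = np.polyfit(segment_periods, np.log(segment_rates), 1)
    return (np.exp(slope) - 1) * 100

print(f"2016-H1 to 2018-H2: SPC = {spc(periods[:6], rates[:6]):+.1f}% per half-year")
print(f"2018-H2 to 2020-H1: SPC = {spc(periods[5:9], rates[5:9]):+.1f}% per half-year")
```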
ABSTRACT
BACKGROUND: In patients with severe traumatic brain injury (TBI), clinicians must balance preventing venous thromboembolism (VTE) with the risk of intracranial hemorrhagic expansion (ICHE). We hypothesized that low molecular weight heparin (LMWH) would not increase the risk of ICHE or VTE as compared to unfractionated heparin (UH) in patients with severe TBI. METHODS: Patients ≥ 18 years of age with isolated severe TBI (AIS ≥ 3), admitted to 24 level I and II trauma centers between January 1, 2014 and December 31, 2020, and who received subcutaneous UH or LMWH injections for chemical venous thromboembolism prophylaxis (VTEP) were included. Primary outcomes were VTE and ICHE after VTEP initiation. Secondary outcomes were mortality and neurosurgical interventions. Entropy balancing (EBAL) weighted competing risk or logistic regression models were estimated for all outcomes with chemical VTEP agent as the predictor of interest. RESULTS: A total of 984 patients received chemical VTEP, 482 UH and 502 LMWH. Patients on LMWH more often had pre-existing conditions such as liver disease (UH vs LMWH 1.7% vs 4.4%, p = 0.01) and coagulopathy (UH vs LMWH 0.4% vs 4.2%, p < 0.001). There were no differences in VTE or ICHE after VTEP initiation. There were no differences in neurosurgical interventions performed. There were a total of 29 VTE events (3%) in the cohort who received VTEP. A Cox proportional hazards model with a random effect for facility demonstrated no statistically significant difference in time to VTE across the two agents (p = 0.44). The LMWH group had a 43% lower risk of overall ICHE compared to the UH group (HR = 0.57; 95% CI 0.32 to 1.03; p = 0.062), although this difference was not statistically significant. CONCLUSION: In this multi-center analysis, patients who received LMWH had a decreased risk of ICHE, with no differences in VTE, ICHE after VTEP initiation, or neurosurgical interventions compared to those who received UH. There were no safety concerns when using LMWH compared to UH. LEVEL OF EVIDENCE: Level III, Therapeutic Care Management.
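The analysis described weights survival models by entropy-balancing weights. As a rough sketch of the same pattern, the code below fits a weighted Cox model with lifelines, substituting simple inverse-probability-of-treatment weights for entropy balancing and simulating all data; it is not the study's model, and every variable and coefficient is an assumption.

```python
# Sketch: weighted Cox model comparing time to VTE between LMWH and UH (simulated).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 984
age = rng.normal(45, 18, n)
iss = rng.integers(9, 43, n).astype(float)
# Treatment assignment loosely related to age, so the weights have something to do.
lmwh = rng.binomial(1, 1 / (1 + np.exp(-0.02 * (age - 45))))

# Simulated time to VTE (days); patients without an event by day 30 are censored.
hazard = 0.001 * np.exp(0.01 * (iss - 25) - 0.1 * lmwh)
time = np.minimum(rng.exponential(1 / hazard), 30.0)
vte = (time < 30.0).astype(int)

df = pd.DataFrame({"time": time, "vte": vte, "lmwh": lmwh, "age": age, "iss": iss})

# Inverse-probability-of-treatment weights (stand-in for entropy balancing).
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "iss"]], df["lmwh"])
ps = ps_model.predict_proba(df[["age", "iss"]])[:, 1]
df["w"] = np.where(df["lmwh"] == 1, 1 / ps, 1 / (1 - ps))

cph = CoxPHFitter()
cph.fit(df[["time", "vte", "lmwh", "w"]], duration_col="time", event_col="vte",
        weights_col="w", robust=True)
print(cph.summary.loc["lmwh",
      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```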
Subjects
Anticoagulants, Brain Injuries, Traumatic, Heparin, Low-Molecular-Weight, Propensity Score, Venous Thromboembolism, Humans, Venous Thromboembolism/prevention & control, Brain Injuries, Traumatic/complications, Male, Female, Middle Aged, Anticoagulants/therapeutic use, Heparin, Low-Molecular-Weight/therapeutic use, Adult, Heparin/therapeutic use, Retrospective Studies, Aged, Intracranial Hemorrhages
ABSTRACT
Top carnivores can influence the structure of ecological communities, primarily through competition and predation; however, communities are also influenced by bottom-up forces such as anthropogenic habitat disturbance. Top carnivore declines will likely alter competitive dynamics within and amongst sympatric carnivore species. Increasing intraspecific competition is generally predicted to drive niche expansion and/or individual specialisation, while interspecific competition tends to constrain niches. Using stable isotope analysis of whiskers, we studied the effects of Tasmanian devil Sarcophilus harrisii declines upon the population- and individual-level isotopic niches of Tasmanian devils and sympatric spotted-tailed quolls Dasyurus maculatus subsp. maculatus. We investigated whether time since the onset of devil decline (a proxy for severity of decline) and landscape characteristics affected the isotopic niche breadth and overlap of devil and quoll populations. We quantified individual isotopic niche breadth for a subset of Tasmanian devils and spotted-tailed quolls and assessed whether between-site population niche variation was driven by individual-level specialisation. Tasmanian devils and spotted-tailed quolls demonstrated smaller population-level isotopic niche breadths with increasing human-modified habitat, while time since the onset of devil decline had no effect on population-level niche breadth or interspecific niche overlap. Individual isotopic niche breadths of Tasmanian devils and spotted-tailed quolls were narrower in human-modified landscapes, likely driving population isotopic niche contraction; however, the degree of individuals' specialisation relative to one another remained constant. Our results suggest that across varied landscapes, mammalian carnivore niches can be more sensitive to the bottom-up forces of anthropogenic habitat disturbance than to the top-down effects of top carnivore decline.