1.
J Surg Res; 295: 274-280, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38048751

ABSTRACT

INTRODUCTION: Trauma registries and their quality improvement programs only collect data from the acute hospital admission, and no additional information is captured once the patient is discharged. This lack of long-term data limits these programs' ability to effect change. The goal of this study was to create a longitudinal patient record by linking trauma registry data with third-party payer claims data to allow the tracking of these patients after discharge. METHODS: Trauma quality collaborative data (2018-2019) were utilized. Inclusion criteria were age ≥18 years, ISS ≥5, and length of stay ≥1 day. In-hospital deaths were excluded. A deterministic match was performed with insurance claims records based on hospital name, date of birth, sex, and dates of service (±1 day). The effect of payer type, ZIP code, International Classification of Diseases, Tenth Revision, Clinical Modification diagnosis specificity, and exact dates of service on the match rate was analyzed. RESULTS: The overall match rate between these two patient record sources was 27.5%. There was a significantly higher match rate (42.8% versus 6.1%, P < 0.001) for patients with a payer that was contained in the insurance collaborative. In a subanalysis, exact dates of service did not substantially affect this match rate; however, specific International Classification of Diseases, Tenth Revision, Clinical Modification codes (i.e., all 7 characters) reduced this rate by almost half. CONCLUSIONS: We demonstrated the successful linkage of patient records in a trauma registry with their insurance claims. This will allow us to collect longitudinal information so that we can follow these patients' long-term outcomes and subsequently improve their care.
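
The deterministic match described above can be illustrated with a short sketch. This is not the study's code: the column names (hospital, dob, sex, admit_date, service_date) are hypothetical, and the ±1 day tolerance is applied as the abstract states.

```python
import pandas as pd

# Hypothetical registry and claims tables; the real schemas are not given in the abstract.
registry = pd.DataFrame({
    "hospital": ["Hosp A", "Hosp B"],
    "dob": pd.to_datetime(["1970-01-01", "1985-06-15"]),
    "sex": ["M", "F"],
    "admit_date": pd.to_datetime(["2019-03-01", "2019-05-10"]),
})
claims = pd.DataFrame({
    "hospital": ["Hosp A", "Hosp B"],
    "dob": pd.to_datetime(["1970-01-01", "1985-06-15"]),
    "sex": ["M", "F"],
    "service_date": pd.to_datetime(["2019-03-02", "2019-05-20"]),
})

# Deterministic join on hospital name, date of birth, and sex ...
candidates = registry.merge(claims, on=["hospital", "dob", "sex"], how="left")

# ... then require the dates of service to agree within +/- 1 day.
tolerance = pd.Timedelta(days=1)
candidates["matched"] = (candidates["service_date"] - candidates["admit_date"]).abs() <= tolerance

print(f"Match rate: {candidates['matched'].mean():.1%}")
```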


Subjects
Insurance, Medical Record Linkage, Humans, Registries, Medical Records, Hospitalization
2.
J Surg Res; 300: 448-457, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38870652

ABSTRACT

INTRODUCTION: Ventilator-associated pneumonia (VAP) is associated with increased mortality, prolonged mechanical ventilation, and longer intensive care unit stays. The rate of VAP (VAPs per 1000 ventilator days) within a hospital is an important quality metric. Despite adoption of preventative strategies, rates of VAP in injured patients remain high in trauma centers. Here, we report variation in risk-adjusted VAP rates within a statewide quality collaborative. METHODS: Using Michigan Trauma Quality Improvement Program data from 35 American College of Surgeons-verified Level I and Level II trauma centers between November 1, 2020 and January 31, 2023, a patient-level Poisson model was created to evaluate the risk-adjusted rate of VAP across institutions given the number of ventilator days, adjusting for injury severity, physiologic parameters, and comorbid conditions. Patient-level model results were summed to create center-level estimates. We performed observed-to-expected adjustments to calculate each center's risk-adjusted VAP days and flagged outliers as hospitals whose confidence intervals lay above or below the overall mean. RESULTS: We identified 538 VAP occurrences among a total of 33,038 ventilator days within the collaborative, with an overall mean of 16.3 VAPs per 1000 ventilator days. We found wide variation in risk-adjusted rates of VAP, ranging from 0 (0-8.9) to 33.0 (14.4-65.1) VAPs per 1000 d. Several hospitals were identified as high or low outliers. CONCLUSIONS: There exists significant variation in the rate of VAP among trauma centers. Investigation of practices and factors influencing the differences between low and high outlier institutions may yield information to reduce variation and improve outcomes.
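
A minimal sketch of the modeling approach described above: a patient-level Poisson model with ventilator days as the exposure, summed to hospital-level observed-to-expected ratios. All data and variable names below are simulated placeholders, not MTQIP fields, and the confidence-interval flagging step is only noted in a comment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated patient-level data standing in for the collaborative registry.
n = 2000
df = pd.DataFrame({
    "hospital": rng.integers(0, 35, n),
    "vent_days": rng.integers(1, 30, n),
    "iss": rng.integers(5, 50, n),
    "copd": rng.integers(0, 2, n),
})
df["vap"] = rng.poisson(0.016 * df["vent_days"])

# Patient-level Poisson model of VAP counts with ventilator days as the exposure,
# adjusting for injury severity and a comorbidity.
model = smf.glm("vap ~ iss + copd", data=df,
                family=sm.families.Poisson(),
                exposure=df["vent_days"]).fit()

# Expected counts per patient (fitted mu already includes the exposure term),
# then hospital-level observed-to-expected ratios.
df["expected_vap"] = model.fittedvalues
by_hosp = df.groupby("hospital").agg(observed=("vap", "sum"),
                                     expected=("expected_vap", "sum"),
                                     vent_days=("vent_days", "sum"))
by_hosp["oe_ratio"] = by_hosp["observed"] / by_hosp["expected"]

# Risk-adjusted rate per 1000 ventilator days = O/E ratio x collaborative-wide rate.
# (Outlier flagging would add bootstrapped confidence intervals around each O/E ratio.)
overall_rate = 1000 * df["vap"].sum() / df["vent_days"].sum()
by_hosp["adj_rate_per_1000"] = by_hosp["oe_ratio"] * overall_rate
print(by_hosp.head())
```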

3.
Dis Colon Rectum; 65(5): 758-766, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35394941

ABSTRACT

BACKGROUND: Prospective payment models have incentivized reductions in length of stay after surgery. The benefits of abbreviated postoperative hospitalization could be undermined by increased readmissions or postacute care use, particularly for older adults or those with comorbid conditions. OBJECTIVE: The purpose of this study was to determine whether hospitals with accelerated postsurgical discharge accrue total episode savings or incur greater postdischarge payments among patients stratified by age and comorbidity. DESIGN: This was a retrospective cross-sectional study. SETTING: National data from the 100% Medicare Provider Analysis and Review files for July 2012 to June 2015 were used. PATIENTS: We included Medicare beneficiaries undergoing elective colectomy and stratified the cohort by age (65-69, 70-79, ≥80 y) and Elixhauser comorbidity score (low: ≤0; medium: 1-5; high: >5). Patients were categorized by their hospital's mode length of stay, reflecting "usual" care. MAIN OUTCOME MEASURES: In a multilevel model, we compared mean total episode payments and their components among age and comorbidity categories, stratified by hospital mode length of stay. RESULTS: Among 88,860 patients, mean total episode payments were lower in shortest versus longest length-of-stay hospitals across all age and comorbidity strata and were similar between age groups (65-69 y: $28,951 vs $30,566, p = 0.014; 70-79 y: $31,157 vs $32,044, p = 0.073; ≥80 y: $33,779 vs $35,771, p = 0.005) but greater among higher comorbidity (low: $23,107 vs $24,894, p = 0.001; medium: $30,809 vs $32,282, p = 0.038; high: $44,097 vs $46,641, p < 0.001). Postdischarge payments were similar among length-of-stay hospitals by age (65-69 y: ∆$529; 70-79 y: ∆$291; ≥80 y: ∆$872, p = 0.25) but greater among high comorbidity (low: ∆$477; medium: ∆$480; high: ∆$1059; p = 0.02). LIMITATIONS: Administrative data do not capture patient-level factors that influence postacute care use (preference, caregiver availability). CONCLUSIONS: Hospitals achieving the shortest length of stay after surgery accrue lower total episode payments without a compensatory increase in postacute care spending, even among the oldest patients and those with the greatest comorbidity. See Video Abstract at http://links.lww.com/DCR/B624.


Subjects
Aftercare, Patient Discharge, Aged, Colectomy, Comorbidity, Cross-Sectional Studies, Humans, Medicare, Postoperative Complications/epidemiology, Retrospective Studies, United States/epidemiology
4.
Ann Surg; 274(2): 199-205, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-33351489

ABSTRACT

OBJECTIVE: To evaluate real-world effects of enhanced recovery protocol (ERP) dissemination on clinical and economic outcomes after colectomy. SUMMARY BACKGROUND DATA: Hospitals aiming to accelerate discharge and reduce spending after surgery are increasingly adopting perioperative ERPs. Despite their efficacy in specialty institutions, most studies have lacked adequate control groups and diverse hospital settings and have considered only in-hospital costs. There remain concerns that accelerated discharge might incur unintended consequences. METHODS: Retrospective, population-based cohort including patients in 72 hospitals in the Michigan Surgical Quality Collaborative clinical registry (N = 13,611) and/or Michigan Value Collaborative claims registry (N = 14,800) who underwent elective colectomy, 2012 to 2018. Marginal effects of ERP on clinical outcomes and risk-adjusted, price-standardized 90-day episode payments were evaluated using mixed-effects models to account for secular trends and hospital performance unrelated to ERP. RESULTS: In 24 ERP hospitals, patients post-ERP had significantly shorter length of stay than those pre-ERP (5.1 vs 6.5 days, P < 0.001), lower incidence of complications (14.6% vs 16.9%, P < 0.001) and readmissions (10.4% vs 11.3%, P = 0.02), and lower episode payments ($28,550 vs $31,192, P < 0.001) and postacute care spending ($3,384 vs $3,909, P < 0.001). In mixed-effects adjusted analyses, these effects were significantly attenuated: ERP was associated with a marginal length-of-stay reduction of 0.4 days (95% confidence interval 0.2-0.6 days, P = 0.001) and no significant difference in complications, readmissions, or overall spending. CONCLUSIONS: ERPs are associated with a small reduction in postoperative length of hospitalization after colectomy, without unwanted increases in readmission or postacute care spending. The real-world effects across a variety of hospitals may be smaller than observed in early-adopting specialty centers.
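
A hedged sketch of the kind of mixed-effects adjustment the abstract describes, with a hospital random intercept and a secular-trend term. The variables and simulated data are illustrative assumptions, not the registries' schema, and the real analysis also risk-adjusted and price-standardized payments.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated colectomy episodes; column names are illustrative only.
n = 5000
df = pd.DataFrame({
    "hospital": rng.integers(0, 72, n),
    "year": rng.integers(2012, 2019, n),
    "post_erp": rng.integers(0, 2, n),
})
hosp_effect = rng.normal(0, 0.8, 72)
df["los"] = (6.5 - 0.4 * df["post_erp"] - 0.1 * (df["year"] - 2012)
             + hosp_effect[df["hospital"]] + rng.normal(0, 2.0, n))

# Mixed-effects model: the hospital random intercept absorbs baseline performance,
# the year term approximates the secular trend, and post_erp is the marginal ERP effect.
m = smf.mixedlm("los ~ post_erp + year", data=df, groups=df["hospital"]).fit()
print(m.summary())
```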


Subjects
Colectomy/economics, Enhanced Recovery After Surgery, Adult, Aged, Female, Humans, Length of Stay/statistics & numerical data, Male, Michigan, Middle Aged, Retrospective Studies
5.
J Surg Res; 244: 521-527, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31336245

ABSTRACT

BACKGROUND: Data accuracy is essential to obtaining correct results and making appropriate conclusions in outcomes research. Few studies have examined the quality of the data used in orthopedic surgery research. A nonspecific data entry has the potential to affect the results of a study or the ability to appropriately risk adjust for treatments and outcomes. This study evaluated the proportion of Not Further Specified (NFS) orthopedic injury codes found in two large trauma registries. MATERIALS: Data from the National Trauma Data Bank (NTDB) from 2011 to 2015 and from the Michigan Trauma Quality Improvement Program (MTQIP) from 2011 to 2017 were used. We selected multiple orthopedic injuries classified via the Abbreviated Injury Scale, version 2005 (AIS2005) and calculated the percentage of NFS entries for each specific injury. RESULTS: There was a substantial proportion of fractures classified as NFS in each registry: 18.5% (range 2.4%-67.9%) in MTQIP and 27% (range 6.0%-68.5%) in the NTDB. There were significantly more NFS entries when the fractures were complex versus simple in both MTQIP (34.5% versus 9.6%, P < 0.001) and the NTDB (41.8% versus 15.7%, P < 0.001). The level of trauma center affected the proportion of NFS codes differently between the registries. CONCLUSIONS: The proportion of nonspecific entries in these two large trauma registries is concerning. These data can affect the results and conclusions of research studies as well as impact our ability to truly risk adjust for treatments and outcomes. Further studies should explore the reasons for these findings.


Assuntos
Fraturas Ósseas/epidemiologia , Traumatismo Múltiplo/epidemiologia , Sistema de Registros , Fraturas Ósseas/classificação , Fraturas Ósseas/cirurgia , Humanos , Traumatismo Múltiplo/classificação , Traumatismo Múltiplo/cirurgia , Procedimentos Ortopédicos , Especialidades Cirúrgicas
6.
Ann Surg; 262(4): 577-85, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26366537

ABSTRACT

OBJECTIVE: Trauma patients are at high risk for life-threatening venous thromboembolic (VTE) events. We examined the relationship between prophylactic inferior vena cava (IVC) filter use, mortality, and VTE. SUMMARY BACKGROUND DATA: The prevalence of prophylactic placement of IVC filters has increased among trauma patients. However, there are few data on the overall efficacy of prophylactic IVC filters with regard to outcomes. METHODS: Trauma quality collaborative data from 2010 to 2014 were analyzed. Patients were excluded if they had no signs of life, an Injury Severity Score <9, hospitalization <3 days, or received an IVC filter after occurrence of a VTE event. Risk-adjusted rates of IVC filter placement were calculated and hospitals placed into quartiles of IVC filter use. Mortality rates by quartile were compared. We also determined the association of deep venous thrombosis (DVT) with the presence of an IVC filter, accounting for type and timing of initiation of pharmacological VTE prophylaxis. RESULTS: A prophylactic IVC filter was placed in 803 (2%) of 39,456 patients. Hospitals exhibited significant variability (0.6% to 9.6%) in adjusted rates of IVC filter utilization. Rates of IVC filter placement within quartiles were 0.7%, 1.3%, 2.1%, and 4.6%, respectively. IVC filter use quartiles showed no variation in mortality. Adjusting for pharmacological VTE prophylaxis and patient factors, prophylactic IVC filter placement was associated with an increased incidence of DVT (OR = 1.83; 95% CI, 1.15-2.93; P = 0.01). CONCLUSIONS: High rates of prophylactic IVC filter placement have no effect on reducing trauma patient mortality and are associated with an increase in DVT events.


Subjects
Vena Cava Filters, Venous Thromboembolism/prevention & control, Wounds and Injuries/mortality, Adolescent, Adult, Aged, Aged 80 and over, Female, Humans, Logistic Models, Male, Michigan, Middle Aged, Multivariate Analysis, Physicians' Practice Patterns/statistics & numerical data, Pulmonary Embolism/etiology, Pulmonary Embolism/prevention & control, Risk Factors, Treatment Outcome, Vena Cava Filters/statistics & numerical data, Venous Thromboembolism/etiology, Venous Thrombosis/etiology, Venous Thrombosis/prevention & control, Wounds and Injuries/complications, Wounds and Injuries/therapy, Young Adult
8.
Am Surg; : 31348241256070, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770751

ABSTRACT

BACKGROUND: Optimization of antibiotic stewardship requires determining appropriate antibiotic treatment and duration of use. Our current method of identifying infectious complications alone does not attempt to measure the resources actually utilized to treat infections in patients. We sought to develop a method accounting for treatment of infections and length of antibiotic administration to allow benchmarking of trauma hospitals with regard to days of antibiotic use. METHODS: Using trauma quality collaborative data from 35 American College of Surgeons (ACS)-verified level I and level II trauma centers between November 1, 2020, and January 31, 2023, a two-part model was created to account for (1) the odds of any antibiotic use, using logistic regression; and (2) the duration of usage, using a negative binomial distribution. We adjusted for injury severity, presence/type of infection (e.g., ventilator-acquired pneumonia), infectious complications, and comorbid conditions. We performed observed-to-expected adjustments to calculate each center's risk-adjusted antibiotic days, bootstrapped observed/expected (O/E) ratios to create confidence intervals, and flagged potential high or low outliers as hospitals whose confidence intervals lay above or below the overall mean. RESULTS: The mean duration of antibiotic treatment was 1.98 days, with a total of 88,403 treatment days. Wide variation existed in risk-adjusted antibiotic treatment days (0.76 days to 2.69 days). Several hospitals were identified as low (9 centers) or high (6 centers) outliers. CONCLUSION: There exists wide variation in the duration of risk-adjusted antibiotic use among trauma centers. Further study is needed to address the underlying cause of variation and for improved antibiotic stewardship.
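
The two-part model described above (logistic regression for any antibiotic use, negative binomial for duration among users) can be sketched as follows. Everything here is simulated; the actual adjustment set and collaborative data structure are not shown in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated trauma admissions; variable names are placeholders for the collaborative's fields.
n = 4000
df = pd.DataFrame({
    "iss": rng.integers(1, 50, n),
    "vap": rng.integers(0, 2, n),
})
p_any = 1 / (1 + np.exp(-(-2 + 0.05 * df["iss"] + 1.5 * df["vap"])))
df["any_abx"] = rng.binomial(1, p_any)
df["abx_days"] = np.where(df["any_abx"] == 1, rng.negative_binomial(2, 0.4, n) + 1, 0)

# Part 1: odds of receiving any antibiotics.
part1 = smf.logit("any_abx ~ iss + vap", data=df).fit(disp=False)

# Part 2: duration of use among patients who received antibiotics (negative binomial GLM).
users = df[df["any_abx"] == 1]
part2 = smf.glm("abx_days ~ iss + vap", data=users,
                family=sm.families.NegativeBinomial()).fit()

# Expected antibiotic days per patient = P(any use) x E[days | use];
# summing observed and expected days by hospital would give the O/E adjustment.
df["expected_days"] = part1.predict(df) * part2.predict(df)
print(df["expected_days"].describe())
```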

9.
Am J Manag Care; 29(8): e250-e256, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37616153

ABSTRACT

OBJECTIVES: To evaluate hospital performance and behaviors in the first 2 years of a statewide commercial insurance episode-based incentive pay-for-performance (P4P) program. STUDY DESIGN: Retrospective cohort study of price- and risk-standardized episode-of-care spending from the Michigan Value Collaborative claims data registry. METHODS: Changes in hospital-level episode spending between baseline and performance years were estimated during the program years (PYs) 2018 and 2019. The distribution and hospital characteristics associated with P4P points earned were described for both PYs. A difference-in-differences (DID) analysis compared changes in patient-level episode spending associated with program implementation. RESULTS: Hospital-level episode spending for all conditions declined significantly from the baseline year to the performance year in PY 2018 (-$671; 95% CI, -$1113 to -$230) but was not significantly different for PY 2019 ($177; 95% CI, -$412 to $767). Hospitals earned a mean (SD) total of 6.3 (3.1) of 10 points in PY 2018 and 4.5 (2.9) of 10 points in PY 2019, with few significant differences in P4P points across hospital characteristics. The highest-scoring hospitals were more likely to have changes in case mix index and decreases in spending across the entire episode of care compared with the lowest-scoring hospitals. DID analysis revealed no significant changes in patient-level episode spending associated with program implementation. CONCLUSIONS: There was little evidence for overall reductions in spending associated with the program, but the performance of the hospitals that achieved greatest savings and incentives provides insights into the ongoing design of hospital P4P metrics.
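
A minimal sketch of a difference-in-differences comparison of the kind the abstract mentions, using simulated episode payments; treated, post, and payment are hypothetical variable names rather than the registry's fields.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated episode payments; "treated" marks hospitals exposed to the P4P program.
n = 8000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
df["payment"] = (20000 - 300 * df["treated"] - 500 * df["post"]
                 - 200 * df["treated"] * df["post"] + rng.normal(0, 3000, n))

# Difference-in-differences: the treated:post coefficient estimates the change in
# patient-level episode spending associated with program implementation.
did = smf.ols("payment ~ treated * post", data=df).fit(cov_type="HC1")
print(did.params["treated:post"], did.conf_int().loc["treated:post"].values)
```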


Assuntos
Seguradoras , Motivação , Humanos , Reembolso de Incentivo , Estudos Retrospectivos , Hospitais
10.
Surgery; 174(5): 1255-1262, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37709648

ABSTRACT

BACKGROUND: Excessive opioid prescribing has resulted in opioid diversion and misuse. In July 2018, Michigan's Public Act 251 established a state-wide policy limiting opioid prescriptions for acute pain to a 7-day supply. Traumatic injury increases the risk for new persistent opioid use, yet the impact of prescribing policy in trauma patients remains unknown. To determine the relationship between policy enactment and prescribing in trauma patients, we compared oral morphine equivalents prescribed at discharge before and after implementation of Public Act 251. METHODS: In this cross-sectional study, adult patients who received any oral opioids at discharge from a Level 1 trauma center between January 1, 2016, and June 30, 2021, were identified. The exposure was admission on or after July 1, 2018. Inpatient oral morphine equivalents per day in the 48 hours before discharge and discharge prescription oral morphine equivalents per day were calculated. Student's t test and an interrupted time series analysis were performed to compare mean oral morphine equivalents per day pre- and post-policy. Multivariable risk adjustment accounted for patient/injury factors and inpatient oral morphine equivalent use. RESULTS: A total of 3,748 patients were included in the study (pre-policy n = 1,685; post-policy n = 2,063). Implementation of the prescribing policy was associated with a significant decrease in mean discharge oral morphine equivalents per day (34.8 ± 49.5 vs 16.7 ± 32.3, P < .001). After risk adjustment, post-policy discharge prescriptions differed by -19.2 oral morphine equivalents per day (95% CI -21.7 to -16.8, P < .001). The proportion of patients obtaining a refill prescription 30 days post-discharge did not increase after implementation (0.38 ± 0.48 vs 0.37 ± 0.48, P = .7). CONCLUSION: Discharge prescription amounts for opioids in trauma patients decreased by approximately one-half after the implementation of opioid prescribing policies, and there was no compensatory increase in subsequent refill prescriptions. Future work is needed to evaluate the effect of these policies on the adequacy of pain management and functional recovery after injury.
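
An interrupted time series analysis of the kind cited above can be sketched as a segmented regression with a baseline trend, a level shift at the policy date, and a post-policy trend change. The monthly series below is simulated around the reported pre/post means and is only illustrative; the study itself used patient-level data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulated monthly mean discharge OME/day, January 2016 through June 2021.
months = pd.period_range("2016-01", "2021-06", freq="M")
t = np.arange(len(months))
post = (months >= pd.Period("2018-07", freq="M")).astype(int)   # policy in effect
t_since = post * (t - post.argmax())                            # months since the policy
ome = 35 - 0.1 * t - 15 * post - 0.2 * t_since + rng.normal(0, 2, len(t))
df = pd.DataFrame({"t": t, "post": post, "t_since": t_since, "ome": ome})

# Segmented (interrupted time series) regression: baseline trend, level shift at the
# policy date, and change in trend after the policy, with autocorrelation-robust errors.
its = smf.ols("ome ~ t + post + t_since", data=df).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
print(its.summary().tables[1])
```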


Assuntos
Assistência ao Convalescente , Analgésicos Opioides , Adulto , Humanos , Analgésicos Opioides/uso terapêutico , Estudos Transversais , Dor Pós-Operatória/tratamento farmacológico , Dor Pós-Operatória/etiologia , Alta do Paciente , Padrões de Prática Médica , Morfina
11.
J Trauma Acute Care Surg; 93(2): 176-186, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-35444147

ABSTRACT

BACKGROUND: Failure to rescue (FTR) is defined as mortality following a complication. Failure to rescue has come under scrutiny as a quality metric to compare trauma centers. In contrast to elective surgery, trauma has an early period of high expected mortality because of injury sequelae rather than a complication. Here, we report FTR in early and late mortality using an externally validated trauma patient database, hypothesizing that centers with higher risk-adjusted mortality rates have higher risk-adjusted FTR rates. METHODS: The study included 114,220 patients at 34 Levels I and II trauma centers in a statewide quality collaborative (2016-2020) with Injury Severity Score of ≥5. Emergency department deaths were excluded. Multivariate regression models were used to produce center-level adjusted rates for mortality and major complications. Centers were ranked on adjusted mortality rate and divided into quintiles. Early deaths (within 48 hours of presentation) and late deaths (after 48 hours) were analyzed. RESULTS: Overall, 6.7% of patients had a major complication and 3.1% died. There was no difference in the mean risk-adjusted complication rate among the centers. Failure to rescue was significantly different across the quintiles (13.8% at the very low-mortality centers vs. 23.4% at the very-high-mortality centers, p < 0.001). For early deaths, there was no difference in FTR rates among the highest and lowest mortality quintiles. For late deaths, there was a twofold increase in the FTR rate between the lowest and highest mortality centers (9.7% vs. 19.3%, p < 0.001), despite no difference in the rates of major complications (5.9% vs. 6.0%, p = 0.42). CONCLUSION: Low-performing trauma centers have higher mortality rates and lower rates of rescue following major complications. These differences are most evident in patients who survive the first 48 hours after injury. A better understanding of the complications and their role in mortality after 48 hours is an area of interest for quality improvement efforts. LEVEL OF EVIDENCE: Prognostic and Epidemiologic; Level III.
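
Failure to rescue as defined above (death among patients with a major complication) reduces to a simple conditional rate; the toy sketch below also shows the early/late split on deaths within versus after 48 hours. Column names are illustrative, not the collaborative's fields.

```python
import pandas as pd

# Toy patient-level data; in practice these rates would be risk-adjusted.
df = pd.DataFrame({
    "center": ["A", "A", "B", "B", "B", "C"],
    "major_complication": [1, 1, 1, 1, 0, 1],
    "died": [1, 0, 1, 1, 0, 0],
    "early_death": [0, 0, 1, 0, 0, 0],  # death within 48 h of presentation
})

# Unadjusted FTR rate per center: deaths / patients with a major complication.
comp = df[df["major_complication"] == 1]
ftr = comp.groupby("center")["died"].mean().rename("ftr_rate")

# Late-death FTR restricts the numerator to deaths after 48 hours.
late = comp.assign(late_death=comp["died"] * (1 - comp["early_death"]))
ftr_late = late.groupby("center")["late_death"].mean().rename("ftr_late")
print(pd.concat([ftr, ftr_late], axis=1))
```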


Assuntos
Falha da Terapia de Resgate , Centros de Traumatologia , Mortalidade Hospitalar , Humanos , Escala de Gravidade do Ferimento , Complicações Pós-Operatórias , Melhoria de Qualidade , Estudos Retrospectivos
12.
J Laparoendosc Adv Surg Tech A; 32(7): 768-774, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35041519

ABSTRACT

Background: It is unknown whether surgeons are more likely to adopt or abandon robotic techniques, given that bariatric procedures are already performed by surgeons with advanced laparoscopic skills. Methods: We used a statewide bariatric-specific data registry to evaluate surgeon-specific volumes of robotic bariatric cases between 2010 and 2019. Operative volume, procedure type, and patient characteristics were compared between the highest utilizers of robotic bariatric procedures (adopters) and surgeons who stopped performing robotic cases despite demonstrating prior use (abandoners). Results: A total of 44 surgeons performed 3149 robotic bariatric procedures in Michigan between 2010 and 2019. Robotic utilization peaked in 2019, representing 7.24% of all bariatric cases. We identified 7 surgeons (16%) who performed 95% of the total number of robotic cases (adopters) and 12 surgeons (27%) who stopped performing robotic cases during the study period (abandoners). Adopters performed a higher proportion of gastric bypass both robotically (22.9% versus 3.1%, P < .001) and laparoscopically (27.5% versus 15.1%, P < .001) when compared with abandoners. Surgeon experience (no. of years in practice), type of practice (teaching versus nonteaching hospital), and patient populations were similar between groups. Conclusions: Robotic bariatric utilization increased during the study period. The majority of robotic cases were performed by a small number of surgeons who were more likely to perform more complex cases, such as gastric bypass, in their own practice. Robotic adoption may be influenced by surgeon-specific preferences based upon procedure-specific volumes and may play a greater role in performing more complex surgical procedures in the future.


Assuntos
Cirurgia Bariátrica , Derivação Gástrica , Laparoscopia , Obesidade Mórbida , Procedimentos Cirúrgicos Robóticos , Robótica , Cirurgiões , Cirurgia Bariátrica/métodos , Humanos , Obesidade Mórbida/cirurgia , Estudos Retrospectivos
13.
Obes Surg; 32(12): 3932-3941, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36253661

ABSTRACT

CONTEXT: Weight loss after bariatric surgery can be accurately predicted using an outcomes calculator; however, outliers exist that do not meet the 1-year post-surgery weight projections. OBJECTIVE: Our goal was to determine how soon after surgery these outliers can be identified. DESIGN: We conducted a retrospective cohort study. SETTING, PATIENTS, AND INTERVENTION: Using a bariatric surgery outcomes calculator formulated by the Michigan Bariatric Surgery Collaborative (MBSC), predicted weight loss at 1 year post-surgery was calculated for all patients who underwent primary bariatric surgery at a single-center academic institution between 2006 and 2015 and had a documented 1-year follow-up weight (n = 1050). MAIN OUTCOME MEASURES: Weight loss curves were compared between high, low, and non-outliers as defined by their observed-to-expected (O:E) weight loss ratio based on total body weight loss (TBWL) %. RESULTS: Mean predicted weight loss for the study group was 39.1 ± 9.9 kg, while mean actual weight loss was 39.7 ± 17.1 kg, resulting in a mean O:E ratio of 1.01 (± 0.35). Based on analysis of the O:E ratios at 1 year post-surgery, the study group was sub-classified. Low outliers (n = 188, O:E 0.51) had significantly lower weight loss at 2 months (13.1% vs 15.6% and 16.5% TBWL, p < 0.001) and at 6 months (19% vs 26% and 30% TBWL, p < 0.001) when compared to non-outliers (n = 638, O:E 1.00) and high outliers (n = 224, O:E 1.46), respectively. CONCLUSIONS: Weight loss curves based on individually calculated outcomes can help identify low outliers for additional interventions as early as 2 months after bariatric surgery.
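
The observed-to-expected classification described above can be sketched in a few lines. The outlier cutoffs used here are arbitrary placeholders, since the abstract does not state the thresholds that defined low and high outliers.

```python
import pandas as pd

# Toy data: observed vs calculator-predicted %TBWL at 1 year for three patients.
df = pd.DataFrame({"observed_tbwl": [18.0, 30.0, 42.0], "expected_tbwl": [30.0, 29.0, 28.0]})

# Observed-to-expected ratio, then binning into low/non/high outliers (cutoffs assumed).
df["oe"] = df["observed_tbwl"] / df["expected_tbwl"]
df["group"] = pd.cut(df["oe"], bins=[0, 0.75, 1.25, float("inf")],
                     labels=["low outlier", "non-outlier", "high outlier"])
print(df)
```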


Assuntos
Cirurgia Bariátrica , Obesidade Mórbida , Humanos , Obesidade Mórbida/cirurgia , Estudos Retrospectivos , Redução de Peso , Michigan , Resultado do Tratamento
14.
Ann Surg Open; 3(4): e218, 2022 Dec.
Article in English | MEDLINE | ID: mdl-37600283

ABSTRACT

The objective of this study was to evaluate how much variation in postacute care (PAC) spending after traumatic hip fracture exists between hospitals, and to what degree this variation is explained by patient factors, hospital factors, PAC setting, and PAC intensity. Background: Traumatic hip fracture is a common and costly event. This is particularly relevant given our aging population and the fact that a substantial proportion of these patients are discharged to PAC settings. Methods: This was a cross-sectional retrospective study. Using Medicare claims data between 2014 and 2019, we identified PAC payments within 90 days of hospital discharge and grouped hospitals into quintiles of PAC spending. The degree of variation present in PAC spending across hospital quintiles was evaluated after accounting for patient case-mix factors and hospital characteristics using multivariable regression models, adjusting for PAC setting choice by fixing the proportion of PAC discharge disposition across hospital quintiles, and adjusting for PAC intensity by fixing the amount of PAC spending across hospital quintiles. The study pool included 125,745 Medicare beneficiaries who underwent operative management for traumatic hip fracture in 2078 hospitals. The primary outcome was PAC spending within 90 days of discharge following hospitalization for traumatic hip fracture. Results: Mean PAC spending varied widely between the top and bottom spending hospital quintiles ($31,831 vs $17,681). After price standardization, the difference between the top and bottom spending hospital quintiles was $8,964. Variation between hospitals decreased substantially after adjustment for PAC setting ($25,392 vs $21,274) or for PAC intensity ($25,082 vs $21,292), with little variation explained by patient or hospital factors. Conclusions: There was significant variation in PAC payments after traumatic hip fracture between the highest- and lowest-spending hospital quintiles. Most of this variation was explained by choice of PAC discharge setting and intensity of PAC spending, not patient or hospital characteristics. These findings suggest potential systems-level inefficiencies that can be targeted for intervention to improve the appropriateness and value of healthcare spending.

15.
Surgery; 172(3): 1015-1020, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35811165

ABSTRACT

BACKGROUND: Meaningful reporting of quality metrics relies on detecting a statistical difference when a true difference in performance exists. Larger cohorts and longer time frames can produce higher rates of statistical differences. However, older data are less relevant when attempting to enact change in the clinical setting. The selection of time frames must reflect a balance between being too small (type II errors) and too long (stale data). We explored the use of power analysis to optimize time frame selection for trauma quality reporting. METHODS: Using data from 22 Level III trauma centers, we tested for differences in 4 outcomes within 4 cohorts of patients. With bootstrapping, we calculated the power for rejecting the null hypothesis that no difference exists amongst the centers for different time frames. From the entire sample for each site, we simulated randomly generated datasets. Each simulated dataset was tested for whether a difference was observed from the average. Power was calculated as the percentage of simulated datasets where a difference was observed. This process was repeated for each outcome. RESULTS: The power calculations for the 4 cohorts revealed that the optimal time frame for Level III trauma centers to assess whether a single site's outcomes are different from the overall average was 2 years based on an 80% cutoff. CONCLUSION: Power analysis with simulated datasets allows testing of different time frames to assess outcome differences. This type of analysis allows selection of an optimal time frame for benchmarking of Level III trauma center data.
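
A hedged sketch of a simulation-based power calculation in the spirit of the one described: resample a site-sized dataset at the site's observed rate many times and count how often a test flags a difference from the pooled average. The rates, volumes, and choice of a two-proportion z-test are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(5)

def simulated_power(site_rate, site_n, pool_rate, pool_n, n_sim=2000, alpha=0.05):
    """Share of simulated site datasets in which a two-proportion z-test flags the
    site as different from the rest of the collaborative."""
    pool_events = int(round(pool_rate * pool_n))
    rejections = 0
    for _ in range(n_sim):
        sim_events = rng.binomial(site_n, site_rate)  # resample the site at its observed rate
        _, p = proportions_ztest([sim_events, pool_events], [site_n, pool_n])
        rejections += int(p < alpha)
    return rejections / n_sim

# Example with made-up rates/volumes: a 2-year window doubles both the site's n and the pooled n.
print("1-year window:", simulated_power(site_rate=0.09, site_n=150, pool_rate=0.05, pool_n=3000))
print("2-year window:", simulated_power(site_rate=0.09, site_n=300, pool_rate=0.05, pool_n=6000))
```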


Assuntos
Benchmarking , Centros de Traumatologia , Humanos
16.
J Trauma Acute Care Surg; 90(1): 54-63, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-32890341

ABSTRACT

BACKGROUND: Patients are at a high risk for developing venous thromboembolism (VTE) following traumatic injury. We examined the relationship between timing of initiation of pharmacologic prophylaxis and VTE complications. METHODS: Trauma quality collaborative data from 34 American College of Surgeons Committee on Trauma-verified Level I and II trauma centers were analyzed. Patients were excluded if they were on anticoagulant therapy at the time of injury, had a hospitalization <48 hours, or received no or nonstandard pharmacologic VTE prophylaxis (heparin drip). Patient comparison groups were based on timing of initiation of VTE prophylaxis relative to hospital presentation (0 to <24 hours, 24 to <48 hours, ≥48 hours). Risk-adjusted rates of VTE events were calculated accounting for patient factors, including type of pharmacologic agent, in addition to standard trauma patient confounders. A sensitivity analysis was performed excluding patients who received blood in the first 4 hours and/or patients with a significant traumatic brain injury. RESULTS: Within the 79,386 patients analyzed, there were 1,495 (1.9%) who experienced a VTE complication and 1,437 (1.8%) who died. After adjusting for type of prophylaxis and patient factors, the risk of a VTE event was significantly increased in the 24- to <48-hour (odds ratio, 1.26; 95% confidence interval, 1.09-1.47; p = 0.002) and ≥48-hour (odds ratio, 2.35; 95% confidence interval, 2.04-2.70; p < 0.001) cohorts relative to patients initiated at 0 to <24 hours. These VTE event findings remained significant after exclusion of perceived higher-risk patients in a sensitivity analysis. CONCLUSION: Early initiation of pharmacologic VTE prophylaxis in stable trauma patients is associated with lower rates of VTE. LEVEL OF EVIDENCE: Diagnostic, level III.


Assuntos
Anticoagulantes/uso terapêutico , Tromboembolia Venosa/prevenção & controle , Ferimentos e Lesões/terapia , Adolescente , Adulto , Idoso , Anticoagulantes/administração & dosagem , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Fatores de Tempo , Tromboembolia Venosa/etiologia , Tromboembolia Venosa/mortalidade , Ferimentos e Lesões/complicações , Ferimentos e Lesões/mortalidade , Adulto Jovem
17.
Pharmacotherapy; 40(7): 604-613, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32515829

ABSTRACT

BACKGROUND: Warfarin has been the oral anticoagulant of choice for the treatment of thromboembolic disease. However, upwards of 50% of all new anticoagulant prescriptions are now for direct oral anticoagulants (DOACs). Despite this, outcome data evaluating preinjury anticoagulants remain scarce following traumatic brain injury (TBI). Our study objective was to determine the effects of preinjury anticoagulation on outcomes in older adults with TBI. METHODS: Patient data were obtained from 29 Level 1 and 2 trauma centers from 2012 to June 30, 2018. Overall, 8312 patients aged 65 years or older who suffered a ground-level fall and had an Abbreviated Injury Scale (AIS) head score of ≥3 were identified. Patients were excluded if they presented with no signs of life or a traumatic mechanism other than a ground-level fall. Statistical comparisons were made using multivariable analyses with anticoagulant/antiplatelet use as the independent variable. RESULTS: Of the total patients with TBI, 3293 were on antiplatelet agents (AP), 669 on warfarin, 414 on warfarin + AP, 188 on DOACs, 116 on DOAC + AP, and 3632 on no anticoagulant. There were 185 (27.7%) patients on warfarin and 43 (22.9%) on a DOAC with a combined outcome of mortality or hospice, as compared to 575 (15.8%) in the no-anticoagulant group (p < 0.001). After adjusting for patient factors, there was an increased risk of mortality or hospice in the warfarin (OR 1.60; 95% CI 1.27-2.01) and DOAC groups (OR 1.67; 95% CI 1.07-2.59) as compared to no anticoagulant. Warfarin + AP was associated with an increased risk of mortality or hospice (OR 1.61; 95% CI 1.18-2.21) that was not seen with DOAC + AP (OR 0.93; 95% CI 0.46-1.87) as compared to no anticoagulant. CONCLUSIONS: In older adults with TBI, preinjury treatment with warfarin or DOACs resulted in an increased risk of mortality or hospice, whereas preinjury AP therapy did not increase risk. Future studies are needed with larger sample sizes to directly compare TBI outcomes associated with preinjury warfarin versus DOAC use.


Assuntos
Acidentes por Quedas , Anticoagulantes/administração & dosagem , Lesões Encefálicas Traumáticas/mortalidade , Varfarina/administração & dosagem , Administração Oral , Idoso , Idoso de 80 Anos ou mais , Feminino , Humanos , Escala de Gravidade do Ferimento , Masculino , Michigan
18.
J Trauma Acute Care Surg; 89(1): 199-207, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31914009

ABSTRACT

BACKGROUND: Accurate and reliable data are pivotal to credible risk-adjusted modeling and hospital benchmarking. Evidence assessing the reliability and accuracy of data elements considered as variables in risk-adjustment modeling and measurement of outcomes is lacking. This deficiency holds the potential to compromise benchmarking integrity. We detail the findings of a longitudinal program to evaluate the impact of external data validation on data validity and reliability for variables utilized in benchmarking of trauma centers. METHODS: A collaborative quality initiative-based study was conducted of 29 trauma centers from March 2010 through December 2018. Case selection criteria were applied to identify high-yield cases that were likely to challenge data abstractors. There were 127,238 total variables validated (i.e., reabstracted, compared, and reported to trauma centers). Study endpoints included data accuracy (agreement between registry data and contemporaneous documentation) and reliability (consistency of accuracy within and between hospitals). Data accuracy was assessed by mean error rate and type (under capture, inaccurate capture, or over capture). Cohen's kappa estimates were calculated to evaluate reliability. RESULTS: There were 185,120 patients who met the collaborative inclusion criteria, and 1,243 submissions were reabstracted. The initial validation visit demonstrated the highest mean error rate at 6.2% ± 4.7%, and subsequent validation visits demonstrated a statistically significant decrease in error rate compared with the first visit (p < 0.05). The mean hospital error rate within the collaborative steadily improved over time (2010, 8.0%; 2018, 3.2%) compared with the first year (p < 0.05). Reliability of substantial or higher (kappa ≥0.61) was demonstrated in 90% of the 20 comorbid conditions considered in the benchmark risk-adjustment modeling; 39% of these variables exhibited a statistically significant (p < 0.05) interval decrease in error rate from the initial visit. CONCLUSION: Implementation of an external data validation program is correlated with increased data accuracy and reliability. Improved data reliability both within and between trauma centers improved risk-adjustment model validity and quality improvement program feedback.
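
The two study endpoints, error rate (disagreement between registry and reabstracted values) and Cohen's kappa for reliability, can be computed as in the toy sketch below; the data are invented and stand in for a single reabstracted variable.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Registry entries vs. reabstracted values for one comorbidity flag (toy data).
registry = pd.Series([1, 0, 0, 1, 1, 0, 1, 0, 0, 0], name="registry")
reabstracted = pd.Series([1, 0, 1, 1, 1, 0, 1, 0, 0, 0], name="reabstraction")

error_rate = (registry != reabstracted).mean()        # share of disagreements
kappa = cohen_kappa_score(registry, reabstracted)     # chance-corrected reliability
print(f"error rate = {error_rate:.1%}, kappa = {kappa:.2f}")  # kappa >= 0.61 ~ substantial
```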


Assuntos
Benchmarking , Controle de Formulários e Registros/normas , Melhoria de Qualidade , Centros de Traumatologia/normas , Humanos , Michigan , Sistema de Registros , Reprodutibilidade dos Testes , Estudos Retrospectivos , Estados Unidos
19.
Trauma Surg Acute Care Open; 5(1): e000630, 2020.
Article in English | MEDLINE | ID: mdl-33376809

ABSTRACT

BACKGROUND: Increased time to operative intervention is associated with a greater risk of mortality and complications in adults with a hip fracture. This study sought to determine factors associated with timeliness of operation in elderly patients presenting with an isolated hip fracture and the influence of surgical delay on outcomes. METHODS: Trauma quality collaborative data (July 2016 to June 2019) were analyzed. Inclusion criteria were patients ≥65 years with an injury mechanism of fall, Abbreviated Injury Scale (AIS) 2005 diagnosis of hip fracture, and AIS extremity ≤3. Exclusion criteria included AIS in other body regions >1 and non-operative management. We examined the association of demographic, hospital, injury presentation, and comorbidity factors with a surgical delay >48 hours and patient outcomes using multivariable regression analysis. RESULTS: 10,182 of 212,620 patients fit our study criteria. Mean age was 82.7±8.6 years and 68.7% were female. Delay in operation >48 hours occurred in 965 (9.5%) patients. Factors that significantly increased mortality or discharge to hospice were increased age, male gender, emergency department hypotension, functionally dependent health status (FDHS), advanced directive, liver disease, angina, and congestive heart failure (CHF). Delay >48 hours was associated with increased mortality or discharge to hospice (OR 1.52; 95% CI 1.13 to 2.06; p<0.01). Trauma center verification level, admission service, and hip fracture volume were not associated with mortality or discharge to hospice. Factors associated with operative delay >48 hours were male gender, FDHS, CHF, chronic renal failure, and advanced directive. Admission to the orthopedic surgery service was associated with a lower incidence of delay >48 hours (OR 0.43; 95% CI 0.29 to 0.64; p<0.001). DISCUSSION: Hospital verification level, admission service, and patient volume did not impact the outcome of mortality/discharge to hospice. Delay to operation >48 hours was associated with increased mortality. The only measured modifiable characteristic that reduced delay to operative intervention was admission to the orthopedic surgery service. LEVEL OF EVIDENCE: III.

20.
J Trauma Acute Care Surg; 88(6): 839-846, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32459449

ABSTRACT

OBJECTIVE: The American Association for the Surgery of Trauma (AAST) developed an anatomic grading system to assess disease severity through increasing grades of inflammation. Severity grading can then be utilized in risk-adjustment and stratification of patient outcomes for clinical benchmarking. We sought to validate the AAST appendicitis grading system by examining the ability of AAST grade to predict clinical outcomes used for clinical benchmarking. METHODS: Surgical quality program data were prospectively collected on all adult patients undergoing appendectomy for acute appendicitis at our institution between December 2013 and May 2018. The AAST acute appendicitis grade from 1 to 5 was assigned for all patients undergoing open or laparoscopic appendectomy. Primary outcomes were occurrence of major complications, any complications, and index hospitalization length of stay. Multivariable models were constructed for each outcome without and with inclusion of the AAST grade as an ordinal variable. We also developed models using International Classification of Diseases, 9th or 10th Rev.-Clinical Modification codes to determine presence of perforation for comparison. RESULTS: A total of 734 patients underwent appendectomy for acute appendicitis. The AAST score distribution included 561 (76%) in grade 1, 49 (6.7%) in grade 2, 79 (10.8%) in grade 3, 33 (4.5%) in grade 4, and 12 (1.6%) in grade 5. The mean age was 35.3 ± 14.7 years, 47% were female, 20% were nonwhite, and 69% had private insurance. Major complications, any complications, and hospital length of stay were all positively associated with AAST grade (p < 0.05). Risk-adjustment model fit improved after including AAST grade in the major complications, any complications, and length of stay multivariable regression models. The AAST grade was a better predictor than perforation status derived from diagnosis codes for all primary outcomes studied. CONCLUSION: Increasing AAST grade is associated with higher complication rates and longer length of stay in patients with acute appendicitis. The AAST grade can be prospectively collected and improves risk-adjusted modeling of appendicitis outcomes. LEVEL OF EVIDENCE: Prospective/Epidemiologic, Level III.
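
A small sketch of how adding an ordinal severity grade can be shown to improve risk-adjustment model fit, mirroring the comparison reported above. The simulated variables (aast, age, perforated_icd) and the AIC comparison are illustrative assumptions, not the study's actual models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Simulated appendectomy cases; "aast" is the 1-5 grade treated as an ordinal (numeric) term.
n = 800
df = pd.DataFrame({
    "aast": rng.integers(1, 6, n),
    "age": rng.integers(18, 80, n),
    "perforated_icd": rng.integers(0, 2, n),
})
p = 1 / (1 + np.exp(-(-4 + 0.6 * df["aast"] + 0.02 * df["age"])))
df["major_complication"] = rng.binomial(1, p)

# Compare risk-adjustment models with and without the AAST grade; a lower AIC after
# adding the grade indicates better model fit, as the abstract reports.
base = smf.logit("major_complication ~ age + perforated_icd", data=df).fit(disp=False)
with_grade = smf.logit("major_complication ~ age + perforated_icd + aast", data=df).fit(disp=False)
print(f"AIC without grade: {base.aic:.1f}, with grade: {with_grade.aic:.1f}")
```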


Subjects
Appendectomy/adverse effects, Appendicitis/diagnosis, Benchmarking/methods, Postoperative Complications/epidemiology, Severity of Illness Index, Adult, Appendicitis/surgery, Female, Humans, Length of Stay, Male, Middle Aged, Postoperative Complications/etiology, Prospective Studies, Risk Adjustment/methods, Medical Societies, Traumatology, Treatment Outcome, United States/epidemiology, Young Adult