ABSTRACT
OBJECTIVES: To estimate the use of albumin among adults undergoing thoracic surgery in the United States, compare baseline characteristics and clinical and cost outcomes of recipients versus nonrecipients, and determine albumin's contribution to total hospital costs. DESIGN: Retrospective cohort study. SETTING: Nationwide sample of US hospitals. PARTICIPANTS: Adults undergoing open and minimally invasive thoracic surgery between 2011 and 2017. INTERVENTIONS: Albumin on the day of surgery (identified using itemized hospital billing logs). MEASUREMENTS AND MAIN RESULTS: Albumin was used in 170 of 342 US hospitals, in 13% of 14,672 patients undergoing open and 7% of 22,532 patients undergoing minimally invasive thoracic surgery (median volume 500 mL). Baseline comorbidities and organ-supportive treatments were several-fold more prevalent among recipients (particularly vasopressors, mechanical ventilation, and red cell transfusions). In standardized mortality ratio propensity score-weighted analyses, albumin use was not associated with in-hospital mortality (adjusted relative risk 1.17 [0.72, 1.92] for open and 1.51 [0.97, 2.34] for minimally invasive procedures) but was associated with morbidity and higher costs, more so with minimally invasive procedures than with open surgery. Total costs among recipients were higher by $4,744 ($3,591, $5,897) and $5,088 ($4,075, $6,100) for open and minimally invasive procedures, respectively. Albumin itself accounted for 2.6% of this difference (median $124 [$83-$189] per patient). CONCLUSIONS: Albumin use varies widely across hospitals, and 9% of patients overall received it (median volume 500 mL). Use was not associated with in-hospital mortality but was associated with greater morbidity and higher costs, of which the cost of albumin itself was a trivial portion. Clinical trials should examine the effects of albumin on complications and costs after thoracic surgery.
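For readers unfamiliar with the standardized mortality ratio (SMR) propensity score weighting used in this analysis, the sketch below shows the weight construction on simulated data: albumin recipients keep weight 1 and nonrecipients are reweighted by the propensity score odds, so the comparison targets the recipient population. The variable names and toy data are illustrative assumptions, not the study's code or data.

```python
# Minimal sketch of SMR (treated-population) propensity score weighting;
# the simulated data and variable names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
X = pd.DataFrame({"age": rng.normal(65, 10, n), "vasopressor": rng.binomial(1, 0.1, n)})
albumin = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.02 * (X.age - 65) + 1.5 * X.vasopressor))))
died = rng.binomial(1, 0.02 + 0.03 * X.vasopressor)

ps = LogisticRegression(max_iter=1000).fit(X, albumin).predict_proba(X)[:, 1]

# SMR weights: recipients keep weight 1, nonrecipients get the odds ps / (1 - ps)
w = np.where(albumin == 1, 1.0, ps / (1 - ps))

risk_treated = np.average(died[albumin == 1], weights=w[albumin == 1])
risk_control = np.average(died[albumin == 0], weights=w[albumin == 0])
print("SMR-weighted relative risk:", risk_treated / risk_control)
```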
Subjects
Albumins, Hospital Costs, Thoracic Surgical Procedures, Humans, Female, Male, Retrospective Studies, United States/epidemiology, Hospital Costs/trends, Middle Aged, Thoracic Surgical Procedures/economics, Albumins/economics, Aged, Hospital Mortality, Perioperative Care/economics, Perioperative Care/methods, Adult, Cohort Studies, Treatment Outcome
ABSTRACT
BACKGROUND: The preferences of autism stakeholders regarding the top priorities for future autism research are largely unknown. OBJECTIVE: This study had two objectives: first, to examine what autism stakeholders think new research investments should be and the attributes of investment that they consider important; and second, to explore the feasibility, acceptability and outcomes of two prioritization exercises among autism stakeholders regarding their priorities for future research in autism. DESIGN: This was a prospective stakeholder-engaged iterative study consisting of best-worst scaling (BWS) and direct prioritization exercises. SETTING AND PARTICIPANTS: A national snowball sample of 219 stakeholders was included: adults with autism, caregivers, service providers and researchers. MAIN OUTCOME MEASURES: The main outcome measures were the attributes that participants value in future research investments and their priority investments for future research. RESULTS: Two hundred and nineteen participants completed the exercises, of whom 11% were adults with autism, 58% were parents/family members, 37% were service providers and 21% were researchers. The BWS exercises were easier for stakeholders to understand than direct prioritization, were skipped less often and yielded more consistent results. The proportion of children with autism affected by the research was the most important attribute for all types of stakeholders. The top three priorities among future research investments were (1) evidence on which child, family and intervention characteristics lead to the best/worst outcomes; (2) evidence on how changes in one area of a child's life are related to changes in other areas; and (3) evidence on dietary interventions. Priorities were similar across stakeholder types. CONCLUSIONS: The values and priorities examined here provide a road map for investigators and funders to pursue autism research that matters to stakeholders. PATIENT OR PUBLIC CONTRIBUTION: Stakeholders completed a BWS and direct prioritization exercise to inform us about their priorities for future autism research.
Subjects
Autistic Disorder, Biomedical Research, Health Priorities, Adult, Autistic Disorder/therapy, Caregivers, Child, Feasibility Studies, Humans, Parents, Prospective Studies
ABSTRACT
To extend previous simulations on the performance of propensity score (PS) weighting and trimming methods to settings without and with unmeasured confounding, Poisson outcomes, and various strengths of treatment prediction (PS c statistic), we simulated studies with a binary intended treatment T as a function of 4 measured covariates. We mimicked treatment withheld and last-resort treatment by adding 2 "unmeasured" dichotomous factors that directed treatment to change for some patients in both tails of the PS distribution. The number of outcomes Y was simulated as a Poisson function of T and confounders. We estimated the PS as a function of measured covariates and trimmed the tails of the PS distribution using 3 strategies ("Crump," "Stürmer," and "Walker"). After trimming and reestimation, we used alternative PS weights to estimate the treatment effect (rate ratio): inverse probability of treatment weighting, standardized mortality ratio (SMR)-treated, SMR-untreated, the average treatment effect in the overlap population (ATO), matching, and entropy. With no unmeasured confounding, the ATO (123%) and "Crump" trimming (112%) improved relative efficiency compared with untrimmed inverse probability of treatment weighting. With unmeasured confounding, untrimmed estimates were biased irrespective of weighting method, and only Stürmer and Walker trimming consistently reduced bias. In settings where unmeasured confounding (e.g., frailty) may lead physicians to withhold treatment, Stürmer and Walker trimming should be considered before primary analysis.
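The sketch below illustrates the three propensity score trimming rules and the ATO overlap weights named in this abstract, using the cutoffs most commonly cited for each rule (Crump [0.1, 0.9]; Stürmer asymmetric 5th/95th percentiles; Walker preference score 0.3-0.7). These defaults are assumptions for illustration and may differ from the simulation's exact settings.

```python
# Sketch of the three PS trimming rules, with commonly cited default cutoffs
# (assumed here), plus ATO overlap weights.
import numpy as np

def crump_trim(ps, lo=0.1, hi=0.9):
    """Crump: keep subjects whose PS lies inside a symmetric interval."""
    return (ps >= lo) & (ps <= hi)

def sturmer_trim(ps, treated, lower_pct=5, upper_pct=95):
    """Sturmer: asymmetric trimming. Drop everyone below the lower percentile of the
    PS among the treated and above the upper percentile of the PS among the untreated."""
    lo = np.percentile(ps[treated == 1], lower_pct)
    hi = np.percentile(ps[treated == 0], upper_pct)
    return (ps >= lo) & (ps <= hi)

def walker_trim(ps, treated, lo=0.3, hi=0.7):
    """Walker: trim on the preference score, which rescales the PS by the overall
    treatment prevalence; keep preference scores inside [0.3, 0.7]."""
    logit = lambda p: np.log(p / (1 - p))
    pref = 1 / (1 + np.exp(-(logit(ps) - logit(treated.mean()))))
    return (pref >= lo) & (pref <= hi)

def ato_weights(ps, treated):
    """Overlap (ATO) weights: 1 - PS for the treated, PS for the untreated."""
    return np.where(treated == 1, 1 - ps, ps)

# In the workflow described above, the PS is re-estimated within the trimmed
# sample before the weights are computed (re-estimation not shown here).
```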
Subjects
Bias, Epidemiologic Studies, Statistical Models, Research Design, Computer Simulation, Humans, Logistic Models, Propensity Score
ABSTRACT
BACKGROUND: Inverse probability of treatment weighting (IPTW) may be biased by influential observations, which can occur from misclassification of strong exposure predictors. METHODS: We evaluated bias and precision of IPTW estimators in the presence of a misclassified confounder and assessed the effect of propensity score (PS) trimming. We generated 1000 plasmode cohorts of size N = 10 000, sampled with replacement from 6063 NHANES respondents (1999-2014) age 40 to 79 with labs and no statin use. We simulated statin exposure as a function of demographics and CVD risk factors; and outcomes as a function of 10-year CVD risk score and statin exposure (rate ratio [RR] = 0.5). For 5% of the people in selected populations (eg, all patients, exposed, those with outcomes), we randomly misclassified a confounder that strongly predicted exposure. We fit PS models and estimated RRs using IPTW and 1:1 PS matching, with and without asymmetric trimming. RESULTS: IPTW bias was substantial when misclassification was differential by outcome (RR range: 0.38-0.63) and otherwise minimal (RR range: 0.51-0.53). However, trimming reduced bias for IPTW, nearly eliminating it at 5% trimming (RR range: 0.49-0.52). In one scenario, when the confounder was misclassified for 5% of those with outcomes (0.3% of cohort), untrimmed IPTW was more biased and less precise (RR = 0.37 [SE(logRR) = 0.21]) than matching (RR = 0.50 [SE(logRR) = 0.13]). After 1% trimming, IPTW estimates were unbiased and more precise (RR = 0.49 [SE(logRR) = 0.12]) than matching (RR = 0.51 [SE(logRR) = 0.14]). CONCLUSIONS: Differential misclassification of a strong predictor of exposure resulted in biased and imprecise IPTW estimates. Asymmetric trimming reduced bias, with more precise estimates than matching.
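As a companion to the IPTW-versus-matching comparison above, this is a minimal greedy 1:1 nearest-neighbor PS matching routine on the logit of the PS within a caliper; the 0.2-standard-deviation caliper and the greedy match order are common conventions assumed here, not the paper's stated choices.

```python
# Illustrative greedy 1:1 nearest-neighbor PS matching without replacement,
# within a caliper of 0.2 SD of the logit PS (an assumed convention).
import numpy as np

def match_1to1(ps, treated, caliper_sd=0.2):
    logit_ps = np.log(ps / (1 - ps))
    caliper = caliper_sd * logit_ps.std()
    treated_idx = np.where(treated == 1)[0]
    control_idx = list(np.where(treated == 0)[0])
    pairs = []
    for i in treated_idx:                      # greedy: match treated one at a time
        if not control_idx:
            break
        dists = np.abs(logit_ps[control_idx] - logit_ps[i])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:                # accept only matches inside the caliper
            pairs.append((i, control_idx.pop(j)))
    return pairs
```

Because greedy matching is order dependent, the order in which treated patients are matched is often randomized in practice.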
Subjects
Propensity Score, Adult, Aged, Bias, Computer Simulation, Humans, Middle Aged, Monte Carlo Method, Nutrition Surveys
ABSTRACT
Confounding can cause substantial bias in nonexperimental studies that aim to estimate causal effects. Propensity score methods allow researchers to reduce bias from measured confounding by summarizing the distributions of many measured confounders in a single score based on the probability of receiving treatment. This score can then be used to mitigate imbalances in the distributions of these measured confounders between those who received the treatment of interest and those in the comparator population, resulting in less biased treatment effect estimates. This methodology was formalized by Rosenbaum and Rubin in 1983 and, since then, has been used increasingly often across a wide variety of scientific disciplines. In this review article, we provide an overview of propensity scores in the context of real-world evidence generation with a focus on their use in the setting of single treatment decisions, that is, choosing between two therapeutic options. We describe five aspects of propensity score analysis: alignment with the potential outcomes framework, implications for study design, estimation procedures, implementation options, and reporting. We add context to these concepts by highlighting how the types of comparator used, the implementation method, and balance assessment techniques have changed over time. Finally, we discuss evolving applications of propensity scores.
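The balance assessment step described in this review is typically operationalized with standardized mean differences (SMDs); the sketch below computes them before and after weighting, with |SMD| < 0.1 as the usual rule of thumb. Column names are placeholders.

```python
# Covariate balance check via standardized mean differences, before and after weighting.
import numpy as np
import pandas as pd

def smd(x, treated, weights=None):
    """Weighted standardized mean difference for one covariate."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(treated) == 1
    w = np.ones(len(x)) if weights is None else np.asarray(weights, dtype=float)
    m1, m0 = np.average(x[t], weights=w[t]), np.average(x[~t], weights=w[~t])
    v1 = np.average((x[t] - m1) ** 2, weights=w[t])
    v0 = np.average((x[~t] - m0) ** 2, weights=w[~t])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

def balance_table(df, covariates, treat_col, weight_col):
    """|SMD| for each covariate; values below 0.1 are usually considered balanced."""
    return pd.DataFrame({
        "smd_unweighted": [abs(smd(df[c], df[treat_col])) for c in covariates],
        "smd_weighted": [abs(smd(df[c], df[treat_col], df[weight_col])) for c in covariates],
    }, index=covariates)
```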
Subjects
Cognition, Research Design, Bias, Causality, Humans, Propensity Score
ABSTRACT
BACKGROUND: Optimal administration of fluids is an important part of enhanced recovery after surgery (ERAS) protocols. We sought to examine the relationship between perioperative crystalloid volume and adverse outcomes in five common types of surgical procedures with ERAS fluid guidelines in place where large randomized controlled trials have not been conducted: breast reconstruction, bariatric, major urologic, gynecologic oncology, and head and neck oncology procedures. METHODS: This retrospective cohort study included patients who had undergone any one of the aforementioned procedures within any facility in a large multihospital alliance (Premier, Inc, Charlotte, NC) between 2008 and 2014. We used multivariable generalized additive models to examine relationships between the total crystalloid volume (TCV) on the day of surgery and a composite adverse outcome of prolonged (>75th percentile) hospital or intensive care unit stay or in-hospital mortality. Models were constructed separately within each surgical category and adjusted for demographic, clinical, and hospital characteristics. Informed consent requirements were waived because deidentified data were used. RESULTS: We identified 83,685 patients within 312 US hospitals undergoing breast reconstruction (n = 8738), bariatric surgery (n = 8067), major urologic surgery (n = 28,654), gynecologic oncology surgery (n = 34,559), and head/neck oncology surgery (n = 3667). There was significant patient-independent variation in TCV. Probabilities of adverse outcomes increased at a TCV below 3 L and above 6 L for all types of surgery except bariatric surgery, where larger volumes were associated with progressively better outcomes. CONCLUSIONS AND RELEVANCE: Relationships between TCV and adverse outcomes were generally J-shaped, with higher volumes (>6 L) associated with increased risk. Consistent with current ERAS guidelines, it is important to avoid excessive crystalloid volumes in most of these procedures, with bariatric surgery the exception.
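As a simplified stand-in for the multivariable generalized additive models described above, the sketch below fits a logistic model with a B-spline in TCV to simulated data, which is enough to recover a J-shaped risk curve. The covariates, column names, and simulated data are illustrative assumptions, not the study's specification.

```python
# Simplified spline-based stand-in for a GAM of adverse outcome on crystalloid volume.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "tcv_l": rng.gamma(4, 1.0, n),            # total crystalloid volume, liters
    "age": rng.normal(60, 12, n),
    "female": rng.binomial(1, 0.5, n),
})
# simulate a J-shaped risk: lowest near 3-4 L, rising at both extremes
lp = -3 + 0.15 * (df.tcv_l - 3.5) ** 2 + 0.01 * (df.age - 60)
df["adverse"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

model = smf.glm("adverse ~ bs(tcv_l, df=4) + age + female",
                data=df, family=sm.families.Binomial()).fit()

grid = pd.DataFrame({"tcv_l": np.linspace(0.5, 8, 50), "age": 60, "female": 0})
preds = model.predict(grid)                   # predicted probability across the TCV range
print(preds[:5])
```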
Subjects
Crystalloid Solutions/administration & dosage, Enhanced Recovery After Surgery, Operative Surgical Procedures/adverse effects, Female, Humans, Male, Middle Aged, Retrospective Studies
ABSTRACT
BACKGROUND: While US Food and Drug Administration (FDA) black box warnings are common, their impact on perioperative outcomes is unclear. Hydroxyethyl starch (HES) is associated with increased bleeding and kidney injury in patients with sepsis, leading to an FDA black box warning in 2013. Among patients undergoing musculoskeletal surgery in a subset of hospitals where colloid use changed from HES to albumin following the FDA warning, we examined the rate of major perioperative bleeding post- versus pre-FDA warning. METHODS: We conducted a retrospective, quasi-experimental, repeated cross-sectional, interrupted time series study of patients undergoing musculoskeletal surgery in hospitals within the Premier Healthcare Database, in the year before and year after the 2013 FDA black box warning. We examined patients in 23 "switcher" hospitals (where the percentage of colloid recipients receiving HES exceeded 50% before the FDA warning and decreased by at least 25% in absolute terms after the FDA warning) and patients in 279 "nonswitcher" hospitals. Among patients having surgery in "switcher" and "nonswitcher" hospitals, we determined monthly rates of major perioperative bleeding during the 12 months after the FDA warning, compared to the 12 months before the FDA warning. Among patients who underwent surgery in "switcher" hospitals, we conducted a propensity-weighted segmented regression analysis assessing difference-in-differences (DID), using patients in "nonswitcher" hospitals as a control group. RESULTS: Among 3078 patients treated at "switcher" hospitals (1892 patients treated pre-FDA warning versus 1186 patients treated post-FDA warning), demographic and clinical characteristics were well-balanced. Two hundred fifty-one (13.3%) received albumin pre-FDA warning, and 900 (75.9%) received albumin post-FDA warning. Among patients undergoing surgery in "switcher" hospitals during the pre-FDA warning period, 282 of 1892 (14.9%) experienced major bleeding during the hospitalization, compared to 149 of 1186 (12.6%) following the warning. In segmented regression, the adjusted ratio of slopes for major perioperative bleeding post- versus pre-FDA warning was 0.98 (95% confidence interval [CI], 0.93-1.04). In the DID estimate using "nonswitcher" hospitals as a control group, the ratio of ratios was 0.93 (95% CI, 0.46-1.86), indicating no significant difference. CONCLUSIONS: We identified a subset of hospitals where colloid use for musculoskeletal surgery changed following a 2013 FDA black box warning regarding HES use in sepsis. Among patients undergoing musculoskeletal surgery at these "switcher" hospitals, there was no significant decrease in the rate of major perioperative bleeding following the warning, possibly due to incomplete practice change. Evaluation of the impact of systemic changes in health care may contribute to the understanding of patient outcomes in perioperative medicine.
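The segmented regression with a difference-in-differences contrast described above can be sketched as a Poisson model on monthly bleeding counts, with terms for the level and slope change at the warning and their interactions with "switcher" status. The simulated monthly data and column names below are assumptions for illustration, not the study's propensity-weighted specification.

```python
# Difference-in-differences interrupted time series on simulated monthly counts.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for switcher in (0, 1):
    for month in range(24):                        # 12 months pre, 12 months post warning
        post = int(month >= 12)
        months_post = max(0, month - 11)           # months elapsed since the warning
        n = 200                                    # eligible surgeries that month
        rate = 0.14 * np.exp(-0.01 * months_post * switcher)   # small post-warning drift
        rows.append({"events": rng.binomial(n, rate), "n": n, "month": month,
                     "post": post, "months_post": months_post, "switcher": switcher})
df = pd.DataFrame(rows)

model = smf.glm(
    "events ~ month + post + months_post"
    " + switcher + switcher:month + switcher:post + switcher:months_post",
    data=df, family=sm.families.Poisson(), offset=np.log(df["n"]),
).fit(cov_type="HC1")

# exp(coef) of switcher:months_post approximates the DID ratio of post- vs
# pre-warning slopes between switcher and nonswitcher hospitals
print(np.exp(model.params["switcher:months_post"]))
```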
Subjects
Albumins/therapeutic use, Surgical Blood Loss/statistics & numerical data, Drug Labeling, Hydroxyethyl Starch Derivatives/therapeutic use, Musculoskeletal System/surgery, Plasma Substitutes/therapeutic use, Adolescent, Adult, Aged, Aged 80 and over, Cross-Sectional Studies, Factual Databases, Female, Hospitals, Humans, Interrupted Time Series Analysis, Male, Middle Aged, Retrospective Studies, Treatment Outcome, United States, United States Food and Drug Administration, Young Adult
ABSTRACT
OBJECTIVE: Multimodal analgesia has gained popularity in total hip arthroplasty (THA) and total knee arthroplasty (TKA), but large multicenter studies evaluating specific analgesic combinations are lacking. DESIGN: A retrospective study using the Premier Healthcare Database (2009-2014). SUBJECTS: Adults who underwent elective primary THA or TKA. METHODS: We categorized day-of-surgery analgesic exposure using eight mutually exclusive categories: acetaminophen (Ac), nonsteroidal anti-inflammatory drugs (Ns), gabapentinoids (Ga; gabapentin or pregabalin), Ac+Ns, Ac+Ga, Ns+Ga, Ac+Ns+Ga, and none of the three drugs. Multilevel models measured associations of the analgesic categories with a composite of postoperative pulmonary complications (PPCs). RESULTS: Among 863,139 patients, 75.2% received at least one of the three drugs. In multilevel models, compared with none of the three drugs, Ga use was associated with increased odds of PPCs when used alone (adjusted odds ratio [aOR] = 1.35, 95% confidence interval [CI] = 1.27 to 1.44), combined with Ac (aOR = 1.16, 95% CI = 1.08 to 1.26), or combined with Ns (aOR = 1.28, 95% CI = 1.21 to 1.34). In contrast, the Ac+Ns pair was associated with decreased odds of PPCs (OR = 0.86, 95% CI = 0.83 to 0.90) and lower opioid consumption. Ac+Ns+Ga was not associated with PPCs, whereas it was associated with the lowest opioid consumption on the day of surgery. CONCLUSIONS: Gabapentinoids, alone and in single combination with either acetaminophen or nonsteroidal anti-inflammatory drugs, were associated with higher PPCs, whereas the Ac+Ns pair was associated with fewer PPCs and an opioid-sparing effect. Ac+Ns+Ga was not associated with PPCs, whereas it was associated with the lowest opioid consumption on the day of surgery.
Subjects
Hip Arthroplasty, Knee Arthroplasty, Acetaminophen/therapeutic use, Adult, Nonsteroidal Anti-Inflammatory Agents/therapeutic use, Hip Arthroplasty/adverse effects, Knee Arthroplasty/adverse effects, Humans, Postoperative Pain/drug therapy, Postoperative Complications/epidemiology, Retrospective Studies
ABSTRACT
Nonexperimental studies of the effectiveness of seasonal influenza vaccine in older adults have found 40%-60% reductions in all-cause mortality associated with vaccination, potentially due to confounding by frailty. We restricted our cohort to initiators of medications in preventive drug classes (statins, antiglaucoma drugs, and β-blockers) as an approach to reducing confounding by frailty by excluding frail older adults who would not initiate use of these drugs. Using a random 20% sample of US Medicare beneficiaries, we framed our study as a series of nonrandomized "trials" comparing vaccinated beneficiaries with unvaccinated beneficiaries who had an outpatient health-care visit during the 5 influenza seasons occurring in 2010-2015. We pooled data across trials and used standardized-mortality-ratio-weighted Cox proportional hazards models to estimate the association between influenza vaccination and all-cause mortality before influenza season, expecting a null association. Weighted hazard ratios among preventive drug initiators were generally closer to the null than those in the nonrestricted cohort. Restriction of the study population to statin initiators with an uncensored approach resulted in a weighted hazard ratio of 1.00 (95% confidence interval: 0.84, 1.19), and several other hazard ratios were above 0.95. Restricting the cohort to initiators of medications in preventive drug classes can reduce confounding by frailty in this setting, but further work is required to determine the most appropriate criteria to use.
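A minimal sketch of the negative-control logic used here: fit the SMR-weighted Cox model only over follow-up censored at the start of influenza season, when a true vaccine effect is implausible, and check that the hazard ratio sits near 1. The simulated data, column names, and the assumed season start date are illustrative, not the study's.

```python
# SMR-weighted Cox model over a pre-influenza-season negative-control window.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 10_000
df = pd.DataFrame({"vaccinated": rng.binomial(1, 0.5, n)})
df["ps"] = np.clip(rng.beta(5, 5, n), 0.01, 0.99)                 # stand-in PS
df["smr_weight"] = np.where(df.vaccinated == 1, 1.0, df.ps / (1 - df.ps))
df["time"] = rng.exponential(400, n)
df["death"] = rng.binomial(1, 0.05, n)

# censor everyone at an assumed season start (day 120), so the estimate targets
# a period in which a true vaccine effect on mortality is implausible
season_start = 120
df["pre_season_death"] = np.where(df.time <= season_start, df.death, 0)
df["pre_season_time"] = np.minimum(df.time, season_start)

cph = CoxPHFitter()
cph.fit(df[["pre_season_time", "pre_season_death", "vaccinated", "smr_weight"]],
        duration_col="pre_season_time", event_col="pre_season_death",
        weights_col="smr_weight", robust=True)
print(cph.hazard_ratios_["vaccinated"])   # ~1.0 expected if confounding is controlled
```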
Subjects
Frail Elderly, Influenza Vaccines/administration & dosage, Pharmacoepidemiology, Adrenergic beta-Antagonists/therapeutic use, Aged, Cause of Death, Epidemiologic Confounding Factors, Female, Glaucoma/drug therapy, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use, Human Influenza/mortality, Human Influenza/prevention & control, Male, Medicare, Seasons, United States/epidemiology
ABSTRACT
OBJECTIVE: The aim of this study was to determine the association between gabapentinoids on the day of surgery and adverse postoperative outcomes in patients undergoing colorectal surgery in the United States. BACKGROUND: Gabapentinoids, gabapentin and pregabalin, are recommended in multimodal analgesia protocols for acute postoperative pain management after colorectal surgery. However, the current literature focuses on their efficacy in reducing opioid consumption and provides limited information about adverse risks. METHODS: This was a retrospective study including 175,787 patients undergoing elective colorectal surgery using the Premier database between 2009 and 2014. Multilevel regression models measured associations of receipt of gabapentinoids with naloxone use after surgery, non-invasive ventilation (NIV), invasive mechanical ventilation (IMV), hospital length of stay (LOS), and parenteral morphine equivalents (PMEs) on the day of surgery and on the day before discharge. RESULTS: Overall, 4677 (2.7%) patients received gabapentinoids on the day of surgery, with use increasing from 1.7% in 2009 to 4.3% in 2014. Compared with patients who were unexposed to gabapentinoids, gabapentinoid exposure was associated with lower PMEs on the day of surgery [-2.7 mg; 95% confidence interval (CI), -5.2 to -0.0 mg], and with higher odds of NIV [odds ratio (OR) 1.22, 95% CI, 1.00-1.49] and receipt of naloxone (OR 1.58, 95% CI, 1.11-2.26). There was no difference between the groups with respect to IMV or PMEs on the day before discharge. CONCLUSIONS: Although use of gabapentinoids on the day of surgery was associated with slightly lower PMEs on the day of surgery, it was associated with higher odds of NIV and naloxone use after surgery.
Subjects
Analgesics/therapeutic use, Digestive System Surgical Procedures/adverse effects, Gabapentin/therapeutic use, Postoperative Complications/etiology, Pregabalin/therapeutic use, Artificial Respiration, Adult, Colon/surgery, Elective Surgical Procedures, Female, Humans, Length of Stay, Male, Morphine, Postoperative Complications/drug therapy, Rectum/surgery, Retrospective Studies
ABSTRACT
A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the "dry-run" analysis, which divides the unexposed population into "pseudo-exposed" and "pseudo-unexposed" groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models.
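The sketch below illustrates the dry-run idea on simulated data: carve the unexposed into pseudo-exposed and pseudo-unexposed groups whose covariate differences resemble the real exposure contrast, then check that DRS adjustment returns a near-null pseudo-effect. Drawing pseudo-exposure from an estimated propensity score is one plausible construction assumed here; it is not necessarily the paper's exact procedure.

```python
# Dry-run style validation of a disease risk score (DRS) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20_000
X = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.binomial(1, 0.3, n)})
exposed = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X.x1 + 0.7 * X.x2 - 1))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.9 * X.x1 + 0.6 * X.x2))))  # no true effect

ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]

unexp = exposed == 0
pseudo = X[unexp].assign(y=y[unexp])
pseudo["pseudo_exposed"] = rng.binomial(1, ps[unexp])   # mimic the exposed covariate mix

# fit the DRS among pseudo-unexposed, then use it to adjust the pseudo-exposure "effect"
drs_model = LogisticRegression(max_iter=1000).fit(
    pseudo.loc[pseudo.pseudo_exposed == 0, ["x1", "x2"]],
    pseudo.loc[pseudo.pseudo_exposed == 0, "y"])
pseudo["drs"] = drs_model.predict_proba(pseudo[["x1", "x2"]])[:, 1]

fit = sm.Logit(pseudo["y"], sm.add_constant(pseudo[["pseudo_exposed", "drs"]])).fit(disp=0)
print(np.exp(fit.params["pseudo_exposed"]))   # near 1 if the DRS controls confounding
```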
Subjects
Epidemiologic Confounding Factors, Epidemiologic Methods, Causality, Computer Simulation, Humans, Propensity Score
ABSTRACT
PURPOSE: To improve control of confounding by frailty when estimating the effect of influenza vaccination on all-cause mortality by controlling for a published set of claims-based predictors of dependency in activities of daily living (ADL). METHODS: Using Medicare claims data, a cohort of beneficiaries >65 years of age was followed from September 1, 2007, to April 12, 2008, with covariates assessed in the 6 months before follow-up. We estimated Cox proportional hazards models of all-cause mortality, with influenza vaccination as a time-varying exposure. We controlled for common demographics, comorbidities, and health care utilization variables and then added 20 ADL dependency predictors. To gauge residual confounding, we estimated pre-influenza season hazard ratios (HRs) between September 1, 2007 and January 5, 2008, which should be 1.0 in the absence of bias. RESULTS: A cohort of 2,235,140 beneficiaries was created, with a median follow-up of 224 days. Overall, 52% were vaccinated and 4% died during follow-up. During the pre-influenza season period, controlling for demographics, comorbidities, and health care use resulted in a HR of 0.66 (0.64, 0.67). Adding the ADL dependency predictors moved the HR to 0.68 (0.67, 0.70). Controlling for demographics and ADL dependency predictors alone resulted in a HR of 0.68 (0.66, 0.70). CONCLUSIONS: Results were consistent with those in the literature, with significant uncontrolled confounding after adjustment for demographics, comorbidities, and health care use. Adding ADL dependency predictors moved HRs slightly closer to the null. Of the comorbidities, health care use variables, and ADL dependency predictors, the last set reduced confounding most. However, substantial uncontrolled confounding remained.
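Handling vaccination as a time-varying exposure, as described above, amounts to splitting each beneficiary's follow-up into start/stop intervals with the exposure updated at the vaccination date; the sketch below shows that data layout on simulated records. The cohort size, dates, and covariates are illustrative assumptions.

```python
# Start/stop (counting-process) layout for a time-varying vaccination exposure.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(5)
rows = []
for pid in range(500):
    vax_day = rng.integers(20, 150) if rng.random() < 0.5 else None
    death_day = rng.exponential(300)
    end = min(death_day, 224)                      # administrative end of follow-up
    death = int(death_day <= 224)
    if vax_day is None or vax_day >= end:
        rows.append({"id": pid, "start": 0, "stop": end, "vaccinated": 0, "death": death})
    else:
        # one unvaccinated interval up to the vaccination date, then a vaccinated one
        rows.append({"id": pid, "start": 0, "stop": vax_day, "vaccinated": 0, "death": 0})
        rows.append({"id": pid, "start": vax_day, "stop": end, "vaccinated": 1, "death": death})
long_df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="death", start_col="start", stop_col="stop")
ctv.print_summary()
```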
Subjects
Activities of Daily Living, Frailty, Influenza Vaccines/immunology, Human Influenza/prevention & control, Aged, Aged 80 and over, Electronic Health Records, Humans, Medicare, Retrospective Studies, United States
ABSTRACT
BACKGROUND: Despite different pharmacologic properties, little is known about the comparative safety of sodium ferric gluconate versus iron sucrose in hemodialysis patients. STUDY DESIGN: Retrospective cohort study using the clinical database of a large dialysis provider (2004-2005) merged with administrative data from the US Renal Data System. SETTING & PARTICIPANTS: 66,207 patients with Medicare coverage who received center-based hemodialysis. PREDICTORS: Iron formulation use assessed during repeated 1-month exposure periods (n=278,357). OUTCOMES: All-cause mortality, infection-related hospitalizations and mortality, and cardiovascular-related hospitalizations and mortality occurring during a 3-month follow-up period. MEASUREMENTS: For all outcomes, we estimated 90-day risk differences between the formulations using propensity score weighting of Kaplan-Meier functions, which controlled for a wide range of demographic, clinical, and laboratory variables. Risk differences were also estimated within various clinically important subgroups. RESULTS: Ferric gluconate was administered in 11.4%; iron sucrose, in 48.9%; and no iron in 39.7% of the periods. Risks for most study outcomes did not differ between ferric gluconate and iron sucrose; however, among patients with a hemodialysis catheter, use of ferric gluconate was associated with a slightly decreased risk for both infection-related death (risk difference, -0.3%; 95% CI, -0.5% to 0.0%) and infection-related hospitalization (risk difference, -1.5%; 95% CI, -2.3% to -0.6%). Bolus dosing was associated with an increase in infection-related events among both ferric gluconate and iron sucrose users. LIMITATIONS: Residual confounding and outcome measurement error. CONCLUSIONS: Overall, the 2 iron formulations studied exhibited similar safety profiles; however, ferric gluconate was associated with a slightly decreased risk for infection-related outcomes compared to iron sucrose among patients with a hemodialysis catheter. These associations should be explored further using other data or study designs.
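A minimal sketch of the propensity-score-weighted Kaplan-Meier approach described above: estimate the weighted 90-day risk in each iron formulation group and take the difference. The simulated data and the choice of simple inverse-probability (ATE-type) weights are illustrative assumptions, not the study's exact weighting scheme.

```python
# Weighted Kaplan-Meier 90-day risk difference on simulated data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(6)
n = 8_000
df = pd.DataFrame({"ferric_gluconate": rng.binomial(1, 0.2, n)})
df["ps"] = np.clip(rng.beta(2, 8, n) + 0.1 * df.ferric_gluconate, 0.01, 0.99)
df["iptw"] = np.where(df.ferric_gluconate == 1, 1 / df.ps, 1 / (1 - df.ps))
df["time"] = rng.exponential(600, n)
df["event"] = (df.time <= 90).astype(int) & rng.binomial(1, 0.6, n)   # toy events
df["time"] = np.minimum(df.time, 90)                                  # 90-day horizon

def weighted_risk_at_90(sub):
    kmf = KaplanMeierFitter()
    kmf.fit(sub["time"], sub["event"], weights=sub["iptw"])
    return 1 - kmf.survival_function_at_times(90).iloc[0]

risk_fg = weighted_risk_at_90(df[df.ferric_gluconate == 1])
risk_is = weighted_risk_at_90(df[df.ferric_gluconate == 0])
print("90-day risk difference:", risk_fg - risk_is)
```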
Subjects
Iron-Deficiency Anemia/drug therapy, Ferric Compounds/therapeutic use, Glucaric Acid/therapeutic use, Hematinics/therapeutic use, Renal Dialysis, Cohort Studies, Female, Ferric Compounds/adverse effects, Saccharated Ferric Oxide, Glucaric Acid/adverse effects, Hematinics/adverse effects, Humans, Male, Middle Aged, Retrospective Studies, Time Factors
ABSTRACT
BACKGROUND: The potential effects of iron-dosing strategies and erythropoiesis-stimulating agents (ESAs) on health-related quality of life (HRQoL) in the dialysis population are unclear. We examined the independent associations of bolus versus maintenance iron dosing and high versus low ESA dosing on HRQoL. STUDY DESIGN: Retrospective cohort design. SETTING & PARTICIPANTS: Clinical data (2008-2010) from a large dialysis organization merged with data from the US Renal Data System. 13,039 patients receiving center-based hemodialysis were included. PREDICTOR: Iron and ESA dosing were assessed during 1-month (n=14,901) and 2-week (n=15,296) exposure periods. OUTCOMES: HRQoL was measured by the Kidney Disease Quality of Life (KDQOL) instrument (0-100 scale) during a 3-month follow-up period. MEASUREMENTS: Generalized linear mixed models, adjusting for several covariates, were used to estimate associations between iron and ESA dosing and HRQoL overall and for clinically relevant subgroups. RESULTS: For the 1-month exposure period, patients with lower baseline hemoglobin levels who received higher ESA dosing had higher physical health and kidney disease symptom scores (by 2.4 [95% CI, 0.6-4.2] and 5.6 [95% CI, 2.8-8.4] points, respectively) in follow-up than patients who received lower ESA dosing. For the 2-week exposure period, patients with low baseline hemoglobin levels who received bolus dosing had higher mental health scores (by 1.9 [95% CI, 0.0-3.8] points) in follow-up. Within the low-baseline-hemoglobin subgroup, individuals with a catheter or dialysis vintage less than 1 year who received higher ESA dosing had higher HRQoL scores in follow-up (by 5.0-9.9 points) and individuals with low baseline transferrin saturations who received bolus dosing had higher HRQoL scores in follow-up (by 2.6-5.8 points). LIMITATIONS: Observational design; short duration of observation. CONCLUSIONS: For individuals with low baseline hemoglobin levels, higher ESA dosing and bolus iron dosing were associated with slightly higher HRQoL scores in follow-up. These differences became more pronounced and clinically relevant for specific subgroups.
Subjects
Erythropoiesis/drug effects, Health Status, Hematinics/administration & dosage, Iron/administration & dosage, Quality of Life, Renal Dialysis/adverse effects, Adult, Aged, Cohort Studies, Drug Dose-Response Relationship, Erythropoiesis/physiology, Female, Follow-Up Studies, Humans, Chronic Kidney Failure/diagnosis, Chronic Kidney Failure/therapy, Male, Middle Aged, Renal Dialysis/methods, Retrospective Studies, Treatment Outcome
ABSTRACT
Nonexperimental studies of preventive interventions are often biased because of the healthy-user effect and, in frail populations, because of confounding by functional status. Bias is evident when estimating influenza vaccine effectiveness, even after adjustment for claims-based indicators of illness. We explored bias reduction methods while estimating vaccine effectiveness in a cohort of adult hemodialysis patients. Using the United States Renal Data System and linked data from a commercial dialysis provider, we estimated vaccine effectiveness using a Cox proportional hazards marginal structural model of all-cause mortality before and during 3 influenza seasons in 2005/2006 through 2007/2008. To improve confounding control, we added frailty indicators to the model, measured time-varying confounders at different time intervals, and restricted the sample in multiple ways. Crude and baseline-adjusted marginal structural models remained strongly biased. Restricting to a healthier population removed some unmeasured confounding; however, this reduced the sample size, resulting in wide confidence intervals. We estimated an influenza vaccine effectiveness of 9% (hazard ratio = 0.91, 95% confidence interval: 0.72, 1.15) when bias was minimized through cohort restriction. In this study, the healthy-user bias could not be controlled through statistical adjustment; however, sample restriction reduced much of the bias.
Subjects
Epidemiologic Confounding Factors, Health Status, Influenza Vaccines, Chronic Kidney Failure, Aged, Cohort Studies, Female, Humans, Male, Middle Aged, Mortality, Proportional Hazards Models
ABSTRACT
BACKGROUND: Medications are an integral component of management for many chronic conditions, and suboptimal adherence limits medication effectiveness among persons with multiple chronic conditions (MCC). Medical homes may provide a mechanism for increasing adherence among persons with MCC, thereby enhancing management of chronic conditions. OBJECTIVE: To examine the association between medical home enrollment and adherence to newly initiated medications among Medicaid enrollees with MCC. RESEARCH DESIGN: Retrospective cohort study comparing Community Care of North Carolina medical home enrollees to nonenrollees using merged North Carolina Medicaid claims data (fiscal years 2008-2010). SUBJECTS: Among North Carolina Medicaid-enrolled adults with MCC, we created separate longitudinal cohorts of new users of antidepressants (N=9303), antihypertensive agents (N=12,595), oral diabetic agents (N=6409), and statins (N=9263). MEASURES: Outcomes were the proportion of days covered (PDC) on treatment medication each month for 12 months and a dichotomous measure of adherence (PDC>0.80). Our primary analysis utilized person-level fixed effects models. Sensitivity analyses included propensity score and person-level random-effect models. RESULTS: Compared with nonenrollees, medical home enrollees exhibited higher PDC by 4.7, 6.0, 4.8, and 5.1 percentage points for depression, hypertension, diabetes, and hyperlipidemia, respectively (P's<0.001). The dichotomous adherence measure showed similar increases, with absolute differences of 4.1, 4.5, 3.5, and 4.6 percentage points, respectively (P's<0.001). CONCLUSIONS: Among Medicaid enrollees with MCC, adherence to new medications is greater for those enrolled in medical homes.
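The adherence measure used here, proportion of days covered (PDC), can be computed from fill dates and days supplied as in the sketch below; this single-window version is a simplification of the study's monthly PDC, and the input layout is an illustrative assumption. The PDC > 0.80 adherence threshold is the one stated above.

```python
# Proportion of days covered (PDC) over a fixed window, with the 0.80 adherence cutoff.
import numpy as np
import pandas as pd

def pdc(fill_days, days_supplied, window=365):
    """fill_days: day of each fill relative to therapy start; days_supplied: per fill."""
    covered = np.zeros(window, dtype=bool)
    for start, supply in zip(fill_days, days_supplied):
        covered[start:min(start + supply, window)] = True   # overlapping fills count once
    return covered.mean()

fills = pd.DataFrame({"day": [0, 30, 75, 150], "days_supplied": [30, 30, 30, 90]})
value = pdc(fills["day"].tolist(), fills["days_supplied"].tolist())
print(round(value, 3), "adherent" if value > 0.80 else "non-adherent")
```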
Subjects
Chronic Disease/drug therapy, Medicaid/statistics & numerical data, Medication Adherence/statistics & numerical data, Medication Therapy Management/organization & administration, Patient-Centered Care/organization & administration, Patient-Centered Care/statistics & numerical data, Adult, Antidepressive Agents/therapeutic use, Cohort Studies, Depression/drug therapy, Female, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use, Hyperlipidemias/drug therapy, Hypertension/drug therapy, Male, Middle Aged, North Carolina, Retrospective Studies, United States
ABSTRACT
PURPOSE: We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. METHODS: We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. RESULTS: In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. CONCLUSIONS: When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS.
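The key step distinguishing this design is fitting the disease risk score historically, in comparator-drug users from before the new drug's launch, and then applying it to score the concurrent cohort. The sketch below shows that step with hypothetical column names and a logistic outcome model assumed for illustration.

```python
# Historically estimated disease risk score (DRS): fit among historical comparator
# (e.g., warfarin) users, then score the concurrent cohort for matching.
import pandas as pd
from sklearn.linear_model import LogisticRegression

covariates = ["age", "chads2", "prior_stroke", "renal_disease"]   # hypothetical names

def historical_drs(historical_warfarin: pd.DataFrame, concurrent: pd.DataFrame):
    """Fit outcome ~ covariates among historical comparator users; score everyone."""
    model = LogisticRegression(max_iter=2000).fit(
        historical_warfarin[covariates], historical_warfarin["stroke_or_death"])
    return model.predict_proba(concurrent[covariates])[:, 1]

# concurrent["drs"] = historical_drs(historical_warfarin, concurrent)
# The concurrent dabigatran and warfarin groups can then be 1:1 matched on "drs",
# for example with a greedy caliper matcher like the one sketched earlier.
```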
Subjects
Comparative Effectiveness Research/methods, Computer Simulation, Dabigatran/therapeutic use, Propensity Score, Warfarin/therapeutic use, Aged, Aged 80 and over, Atrial Fibrillation/drug therapy, Atrial Fibrillation/mortality, Female, Humans, Male, Mortality/trends, Pharmacoepidemiology/methods, Stroke/mortality, Stroke/prevention & control, Treatment Outcome, United States/epidemiology
ABSTRACT
This study examined the effects of a waitlist policy for state psychiatric hospitals on length of stay and time to readmission using data from North Carolina for 2004-2010. Cox proportional hazards models tested the hypothesis that patients were discharged "quicker-but-sicker" post-waitlist, as hospitals struggled to manage admission delays and quickly admit waitlisted patients. Results refute this hypothesis, indicating that waitlists were associated with increased length of stay and time to readmission. Further research is needed to evaluate patients' clinical outcomes directly and to examine the impact of state hospital waitlists in other areas, such as state hospital case mix, local emergency departments, and outpatient mental health agencies.
Subjects
Psychiatric Hospitals/organization & administration, State Hospitals/organization & administration, Length of Stay/statistics & numerical data, Organizational Policy, Patient Readmission/statistics & numerical data, Substance-Related Disorders/epidemiology, Waiting Lists, Adolescent, Adult, Dual Diagnosis (Psychiatry), Female, Hospitalization/statistics & numerical data, Humans, Male, Mental Disorders/epidemiology, Middle Aged, North Carolina/epidemiology, Patient Discharge/statistics & numerical data, Proportional Hazards Models, Sex Factors, Time Factors, Young Adult
ABSTRACT
The covariate-balancing propensity score (CBPS) extends logistic regression to simultaneously optimize covariate balance and treatment prediction. Although the CBPS has been shown to perform well in certain settings, its performance has not been evaluated in settings specific to pharmacoepidemiology and large database research. In this study, we use both simulations and empirical data to compare the performance of the CBPS with logistic regression and boosted classification and regression trees. We simulated various degrees of model misspecification to evaluate the robustness of each propensity score (PS) estimation method. We then applied these methods to compare the effect of initiating glucagon-like peptide-1 agonists versus sulfonylureas on cardiovascular events and all-cause mortality in the US Medicare population in 2007-2009. In simulations, the CBPS was generally more robust in terms of balancing covariates and reducing bias compared with misspecified logistic PS models and boosted classification and regression trees. All PS estimation methods performed similarly in the empirical example. For settings common to pharmacoepidemiology, logistic regression with balance checks to assess model specification is a valid method for PS estimation, but it can require refitting multiple models until covariate balance is achieved. The CBPS is a promising method to improve the robustness of PS models.
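The sketch below implements the just-identified, ATE-weight version of the CBPS idea: choose logistic coefficients so that inverse-probability-weighted covariate means balance exactly across treatment groups. It is a minimal illustration of the balancing conditions, not the estimator or software used in the study.

```python
# Just-identified covariate-balancing propensity score (CBPS) for ATE weights.
import numpy as np
from scipy.optimize import root
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

def cbps(X, treated):
    """X: (n, p) covariate matrix without intercept; treated: 0/1 array."""
    Xd = np.column_stack([np.ones(len(X)), X])           # add intercept column

    def balance_conditions(beta):
        ps = np.clip(expit(Xd @ beta), 1e-6, 1 - 1e-6)
        resid = treated / ps - (1 - treated) / (1 - ps)  # ATE balancing residual
        return Xd.T @ resid / len(X)

    start = LogisticRegression(max_iter=2000).fit(X, treated)
    beta0 = np.concatenate([start.intercept_, start.coef_.ravel()])
    sol = root(balance_conditions, beta0, method="hybr")
    return np.clip(expit(Xd @ sol.x), 1e-6, 1 - 1e-6)

# with CBPS scores, IPTW-weighted covariate means match across groups by construction
rng = np.random.default_rng(7)
X = rng.normal(size=(5_000, 3))
treated = rng.binomial(1, expit(X @ np.array([0.5, -0.4, 0.3])))
ps = cbps(X, treated)
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
print(np.abs(np.average(X[treated == 1], axis=0, weights=w[treated == 1])
             - np.average(X[treated == 0], axis=0, weights=w[treated == 0])).max())
```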
Subjects
Logistic Models, Propensity Score, Bias, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/prevention & control, Comorbidity, Computer Simulation, Epidemiologic Confounding Factors, Diabetes Mellitus/drug therapy, Diabetes Mellitus/epidemiology, Glucagon-Like Peptide 1/agonists, Humans, Likelihood Functions, Pharmacoepidemiology, Sulfonylurea Compounds/therapeutic use
ABSTRACT
BACKGROUND: Little is known about the quality of care received by Medicaid enrollees with multiple chronic conditions (MCC) and whether quality is different for those with mental illness. OBJECTIVES: To examine cancer screening and single-disease quality-of-care measures in a Medicaid population with MCC and to compare quality measures among persons with MCC with varying medical comorbidities, with and without depression or schizophrenia. RESEARCH DESIGN: Secondary data analysis using a unique data source combining Medicaid claims with other administrative datasets from North Carolina's mental health system. SUBJECTS: Medicaid-enrolled adults aged 18 and older with ≥2 of 8 chronic conditions (asthma, chronic obstructive pulmonary disease, diabetes, hypertension, hyperlipidemia, seizure disorder, depression, or schizophrenia). Medicare/Medicaid dual enrollees were excluded due to incomplete data on their medical care utilization. MEASURES: We examined a number of quality measures, including cancer screening; disease-specific metrics, such as receipt of hemoglobin A1C tests for persons with diabetes and receipt of psychosocial therapies for persons with depression or schizophrenia; and medication adherence. RESULTS: Quality-of-care metrics were generally lower among those with depression or schizophrenia, and often higher among those with increasing levels of medical comorbidities. A number of exceptions to these trends were noted. CONCLUSIONS: Cancer screening and single-disease quality measures may provide a benchmark for overall quality of care for persons with MCC; these measures were generally lower among persons with MCC and mental illness. Further research on quality measures that better reflect the complex care received by persons with MCC is essential.