Results 1 - 20 of 24
1.
Infect Control Hosp Epidemiol ; 44(8): 1314-1320, 2023 08.
Article in English | MEDLINE | ID: mdl-36330692

ABSTRACT

OBJECTIVE: To describe the natural course of procalcitonin (PCT) in patients with coronavirus disease 2019 (COVID-19) and the correlation between PCT and antimicrobial prescribing to provide insight into best practices for PCT data utilization in antimicrobial stewardship in this population. DESIGN: Single-center, retrospective, observational study. SETTING: Michigan Medicine. PATIENTS: Inpatients aged ≥18 years hospitalized March 1, 2020, through October 31, 2021, who were positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), with ≥1 PCT measurement. Exclusion criteria included antibiotics for nonpulmonary bacterial infection on admission, treatment with azithromycin only for chronic obstructive pulmonary disease (COPD) exacerbation, and pre-existing diagnosis of cystic fibrosis with positive respiratory cultures. METHODS: A structured query was used to extract data. For patients started on antibiotics, bacterial pneumonia (bPNA) was determined through chart review. Multivariable models were used to assess associations of PCT level and bPNA with antimicrobial use. RESULTS: Of 793 patients, 224 (28.2%) were initiated on antibiotics: 33 (14.7%) had proven or probable bPNA, 125 (55.8%) had possible bPNA, and 66 (29.5%) had no bPNA. Patients had a mean of 4.1 (SD, ±5.2) PCT measurements if receiving antibiotics versus a mean of 2.0 (SD, ±2.6) if not. Initial PCT level was highest for those with proven/probable bPNA and was associated with antibiotic initiation (odds ratio 95% confidence interval [CI], 1.17-1.30). Initial PCT (rate ratio [RR] 95% CI, 1.01-1.08), change in PCT over time (RR 95% CI, 1.01-1.05), and bPNA group (RR 95% CI, 1.23-1.84) were associated with antibiotic duration. CONCLUSIONS: PCT trends are associated with the decision to initiate antibiotics and duration of treatment, independent of bPNA status and comorbidities.
Prospective studies are needed to determine whether PCT level can be used to safely make decisions regarding antibiotic treatment for COVID-19.
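The abstract above reports associations as odds ratios and rate ratios with 95% CIs. As a rough illustration only (not the study's code, and using invented 2x2 counts), an odds ratio and its Wald confidence interval can be computed from a simple exposure-outcome table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(20, 80, 10, 90)` yields an OR of 2.25 with its Wald interval; the counts are purely hypothetical.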


Subject(s)
COVID-19 , Pneumonia, Bacterial , Humans , Adolescent , Adult , Procalcitonin , Retrospective Studies , SARS-CoV-2 , Anti-Bacterial Agents/therapeutic use , Pneumonia, Bacterial/drug therapy , Biomarkers
3.
Am J Infect Control ; 49(6): 694-700, 2021 06.
Article in English | MEDLINE | ID: mdl-33631305

ABSTRACT

BACKGROUND: With a unique influenza season occurring in the midst of a pandemic, there is interest in assessing the role of the influenza vaccine in COVID-19 susceptibility and severity. METHODS: In this retrospective cohort study, patients receiving a laboratory test for COVID-19 were identified. The primary outcome was comparison of positive COVID-19 testing in those who received the influenza vaccine versus those who did not. Secondary end points in patients testing positive for COVID-19 included mortality, need for hospitalization, length of stay, need for intensive care, and mechanical ventilation. RESULTS: A total of 27,201 patients received laboratory testing for COVID-19. The odds of testing positive for COVID-19 were reduced in patients who received an influenza vaccine compared to those who did not (odds ratio 0.76, 95% CI 0.68-0.86; P < .001). Vaccinated patients testing positive for COVID-19 were less likely to require hospitalization (odds ratio 0.58, 95% CI 0.46-0.73; P < .001) or mechanical ventilation (odds ratio 0.45, 95% CI 0.27-0.78; P = .004) and had a shorter hospital length of stay (risk ratio 0.76, 95% CI 0.65-0.89; P < .001). CONCLUSION: Influenza vaccination is associated with decreased positive COVID-19 testing and improved clinical outcomes and should be promoted to reduce the burden of COVID-19.


Subject(s)
COVID-19 , Influenza Vaccines , Influenza, Human , COVID-19 Testing , Hospitalization , Humans , Influenza, Human/epidemiology , Influenza, Human/prevention & control , Retrospective Studies , SARS-CoV-2
4.
Sci Rep ; 10(1): 4723, 2020 03 13.
Article in English | MEDLINE | ID: mdl-32170215

ABSTRACT

Triggering events for acute aortic dissections are incompletely understood. We sought to investigate whether there is an association between admission for acute type A aortic dissection (ATAAD) to the University of Michigan Medical Center and the reported annual influenza activity by the Michigan Department of Health and Human Services. From 1996 to 2019, 758 patients were admitted for ATAAD, with 3.1 admissions per month during November-March and 2.5 admissions per month during April-October (p = 0.01). Influenza reporting data by the Michigan Department of Health and Human Services became available in 2009. ATAAD admissions for the period 2009-2019 (n = 455) were 4.8 cases/month during peak influenza months compared to 3.5 cases/month during non-peak influenza months (p = 0.001). ATAAD patients admitted during influenza season had increased in-hospital mortality (11.0% vs. 5.8%, p = 0.024) and increased 30-day mortality (9.7% vs. 5.4%, p = 0.048). The results point to higher admission rates for ATAAD during months with above-average influenza rates. Future studies need to investigate whether influenza virus infection affects susceptibility for aortic dissection, and whether this risk can be attenuated with the annual influenza vaccine in this patient population.
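The peak- versus non-peak-month comparison above is a ratio of two incidence rates. A minimal sketch (not the authors' analysis; the event counts and observation times below are invented) of an incidence-rate ratio with a Wald CI on the log scale:

```python
import math

def rate_ratio_ci(events1, time1, events2, time2, z=1.96):
    """Incidence-rate ratio comparing two observation periods,
    with a Wald CI on the log scale (SE = sqrt(1/e1 + 1/e2))."""
    rr = (events1 / time1) / (events2 / time2)
    se = math.sqrt(1/events1 + 1/events2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

For example, 48 admissions over 10 peak months versus 35 over 10 non-peak months gives a rate ratio of about 1.37; the counts are illustrative only.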


Subject(s)
Aortic Aneurysm/mortality , Aortic Dissection/mortality , Disease Outbreaks , Hospital Mortality , Influenza, Human/epidemiology , Patient Admission/statistics & numerical data , Acute Disease , Aged , Aortic Dissection/etiology , Aortic Aneurysm/etiology , Disease Susceptibility/etiology , Female , Humans , Influenza, Human/complications , Male , Michigan/epidemiology , Middle Aged , Risk , Seasons , Time Factors
5.
JAMA Intern Med ; 179(11): 1519-1527, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31449295

ABSTRACT

IMPORTANCE: Treatment of asymptomatic bacteriuria (ASB) with antibiotics is a common factor in inappropriate antibiotic use, but risk factors and outcomes associated with treatment of ASB in hospitalized patients are not well defined. OBJECTIVE: To evaluate factors associated with treatment of ASB among hospitalized patients and the possible association between treatment and clinical outcomes. DESIGN, SETTING, AND PARTICIPANTS: A retrospective cohort study was conducted from January 1, 2016, through February 1, 2018, at 46 hospitals participating in the Michigan Hospital Medicine Safety Consortium. A total of 2733 hospitalized medical patients with ASB, defined as a positive urine culture without any documented signs or symptoms attributable to urinary tract infection, were included in the analysis. EXPOSURES: One or more antibiotic doses for treatment of ASB. MAIN OUTCOMES AND MEASURES: Predictors of antibiotic treatment of ASB. Secondary outcomes included 30-day mortality, 30-day hospital readmission, 30-day emergency department visit, discharge to post-acute care settings, Clostridioides difficile infection (formerly known as Clostridium difficile) at 30 days, and duration of hospitalization after urine testing. RESULTS: Of 2733 patients with ASB, 2138 were women (78.2%); median age was 77 years (interquartile range [IQR], 66-86 years). A total of 2259 patients (82.7%) were treated with antibiotics for a median of 7 days (IQR, 4-9 days).
Factors associated with ASB treatment included older age (odds ratio [OR], 1.10 per 10-year increase; 95% CI, 1.02-1.18), dementia (OR, 1.57; 95% CI, 1.15-2.13), acutely altered mental status (OR, 1.93; 95% CI, 1.23-3.04), urinary incontinence (OR, 1.81; 95% CI, 1.36-2.41), leukocytosis (white blood cell count >10 000/µL) (OR, 1.55; 95% CI, 1.21-2.00), positive urinalysis (presence of leukocyte esterase or nitrite, or >5 white blood cells per high-power field) (OR, 2.83; 95% CI, 2.05-3.93), and urine culture with a bacterial colony count greater than 100 000 colony-forming units per milliliter (OR, 2.30; 95% CI, 1.83-2.91). Treatment of ASB was associated with longer duration of hospitalization after urine testing (4 vs 3 days; relative risk, 1.37; 95% CI, 1.28-1.47). No other differences in secondary outcomes were identified after propensity weighting. CONCLUSIONS AND RELEVANCE: Hospitalized patients with ASB commonly receive inappropriate antibiotic therapy. Antibiotic treatment did not appear to be associated with improved outcomes; rather, treatment may be associated with longer duration of hospitalization after urine testing. To possibly reduce inappropriate antibiotic use, stewardship efforts should focus on improving urine testing practices and management strategies for elderly patients with altered mental status.
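The outcome comparisons above were made "after propensity weighting." As a hedged sketch of the general idea (not the study's implementation), inverse-probability weighting reweights treated patients by 1/p and untreated patients by 1/(1-p), where p is the estimated propensity of treatment, before comparing mean outcomes:

```python
def ipw_mean_difference(records):
    """Inverse-probability-weighted difference in mean outcome
    between treated and untreated patients. Each record is
    (treated: bool, propensity: float, outcome: float)."""
    tw = tn = cw = cn = 0.0
    for treated, p, y in records:
        w = 1 / p if treated else 1 / (1 - p)  # IPW weight
        if treated:
            tw += w * y
            tn += w
        else:
            cw += w * y
            cn += w
    return tw / tn - cw / cn  # weighted treated mean - weighted control mean
```

When every propensity is 0.5, the weights are equal and the estimate reduces to a plain difference in means, which is a useful sanity check.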

6.
Ann Intern Med ; 171(3): 153-163, 2019 08 06.
Article in English | MEDLINE | ID: mdl-31284301

ABSTRACT

Background: Randomized trials demonstrate no benefit from antibiotic treatment exceeding the shortest effective duration. Objective: To examine predictors and outcomes associated with excess duration of antibiotic treatment. Design: Retrospective cohort study. Setting: 43 hospitals in the Michigan Hospital Medicine Safety Consortium. Patients: 6481 general care medical patients with pneumonia. Measurements: The primary outcome was the rate of excess antibiotic treatment duration (excess days per 30-day period). Excess days were calculated by subtracting each patient's shortest effective (expected) treatment duration (based on time to clinical stability, pathogen, and pneumonia classification [community-acquired vs. health care-associated]) from the actual duration. Negative binomial generalized estimating equations (GEEs) were used to calculate rate ratios to assess predictors of 30-day rates of excess duration. Patient outcomes, assessed at 30 days via the medical record and telephone calls, were evaluated using logit GEEs that adjusted for patient characteristics and probability of treatment. Results: Two thirds (67.8% [4391 of 6481]) of patients received excess antibiotic therapy. Antibiotics prescribed at discharge accounted for 93.2% of excess duration. Patients who had respiratory cultures or nonculture diagnostic testing, had a longer stay, received a high-risk antibiotic in the prior 90 days, had community-acquired pneumonia, or did not have a total antibiotic treatment duration documented at discharge were more likely to receive excess treatment. Excess treatment was not associated with lower rates of any adverse outcomes, including death, readmission, emergency department visit, or Clostridioides difficile infection. Each excess day of treatment was associated with a 5% increase in the odds of antibiotic-associated adverse events reported by patients after discharge. 
Limitation: Retrospective design; not all patients could be contacted to report 30-day outcomes. Conclusion: Patients hospitalized with pneumonia often receive excess antibiotic therapy. Excess antibiotic treatment was associated with patient-reported adverse events. Future interventions should focus on whether reducing excess treatment and improving documentation at discharge improves outcomes. Primary Funding Source: Blue Cross Blue Shield of Michigan (BCBSM) and Blue Care Network as part of the BCBSM Value Partnerships program.
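The primary outcome above (excess days per 30-day period) subtracts each patient's expected shortest effective duration from the actual duration. A minimal sketch of that arithmetic, assuming one 30-day observation period per patient (a simplification; the study used negative binomial GEEs):

```python
def excess_days_per_30(actual_durations, expected_durations):
    """Mean excess antibiotic days per patient per 30-day period:
    excess = max(actual - expected, 0) for each patient."""
    excess = [max(a - e, 0) for a, e in zip(actual_durations, expected_durations)]
    return sum(excess) / len(excess)
```

For example, actual durations of [8, 5, 10] days against expected durations of [5, 5, 7] give 2.0 excess days per patient; the numbers are invented for illustration.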


Subject(s)
Anti-Bacterial Agents/administration & dosage , Anti-Bacterial Agents/adverse effects , Hospitalization , Pneumonia, Bacterial/drug therapy , Aged , Aged, 80 and over , Community-Acquired Infections/drug therapy , Duration of Therapy , Female , Humans , Inappropriate Prescribing , Male , Michigan , Middle Aged , Retrospective Studies
7.
Ann Intern Med ; 171(1): 10-18, 2019 07 02.
Article in English | MEDLINE | ID: mdl-31158846

ABSTRACT

Background: Existing guidelines, including Choosing Wisely recommendations, endorse avoiding placement of peripherally inserted central catheters (PICCs) in patients with chronic kidney disease (CKD). Objective: To describe the frequency of and characteristics associated with PICC use in hospitalized patients with stage 3b or greater CKD (glomerular filtration rate [GFR] <45 mL/min/1.73 m2). Design: Prospective cohort study. Setting: 52 hospitals participating in the Michigan Hospital Medicine Safety Consortium. Participants: Hospitalized medical patients who received a PICC between November 2013 and September 2016. Measurements: Percentage of patients receiving PICCs who had CKD, frequency of PICC-related complications, and variation in the proportion of PICCs placed in patients with CKD. Results: Of 20 545 patients who had PICCs placed, 4743 (23.1% [95% CI, 20.9% to 25.3%]) had an estimated GFR (eGFR) less than 45 mL/min/1.73 m2 and 699 (3.4%) were receiving hemodialysis. In the intensive care unit (ICU), 30.9% (CI, 29.7% to 32.2%) of patients receiving PICCs had an eGFR less than 45 mL/min/1.73 m2; the corresponding percentage in wards was 19.3% (CI, 18.8% to 19.9%). Among patients with an eGFR less than 45 mL/min/1.73 m2, multilumen PICCs were placed more frequently than single-lumen PICCs. In wards, PICC-related complications occurred in 15.3% of patients with an eGFR less than 45 mL/min/1.73 m2 and in 15.2% of those with an eGFR of 45 mL/min/1.73 m2 or higher. The corresponding percentages in ICU settings were 22.4% and 23.9%. In patients with an eGFR less than 45 mL/min/1.73 m2, PICC placement varied widely across hospitals (interquartile range, 23.7% to 37.8% in ICUs and 12.8% to 23.7% in wards). Limitation: Nephrologist approval for placement could not be determined, and 2.7% of eGFR values were unknown and excluded. 
Conclusion: In this sample of hospitalized patients who received PICCs, placement in those with CKD was common and not concordant with clinical guidelines. Primary Funding Source: Blue Cross Blue Shield of Michigan and Blue Care Network.


Subject(s)
Catheterization, Central Venous/methods , Catheterization, Central Venous/statistics & numerical data , Kidney Failure, Chronic/therapy , Aged , Anti-Bacterial Agents/administration & dosage , Catheterization, Central Venous/adverse effects , Female , Glomerular Filtration Rate , Guideline Adherence , Hospitalization , Humans , Infusions, Intravenous , Kidney Failure, Chronic/physiopathology , Longitudinal Studies , Male , Michigan , Middle Aged , Practice Guidelines as Topic , Procedures and Techniques Utilization , Prospective Studies , Renal Dialysis
8.
Clin Infect Dis ; 69(8): 1269-1277, 2019 09 27.
Article in English | MEDLINE | ID: mdl-30759198

ABSTRACT

BACKGROUND: Fluoroquinolones increase the risk of Clostridioides difficile infection and antibiotic resistance. Hospitals often use pre-prescription approval or prospective audit and feedback to target fluoroquinolone prescribing. Whether these strategies impact aggregate fluoroquinolone use is unknown. METHODS: This study is a 48-hospital, retrospective cohort of general-care medical patients hospitalized with pneumonia or positive urine culture between December 2015 and September 2017. Hospitals were surveyed on their use of pre-prescription approval and/or prospective audit and feedback to target fluoroquinolone prescribing during hospitalization (fluoroquinolone stewardship). After controlling for hospital clustering and patient factors, aggregate (inpatient and post-discharge) fluoroquinolone (ciprofloxacin, levofloxacin, moxifloxacin) exposure was compared between hospitals with and without fluoroquinolone stewardship. RESULTS: There were 11 748 patients (6820 pneumonia; 4928 positive urine culture) included at 48 hospitals. All hospitals responded to the survey: 29.2% (14/48) reported using pre-prescription approval and/or prospective audit and feedback to target fluoroquinolone prescribing. After adjustment, fluoroquinolone stewardship was associated with fewer patients receiving a fluoroquinolone (37.1% vs 48.2%; P = .01) and fewer fluoroquinolone treatment days per 1000 patients (2282 vs 3096 days/1000 patients; P = .01), driven by lower inpatient prescribing. However, most (66.6%) fluoroquinolone treatment days occurred after discharge, and hospitals with fluoroquinolone stewardship had twice as many new fluoroquinolone starts after discharge as hospitals without (15.6% vs 8.4%; P = .003). CONCLUSIONS: Hospital-based stewardship interventions targeting fluoroquinolone prescribing were associated with less fluoroquinolone prescribing during hospitalization, but not at discharge.
To limit aggregate fluoroquinolone exposure, stewardship programs should target both inpatient and discharge prescribing.
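The "treatment days per 1000 patients" metric above, split into inpatient and post-discharge components, is straightforward arithmetic. A hedged sketch (invented course data, not the study's dataset):

```python
def fq_days_per_1000(courses, n_patients):
    """courses: (inpatient_days, postdischarge_days) per treated patient.
    Returns total treatment days per 1000 hospitalized patients
    and the fraction of treatment days occurring after discharge."""
    inpatient = sum(c[0] for c in courses)
    post = sum(c[1] for c in courses)
    total = inpatient + post
    return 1000 * total / n_patients, post / total
```

For example, two treated patients with (3, 4) and (2, 5) inpatient/post-discharge days among 10 hospitalized patients give 1400 days per 1000 patients, with 9/14 of days after discharge.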


Subject(s)
Anti-Bacterial Agents/therapeutic use , Antimicrobial Stewardship , Clostridium Infections/drug therapy , Drug Prescriptions/statistics & numerical data , Fluoroquinolones/therapeutic use , Pneumonia/drug therapy , Aged , Ciprofloxacin/therapeutic use , Clostridium Infections/microbiology , Cohort Studies , Drug Resistance, Bacterial , Female , Hospitals , Humans , Levofloxacin/therapeutic use , Male , Michigan , Moxifloxacin/therapeutic use , Pneumonia/microbiology , Retrospective Studies , Risk , Surveys and Questionnaires
10.
Am J Med ; 131(6): 651-660, 2018 06.
Article in English | MEDLINE | ID: mdl-29408616

ABSTRACT

BACKGROUND: Catheter exchange over a guidewire is frequently performed for malfunctioning peripherally inserted central catheters (PICCs). Whether such exchanges are associated with venous thromboembolism is not known. METHODS: We performed a retrospective cohort study to assess the association between PICC exchange and risk of thromboembolism. Adult hospitalized patients who received a PICC during clinical care at one of 51 hospitals participating in the Michigan Hospital Medicine Safety consortium were included. The primary outcome was hazard of symptomatic venous thromboembolism (radiographically confirmed upper-extremity deep vein thrombosis and pulmonary embolism) in those who underwent PICC exchange vs those who did not. RESULTS: Of 23,010 patients who underwent PICC insertion in the study, 589 patients (2.6%) experienced a PICC exchange. Almost half of all exchanges were performed for catheter dislodgement or occlusion. A total of 480 patients (2.1%) experienced PICC-associated deep vein thrombosis. The incidence of deep vein thrombosis was greater in those who underwent PICC exchange vs those who did not (3.6% vs 2.0%, P < .001). Median time to thrombosis was shorter among those who underwent exchange vs those who did not (5 vs 11 days, P = .02). Following adjustment, PICC exchange was independently associated with twofold greater risk of thrombosis (hazard ratio [HR] 1.98; 95% confidence interval [CI], 1.37-2.85) vs no exchange. The effect size of PICC exchange on thrombosis was second in magnitude only to the number of device lumens (HR 2.06; 95% CI, 1.59-2.66 and HR 2.31; 95% CI, 1.6-3.33 for double- and triple-lumen devices, respectively). CONCLUSION: Guidewire exchange of PICCs may be associated with increased risk of thrombosis. As some exchanges may be preventable, consideration of risks and benefits of exchanges in clinical practice is needed.


Subject(s)
Catheterization, Central Venous/adverse effects , Catheterization, Central Venous/methods , Pulmonary Embolism/etiology , Upper Extremity Deep Vein Thrombosis/etiology , Aged , Cohort Studies , Female , Hospitalization , Humans , Male , Retrospective Studies , Risk Factors
11.
J Hosp Med ; 13(2): 76-82, 2018 02.
Article in English | MEDLINE | ID: mdl-29377971

ABSTRACT

BACKGROUND: The guidelines for peripherally inserted central catheters (PICCs) recommend avoiding insertion if the anticipated duration of use is ≤5 days. However, short-term PICC use is common in hospitals. We sought to identify patient, provider, and device characteristics and the clinical outcomes associated with short-term PICCs. METHODS: Between January 2014 and June 2016, trained abstractors at 52 Michigan Hospital Medicine Safety (HMS) Consortium sites collected data from medical records of adults who received PICCs during hospitalization. Patients were prospectively followed until PICC removal, death, or 70 days after insertion. Multivariable logistic regression models were fit to identify factors associated with short-term PICCs, defined as a dwell time of ≤5 days. Complications associated with short-term use, including major (eg, venous thromboembolism [VTE] or central line-associated bloodstream infection [CLABSI]) or minor (eg, catheter occlusion, tip migration) events were assessed. RESULTS: Of the 15,397 PICCs placed, 3902 (25.3%) had a dwell time of ≤5 days. Most (95.5%) short-term PICCs were removed during hospitalization. Compared to PICCs placed for >5 days, variables associated with short-term PICCs included difficult venous access (odds ratio [OR], 1.54; 95% confidence interval [CI], 1.40-1.69), multilumen devices (OR, 1.53; 95% CI, 1.39-1.69), and teaching hospitals (OR, 1.25; 95% CI, 1.04-1.52). Among those with short-term PICCs, 374 (9.6%) experienced a complication, including 99 (2.5%) experiencing VTE and 17 (0.4%) experiencing CLABSI events. The most common minor complications were catheter occlusion (4%) and tip migration (2.2%). CONCLUSION: Short-term use of PICCs is common and associated with patient, provider, and device factors. As PICC placement, even for brief periods, is associated with complications, efforts targeted at factors underlying such use appear necessary.


Subject(s)
Catheter-Related Infections/drug therapy , Catheterization, Peripheral/adverse effects , Female , Hospitalization/statistics & numerical data , Hospitals, Teaching , Humans , Male , Michigan , Middle Aged , Prospective Studies , Risk Assessment , Time Factors , Venous Thromboembolism/etiology
12.
Infect Control Hosp Epidemiol ; 38(10): 1155-1166, 2017 10.
Article in English | MEDLINE | ID: mdl-28807074

ABSTRACT

BACKGROUND Peripherally inserted central catheters (PICCs) are associated with central-line-associated bloodstream infections (CLABSIs). However, no tools to predict risk of PICC-CLABSI have been developed. OBJECTIVE To develop a risk model that estimates an individual's risk of PICC-CLABSI before device placement, so that CLABSI risk factors can be operationalized and prioritized when making decisions regarding the use of PICCs. METHODS Using data from the Michigan Hospital Medicine Safety consortium, patients who experienced PICC-CLABSI between January 2013 and October 2016 were identified. A Cox proportional hazards model with robust sandwich standard error estimates was then used to identify factors associated with PICC-CLABSI. Based on regression coefficients, points were assigned to each predictor and summed for each patient to create the Michigan PICC-CLABSI (MPC) score. The predictive performance of the score was assessed using time-dependent area-under-the-curve (AUC) values. RESULTS Of 23,088 patients who received PICCs during the study period, 249 patients (1.1%) developed a CLABSI. Significant risk factors associated with PICC-CLABSI included hematological cancer (3 points), CLABSI within 3 months of PICC insertion (2 points), multilumen PICC (2 points), solid cancers with ongoing chemotherapy (2 points), receipt of total parenteral nutrition (TPN) through the PICC (1 point), and presence of another central venous catheter (CVC) at the time of PICC placement (1 point). The MPC score was significantly associated with risk of CLABSI (P<.0001). For every point increase, the hazard ratio of CLABSI increased by 1.63 (95% confidence interval, 1.56-1.71). The area under the receiver-operating-characteristics curve was 0.67 to 0.77 for PICC dwell times of 6 to 40 days, which indicates good model discrimination.
CONCLUSION The MPC score offers a novel way to inform decisions regarding PICC use, surveillance of high-risk cohorts, and utility of blood cultures when PICC-CLABSI is suspected. Future studies validating the score are necessary.
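The point assignments listed in the abstract translate directly into a simple additive score. A sketch of that sum (the point values come from the abstract; the function name and interface are illustrative, not the authors' code):

```python
def mpc_score(hematologic_cancer, recent_clabsi, multilumen_picc,
              solid_cancer_on_chemo, tpn_through_picc, other_cvc):
    """Michigan PICC-CLABSI (MPC) score: sum of points over the
    six predictors reported in the abstract."""
    return (3 * hematologic_cancer
            + 2 * recent_clabsi           # CLABSI within 3 months of insertion
            + 2 * multilumen_picc
            + 2 * solid_cancer_on_chemo
            + 1 * tpn_through_picc        # TPN through the PICC
            + 1 * other_cvc)              # another CVC at time of placement
```

A patient with hematologic cancer, a multilumen PICC, and another CVC in place would score 3 + 2 + 1 = 6 points.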


Subject(s)
Bacteremia/epidemiology , Catheter-Related Infections/epidemiology , Catheter-Related Infections/microbiology , Catheterization, Central Venous/adverse effects , Catheterization, Peripheral/adverse effects , Central Venous Catheters/microbiology , Aged , Comorbidity , Databases, Factual , Decision Making , Female , Humans , Male , Michigan/epidemiology , Middle Aged , Proportional Hazards Models , Risk Assessment/methods , Risk Factors
13.
Stat Med ; 36(27): 4243-4265, 2017 Nov 30.
Article in English | MEDLINE | ID: mdl-28786131

ABSTRACT

Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each of these paradigms relies on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and are generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within both the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not exact correspondence, between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters are different from the conditions that lead to a correspondence of these quantities.


Subject(s)
Biomarkers , Causality , Normal Distribution , Data Interpretation, Statistical , Humans , Models, Statistical , Randomized Controlled Trials as Topic/methods
14.
Biostatistics ; 16(2): 400-12, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25236906

ABSTRACT

Because of the time and expense required to obtain clinical outcomes of interest, such as functional limitations or death, clinical trials often focus on the effects of treatment on earlier and more easily obtained surrogate markers. Preliminary work to define surrogates focused on the fraction of a treatment effect "explained" by a marker in a regression model, but as notions of causality have been formalized in the statistical setting, formal definitions of high-quality surrogate markers have been developed in the causal inference framework, using either the "causal effect" or "causal association" settings. In the causal effect setting, a high-quality surrogate marker is one for which a large fraction of the total treatment effect is explained by the effect of the treatment that operates through the marker, net of any direct effect of the treatment on the outcome. In the causal association setting, high-quality surrogate markers have large treatment effects on the outcome when there are large treatment effects on the marker, and small effects on the outcome when there are small effects on the marker. A particularly important feature of a surrogate marker is that the direction of a treatment effect be the same for both the marker and the outcome. Settings in which the marker and outcome are positively associated but the treatment effects on the marker and the outcome are beneficial and harmful, respectively (or vice versa), have been referred to as "surrogate paradoxes". If this reversal occurred in every trial, it would not be problematic; however, as correlations among the outcome, the marker, and their treatment effects weaken, it may occur for some trials and not for others, leading to potentially incorrect conclusions, and real-life examples that shortened thousands of lives are unfortunately available.
We propose measures for assessing the risk of the surrogate paradox using the meta-analytic causal association framework, which allows us to focus on the probability that a given treatment will yield treatment effects in different directions between the marker and the outcome, and to determine the size of a beneficial effect of the treatment on the marker required to minimize the risk of a harmful effect of the treatment on the outcome. We provide simulations and consider two applications.
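The probability of opposite-direction effects can be illustrated in a deliberately simplified case: if the treatment effects on marker and outcome are jointly Gaussian with mean zero (an idealization, not the paper's model, which allows nonzero means), the chance they disagree in sign has a closed form in the correlation:

```python
import math

def prob_opposite_signs(rho):
    """For zero-mean bivariate normal treatment effects on the
    marker and the outcome with correlation rho, the probability
    that the two effects have opposite signs is 1/2 - arcsin(rho)/pi."""
    return 0.5 - math.asin(rho) / math.pi
```

This captures the abstract's qualitative point: at rho = 1 the probability of disagreement is 0, at rho = 0 it is 0.5, and it rises steadily as the correlation weakens.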


Subject(s)
Biomarkers , Meta-Analysis as Topic , Outcome Assessment, Health Care/statistics & numerical data , Glaucoma/therapy , Humans , Intraocular Pressure/physiology , Randomized Controlled Trials as Topic/statistics & numerical data
15.
Clin Trials ; 12(4): 317-22, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25490988

ABSTRACT

BACKGROUND: The validation of intermediate markers as surrogate markers (S) for the true outcome of interest (T) in clinical trials offers the possibility for trials to be run more quickly and cheaply by using the surrogate endpoint in place of the true endpoint. PURPOSE: Working within a principal stratification framework, we propose causal quantities to evaluate surrogacy using a Gaussian copula model for an ordinal surrogate and time-to-event final outcome. The methods are applied to data from four colorectal cancer clinical trials, where S is tumor response and T is overall survival. METHODS: For the Gaussian copula model, a Bayesian estimation strategy is used and, as some parameters are not identifiable from the data, we explore the use of informative priors that are consistent with reasonable assumptions in the surrogate marker setting to aid in estimation. RESULTS: While there is some bias in the estimation of the surrogacy quantities of interest, the estimation procedure does reasonably well at distinguishing between poor and good surrogate markers. LIMITATIONS: Some of the parameters of the proposed model are not identifiable from the data, and therefore, assumptions must be made in order to aid in their estimation. CONCLUSIONS: The proposed quantities can be used in combination to provide evidence about the validity of S as a surrogate marker for T.
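The abstract's Gaussian copula couples an ordinal surrogate with a time-to-event outcome. As a hedged sketch of that construction (not the paper's Bayesian estimation code; cut points, exponential rate, and correlation below are arbitrary), one can draw latent correlated normals, discretize one into ordinal categories, and push the other through its CDF into an exponential event time:

```python
import math
import random

def sample_copula(n, rho, cuts=(-0.5, 0.5), rate=0.1, seed=0):
    """Draw (S, T) pairs from a Gaussian copula: latent standard
    normals (Z1, Z2) with correlation rho; S cuts Z1 into ordinal
    categories; T maps Z2 through Phi and the Exp(rate) quantile."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        s = sum(z1 > c for c in cuts)                 # ordinal 0, 1, 2
        u = 0.5 * (1 + math.erf(z2 / math.sqrt(2)))   # Phi(z2)
        t = -math.log(1 - u) / rate                   # Exp(rate) quantile
        out.append((s, t))
    return out
```

With positive rho, better ordinal response (higher S) should go with longer event times (larger T), mimicking tumor response as a surrogate for overall survival.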


Subject(s)
Biomarkers/analysis , Models, Statistical , Bayes Theorem , Clinical Trials as Topic , Colorectal Neoplasms , Humans , Multivariate Analysis , Normal Distribution
16.
PLoS One ; 9(11): e109945, 2014.
Article in English | MEDLINE | ID: mdl-25372569

ABSTRACT

PURPOSE: Recently, much media attention has been given to premature deaths in professional wrestlers. Since no formal studies have statistically examined the probability of premature mortality in professional wrestlers, we determined survival estimates for active wrestlers over the past quarter century to establish the factors contributing to the premature mortality of these individuals. METHODS: Data, including cause of death, were obtained from public records and wrestling publications for wrestlers who were active between January 1, 1985 and December 31, 2011. A total of 557 males were considered consistently active wrestlers during this time period. Published 2007 mortality rates from the Centers for Disease Control and Prevention were used to compare the general population to the wrestlers by age, BMI, time period, and cause of death. Survival estimates and Cox hazard regression models were fit to determine incident premature deaths and factors associated with lower survival. Cumulative incidence function (CIF) estimates given years wrestled were obtained using a competing risks model for cause of death. RESULTS: The mortality rate for all wrestlers over the 26-year study period was 0.007 deaths per person-year (708 per 100,000 per year), and 16% of deaths occurred below age 50 years. Among wrestlers, the leading cause of death based on CIF was cardiovascular-related (38%). For cardiovascular-related deaths, drug overdose-related deaths, and cancer deaths, wrestler mortality rates were respectively 15.1, 122.7, and 6.4 times greater than those of males in the general population. Survival estimates from hazard models indicated that BMI is significantly associated with the hazard of death from total time wrestling (p<0.0001). CONCLUSION: Professional wrestlers are more likely to die prematurely from cardiovascular disease compared to the general population, and morbidly obese wrestlers are especially at risk.
Results from this study may be useful for professional wrestlers, as well as wellness policy and medical care implementation.
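The cumulative incidence function used here to rank causes of death can be sketched nonparametrically: at each event time, the CIF for a given cause increases by the overall survival just before that time multiplied by the cause-specific hazard. A minimal illustration, using an invented four-subject dataset rather than the study's wrestler data (and assuming distinct event times, so no tie handling):

```python
import numpy as np

def cumulative_incidence(times, causes, cause_of_interest):
    """Nonparametric CIF at the last observed time.

    causes: 0 = censored, positive integers = competing event types.
    Assumes distinct event times (no tie handling in this sketch).
    """
    order = np.argsort(times)
    times, causes = np.asarray(times)[order], np.asarray(causes)[order]
    n_at_risk = len(times)
    surv, cif = 1.0, 0.0
    for t, c in zip(times, causes):
        if c == cause_of_interest:
            cif += surv * (1.0 / n_at_risk)   # increment by S(t-) * cause-specific hazard
        if c != 0:
            surv *= 1.0 - 1.0 / n_at_risk     # update any-cause survival
        n_at_risk -= 1
    return cif

# Invented example: causes 1 and 2 compete; the last subject is censored.
print(cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], 1))  # approx 0.5
print(cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], 2))  # approx 0.25
```

Note that the two cause-specific CIFs plus the censoring mass never exceed 1, unlike naive one-minus-Kaplan-Meier estimates computed per cause.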


Subject(s)
Cardiovascular Diseases/etiology, Mortality, Premature, Wrestling/statistics & numerical data, Adult, Cardiovascular Diseases/mortality, Humans, Male, Wrestling/physiology
17.
Radiother Oncol ; 110(2): 291-7, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24507766

ABSTRACT

PURPOSE: To evaluate rectal dose and post-treatment patient-reported bowel quality of life (QOL) following radiation therapy for prostate cancer. METHODS: Patient-reported QOL was measured at baseline and 2 years via the expanded prostate cancer index composite (EPIC) for 90 patients. Linear regression modeling was performed using the baseline score for the QUANTEC normal tissue complication probability model and dose volume histogram (DVH) parameters for the whole and segmented rectum (superior, middle, and inferior). RESULTS: At 2 years the mean summary score declined from a baseline of 96.0 to 91.8. The median volume of rectum treated to ≥70 Gy (V70) was 11.7% for the whole rectum and 7.0%, 24.4%, and 1.3% for the inferior, middle, and superior rectum, respectively. Mean dose to the whole and inferior rectum correlated with declines in bowel QOL while dose to the mid and superior rectum did not. Low (V25-V40), intermediate (V50-V60) and high (V70-V80) doses to the inferior rectum influenced bleeding, incontinence, urgency, and overall bowel problems. Only the highest dose (V80) to the mid-rectum correlated with rectal bleeding and overall bowel problems. CONCLUSIONS: Segmental DVH analysis of the rectum reveals associations between bowel QOL and inferior rectal dose that could significantly influence radiation planning and prognostic models.
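The Vx metrics reported above (e.g., V70) come directly from the dose-volume histogram: Vx is the percentage of a structure's volume receiving at least x Gy. A minimal sketch of that computation, assuming equal-volume voxels and using a synthetic dose array rather than patient data:

```python
import numpy as np

def v_at_dose(doses_gy: np.ndarray, threshold_gy: float) -> float:
    """Percent of voxels (assumed equal volume) receiving >= threshold_gy."""
    return 100.0 * float(np.mean(doses_gy >= threshold_gy))

# Hypothetical per-voxel doses for a rectal contour (Gy); not study data.
rectum_dose = np.array([10.0, 30.0, 55.0, 72.0, 80.0])
print(v_at_dose(rectum_dose, 70.0))  # two of five voxels receive >= 70 Gy
```

Segmental analysis as in the study would simply apply the same function to the voxel subsets belonging to the inferior, middle, and superior portions of the contour.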


Subject(s)
Prostatic Neoplasms/radiotherapy, Radiation Injuries/etiology, Rectum/radiation effects, Aged, Cohort Studies, Gastrointestinal Hemorrhage/etiology, Gastrointestinal Hemorrhage/physiopathology, Humans, Linear Models, Male, Quality of Life, Radiation Injuries/physiopathology, Radiotherapy Dosage, Radiotherapy, Conformal/adverse effects, Rectal Diseases/etiology, Rectal Diseases/physiopathology, Rectum/physiopathology
18.
J Pain ; 15(5): 468-75, 2014 May.
Article in English | MEDLINE | ID: mdl-24462504

ABSTRACT

UNLABELLED: Aromatase inhibitors (AIs), which are used to treat breast cancer, inhibit estrogen production in postmenopausal women. AI-associated musculoskeletal symptoms occur in approximately half of treated women and lead to treatment discontinuation in 20 to 30%. The etiology may be due in part to estrogen deprivation. In premenopausal women, lower estrogen levels have been associated with increased pain as well as with impairment of descending pain inhibitory pathways, which may be a risk factor for developing chronic pain. We prospectively tested whether AI-induced estrogen deprivation alters pain sensitivity, thereby increasing the risk of developing AI-associated musculoskeletal symptoms. Fifty postmenopausal breast cancer patients underwent pressure pain testing and conditioned pain modulation (CPM) assessment prior to AI initiation and after 3 and 6 months. At baseline, 26 of 40 (65%) assessed patients demonstrated impaired CPM, which was greater in those who had previously received chemotherapy (P = .006). No statistically significant change in pressure pain threshold or CPM was identified following estrogen deprivation. In addition, there was no association with either measure of pain sensitivity and change in patient-reported pain with AI therapy. AI-associated musculoskeletal symptoms are not likely due to decreased pain threshold or impaired CPM prior to treatment initiation, or to effects of estrogen depletion on pain sensitivity. PERSPECTIVE: This article presents our findings of the effect of estrogen deprivation on objective measures of pain sensitivity. In postmenopausal women, medication-induced estrogen depletion did not result in an identifiable change in pressure pain threshold or CPM. Impaired CPM may be associated with chemotherapy.


Subject(s)
Analgesics/therapeutic use, Aromatase Inhibitors/therapeutic use, Breast Neoplasms/physiopathology, Estrogens/deficiency, Pain Threshold/drug effects, Pain/drug therapy, Adult, Aged, Estradiol/blood, Female, Humans, Middle Aged, Pain/physiopathology, Pain Measurement, Pain Threshold/physiology, Physical Stimulation, Pressure, Prospective Studies, Treatment Outcome
19.
Biostatistics ; 15(2): 266-83, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24285772

ABSTRACT

In clinical trials, a surrogate outcome variable (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Using the principal surrogacy framework introduced by Frangakis and Rubin (2002. Principal stratification in causal inference. Biometrics 58, 21-29), we consider an approach that has a causal interpretation and develop a Bayesian estimation strategy for surrogate validation when the joint distribution of the potential surrogate and outcome measures is multivariate normal. We propose surrogacy validation measures based on the joint conditional distribution of the potential outcomes of T given the potential outcomes of S. As the model is not fully identifiable from the data, we propose reasonable prior distributions and assumptions that can be placed on weakly identified parameters to aid estimation. We explore the relationship between our surrogacy measures and those proposed by Prentice (1989. Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine 8, 431-440). The method is applied to data from a macular degeneration study and an ovarian cancer study.
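The conditioning step this framework relies on is a standard multivariate-normal identity: when (T, S) are jointly normal, T given S = s is again normal with closed-form mean and covariance. A small sketch of those moments, with illustrative numbers (the 0.8 surrogate-outcome covariance is an assumption, not an estimate from the cited studies):

```python
import numpy as np

def conditional_normal(mu_t, mu_s, sig_tt, sig_ts, sig_ss, s_obs):
    """Moments of T | S = s for a jointly normal (T, S).

    Mean: mu_t + Sig_ts Sig_ss^{-1} (s - mu_s)
    Cov:  Sig_tt - Sig_ts Sig_ss^{-1} Sig_st
    """
    w = sig_ts @ np.linalg.inv(sig_ss)      # regression weights of T on S
    cond_mean = mu_t + w @ (s_obs - mu_s)
    cond_cov = sig_tt - w @ sig_ts.T
    return cond_mean, cond_cov

# Illustrative 1-dimensional T and S with assumed covariance 0.8.
mu_t, mu_s = np.array([0.0]), np.array([0.0])
sig_tt, sig_ss = np.array([[1.0]]), np.array([[1.0]])
sig_ts = np.array([[0.8]])
mean, cov = conditional_normal(mu_t, mu_s, sig_tt, sig_ts, sig_ss, np.array([1.0]))
print(mean, cov)  # mean approx [0.8], covariance approx [[0.36]]
```

The weak identifiability the abstract mentions arises because some entries of the joint covariance involve potential outcomes never observed together, which is why priors on those parameters are needed.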


Subject(s)
Biomarkers, Endpoint Determination, Models, Statistical, Research Design/standards, Treatment Outcome, Bayes Theorem, Clinical Trials as Topic, Humans, Reproducibility of Results, Sensitivity and Specificity
20.
Biometrics ; 69(3): 569-72, 2013 Sep.
Article in English | MEDLINE | ID: mdl-24073862