Results 1 - 20 of 75
2.
PM R ; 15(6): 800-804, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37029465
3.
J Clin Epidemiol ; 155: 64-72, 2023 03.
Article in English | MEDLINE | ID: mdl-36736709

ABSTRACT

OBJECTIVES: A "null field" is a scientific field where there is nothing to discover and where observed associations are thus expected to simply reflect the magnitude of bias. We aimed to characterize a null field using a known example, homeopathy (a pseudoscientific medical approach based on using highly diluted substances), as a prototype. STUDY DESIGN AND SETTING: We identified 50 randomized placebo-controlled trials of homeopathy interventions from highly cited meta-analyses. The primary outcome variable was the observed effect size in the studies. Variables related to study quality or impact were also extracted. RESULTS: The mean effect size for homeopathy was 0.36 standard deviations (Hedges' g; 95% confidence interval: 0.21, 0.51) better than placebo, which corresponds to an odds ratio of 1.94 (95% CI: 1.69, 2.23) in favor of homeopathy. 80% of studies had positive effect sizes (favoring homeopathy). Effect size was significantly correlated with citation counts from journals in the directory of open-access journals and CiteWatch. We identified common statistical errors in 25 studies. CONCLUSION: A null field like homeopathy can exhibit large effect sizes, high rates of favorable results, and high citation impact in the published scientific literature. Null fields may represent a useful negative control for the scientific process.
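
For readers who want to check the correspondence between the standardized mean difference and the odds ratio quoted above, a minimal sketch of the standard logistic-distribution conversion (Chinn, 2000) follows; the abstract's exact figure may differ slightly depending on the conversion and rounding the authors used.

```python
import math

def smd_to_odds_ratio(g: float) -> float:
    """Approximate conversion of a standardized mean difference (Hedges' g)
    to an odds ratio via OR = exp(g * pi / sqrt(3)) (Chinn, 2000)."""
    return math.exp(g * math.pi / math.sqrt(3))

# Point estimate from the abstract: g = 0.36 in favor of homeopathy
print(f"OR ~ {smd_to_odds_ratio(0.36):.2f}")  # ~1.92, close to the reported 1.94
```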


Subject(s)
Homeopathy , Humans , Homeopathy/methods , Bias , Odds Ratio
4.
Sports Med ; 53(2): 313-325, 2023 02.
Article in English | MEDLINE | ID: mdl-36208412

ABSTRACT

BACKGROUND AND OBJECTIVE: Meta-analysis and meta-regression are often highly cited and may influence practice. Unfortunately, statistical errors in meta-analyses are widespread and can lead to flawed conclusions. The purpose of this article was to review common statistical errors in meta-analyses and to document their frequency in highly cited meta-analyses from strength and conditioning research. METHODS: We identified five errors in one highly cited meta-regression from strength and conditioning research: implausible outliers; overestimated effect sizes that arise from confusing standard deviation with standard error; failure to account for correlated observations; failure to account for within-study variance; and a focus on within-group rather than between-group results. We then quantified the frequency of these errors in 20 of the most highly cited meta-analyses in the field of strength and conditioning research from the past 20 years. RESULTS: We found that 85% of the 20 most highly cited meta-analyses in strength and conditioning research contained statistical errors. Almost half (45%) contained at least one effect size that was mistakenly calculated using standard error rather than standard deviation. In several cases, this resulted in obviously wrong effect sizes, for example, effect sizes of 11 or 14 standard deviations. Additionally, 45% failed to account for correlated observations despite including numerous effect sizes from the same study and often from the same group within the same study. CONCLUSIONS: Statistical errors in meta-analysis and meta-regression are common in strength and conditioning research. We highlight five errors that authors, editors, and readers should check for when preparing or critically reviewing meta-analyses.
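
To make the standard-deviation versus standard-error confusion concrete, here is a minimal illustration with made-up numbers (not drawn from any of the reviewed meta-analyses): dividing a mean difference by the standard error instead of the standard deviation inflates the standardized effect size by a factor of sqrt(n).

```python
import math

# Illustrative numbers only: two groups of n = 20 with a modest raw difference.
n = 20
mean_diff = 5.0          # treatment - control difference
sd = 10.0                # standard deviation of the outcome
se = sd / math.sqrt(n)   # standard error of the mean

d_correct = mean_diff / sd   # proper standardized effect size
d_wrong = mean_diff / se     # mistake: dividing by the SE instead of the SD

print(f"d using SD: {d_correct:.2f}")   # 0.50, a moderate effect
print(f"d using SE: {d_wrong:.2f}")     # 2.24, inflated by sqrt(n)
```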

5.
Orthop J Sports Med ; 10(9): 23259671221123588, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36157087

ABSTRACT

Background: Bone stress injuries (BSIs) are common in athletes. Risk factors for BSI may differ by skeletal anatomy and relative contribution of trabecular-rich and cortical-rich bone. Hypothesis: We hypothesized that Female Athlete Triad (Triad) risk factors would be more strongly associated with BSIs sustained at trabecular-rich versus cortical-rich skeletal sites. Study Design: Cohort study; Level of evidence, 2. Methods: The study population comprised 321 female National Collegiate Athletic Association Division I athletes participating in 16 sports from 2008 to 2014. Triad risk factors and a Triad cumulative risk score were assessed using preparticipation examination responses and dual energy x-ray absorptiometry measurements of lumbar spine and whole-body bone mineral density (BMD). Sports-related BSIs were diagnosed by a physician and confirmed radiologically. Athletes were grouped into those who sustained a subsequent trabecular-rich BSI, those who sustained a subsequent cortical-rich BSI, and those without a BSI. Data were analyzed with multinomial logistic regression adjusted for participation in cross-country running versus other sports. Results: A total of 19 participants sustained a cortical-rich BSI (6%) and 10 sustained a trabecular-rich BSI (3%) over the course of collegiate sports participation. The Triad cumulative risk score was significantly related to both trabecular-rich and cortical-rich BSI. However, lower BMD and weight were associated with significantly greater risk for trabecular-rich than cortical-rich BSIs. For each decrease of 1 SD, the odds ratios (95% CIs) for trabecular-rich versus cortical-rich BSI were 3.08 (1.25-7.56) for spine BMD; 2.38 (1.22-4.64) for whole-body BMD; and 5.26 (1.48-18.70) for weight. Taller height was a significantly better predictor of cortical-rich than trabecular-rich BSI. Conclusion: The Triad cumulative risk score was significantly associated with both trabecular-rich and cortical-rich BSI, but Triad-related risk factors appeared more strongly related to trabecular-rich BSI. In particular, low BMD and low weight were associated with significantly greater increases in the risk of trabecular-rich BSI than cortical-rich BSI. These findings suggest that Triad risk factors are more common in athletes sustaining BSI in trabecular-rich than in cortical-rich locations.
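
As one way to picture the modeling described above, the sketch below fits a multinomial logistic regression with a three-level outcome (no BSI, cortical-rich BSI, trabecular-rich BSI) and reports an odds ratio per 1-SD decrease in a predictor. It uses simulated placeholder data and illustrative variable names; it is not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 321
df = pd.DataFrame({
    "spine_bmd_z": rng.standard_normal(n),   # standardized predictors (placeholders)
    "weight_z": rng.standard_normal(n),
    "xc_runner": rng.integers(0, 2, n),      # cross-country vs other sports
})
# Placeholder outcome: 0 = no BSI, 1 = cortical-rich BSI, 2 = trabecular-rich BSI
df["bsi_type"] = rng.choice([0, 1, 2], size=n, p=[0.91, 0.06, 0.03])

X = sm.add_constant(df[["spine_bmd_z", "weight_z", "xc_runner"]])
fit = sm.MNLogit(df["bsi_type"], X).fit(disp=False)

# Odds ratios per 1-SD *decrease* in spine BMD: negate the coefficient
print(np.exp(-fit.params.loc["spine_bmd_z"]))
```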

7.
Front Neurol ; 12: 727171, 2021.
Article in English | MEDLINE | ID: mdl-34744968

ABSTRACT

Background and Purpose: Prediction models for functional outcomes after ischemic stroke are useful for statistical analyses in clinical trials and for guiding patient expectations. While there are models predicting dichotomous functional outcomes after ischemic stroke, there are no models that predict ordinal modified Rankin Scale (mRS) outcomes. We aimed to create a model that predicts, at the time of hospital discharge, a patient's mRS score on day 90 after ischemic stroke. Methods: We used data from three multi-center prospective studies (CRISP, DEFUSE 2, and DEFUSE 3) to derive and validate an ordinal logistic regression model that predicts the 90-day mRS score based on variables available during the stroke hospitalization. Forward selection was used to retain independent significant variables in the multivariable model. Results: The prediction model was derived using data on 297 stroke patients from the CRISP and DEFUSE 2 studies. National Institutes of Health Stroke Scale (NIHSS) score at discharge and age were retained as significant (p < 0.001) independent predictors of the 90-day mRS score. When applied to the external validation set (DEFUSE 3, n = 160), the model accurately predicted the 90-day mRS score within one point for 78% of the patients in the validation cohort. Conclusions: A simple model using age and NIHSS score at the time of discharge can predict 90-day mRS scores in patients with ischemic stroke. This model can be useful for prognostication in routine clinical care and for imputing missing data in clinical trials.
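
The sketch below illustrates the general approach described above: a proportional-odds (ordinal logistic) model of the 90-day mRS on age and discharge NIHSS, plus the "predicted within one point" accuracy metric. Data and variable names are simulated placeholders, not the CRISP/DEFUSE data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 297
df = pd.DataFrame({
    "age": rng.normal(68, 12, n),
    "nihss_discharge": rng.integers(0, 25, n),
})
# Placeholder ordinal outcome: higher NIHSS/age -> higher (worse) mRS
latent = 0.15 * df["nihss_discharge"] + 0.03 * df["age"] + rng.logistic(size=n)
df["mrs_90"] = pd.cut(latent, bins=7, labels=False)

model = OrderedModel(df["mrs_90"], df[["age", "nihss_discharge"]], distr="logit")
res = model.fit(method="bfgs", disp=False)

# Predict the most probable mRS category and check "within one point" accuracy
probs = np.asarray(res.predict(df[["age", "nihss_discharge"]]))
pred = probs.argmax(axis=1)
within_one = np.mean(np.abs(pred - df["mrs_90"].to_numpy()) <= 1)
print(f"Predicted within one mRS point for {within_one:.0%} of patients")
```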

8.
PM R ; 13(9): 1050-1055, 2021 09.
Article in English | MEDLINE | ID: mdl-33905601
9.
Sports (Basel) ; 10(1)2021 Dec 21.
Article in English | MEDLINE | ID: mdl-35050966

ABSTRACT

Sun exposure is a risk factor for skin cancer. Knowledge and behaviors around sun exposure protective measures are poorly described in athletes including runners. Our primary objective was to describe sun exposure behaviors and knowledge in a population of runners. A cross-sectional online survey was administered to 697 runners to measure the frequency of seven sun protective behaviors: sunscreen use on the face or body; wearing a hat, sunglasses, or long sleeves; running in shade; and avoidance of midday running. Between 54% and 84% of runners reported that they engaged in these behaviors at least sometimes, but only 7% to 45% reported frequent use. Of 525 runners who gave a primary reason for not using sunscreen regularly, 49.0% cited forgetfulness; 17.3% cited discomfort; and only a small percentage cited maintaining a tan (6.1%) or optimizing vitamin D (5.1%). Of 689 runners who responded to a question about what factor most influences their overall sun exposure habits, 39.2% cited fear of skin cancer, 28.7% cited comfort level, and 15.8% cited fear of skin aging. In addition to the seven individual behaviors, we also asked runners how frequently they took precautions to protect against the sun overall. We explored associations between participant characteristics and the overall use of sun protection using ordinal logistic regression. Overall, sun protection was used more frequently in runners who were female, older, or had a history of skin cancer. Runners appear to recognize the importance of sun protection and the potential consequences of not using it, but report forgetfulness and discomfort as the biggest barriers to consistent use. Interventions using habit-formation strategies and self-regulation training may prove to be most useful in closing this gap between knowledge and practice.

11.
PM R ; 13(9): 945-953, 2021 09.
Article in English | MEDLINE | ID: mdl-33037847

ABSTRACT

INTRODUCTION: Determinants of bone health and injury are important to identify in athletes. Bone mineral density (BMD) is commonly measured in athletes with Female Athlete Triad (Triad) risk factors; the trabecular bone score (TBS) has been proposed to predict fracture risk independent of BMD. Evaluation of TBS and spine BMD in relation to bone stress injury (BSI) risk has not been studied in female collegiate athletes. OBJECTIVE: We hypothesized that spine BMD and TBS would each independently predict BSI and that the combined measures would improve injury prediction in female collegiate athletes. We also hypothesized that each measure would be correlated with Triad risk factors. DESIGN: Retrospective cohort. SETTING: Academic institution. METHODS: Dual energy x-ray absorptiometry (DXA) of the lumbar spine was used to calculate BMD and TBS values. Chart review was used to identify BSIs that occurred after the DXA measurement and to obtain Triad risk factors. We used logistic regression to examine the ability of TBS and BMD, alone or in combination, to predict prospective BSI. RESULTS: Among 321 athletes, 29 (9.0%) sustained a BSI after DXA. BMD and TBS were highly correlated (Pearson correlation r = 0.62, P < .0001). Spine BMD and TBS had similar ability to predict BSI; the C-statistics (95% confidence intervals) were 0.69 (0.58 to 0.81) for spine BMD versus 0.68 (0.57 to 0.79) for TBS. No improvement in discrimination was observed with combined BMD + TBS (C-statistic 0.70, 0.59 to 0.81). Both TBS and BMD predicted trabecular-rich BSI (defined as pelvis, femoral neck, and calcaneus) better than cortical-rich BSI. Both measures had similar correlations with Triad risk factors. CONCLUSION: Lower BMD and TBS values are associated with elevated risk for BSI, and the two measures show similar correlations with Triad risk factors. TBS does not improve prediction of BSI. Collectively, our findings suggest that BMD may be a sufficient measure of skeletal integrity from DXA in female collegiate athletes.
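
A minimal sketch of the type of comparison described above: logistic models of prospective BSI on BMD, TBS, or both, compared by C-statistic (area under the ROC curve). Data are simulated placeholders, so the numbers it prints are not the study's results.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 321
bmd = rng.normal(1.05, 0.12, n)
tbs = 0.62 * (bmd - bmd.mean()) / bmd.std() + rng.standard_normal(n)  # roughly correlated with BMD
logit = -2.3 - 0.8 * (bmd - bmd.mean()) / bmd.std()                   # ~9% event rate
bsi = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def c_statistic(predictors: np.ndarray) -> float:
    """Fit a logistic model and return its C-statistic (ROC AUC)."""
    X = sm.add_constant(predictors)
    fit = sm.Logit(bsi, X).fit(disp=False)
    return roc_auc_score(bsi, fit.predict(X))

print("BMD only: ", round(c_statistic(bmd.reshape(-1, 1)), 2))
print("TBS only: ", round(c_statistic(tbs.reshape(-1, 1)), 2))
print("BMD + TBS:", round(c_statistic(np.column_stack([bmd, tbs])), 2))
```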


Subject(s)
Bone Density , Cancellous Bone , Absorptiometry, Photon , Athletes , Cancellous Bone/diagnostic imaging , Female , Humans , Lumbar Vertebrae/diagnostic imaging , Prospective Studies , Retrospective Studies , Risk Factors
12.
PLoS One ; 15(6): e0235318, 2020.
Article in English | MEDLINE | ID: mdl-32589653

ABSTRACT

Magnitude-based inference (MBI) is a controversial statistical method that has been used in hundreds of papers in sports science despite criticism from statisticians. To better understand how this method has been applied in practice, we systematically reviewed 232 papers that used MBI. We extracted data on study design, sample size, and choice of MBI settings and parameters. Median sample size was 10 per group (interquartile range, IQR: 8-15) for multi-group studies and 14 (IQR: 10-24) for single-group studies; few studies reported a priori sample size calculations (15%). Authors predominantly applied MBI's default settings and chose "mechanistic/non-clinical" rather than "clinical" MBI even when testing clinical interventions (only 16 studies out of 232 used clinical MBI). Using these data, we can estimate the Type I error rates for the typical MBI study. Authors frequently made dichotomous claims about effects based on the MBI criterion of a "likely" effect and sometimes based on the MBI criterion of a "possible" effect. When the sample size is n = 8 to 15 per group, these inferences have Type I error rates of 12%-22% and 22%-45%, respectively. High Type I error rates were compounded by multiple testing: Authors reported results from a median of 30 tests related to outcomes; and few studies specified a primary outcome (14%). We conclude that MBI has promoted small studies, promulgated a "black box" approach to statistics, and led to numerous papers where the conclusions are not supported by the data. Amidst debates over the role of p-values and significance testing in science, MBI also provides an important natural experiment: we find no evidence that moving researchers away from p-values or null hypothesis significance testing makes them less prone to dichotomization or over-interpretation of findings.
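
The Type I error rates quoted above can be approximated by simulation. The sketch below applies one simplified mechanistic-MBI decision rule (smallest worthwhile change of 0.2 SD, "likely" threshold of 75%, 5% cutoff for the opposing direction) to repeated null experiments with 10 participants per group; with these settings the false-claim rate is broadly in line with the range reported above for "likely" claims, though exact values depend on the precise decision rules assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, swc, n_sims = 10, 0.2, 20_000
false_claims = 0

for _ in range(n_sims):
    a = rng.standard_normal(n_per_group)          # true effect = 0 (null)
    b = rng.standard_normal(n_per_group)
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
    dof = 2 * (n_per_group - 1)
    p_benefit = stats.t.sf((swc - diff) / se, dof)   # P(true effect > +SWC)
    p_harm = stats.t.cdf((-swc - diff) / se, dof)    # P(true effect < -SWC)
    # Simplified rule: claim a "likely" effect in either direction
    if (p_benefit >= 0.75 and p_harm < 0.05) or (p_harm >= 0.75 and p_benefit < 0.05):
        false_claims += 1

print(f"False 'likely' claims under the null: {false_claims / n_sims:.1%}")
```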


Subject(s)
Science/statistics & numerical data , Sports Medicine/statistics & numerical data
15.
PM R ; 12(2): 211-215, 2020 02.
Article in English | MEDLINE | ID: mdl-31850680

Subject(s)
Statistics as Topic
16.
PLoS Med ; 16(12): e1002994, 2019 12.
Article in English | MEDLINE | ID: mdl-31869328

ABSTRACT

BACKGROUND: Vaccine hesitancy, the reluctance or refusal to receive vaccination, is a growing public health problem in the United States and globally. State policies that eliminate nonmedical ("personal belief") exemptions to childhood vaccination requirements are controversial, and their effectiveness to improve vaccination coverage remains unclear given limited rigorous policy analysis. In 2016, a California policy (Senate Bill 277) eliminated nonmedical exemptions from school entry requirements. The objective of this study was to estimate the association between California's 2016 policy and changes in vaccine coverage. METHODS AND FINDINGS: We used a quasi-experimental state-level synthetic control analysis and a county-level difference-in-differences analysis to estimate the impact of the 2016 California policy on vaccination coverage and prevalence of exemptions to vaccine requirements (nonmedical and medical). We used publicly available state-level data from the US Centers for Disease Control and Prevention on coverage of measles, mumps, and rubella (MMR) vaccination, nonmedical exemption, and medical exemption in children entering kindergarten. We used county-level data individually requested from state departments of public health on overall vaccine coverage and exemptions. Based on data availability, we included state-level data for 45 states, including California, from 2011 to 2017 and county-level data for 17 states from 2010 to 2017. The prespecified primary study outcome was MMR vaccination in the state analysis and overall vaccine coverage in the county analysis. In the state-level synthetic control analysis, MMR coverage in California increased by 3.3% relative to its synthetic control in the postpolicy period (top 2 of 43 states evaluated in the placebo tests, top 5%), nonmedical exemptions decreased by 2.4% (top 2 of 43 states evaluated in the placebo tests, top 5%), and medical exemptions increased by 0.4% (top 1 of 44 states evaluated in the placebo tests, top 2%). In the county-level analysis, overall vaccination coverage increased by 4.3% (95% confidence interval [CI] 2.9%-5.8%, p < 0.001), nonmedical exemptions decreased by 3.9% (95% CI 2.4%-5.4%, p < 0.001), and medical exemptions increased by 2.4% (95% CI 2.0%-2.9%, p < 0.001). Changes in vaccination coverage across counties after the policy implementation from 2015 to 2017 ranged from -6% to 26%, with larger increases in coverage in counties with lower prepolicy vaccine coverage. Results were robust to alternative model specifications. The limitations of the study were the exclusion of a subset of US states from the analysis and the use of only 2 years of postpolicy data based on data availability. CONCLUSIONS: In this study, implementation of the California policy that eliminated nonmedical childhood vaccine exemptions was associated with an estimated increase in vaccination coverage and a reduction in nonmedical exemptions at state and county levels. The observed increase in medical exemptions was offset by the larger reduction in nonmedical exemptions. The largest increases in vaccine coverage were observed in the most "high-risk" counties, meaning those with the lowest prepolicy vaccine coverage. Our findings suggest that government policies removing nonmedical exemptions can be effective at increasing vaccination coverage.
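
As a rough illustration of the county-level analysis described above (the difference-in-differences part, not the synthetic control), the sketch below estimates a two-way fixed-effects specification on placeholder panel data, with the policy effect carried by the treated-state x post-2016 interaction and standard errors clustered by county.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i in range(60):
    county = f"c{i}"
    treated = int(i < 20)          # placeholder: first 20 counties belong to the policy state
    base = rng.normal(92, 3)       # county-specific baseline coverage (%)
    for year in range(2010, 2018):
        post = int(year >= 2016)
        effect = 4.3 if (treated and post) else 0.0   # planted illustrative policy effect
        rows.append({"county": county, "year": year, "treated": treated, "post": post,
                     "coverage": base + 0.2 * (year - 2010) + effect + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# County and year fixed effects absorb the main effects; the interaction is the DiD estimate
fit = smf.ols("coverage ~ treated:post + C(county) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]})
print(round(fit.params["treated:post"], 2))   # recovers roughly the planted 4.3-point effect
```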


Subject(s)
Health Policy/legislation & jurisprudence , Policy Making , Vaccination Coverage/legislation & jurisprudence , Vaccination/legislation & jurisprudence , Vaccines/economics , California , Child , Child, Preschool , Humans , Measles/prevention & control , Public Health/legislation & jurisprudence , Schools/statistics & numerical data , United States , Vaccination/methods
18.
PM R ; 11(6): 654-656, 2019 06.
Article in English | MEDLINE | ID: mdl-31033199
20.
JACC CardioOncol ; 1(1): 24-36, 2019 Sep.
Article in English | MEDLINE | ID: mdl-34396159

ABSTRACT

OBJECTIVES: This study quantified the change in blood pressure (BP) during antivascular endothelial growth factor (VEGF) tyrosine kinase inhibitor (TKI) therapy, compared BPs between TKIs, and analyzed change in BP during antihypertensive therapy. BACKGROUND: TKIs targeting VEGF are associated with hypertension. The absolute change in BP during anti-VEGF TKI treatment is not well characterized outside clinical trials. METHODS: A retrospective single-center study included patients with metastatic renal cell carcinoma who received anti-VEGF TKIs between 2007 and 2018. Mixed models analyzed 3,088 BPs measured at oncology clinics. RESULTS: In 228 patients (baseline systolic blood pressure [SBP] 130.2 ± 16.3 mm Hg, diastolic blood pressure [DBP] 76.8 ± 9.3 mm Hg), anti-VEGF TKIs were associated with mean increases in SBP of 8.5 mm Hg (p < 0.0001) and DBP of 6.7 mm Hg (p <0.0001). Of the anti-VEGF TKIs evaluated, axitinib was associated with the greatest BP increase, with an increase in SBP of 12.6 mm Hg (p < 0.0001) and in DBP of 10.3 mm Hg (p < 0.0001) relative to baseline. In pairwise comparisons between agents, axitinib was associated with greater SBPs than cabozantinib by 8.4 mm Hg (p = 0.004) and pazopanib by 5.1 mm Hg (p = 0.01). Subsequent anti-VEGF TKI courses were associated with small increases in DBP, but not SBP, relative to the first course. During anti-VEGF TKI therapy, calcium-channel blockers and potassium-sparing diuretic agents were associated with the largest BP reductions, with decreases in SBP of 5.6 mm Hg (p < 0.0001) and 9.9 mm Hg (p = 0.007), respectively. CONCLUSIONS: Anti-VEGF TKIs are associated with increased BP; greatest increases are observed with axitinib. Calcium-channel blockers and potassium-sparing diuretic agents were associated with the largest reductions in BP.
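
The sketch below illustrates the kind of mixed model described above: repeated clinic SBP measurements with a random intercept per patient and an on-treatment indicator. Data are simulated placeholders (the 8.5 mm Hg shift is planted to mirror the reported estimate); this is not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(228):
    baseline = rng.normal(130, 16)                # patient-specific baseline SBP
    for visit in range(12):
        on_tki = int(visit >= 4)                  # placeholder treatment period
        sbp = baseline + 8.5 * on_tki + rng.normal(0, 10)   # planted 8.5 mm Hg shift
        rows.append({"patient": pid, "on_tki": on_tki, "sbp": sbp})
df = pd.DataFrame(rows)

# Linear mixed model with a random intercept for each patient
model = smf.mixedlm("sbp ~ on_tki", data=df, groups="patient")
fit = model.fit()
print(round(fit.params["on_tki"], 1))   # estimated mean SBP change on therapy
```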
