Results 1 - 20 of 76
1.
PLoS Med ; 16(12): e1002994, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31869328

ABSTRACT

BACKGROUND: Vaccine hesitancy, the reluctance or refusal to receive vaccination, is a growing public health problem in the United States and globally. State policies that eliminate nonmedical ("personal belief") exemptions to childhood vaccination requirements are controversial, and their effectiveness in improving vaccination coverage remains unclear, given limited rigorous policy analysis. In 2016, a California policy (Senate Bill 277) eliminated nonmedical exemptions from school entry requirements. The objective of this study was to estimate the association between California's 2016 policy and changes in vaccine coverage. METHODS AND FINDINGS: We used a quasi-experimental state-level synthetic control analysis and a county-level difference-in-differences analysis to estimate the impact of the 2016 California policy on vaccination coverage and prevalence of exemptions to vaccine requirements (nonmedical and medical). We used publicly available state-level data from the US Centers for Disease Control and Prevention on coverage of measles, mumps, and rubella (MMR) vaccination, nonmedical exemption, and medical exemption in children entering kindergarten. We used county-level data individually requested from state departments of public health on overall vaccine coverage and exemptions. Based on data availability, we included state-level data for 45 states, including California, from 2011 to 2017 and county-level data for 17 states from 2010 to 2017. The prespecified primary study outcome was MMR vaccination coverage in the state analysis and overall vaccine coverage in the county analysis. In the state-level synthetic control analysis, MMR coverage in California increased by 3.3% relative to its synthetic control in the postpolicy period (top 2 of 43 states evaluated in the placebo tests, top 5%), nonmedical exemptions decreased by 2.4% (top 2 of 43 states evaluated in the placebo tests, top 5%), and medical exemptions increased by 0.4% (top 1 of 44 states evaluated in the placebo tests, top 2%). In the county-level analysis, overall vaccination coverage increased by 4.3% (95% confidence interval [CI] 2.9%-5.8%, p < 0.001), nonmedical exemptions decreased by 3.9% (95% CI 2.4%-5.4%, p < 0.001), and medical exemptions increased by 2.4% (95% CI 2.0%-2.9%, p < 0.001). Changes in vaccination coverage across counties after the policy implementation from 2015 to 2017 ranged from -6% to 26%, with larger increases in coverage in counties with lower prepolicy vaccine coverage. Results were robust to alternative model specifications. The limitations of the study were the exclusion of a subset of US states from the analysis and the use of only 2 years of postpolicy data based on data availability. CONCLUSIONS: In this study, implementation of the California policy that eliminated nonmedical childhood vaccine exemptions was associated with an estimated increase in vaccination coverage and a reduction in nonmedical exemptions at state and county levels. The observed increase in medical exemptions was offset by the larger reduction in nonmedical exemptions. The largest increases in vaccine coverage were observed in the most "high-risk" counties, meaning those with the lowest prepolicy vaccine coverage. Our findings suggest that government policies removing nonmedical exemptions can be effective at increasing vaccination coverage.
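A minimal sketch of the county-level difference-in-differences setup described above, using Python and statsmodels; the file name and column names (county, year, coverage, california) are hypothetical, and the study's actual specification is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per county-year with overall vaccine
# coverage (%), an indicator for California counties, and a
# post-policy indicator (kindergarten cohorts from 2016 onward).
df = pd.read_csv("county_coverage.csv")  # columns assumed: county, year, coverage, california
df["post"] = (df["year"] >= 2016).astype(int)

# Canonical two-by-two difference-in-differences: the coefficient on
# california:post estimates the policy-attributable change in coverage.
# Standard errors are clustered by county because counties repeat over years.
result = smf.ols("coverage ~ california * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]})
print(result.params["california:post"])
```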


Subject(s)
Health Policy/legislation & jurisprudence , Policy Making , Vaccination Coverage/legislation & jurisprudence , Vaccination/legislation & jurisprudence , Vaccines/economics , California , Child , Child, Preschool , Humans , Measles/prevention & control , Public Health/legislation & jurisprudence , Schools/statistics & numerical data , United States , Vaccination/methods
2.
Br J Sports Med ; 53(4): 237-242, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30580252

ABSTRACT

OBJECTIVES: Bone stress injuries (BSI) are common in runners of both sexes. The purpose of this study was to determine if a modified Female Athlete Triad Cumulative Risk Assessment tool would predict BSI in male distance runners. METHODS: 156 male runners at two collegiate programmes were studied using a mixed retrospective and prospective design for a total of 7 years. Point values were assigned using risk assessment categories including low energy availability, low body mass index (BMI), low bone mineral density (BMD) and prior BSI. The outcome was subsequent development of BSI. Statistical analysis used a mixed effects Poisson regression model with p<0.05 as the threshold for significance. Two regression analyses were performed: (1) baseline risk factors as the independent variable; and (2) annual change in risk factors (longitudinal data) as the independent variable. RESULTS: 42/156 runners (27%) sustained 61 BSIs over an average 1.9 years of follow-up. In the baseline risk factor model, each 1 point increase in prior BSI score was associated with a 57% increased risk for prospective BSI (p=0.0042) and each 1 point increase in cumulative risk score was associated with a 37% increase in prospective BSI risk (p=0.0079). In the longitudinal model, each 1 point increase in cumulative risk score was associated with a 27% increase in prospective BSI risk (p=0.05). BMI (rate ratio (RR)=1.91, p=0.11) and BMD (RR=1.58, p=0.19) risk scores were not associated with BSI. CONCLUSION: A modified cumulative risk assessment tool may help identify male runners at elevated risk for BSI. Identifying risk factors may guide treatment and prevention strategies.
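A simplified sketch of the kind of Poisson rate model described above, as a plain fixed-effects GLM with a log follow-up offset rather than the authors' mixed-effects specification; the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per runner, with the cumulative risk
# score, BSI count during follow-up, and years of follow-up.
df = pd.read_csv("runners.csv")  # columns assumed: bsi_count, risk_score, followup_years

# Poisson regression of BSI counts on the risk score, with
# log(follow-up time) as an offset so coefficients are log rate ratios.
# Note: the study used a mixed-effects Poisson model to handle
# clustering; this plain GLM ignores that.
model = smf.glm(
    "bsi_count ~ risk_score",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["followup_years"]),
).fit()

# exp(coefficient) is the rate ratio per 1-point score increase,
# e.g. 1.37 would correspond to the reported 37% risk increase.
print(np.exp(model.params["risk_score"]))
print(np.exp(model.conf_int().loc["risk_score"]))
```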


Subject(s)
Athletic Injuries/diagnosis , Fractures, Stress/diagnosis , Risk Assessment/methods , Running/injuries , Adolescent , Female Athlete Triad Syndrome , Humans , Male , Prospective Studies , Retrospective Studies , Risk Factors , Young Adult
3.
Cogn Behav Neurol ; 30(3): 81-89, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28926415

ABSTRACT

BACKGROUND AND OBJECTIVE: Semantic memory measures may be useful in tracking and predicting progression of Alzheimer disease. We investigated relationships among semantic memory tasks and their 1-year predictive value in women with Alzheimer disease. METHODS: We conducted secondary analyses of a randomized clinical trial of raloxifene in 42 women with late-onset mild-to-moderate Alzheimer disease. We assessed semantic memory with tests of oral confrontation naming, category fluency, semantic recognition and semantic naming, and semantic density in written narrative discourse. We measured global cognition (Alzheimer Disease Assessment Scale, cognitive subscale), dementia severity (Clinical Dementia Rating sum of boxes), and daily function (Activities of Daily Living Inventory) at baseline and 1 year. RESULTS: At baseline and 1 year, most semantic memory scores correlated highly or moderately with each other and with global cognition, dementia severity, and daily function. Semantic memory task performance at 1 year had worsened by one-third to one-half of a standard deviation. Factor analysis of baseline test scores distinguished processes in semantic and lexical retrieval (semantic recognition, semantic naming, confrontation naming) from processes in lexical search (semantic density, category fluency). The semantic-lexical retrieval factor predicted global cognition at 1 year. Considered separately, baseline confrontation naming and category fluency predicted dementia severity, while semantic recognition and a composite of semantic recognition and semantic naming predicted global cognition. No individual semantic memory test predicted daily function. CONCLUSIONS: Semantic-lexical retrieval and lexical search may represent distinct aspects of semantic memory. Semantic memory processes are sensitive to cognitive decline and dementia severity in Alzheimer disease.
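A minimal sketch of the two-factor decomposition described above, using scikit-learn; the column names are hypothetical, and the authors' exact extraction and rotation choices are not reproduced (sklearn's FactorAnalysis is unrotated by default).

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix of baseline semantic memory scores, one row
# per participant and one column per task.
scores = pd.read_csv("semantic_scores.csv")  # columns assumed: naming, fluency, recognition, sem_naming, density
X = StandardScaler().fit_transform(scores)

# Two-factor solution, mirroring the retrieval vs. search split
# described above; inspect which tasks load on which factor.
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=scores.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```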


Subject(s)
Activities of Daily Living/psychology , Alzheimer Disease/complications , Memory Disorders/etiology , Neuropsychological Tests/standards , Aged , Disease Progression , Female , Humans , Male
5.
J Cardiothorac Vasc Anesth ; 29(5): 1140-7, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26154572

ABSTRACT

OBJECTIVE: To test the hypothesis that obstructive sleep apnea (OSA) is a risk factor for development of postoperative atrial fibrillation (POAF) after cardiac surgery. DESIGN: Retrospective analysis. SETTING: Single-center university hospital. PARTICIPANTS: Five hundred forty-five patients in sinus rhythm preoperatively undergoing coronary artery bypass grafting (CABG), aortic valve replacement, mitral valve replacement/repair, or combined valve/CABG surgery from January 2008 to April 2011. INTERVENTIONS: Retrospective review of medical records. MEASUREMENTS AND MAIN RESULTS: Postoperative atrial fibrillation was defined as atrial fibrillation requiring therapeutic intervention. Of 545 cardiac surgical patients, 226 (41%) patients developed POAF. The risk was higher in the 72 OSA patients than in the 473 patients without OSA (67% v 38%, adjusted hazard ratio 1.83 [95% CI: 1.30-2.58], p<0.001). Of the 32 OSA patients who used home positive airway pressure (PAP) therapy, 18 (56%) developed POAF compared with 29 of 38 (76%) patients who did not use PAP at home (unadjusted hazard ratio 0.63 [95% CI: 0.35-1.15], p = 0.13). CONCLUSION: OSA is significantly associated with POAF in cardiac surgery patients. Further investigation is needed to determine whether the use of positive airway pressure in OSA patients reduces the risk of POAF.
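A minimal sketch of how an adjusted hazard ratio like the one above is typically estimated, using a Cox proportional-hazards model in lifelines; the abstract does not state the exact model or covariates, so the file name, columns, and adjustment set here are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient data: time to POAF or censoring (days),
# event indicator, OSA status, and an adjustment covariate.
df = pd.read_csv("cabg_patients.csv")  # columns assumed: time_days, poaf, osa, age

# Cox proportional-hazards model; exp(coef) for `osa` is the
# adjusted hazard ratio (reported above as 1.83).
cph = CoxPHFitter()
cph.fit(df[["time_days", "poaf", "osa", "age"]],
        duration_col="time_days", event_col="poaf")
cph.print_summary()
```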


Subject(s)
Atrial Fibrillation/epidemiology , Cardiac Surgical Procedures , Heart Diseases/epidemiology , Heart Diseases/surgery , Postoperative Complications/epidemiology , Sleep Apnea, Obstructive/epidemiology , Aged , Cohort Studies , Female , Humans , Male , Retrospective Studies , Risk Factors
7.
PM R ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38837318

ABSTRACT

INTRODUCTION: Although the female athlete triad (Triad) has been associated with increased risk of bone-stress injuries (BSIs), limited research among collegiate athletes has addressed the associations between the Triad and non-BSI injuries. OBJECTIVE: To elucidate the relationship between Triad and both BSI and non-BSI in female athletes. DESIGN: Retrospective cohort study. SETTING: Primary and tertiary care student athlete clinic. PARTICIPANTS: National Collegiate Athletic Association Division I female athletes at a single institution. INTERVENTION: Participants completed a pre-participation questionnaire and dual-energy x-ray absorptiometry, which was used to generate a Triad cumulative risk assessment score (Triad score). The number of overuse musculoskeletal injuries that occurred while the athletes were still competing collegiately was identified through chart review. MAIN OUTCOME MEASURE: BSI and non-BSI were treated as count variables. The association between BSI, non-BSI, and Triad score was measured using Poisson regression to calculate rate ratios. RESULTS: Of 239 athletes, 43% (n = 103) sustained at least one injury. Of those, 40% (n = 95) sustained at least one non-BSI and 10% (n = 24) sustained at least one BSI over an average follow-up of 2.5 years. After accounting for sport type (non-lean, runner, other endurance sport, or other lean advantage sport) and baseline age, we found that every additional Triad score risk point was associated with a significant 17% increase in the rate of BSI (rate ratio [RR] 1.17, 95% confidence interval [CI] 1.03-1.33; p = .016). However, Triad score was unrelated to non-BSI (RR 1.00, 95% CI 0.91-1.11; p = .99). Compared with athletes in non-lean sports (n = 108), athletes in other lean advantage sports (n = 30) had an increased rate of non-BSI (RR: 2.09, p = .004) whereas distance runners (n = 46) had increased rates of BSI (RR: 7.65, p < .001) and non-BSI (RR: 2.25, p < .001). CONCLUSIONS: Higher Triad score is associated with an increased risk of BSI but not non-BSI in collegiate athletes.
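A small arithmetic sketch using only the numbers reported above, showing how a per-point rate ratio compounds across score points; it assumes the CI is symmetric on the log scale, as is standard for Poisson rate ratios.

```python
import numpy as np

# The reported per-point rate ratio for BSI and its 95% CI.
rr, lo, hi = 1.17, 1.03, 1.33

# On the log scale, the standard error implied by the CI:
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
print(f"implied SE of log-RR: {se:.3f}")

# Rate ratios compound multiplicatively: an athlete whose Triad score
# is k points higher has an estimated BSI rate of 1.17**k times the
# reference rate, e.g. about 2.2x for a 5-point difference.
for k in (1, 3, 5):
    print(k, round(rr ** k, 2))
```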

8.
Sports Med ; 53(2): 313-325, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36208412

ABSTRACT

BACKGROUND AND OBJECTIVE: Meta-analysis and meta-regression are often highly cited and may influence practice. Unfortunately, statistical errors in meta-analyses are widespread and can lead to flawed conclusions. The purpose of this article was to review common statistical errors in meta-analyses and to document their frequency in highly cited meta-analyses from strength and conditioning research. METHODS: We identified five errors in one highly cited meta-regression from strength and conditioning research: implausible outliers; overestimated effect sizes that arise from confusing standard deviation with standard error; failure to account for correlated observations; failure to account for within-study variance; and a focus on within-group rather than between-group results. We then quantified the frequency of these errors in 20 of the most highly cited meta-analyses in the field of strength and conditioning research from the past 20 years. RESULTS: We found that 85% of the 20 most highly cited meta-analyses in strength and conditioning research contained statistical errors. Almost half (45%) contained at least one effect size that was mistakenly calculated using standard error rather than standard deviation. In several cases, this resulted in obviously wrong effect sizes, for example, effect sizes of 11 or 14 standard deviations. Additionally, 45% failed to account for correlated observations despite including numerous effect sizes from the same study and often from the same group within the same study. CONCLUSIONS: Statistical errors in meta-analysis and meta-regression are common in strength and conditioning research. We highlight five errors that authors, editors, and readers should check for when preparing or critically reviewing meta-analyses.
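A worked example of the standard-deviation/standard-error confusion described above, with hypothetical numbers, showing how dividing a mean difference by the SE inflates the standardized effect size by a factor of the square root of n.

```python
import numpy as np

# Suppose a training group improves by 5 kg on a strength test, with a
# between-subject SD of 10 kg and n = 16 (hypothetical numbers).
mean_diff, sd, n = 5.0, 10.0, 16
se = sd / np.sqrt(n)

d_correct = mean_diff / sd   # a standardized effect size uses the SD
d_wrong = mean_diff / se     # the error: dividing by the SE instead

print(f"correct d: {d_correct:.2f}")   # 0.50
print(f"inflated d: {d_wrong:.2f}")    # 2.00, i.e. sqrt(n) times too large
# With larger samples the inflation grows: at n = 400 the same data
# would yield a nonsensical d of 10, like the 11-14 SD effects noted above.
```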

9.
J Clin Epidemiol ; 155: 64-72, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36736709

ABSTRACT

OBJECTIVES: A "null field" is a scientific field where there is nothing to discover and where observed associations are thus expected to simply reflect the magnitude of bias. We aimed to characterize a null field using a known example, homeopathy (a pseudoscientific medical approach based on using highly diluted substances), as a prototype. STUDY DESIGN AND SETTING: We identified 50 randomized placebo-controlled trials of homeopathy interventions from highly cited meta-analyses. The primary outcome variable was the observed effect size in the studies. Variables related to study quality or impact were also extracted. RESULTS: The mean effect size for homeopathy was 0.36 standard deviations (Hedges' g; 95% confidence interval: 0.21, 0.51) better than placebo, which corresponds to an odds ratio of 1.94 (95% CI: 1.69, 2.23) in favor of homeopathy. 80% of studies had positive effect sizes (favoring homeopathy). Effect size was significantly correlated with citation counts from journals in the directory of open-access journals and CiteWatch. We identified common statistical errors in 25 studies. CONCLUSION: A null field like homeopathy can exhibit large effect sizes, high rates of favorable results, and high citation impact in the published scientific literature. Null fields may represent a useful negative control for the scientific process.
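A quick check of the effect-size-to-odds-ratio correspondence quoted above, using the standard logit conversion; the paper's CI was presumably derived on the OR scale, so the interval bounds need not transform exactly.

```python
import numpy as np

# Chinn (2000) logit conversion from a standardized mean difference
# to an odds ratio: ln(OR) = g * pi / sqrt(3).
def g_to_or(g):
    return np.exp(g * np.pi / np.sqrt(3))

# The reported mean Hedges' g of 0.36 maps to an OR of about 1.92,
# in line with the OR of 1.94 quoted above.
print(round(g_to_or(0.36), 2))
```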


Subject(s)
Homeopathy , Humans , Homeopathy/methods , Bias , Odds Ratio
10.
Cancer Causes Control ; 23(1): 133-40, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22045154

ABSTRACT

BACKGROUND: Sun protection is recommended for skin cancer prevention, yet little is known about the influence of sun protection on vitamin D levels. Our aim was to investigate the relationship between different types of sun protective behaviors and serum 25(OH)D levels in the general US population. METHODS: Cross-sectional, nationally representative survey of 5,920 adults aged 18-60 years in the US National Health and Nutrition Examination Survey 2003-2006. We analyzed questionnaire responses on sun protective behaviors: staying in the shade, wearing long sleeves, wearing a hat, using sunscreen and SPF level. Analyses were adjusted for multiple confounders of 25(OH)D levels and stratified by race. Our primary outcome measures were serum 25(OH)D levels (ng/ml) measured by radioimmunoassay and vitamin D deficiency, defined as 25(OH)D levels <20 ng/ml. RESULTS: Staying in the shade and wearing long sleeves were significantly associated with lower 25(OH)D levels. Subjects who reported frequent use of shade on a sunny day had 3.5 ng/ml lower 25(OH)D levels (p for trend < 0.001) compared to subjects who reported rare use. Subjects who reported frequent use of long sleeves had 2.2 ng/ml lower 25(OH)D levels (p for trend = 0.001). These associations were strongest for whites, and did not reach statistical significance among Hispanics or blacks. White participants who reported frequently staying in the shade or wearing long sleeves had double the odds of vitamin D deficiency compared with those who rarely did so. Neither wearing a hat nor using sunscreen was associated with low 25(OH)D levels or vitamin D deficiency. CONCLUSIONS: White individuals who protect themselves from the sun by seeking shade or wearing long sleeves may have lower 25(OH)D levels and be at risk for vitamin D deficiency. Frequent sunscreen use does not appear to be linked to vitamin D deficiency in this population.
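A minimal sketch of the kind of covariate-adjusted trend analysis described above; the column names are hypothetical, and this plain OLS ignores the NHANES survey weights and design that the actual analysis would account for.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: serum 25(OH)D (ng/ml), frequency of
# staying in the shade coded 0 = never ... 4 = always, plus confounders.
df = pd.read_csv("nhanes_subset.csv")  # columns assumed: vitd, shade, age, bmi, milk_intake

# Linear trend test: treating the ordered frequency as numeric gives
# the per-category change in 25(OH)D (the "p for trend" above).
trend = smf.ols("vitd ~ shade + age + bmi + milk_intake", data=df).fit()
print(trend.params["shade"], trend.pvalues["shade"])
```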


Subject(s)
Sunlight , Vitamin D Deficiency/blood , Vitamin D/blood , Adolescent , Adult , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Nutrition Surveys , Protective Clothing/statistics & numerical data , Sunscreening Agents/administration & dosage , United States/epidemiology , Vitamin D Deficiency/epidemiology , Young Adult
11.
Orthop J Sports Med ; 10(9): 23259671221123588, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36157087

ABSTRACT

Background: Bone stress injuries (BSIs) are common in athletes. Risk factors for BSI may differ by skeletal anatomy and relative contribution of trabecular-rich and cortical-rich bone. Hypothesis: We hypothesized that Female Athlete Triad (Triad) risk factors would be more strongly associated with BSIs sustained at trabecular-rich versus cortical-rich skeletal sites. Study Design: Cohort study; Level of evidence, 2. Methods: The study population comprised 321 female National Collegiate Athletic Association Division I athletes participating in 16 sports from 2008 to 2014. Triad risk factors and a Triad cumulative risk score were assessed using responses to preparticipation examination and dual energy x-ray absorptiometry to measure lumbar spine and whole-body bone mineral density (BMD). Sports-related BSIs were diagnosed by a physician and confirmed radiologically. Athletes were grouped into those sustaining a subsequent trabecular-rich BSI, those sustaining a subsequent cortical-rich BSI, and those without a BSI. Data were analyzed with multinomial logistic regression adjusted for participation in cross-country running versus other sports. Results: A total of 19 participants sustained a cortical-rich BSI (6%) and 10 sustained a trabecular-rich BSI (3%) over the course of collegiate sports participation. The Triad cumulative risk score was significantly related to both trabecular-rich and cortical-rich BSI. However, lower BMD and weight were associated with significantly greater risk for trabecular-rich than cortical-rich BSIs. For each 1-SD decrease, the odds ratios (95% CIs) for trabecular-rich versus cortical-rich BSI were 3.08 (1.25-7.56) for spine BMD; 2.38 (1.22-4.64) for whole-body BMD; and 5.26 (1.48-18.70) for weight. Taller height was a significantly better predictor of cortical-rich than trabecular-rich BSI. Conclusion: The Triad cumulative risk score was significantly associated with both trabecular-rich and cortical-rich BSI, but Triad-related risk factors appeared more strongly related to trabecular-rich BSI. In particular, low BMD and low weight were associated with significantly higher increases in the risk of trabecular-rich BSI than cortical-rich BSI. These findings suggest Triad risk factors are more common in athletes sustaining BSI in trabecular-rich than cortical-rich locations.
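A minimal sketch of the multinomial logistic regression described above, using statsmodels; the file and column names are hypothetical, with predictors standardized so coefficients are per-SD effects.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: outcome coded 0 = no BSI, 1 = cortical-rich BSI,
# 2 = trabecular-rich BSI; continuous predictors as z-scores.
df = pd.read_csv("triad_cohort.csv")  # columns assumed: bsi_group, spine_bmd_z, weight_z, xc_runner

# Multinomial logistic regression adjusted for cross-country running,
# mirroring the three-group comparison described above.
X = sm.add_constant(df[["spine_bmd_z", "weight_z", "xc_runner"]])
result = sm.MNLogit(df["bsi_group"], X).fit()

# exp(-coef) gives odds ratios per 1-SD *decrease*, the scale on which
# the ORs above are reported; the trabecular-vs-cortical contrast
# corresponds to the difference between the two category coefficients.
print(np.exp(-result.params))
```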

12.
Front Neurol ; 12: 727171, 2021.
Article in English | MEDLINE | ID: mdl-34744968

ABSTRACT

Background and Purpose: Prediction models for functional outcomes after ischemic stroke are useful for statistical analyses in clinical trials and guiding patient expectations. While there are models predicting dichotomous functional outcomes after ischemic stroke, there are no models that predict ordinal mRS outcomes. We aimed to create a model that predicts, at the time of hospital discharge, a patient's modified Rankin Scale (mRS) score on day 90 after ischemic stroke. Methods: We used data from three multi-center prospective studies: CRISP, DEFUSE 2, and DEFUSE 3 to derive and validate an ordinal logistic regression model that predicts the 90-day mRS score based on variables available during the stroke hospitalization. Forward selection was used to retain independent significant variables in the multivariable model. Results: The prediction model was derived using data on 297 stroke patients from the CRISP and DEFUSE 2 studies. National Institutes of Health Stroke Scale (NIHSS) at discharge and age were retained as significant (p < 0.001) independent predictors of the 90-day mRS score. When applied to the external validation set (DEFUSE 3, n = 160), the model accurately predicted the 90-day mRS score within one point for 78% of the patients in the validation cohort. Conclusions: A simple model using age and NIHSS score at time of discharge can predict 90-day mRS scores in patients with ischemic stroke. This model can be useful for prognostication in routine clinical care and to impute missing data in clinical trials.
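A minimal sketch of an ordinal (proportional-odds) logistic model of the full mRS scale, using statsmodels' OrderedModel with the two retained predictors; the file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical cohort file with discharge NIHSS, age, and the
# 90-day modified Rankin Scale score (0-6).
df = pd.read_csv("stroke_cohort.csv")  # columns assumed: mrs_90, nihss_discharge, age
mrs = pd.Categorical(df["mrs_90"], ordered=True)

# Proportional-odds (ordinal logistic) model of the full mRS scale,
# rather than a dichotomized good/poor outcome.
model = OrderedModel(mrs, df[["nihss_discharge", "age"]], distr="logit")
result = model.fit(method="bfgs")
print(result.summary())

# Predicted probability for each mRS level per patient; the most
# probable level can be checked for "within one point" accuracy
# against the observed score, as in the external validation above.
probs = result.predict(df[["nihss_discharge", "age"]])
```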

13.
Sports (Basel) ; 10(1), 2021 Dec 21.
Article in English | MEDLINE | ID: mdl-35050966

ABSTRACT

Sun exposure is a risk factor for skin cancer. Knowledge and behaviors around sun-protective measures are poorly described in athletes, including runners. Our primary objective was to describe sun exposure behaviors and knowledge in a population of runners. A cross-sectional online survey was administered to 697 runners to measure the frequency of seven sun protective behaviors: sunscreen use on the face or body; wearing a hat, sunglasses, or long sleeves; running in shade; and avoidance of midday running. Between 54% and 84% of runners reported that they engaged in these behaviors at least sometimes, but only 7% to 45% reported frequent use. Of 525 runners who gave a primary reason for not using sunscreen regularly, 49.0% cited forgetfulness; 17.3% cited discomfort; and only a small percentage cited maintaining a tan (6.1%) or optimizing vitamin D (5.1%). Of 689 runners who responded to a question about what factor most influences their overall sun exposure habits, 39.2% cited fear of skin cancer, 28.7% cited comfort level, and 15.8% cited fear of skin aging. In addition to the seven individual behaviors, we also asked runners how frequently they took precautions to protect against the sun overall. We explored associations between participant characteristics and the overall use of sun protection using ordinal logistic regression. Overall, sun protection was used more frequently by runners who were female, older, or had a history of skin cancer. Runners appear to recognize the importance of sun protection and the potential consequences of not using it, but report forgetfulness and discomfort as the biggest barriers to consistent use. Interventions using habit-formation strategies and self-regulation training may prove to be most useful in closing this gap between knowledge and practice.

14.
PM R ; 13(9): 945-953, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33037847

ABSTRACT

INTRODUCTION: Determinants of bone health and injury are important to identify in athletes. Bone mineral density (BMD) is commonly measured in athletes with Female Athlete Triad (Triad) risk factors; the trabecular bone score (TBS) has been proposed to predict fracture risk independent of BMD. Evaluation of TBS and spine BMD in relation to bone stress injury (BSI) risk has not been studied in female collegiate athletes. OBJECTIVE: We hypothesized that spine BMD and TBS would each independently predict BSI and that the combined measures would improve injury prediction in female collegiate athletes. We also hypothesized that each measure would be correlated with Triad risk factors. DESIGN: Retrospective cohort. SETTING: Academic institution. METHODS: Dual energy x-ray absorptiometry (DXA) of the lumbar spine was used to calculate BMD and TBS values. Chart review was used to identify BSI that occurred after the DXA measurement and to obtain Triad risk factors. We used logistic regression to examine the ability of TBS and BMD alone or in combination to predict prospective BSI. RESULTS: Among 321 athletes, 29 (9.0%) sustained a BSI after DXA. BMD and TBS were highly correlated (Pearson correlation r = 0.62, P < .0001). Spine BMD and TBS had similar ability to predict BSI; the C-statistic and 95% confidence intervals were 0.69 (0.58 to 0.81) for spine BMD versus 0.68 (0.57 to 0.79) for TBS. No improvement in discrimination was observed with combined BMD + TBS (C-statistic 0.70, 0.59 to 0.81). Both TBS and BMD predicted trabecular-rich BSI (defined as pelvis, femoral neck, and calcaneus) better than cortical-rich BSI. Both measures had similar correlations with Triad risk factors. CONCLUSION: Lower BMD and TBS values are associated with elevated risk for BSI and show similar correlations with Triad risk factors. TBS does not improve prediction of BSI. Collectively, our findings suggest that BMD may be a sufficient measure of skeletal integrity from DXA in female collegiate athletes.
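A minimal sketch of the discrimination comparison described above: for a binary outcome, the C-statistic is the AUC of the fitted logistic model. The file and column names are hypothetical, and this in-sample AUC omits the confidence intervals the study reports.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical data: prospective BSI indicator plus the two DXA measures.
df = pd.read_csv("dxa_cohort.csv")  # columns assumed: bsi, spine_bmd, tbs

# Fit each predictor alone, then combined, and compare C-statistics.
for cols in (["spine_bmd"], ["tbs"], ["spine_bmd", "tbs"]):
    model = LogisticRegression().fit(df[cols], df["bsi"])
    auc = roc_auc_score(df["bsi"], model.predict_proba(df[cols])[:, 1])
    print(cols, round(auc, 2))
```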


Subject(s)
Bone Density , Cancellous Bone , Absorptiometry, Photon , Athletes , Cancellous Bone/diagnostic imaging , Female , Humans , Lumbar Vertebrae/diagnostic imaging , Prospective Studies , Retrospective Studies , Risk Factors
15.
PLoS One ; 15(6): e0235318, 2020.
Article in English | MEDLINE | ID: mdl-32589653

ABSTRACT

Magnitude-based inference (MBI) is a controversial statistical method that has been used in hundreds of papers in sports science despite criticism from statisticians. To better understand how this method has been applied in practice, we systematically reviewed 232 papers that used MBI. We extracted data on study design, sample size, and choice of MBI settings and parameters. Median sample size was 10 per group (interquartile range, IQR: 8-15) for multi-group studies and 14 (IQR: 10-24) for single-group studies; few studies reported a priori sample size calculations (15%). Authors predominantly applied MBI's default settings and chose "mechanistic/non-clinical" rather than "clinical" MBI even when testing clinical interventions (only 16 studies out of 232 used clinical MBI). Using these data, we can estimate the Type I error rates for the typical MBI study. Authors frequently made dichotomous claims about effects based on the MBI criterion of a "likely" effect and sometimes based on the MBI criterion of a "possible" effect. When the sample size is n = 8 to 15 per group, these inferences have Type I error rates of 12%-22% and 22%-45%, respectively. High Type I error rates were compounded by multiple testing: Authors reported results from a median of 30 tests related to outcomes; and few studies specified a primary outcome (14%). We conclude that MBI has promoted small studies, promulgated a "black box" approach to statistics, and led to numerous papers where the conclusions are not supported by the data. Amidst debates over the role of p-values and significance testing in science, MBI also provides an important natural experiment: we find no evidence that moving researchers away from p-values or null hypothesis significance testing makes them less prone to dichotomization or over-interpretation of findings.
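A rough Monte Carlo sketch of the dichotomization problem described above, under simplified assumptions: a mechanistic-MBI-style rule that declares a "likely" beneficial effect when the normal-approximation probability that the true effect exceeds a smallest worthwhile change of 0.2 SD is above 75%. This is an illustration of the setup, not the paper's exact procedure, and it treats the SD as known.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mbi_false_positive_rate(n, swc=0.2, threshold=0.75, sims=20000):
    """Estimate how often a true-null two-group study is declared a
    'likely' beneficial effect under a simplified MBI-style rule."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)          # no true effect
        diff = b.mean() - a.mean()           # in SD units (SD = 1 here)
        se = np.sqrt(2.0 / n)
        # normal-approximation "probability" the true effect exceeds SWC
        if norm.sf(swc, loc=diff, scale=se) > threshold:
            hits += 1
    return hits / sims

print(mbi_false_positive_rate(10))  # roughly 0.13 with these settings
```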


Subject(s)
Science/statistics & numerical data , Sports Medicine/statistics & numerical data
16.
J Neurooncol ; 95(1): 81-85, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19396401

ABSTRACT

Studies investigating whether adults have diminished survival from medulloblastoma (MB) compared with children have yielded conflicting results. We sought to determine in a population-based registry whether adults and children with MB differ in survival, and to examine whether dissimilar use of chemotherapy might contribute to any disparity. 1,226 MB subjects were identified using the Surveillance Epidemiology and End Results (SEER-9) registry (1973-2002) and survival analysis was performed. MB was defined strictly to exclude non-cerebellar primitive neuro-ectodermal tumors. Patients were stratified by age at diagnosis: <3 years (infants), 3-17 years (children) and ≥18 years (adults). Because the SEER-9 registry lacks treatment data, a subset of 142 patients was identified using the San Francisco-Oakland SEER registry (1988-2003) and additional analyses performed. There was no significant difference in survival between children and adults with MB in either the SEER-9 (P = 0.17) or SFO (P = 0.89) cohorts but infants fared worse compared to both children (P < 0.01) and adults (P < 0.01). In the SFO sample, children and adults who received chemotherapy plus radiation therapy (XRT) did not differ in survival. Among patients treated with XRT alone, children showed increased survival (P = 0.04) compared with adults. Children and adults with MB do not differ with respect to overall survival, yet infants fare significantly worse. For children and adults with MB treated with both XRT and chemotherapy, we could not demonstrate a survival difference. Similar outcomes between adult and childhood MB may justify inclusion of adults in pediatric cooperative trials for MB.
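A minimal sketch of the kind of survival comparison described above, using Kaplan-Meier estimation and a log-rank test in lifelines; the file and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical registry extract: survival time in months, a death
# indicator, and age group at diagnosis.
df = pd.read_csv("seer_mb.csv")  # columns assumed: months, died, age_group
children = df[df["age_group"] == "child"]
adults = df[df["age_group"] == "adult"]

# Kaplan-Meier estimate for one group (repeat per group to plot curves)...
KaplanMeierFitter().fit(children["months"], children["died"], label="children")

# ...and the log-rank comparison between groups, the kind of test
# behind the P values quoted above.
test = logrank_test(children["months"], adults["months"],
                    event_observed_A=children["died"],
                    event_observed_B=adults["died"])
print(test.p_value)
```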


Subject(s)
Cerebellar Neoplasms , Community Health Planning , Medulloblastoma/classification , Medulloblastoma/mortality , Adolescent , Adult , Age Factors , Cerebellar Neoplasms/classification , Cerebellar Neoplasms/epidemiology , Cerebellar Neoplasms/mortality , Child , Child, Preschool , Female , Humans , Male , Medulloblastoma/epidemiology , Middle Aged , San Francisco/epidemiology , Survival Analysis , Young Adult
17.
J Neurosurg ; 110(4): 725-9, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19061350

ABSTRACT

OBJECT: Previous small studies disagree about which clinical risk factors influence ependymoma incidence. The authors analyzed a large, population-based cancer registry to examine the relationship of incidence to patient age, sex, race, and tumor location, and to determine incidence trends over the past 3 decades. METHODS: Data were obtained from the Surveillance, Epidemiology, and End Results (SEER-9) study, which was conducted from 1973 to 2003. Histological codes were used to define ependymomas. Age-adjusted incidence rates were compared by confidence intervals in the SEER*Stat 6.2 program. Multiplicative Poisson regression and Joinpoint analysis were used to determine annual percentage change and to look for sharp changes in incidence, respectively. RESULTS: From the SEER database, 1402 patients were identified. The incidence rate per 100,000 person-years was significantly higher in male than in female patients (males 0.227 ± 0.029, females 0.166 ± 0.03). For children, the age at diagnosis differed significantly by tumor location, with the mean age for patients with infratentorial tumors calculated as 5 ± 0.4 years; for supratentorial tumors it was 7.77 ± 0.6 years, and for spinal lesions it was 12.16 ± 0.8 years. (Values are expressed as the mean ± standard error [SE].) Adults showed no difference in the mean age of incidence by location, although most tumors in this age group were spinal. Between 1973 and 2003, the incidence increased significantly among adults but not among children, and there were no sharp changes at any single year, both before and after age adjustment. CONCLUSIONS: Males have a higher incidence of ependymoma than do females. A biological explanation remains elusive. Ependymoma occurs within the CNS at distinct locations at different ages, consistent with hypotheses postulating distinct populations of radial glial stem cells within the CNS. Ependymoma incidence appears to have increased over the past 3 decades, but only in adults.
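A minimal sketch of estimating an annual percentage change (APC) with Poisson regression, as described above; the file and column names are hypothetical, and the Joinpoint search for sharp changes is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated data: ependymoma case counts and population
# at risk per calendar year (for one age group, say adults).
df = pd.read_csv("incidence_by_year.csv")  # columns assumed: year, cases, population

# Poisson regression of counts on year with a log-population offset;
# the annual percentage change is 100 * (exp(beta_year) - 1).
model = smf.glm("cases ~ year", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["population"])).fit()
apc = 100 * (np.exp(model.params["year"]) - 1)
print(f"APC: {apc:.2f}% per year")
```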


Subject(s)
Brain Neoplasms/epidemiology , Ependymoma/epidemiology , Spinal Neoplasms/epidemiology , Adolescent , Adult , Age Factors , Child , Child, Preschool , Female , Humans , Infant , Infant, Newborn , Male , Middle Aged , SEER Program , Sex Factors , United States/epidemiology
18.
Pediatr Blood Cancer ; 52(1): 65-9, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19006249

ABSTRACT

BACKGROUND: Studies have suggested that supratentorial ependymomas have better survival than infratentorial tumors, with spinal tumors having the best prognosis, but these data have been based on small samples. Using a population-based registry of ependymomas, we analyzed how age, gender, location, race and radiotherapy influence survival in children. METHODS: We queried the Surveillance Epidemiology End Results database (SEER-17) from 1973 to 2003, strictly defining ependymomas by histology. Site codes were used to distinguish between supratentorial, infratentorial, and spinal tumors when available. Outcomes were compared by location, age, gender, race and radiotherapy, using Kaplan-Meier analysis and logrank tests. Cox regression was completed, incorporating all significant covariates from univariate analysis. RESULTS: Six hundred thirty-five children were identified, with an overall 5-year survival of 57.1% ± 2.3% (standard error, SE). Increasing age was associated with improved survival (P < 0.0001). Five-year survival by location was 59.5% ± 5.5% for supratentorial, 57.1% ± 4.1% for infratentorial, and 86.7% ± 5.2% for spinal tumors. Radiotherapy of the infratentorial tumors resulted in significantly improved survival in both univariate analysis (logrank P < 0.018) and multivariate analysis restricted to this tumor location (P = 0.033). Using multivariate analysis that incorporated all tumor locations, age (P < 0.001) and location (P = 0.020) were significant predictors for survival. CONCLUSIONS: Age and location independently influence survival in ependymoma. Spinal tumors are associated with a significantly better prognosis than both supratentorial and infratentorial tumors, and may represent a distinct biological entity. Radiotherapy appears beneficial for survival in patients with infratentorial ependymoma.


Subject(s)
Ependymoma/epidemiology , Adolescent , Age Factors , Child , Child, Preschool , Ependymoma/mortality , Ependymoma/pathology , Female , Humans , Infant , Infant, Newborn , Infratentorial Neoplasms/mortality , Male , Proportional Hazards Models , Racial Groups , Radiotherapy , Registries , Sex Factors , Spinal Neoplasms/mortality , Supratentorial Neoplasms/mortality , Survival Analysis
19.
Pediatr Blood Cancer ; 52(1): 60-4, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19006250

ABSTRACT

BACKGROUND: Males have a higher incidence of medulloblastoma (MB) than females, but the effect of gender on survival is unclear. Studies have yielded conflicting results, possibly due to small sample sizes or differences in how researchers defined MB. We aimed to determine the effect of gender on survival in MB using a large data set and strict criteria for defining MB. PROCEDURE: A sample of 1,226 subjects (763 males and 463 females) was identified from 1973 to 2002, using the Surveillance Epidemiology and End Results (SEER-9) registry. MB was strictly defined to exclude non-cerebellar embryonal tumors (primitive neuro-ectodermal tumors). Because children <3 years of age are known to have worse survival, patients were stratified by age <3 years at diagnosis (95 males, 82 females) and >3 years (668 males, 381 females). RESULTS: Overall, there was no significant difference in survival between males and females (log rank P = 0.22). However, among subjects >3 years, females had significantly greater survival than males (log rank P = 0.02). In children <3 years, there was a non-significant trend toward poorer survival in females (median survival: males 27 months, females 13 months; log rank P = 0.24). This interaction between age group and gender was statistically significant (P = 0.03). CONCLUSION: Females with MB have a survival advantage only in subjects >3 years. In children <3 years, females may even have poorer outcome. The effect of gender on survival and incidence in MB warrants additional biologic investigation, and may differ in very young children with MB.
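One standard way to test an age-by-gender survival interaction like the one reported above (P = 0.03) is a product term in a Cox model; the abstract does not specify the authors' exact test, so this lifelines sketch with hypothetical file and column names is illustrative only.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical extract: survival time in months, death indicator,
# female indicator, and an under-3-at-diagnosis flag.
df = pd.read_csv("seer_mb.csv")  # columns assumed: months, died, female, under3
df["female_x_under3"] = df["female"] * df["under3"]

# A significant coefficient on the product term corresponds to the
# reported finding that the female survival advantage is confined
# to children older than 3 years.
cph = CoxPHFitter()
cph.fit(df[["months", "died", "female", "under3", "female_x_under3"]],
        duration_col="months", event_col="died")
cph.print_summary()
```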


Subject(s)
Medulloblastoma/epidemiology , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Humans , Incidence , Infant , Infant, Newborn , Male , Medulloblastoma/mortality , Middle Aged , Registries , Sex Factors , Survival Analysis , Young Adult
20.
J Pediatr Hematol Oncol ; 31(12): 970-1, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19887963

ABSTRACT

Earlier studies have reported changes in the incidence of medulloblastoma (MB) but have conflicted, likely because of small sample size or misclassification of MB with primitive neuroectodermal tumor (PNET). The incidence of MB and PNET from 1985 to 2002 was determined from the Central Brain Tumor Registry of the United States, a large population-based cancer registry, using strict histologic and site codes. No statistically significant change in MB incidence was observed over the last 2 decades, but there was an increase in MB and PNET combined.


Subject(s)
Cerebellar Neoplasms/epidemiology , Medulloblastoma/epidemiology , Registries/statistics & numerical data , Humans , Incidence , Time Factors , United States/epidemiology