On the Reproducibility of Psychological Science.
J Am Stat Assoc; 112(517): 1-10, 2017.
Article in English | MEDLINE | ID: mdl-29861517
Investigators from a large consortium of scientists recently performed a multi-year study in which they replicated 100 psychology experiments. Although statistically significant results were reported in 97% of the original studies, statistical significance was achieved in only 36% of the replicated studies. This article presents a reanalysis of these data based on a formal statistical model that accounts for publication bias by treating outcomes from unpublished studies as missing data, while simultaneously estimating the distribution of effect sizes for those studies that tested nonnull effects. The resulting model suggests that more than 90% of tests performed in eligible psychology experiments tested negligible effects, and that publication biases based on p-values caused the observed rates of nonreproducibility. The results of this reanalysis provide a compelling argument both for increasing the threshold required for declaring scientific discoveries and for adopting statistical summaries of evidence that account for the high proportion of tested hypotheses that are false. Supplementary materials for this article are available online.
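The mechanism the abstract describes, p-value-based publication selection acting on a pool of mostly null hypotheses, can be illustrated with a small simulation. The sketch below is not the authors' formal missing-data model; it assumes a simple two-component mixture (a 90% null share, matching the abstract's estimate, with arbitrary hypothetical sample-size and effect-size parameters) and normally distributed test statistics, and it shows how selecting on p < 0.05 makes every published original study significant while unselected replications succeed far less often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): a large pool of tested
# hypotheses, of which ~90% are null, per the abstract's estimate.
n_studies = 100_000
prop_null = 0.90
n_per_arm = 30          # assumed per-group sample size
true_effect = 0.5       # assumed standardized effect for nonnull studies

# z-statistic for a two-sample comparison: null studies are centered at 0,
# nonnull studies at the noncentrality parameter d * sqrt(n/2).
is_null = rng.random(n_studies) < prop_null
ncp = np.where(is_null, 0.0, true_effect * np.sqrt(n_per_arm / 2))
z_orig = rng.normal(loc=ncp, scale=1.0)

# Publication selection on p-values: only two-sided p < 0.05 is published,
# so every published original study is "significant" by construction.
published = np.abs(z_orig) > 1.96

# Replications of the published studies use the same design but face no
# selection, so their significance rate reflects true power, not the filter.
z_rep = rng.normal(loc=ncp[published], scale=1.0)

print(f"false positives among published: {is_null[published].mean():.2f}")
print(f"replications significant:        {(np.abs(z_rep) > 1.96).mean():.2f}")
```

Under these assumed parameters, roughly half of the published findings are false positives, and the unselected replications reach significance only about a quarter to a third of the time, on the order of the 36% rate the consortium observed.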
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Study type: Prognostic_studies
Language: En
Journal: J Am Stat Assoc
Year: 2017
Document type: Article