Results 1 - 20 of 24
1.
Psychol Methods ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829357

ABSTRACT

We demonstrate that all conventional meta-analyses of correlation coefficients are biased, explain why, and offer solutions. Because the standard errors of correlation coefficients depend on the size of the coefficient, inverse-variance weighted averages will be biased even under ideal meta-analytical conditions (i.e., absence of publication bias, p-hacking, or other biases). Transformation to Fisher's z often greatly reduces these biases but does not eliminate them entirely. Although all are small-sample biases (n < 200), they will often have practical consequences in psychology, where the typical sample size of correlational studies is 86. We offer two solutions: the well-known Fisher's z-transformation and a new small-sample adjustment of Fisher's z that renders any remaining bias scientifically trivial. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
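The pooling pipeline the abstract recommends (Fisher's z with inverse-variance weights) can be sketched as follows. This is the standard textbook procedure, not the authors' new small-sample adjustment, and the function name is my own:

```python
import math

def pool_correlations_fisher_z(rs, ns):
    """Meta-analytic average of Pearson correlations via Fisher's z.

    Each r is transformed to z = atanh(r), inverse-variance weighted
    using var(z) ~= 1/(n - 3), and the pooled z is back-transformed
    with tanh.  A generic textbook sketch, not the paper's adjustment.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]          # weight = 1/var, var(z) = 1/(n - 3)
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# identical correlations pool to themselves regardless of sample sizes
print(round(pool_correlations_fisher_z([0.3, 0.3], [50, 100]), 6))  # → 0.3
```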

2.
Res Synth Methods ; 15(2): 313-325, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38342768

ABSTRACT

We demonstrate that all meta-analyses of partial correlations are biased, and yet hundreds of meta-analyses of partial correlation coefficients (PCCs) are conducted each year across economics, business, education, psychology, and medical research. To address these biases, we offer a new weighted average, UWLS+3. UWLS+3 is the unrestricted weighted least squares weighted average that adjusts the degrees of freedom used to calculate partial correlations and, by doing so, renders trivial any remaining meta-analysis bias. Our simulations also reveal that these meta-analysis biases are small-sample biases (n < 200), and a simple correction factor of (n - 2)/(n - 1), together with Fisher's z, greatly reduces them. In many applications where primary studies typically have hundreds or more observations, partial correlations can be meta-analyzed in standard ways with only negligible bias. However, in other fields in the social and medical sciences that are dominated by small samples, these meta-analysis biases are easily avoided by our proposed methods.
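The correction factor named in the abstract is simple to apply. The sketch below assumes it multiplies the partial correlation before the Fisher's z transformation; the function name and the pipeline around it are illustrative, not the authors' exact UWLS+3 estimator:

```python
import math

def corrected_pcc(pcc, n):
    """Apply the small-sample correction factor (n - 2)/(n - 1) to a
    partial correlation coefficient, per the abstract's description.
    A sketch; not the authors' exact estimator."""
    return pcc * (n - 2) / (n - 1)

r, n = 0.25, 30
r_adj = corrected_pcc(r, n)
z = math.atanh(r_adj)   # then meta-analyze on the Fisher's z scale
print(round(r_adj, 6))  # → 0.241379
```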


Subjects
Biomedical Research, Research Design, Bias, Least-Squares Analysis
3.
Res Synth Methods ; 15(4): 590-602, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38379427

ABSTRACT

Using a sample of 70,399 published p-values from 192 meta-analyses, we empirically estimate the counterfactual distribution of p-values in the absence of any biases. Comparing observed p-values with counterfactually expected p-values allows us to estimate how many p-values are published as being statistically significant when they should have been published as non-significant. We estimate the extent of selectively reported p-values to range between 57.7% and 71.9% of the significant p-values. The counterfactual p-value distribution also allows us to assess shifts of p-values along the entire distribution of published p-values, revealing that particularly very small p-values (p < 0.001) are unexpectedly abundant in the published literature. Subsample analysis suggests that the extent of selective reporting is reduced in research fields that use experimental designs, analyze microeconomics research questions, and have at least some adequately powered studies.


Subjects
Research Design, Humans, Meta-Analysis as Topic, Publication Bias, Economics, Statistical Models, Statistical Data Interpretation, Algorithms, Reproducibility of Results, Bias
4.
Res Synth Methods ; 15(3): 500-511, 2024 May.
Article in English | MEDLINE | ID: mdl-38327122

ABSTRACT

Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563), and economics (327/91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.


Subjects
Economics, Meta-Analysis as Topic, Psychology, Publication Bias, Humans, Ecology, Research Design, Selection Bias, Probability, Medicine
5.
R Soc Open Sci ; 11(2): 231486, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38384774

ABSTRACT

In their book 'Nudge: Improving Decisions About Health, Wealth and Happiness', Thaler & Sunstein (2009) argue that choice architectures are promising public policy interventions. This research programme motivated the creation of 'nudge units', government agencies which aim to apply insights from behavioural science to improve public policy. We closely examine a meta-analysis of the evidence gathered by two of the largest and most influential nudge units (DellaVigna & Linos (2022 Econometrica 90, 81-116 (doi:10.3982/ECTA18709))) and use statistical techniques to detect reporting biases. Our analysis shows evidence suggestive of selective reporting. We additionally evaluate the public pre-analysis plans from one of the two nudge units (Office of Evaluation Sciences). We identify several instances of excellent practice; however, we also find that the analysis plans and reporting often lack sufficient detail to evaluate (unintentional) reporting biases. We highlight several improvements that would enhance the effectiveness of the pre-analysis plans and reports as a means to combat reporting biases. Our findings and suggestions can further improve the evidence base for policy decisions.

6.
R Soc Open Sci ; 10(7): 230224, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37416830

ABSTRACT

Adjusting for publication bias is essential when drawing meta-analytic inferences. However, most methods that adjust for publication bias do not perform well across a range of research conditions, such as the degree of heterogeneity in effect sizes across studies. Sladekova et al. 2022 (Estimating the change in meta-analytic effect size estimates after the application of publication bias adjustment methods. Psychol. Methods) tried to circumvent this complication by selecting the methods that are most appropriate for a given set of conditions, and concluded that publication bias on average causes only minimal over-estimation of effect sizes in psychology. However, this approach suffers from a 'Catch-22' problem-to know the underlying research conditions, one needs to have adjusted for publication bias correctly, but to correctly adjust for publication bias, one needs to know the underlying research conditions. To alleviate this problem, we conduct an alternative analysis, robust Bayesian meta-analysis (RoBMA), which is not based on model-selection but on model-averaging. In RoBMA, models that predict the observed results better are given correspondingly larger weights. A RoBMA reanalysis of Sladekova et al.'s dataset reveals that more than 60% of meta-analyses in psychology notably overestimate the evidence for the presence of the meta-analytic effect and more than 50% overestimate its magnitude.

7.
Res Synth Methods ; 14(3): 515-519, 2023 May.
Article in English | MEDLINE | ID: mdl-36880162

ABSTRACT

Partial correlation coefficients are often used as effect sizes in meta-analyses and systematic reviews of multiple regression research results. There are two well-known formulas for the variance, and thereby for the standard error (SE), of partial correlation coefficients (PCC). One is considered the "correct" variance in the sense that it better reflects the variation of the sampling distribution of partial correlation coefficients. The second is used to test whether the population PCC is zero, and it reproduces the test statistics and the p-values of the original multiple regression coefficient that the PCC is meant to represent. Simulations show that the "correct" PCC variance causes random effects to be more biased than the alternative variance formula. Meta-analyses produced by this alternative formula statistically dominate those that use "correct" SEs. Meta-analysts should never use the "correct" formula for partial correlations' standard errors.
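The two variance formulas under discussion are, in their commonly cited forms with df = n - k - 1 for k regressors, as follows; this is my reconstruction, since the abstract does not spell them out:

```python
import math

def pcc_se_correct(r, df):
    """SE reflecting the sampling variance of a partial correlation:
    var = (1 - r^2)^2 / df  (the commonly cited "correct" form,
    with df = n - k - 1)."""
    return math.sqrt((1 - r**2) ** 2 / df)

def pcc_se_alternative(r, df):
    """SE that reproduces the regression t-test of the coefficient:
    var = (1 - r^2) / df, so that r / SE equals t = r*sqrt(df/(1 - r^2))."""
    return math.sqrt((1 - r**2) / df)

# the alternative SE recovers the original regression t-statistic
r, df = 0.3, 40
t_from_se = r / pcc_se_alternative(r, df)
t_direct = r * math.sqrt(df / (1 - r**2))
print(round(t_from_se, 6), round(t_direct, 6))
```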


Subjects
Bias, Meta-Analysis as Topic
8.
J Clin Epidemiol ; 157: 53-58, 2023 05.
Article in English | MEDLINE | ID: mdl-36889450

ABSTRACT

OBJECTIVES: To evaluate how well meta-analysis mean estimators represent reported medical research and establish which meta-analysis method is better using widely accepted model selection measures: Akaike information criterion (AIC) and Bayesian information criterion (BIC). STUDY DESIGN AND SETTING: We compiled 67,308 meta-analyses from the Cochrane Database of Systematic Reviews (CDSR) published between 1997 and 2020, collectively encompassing nearly 600,000 medical findings. We compared unrestricted weighted least squares (UWLS) vs. random effects (RE); fixed effect was also secondarily considered. RESULTS: The probability that a randomly selected systematic review from the CDSR would favor UWLS over RE is 79.4% (95% confidence interval [CI95%]: 79.1; 79.7). The odds ratio that a Cochrane systematic review would substantially favor UWLS over RE is 9.33 (CI95%: 8.94; 9.73) using the conventional criterion that a difference in AIC (or BIC) of two or larger represents a 'substantial' improvement. UWLS's advantage over RE is most prominent in the presence of low heterogeneity. However, UWLS also has a notable advantage in high heterogeneity research, across different sizes of meta-analyses and types of outcomes. CONCLUSION: UWLS frequently dominates RE in medical research, often substantially. Thus, the UWLS should be reported routinely in the meta-analysis of clinical trials.
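The model-selection criteria used here are the standard ones. A minimal sketch of AIC/BIC and the "difference of two or larger" rule, with hypothetical log-likelihood values:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: AIC = 2k - 2*ln(L)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: BIC = k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * log_lik

def substantially_better(ic_a, ic_b, threshold=2.0):
    """True if model A beats model B by the conventional margin of 2
    on AIC or BIC (lower is better)."""
    return (ic_b - ic_a) >= threshold

# hypothetical log-likelihoods: UWLS (1 parameter) vs RE (2 parameters)
print(substantially_better(aic(-10.0, 1), aic(-10.5, 2)))  # → True
```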


Subjects
Biomedical Research, Humans, Least-Squares Analysis, Bayes Theorem, Systematic Reviews as Topic
9.
Res Synth Methods ; 14(1): 99-116, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35869696

ABSTRACT

Publication bias is a ubiquitous threat to the validity of meta-analysis and the accumulation of scientific evidence. In order to estimate and counteract the impact of publication bias, multiple methods have been developed; however, recent simulation studies have shown the methods' performance to depend on the true data generating process, and no method consistently outperforms the others across a wide range of conditions. Unfortunately, when different methods lead to contradicting conclusions, researchers can choose those methods that lead to a desired outcome. To avoid the condition-dependent, all-or-none choice between competing methods and conflicting results, we extend robust Bayesian meta-analysis and model-average across two prominent approaches of adjusting for publication bias: (1) selection models of p-values and (2) models adjusting for small-study effects. The resulting model ensemble weights the estimates and the evidence for the absence/presence of the effect from the competing approaches with the support they receive from the data. Applications, simulations, and comparisons to preregistered, multi-lab replications demonstrate the benefits of Bayesian model-averaging of complementary publication bias adjustment methods.


Subjects
Statistical Models, Bayes Theorem, Publication Bias, Computer Simulation, Bias
11.
Psychol Methods ; 2022 May 12.
Article in English | MEDLINE | ID: mdl-35549315

ABSTRACT

We introduce a new meta-analysis estimator, the weighted and iterated least squares (WILS), that greatly reduces publication selection bias (PSB) when selective reporting for statistical significance (SSS) is present. WILS is a simple weighted average that has smaller bias and lower rates of false positives than conventional meta-analysis estimators, the unrestricted weighted least squares (UWLS), and the weighted average of the adequately powered (WAAP) when there is SSS. As a simple weighted average, it is not vulnerable to the violations of publication bias correction models' assumptions too often seen in application. WILS is based on the novel idea of allowing excess statistical significance (ESS), which is a necessary condition of SSS, to identify when and how to reduce PSB. We show in comparisons with large-scale preregistered replications and in evidence-based simulations that the remaining bias is small. The routine application of WILS in place of random effects would do much to reduce conventional meta-analysis's notable biases and high rates of false positives. (PsycInfo Database Record (c) 2023 APA, all rights reserved).

12.
Res Synth Methods ; 13(1): 88-108, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34628722

ABSTRACT

Recent, high-profile, large-scale, preregistered failures to replicate show that many highly regarded experiments are "false positives"; that is, statistically significant results of underlying null effects. Large surveys of research reveal that statistical power is often low and inadequate. When the research record includes selective reporting, publication bias, and/or questionable research practices, conventional meta-analyses are also likely to be falsely positive. At the core of research credibility lies the relation of statistical power to the rate of false positives. This study finds that high (>50%-60%) median retrospective power (MRP) is associated with credible meta-analyses and with large-scale, preregistered, multi-lab "successful" replications; that is, with replications that corroborate the effect in question. When median retrospective power is low (<50%), positive meta-analysis findings should be interpreted with great caution or discounted altogether.
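Retrospective power is computed study by study from an assumed true effect (typically the meta-analytic mean) and each study's standard error; MRP is the median of those values. A sketch of the usual two-sided z-test power formula:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def retrospective_power(effect, se, alpha_z=1.96):
    """Power of a two-sided z-test to detect `effect` given a study's SE,
    treating the meta-analytic mean as the true effect.  A sketch of the
    standard retrospective-power calculation."""
    mu = abs(effect) / se
    return (1 - normal_cdf(alpha_z - mu)) + normal_cdf(-alpha_z - mu)

# power is ~50% when the true effect just equals the critical value
print(round(retrospective_power(1.96, 1.0), 3))  # → 0.5
```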


Subjects
Retrospective Studies, Publication Bias
13.
Res Synth Methods ; 12(6): 776-795, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34196473

ABSTRACT

We introduce and evaluate three tests for publication selection bias based on excess statistical significance (ESS). The proposed tests incorporate heterogeneity explicitly into the formulas for expected and excess statistical significance. We calculate the expected proportion of statistically significant findings in the absence of selective reporting or publication bias based on each study's SE and meta-analysis estimates of the mean and variance of the true-effect distribution. A simple proportion of statistical significance test (PSST) compares the expected to the observed proportion of statistically significant findings. Alternatively, we propose a direct test of excess statistical significance (TESS). We also combine these two tests (TESSPSST). Simulations show that these ESS tests often outperform the conventional Egger test for publication selection bias and the three-parameter selection model (3PSM).
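A minimal sketch of the PSST logic described above, assuming normal true effects with mean mu and variance tau2; all symbols and function names are my notation, not the authors':

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def expected_significance(mu, tau2, se, crit=1.96):
    """Probability that a study with standard error `se` reports |t| > crit
    when true effects are N(mu, tau2) -- the expected-significance term
    the ESS tests build on (a sketch under my distributional assumption)."""
    s = math.sqrt(tau2 + se**2)
    return (1 - normal_cdf((crit * se - mu) / s)) + normal_cdf((-crit * se - mu) / s)

def psst(observed_sig, ses, mu, tau2):
    """Proportion of statistical significance test: z-statistic comparing
    the observed share of significant results to the expected share."""
    exps = [expected_significance(mu, tau2, se) for se in ses]
    p_exp = sum(exps) / len(exps)
    p_obs = observed_sig / len(ses)
    var = p_exp * (1 - p_exp) / len(ses)
    return (p_obs - p_exp) / math.sqrt(var)

# 18 of 20 significant results, when far fewer are expected, flags excess
ses = [0.5] * 20
z = psst(observed_sig=18, ses=ses, mu=0.5, tau2=0.04)
print(round(z, 2))
```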


Subjects
Statistical Models, Bias, Publication Bias, Selection Bias
14.
Psychol Bull ; 144(12): 1325-1346, 2018 12.
Article in English | MEDLINE | ID: mdl-30321017

ABSTRACT

Can recent failures to replicate psychological research be explained by typical magnitudes of statistical power, bias, or heterogeneity? A large survey of 12,065 estimated effect sizes from 200 meta-analyses and nearly 8,000 papers is used to assess these key dimensions of replicability. First, our survey finds that psychological research is, on average, afflicted with low statistical power. The median of median power across these 200 areas of research is about 36%, and only about 8% of studies have adequate power (using Cohen's 80% convention). Second, the median proportion of the observed variation among reported effect sizes attributable to heterogeneity is 74% (I2). Heterogeneity of this magnitude makes it unlikely that the typical psychological study can be closely replicated when replication is defined as study-level null hypothesis significance testing. Third, the good news is that we find only a small amount of average residual reporting bias, allaying some of the often-expressed concerns about the reach of publication bias and questionable research practices. Nonetheless, the low power and high heterogeneity that our survey finds fully explain recent difficulties in replicating highly regarded psychological studies and reveal challenges for scientific progress in psychology. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subjects
Behavioral Research/standards, Statistical Data Interpretation, Meta-Analysis as Topic, Psychology/standards, Publication Bias, Reproducibility of Results, Research Design/standards, Behavioral Research/statistics & numerical data, Humans, Psychology/statistics & numerical data, Research Design/statistics & numerical data
15.
J Clin Epidemiol ; 89: 84-91, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28365308

ABSTRACT

OBJECTIVE: To outline issues of importance to analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in analysis. STUDY DESIGN AND SETTING: We drew on studies of statistics, epidemiology, and social-science methodology to outline methods for synthesis of QE studies. The design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes were discussed. RESULTS: Biases, confounding, design complexities, and comparisons across designs offer serious challenges to syntheses of QEs. Key components of meta-analyses of QEs were identified, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls and any potential sources of bias and confounding must be modeled in analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. A statistical meta-regression model that incorporates important features of the QEs under review was presented. CONCLUSION: Meta-analyses of QEs provide particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences.


Subjects
Statistical Models, Non-Randomized Controlled Trials as Topic/methods, Non-Randomized Controlled Trials as Topic/statistics & numerical data, Humans, Meta-Analysis as Topic, Research Design
16.
Stat Med ; 36(10): 1580-1598, 2017 05 10.
Article in English | MEDLINE | ID: mdl-28127782

ABSTRACT

The central purpose of this study is to document how a sharper focus upon statistical power may reduce the impact of selective reporting bias in meta-analyses. We introduce the weighted average of the adequately powered (WAAP) as an alternative to the conventional random-effects (RE) estimator. When the results of some of the studies have been selected to be positive and statistically significant (i.e. selective reporting), our simulations show that WAAP will have smaller bias than RE at no loss to its other statistical properties. When there is no selective reporting, the difference between RE's and WAAP's statistical properties is practically negligible. Nonetheless, when selective reporting is especially severe or heterogeneity is very large, notable bias can remain in all weighted averages. The main limitation of this approach is that the majority of meta-analyses of medical research do not contain any studies with adequate power (i.e. >80%). For such areas of medical research, it remains important to document their low power, and, as we demonstrate, an alternative unrestricted weighted least squares weighted average can be used instead of WAAP. Copyright © 2017 John Wiley & Sons, Ltd.
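A sketch of the WAAP procedure as described: estimate the mean by an inverse-variance weighted average, then re-average using only the adequately powered (>= 80%) studies. Function names and the fallback behavior when no study qualifies are my assumptions:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(effect, se, crit=1.96):
    """Two-sided z-test power to detect `effect` given a study's SE."""
    mu = abs(effect) / se
    return (1 - normal_cdf(crit - mu)) + normal_cdf(-crit - mu)

def waap(effects, ses):
    """Weighted average of the adequately powered: estimate the mean by an
    inverse-variance weighted average (UWLS point estimate), then re-average
    using only studies with >= 80% power against that mean.  Falls back to
    all studies if none qualify -- the abstract's noted limitation."""
    ws = [1 / se**2 for se in ses]
    uwls = sum(w * e for w, e in zip(ws, effects)) / sum(ws)
    keep = [i for i, se in enumerate(ses) if power(uwls, se) >= 0.8]
    if not keep:
        return uwls
    return sum(ws[i] * effects[i] for i in keep) / sum(ws[i] for i in keep)

# the imprecise third study is excluded from the adequately powered average
print(round(waap([0.5, 0.52, 0.1], [0.1, 0.1, 0.5]), 4))  # → 0.51
```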


Subjects
Meta-Analysis as Topic, Publication Bias/statistics & numerical data, Biostatistics, Computer Simulation, Humans, Least-Squares Analysis, Statistical Models, Odds Ratio, Regression Analysis
17.
Res Synth Methods ; 8(1): 19-42, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27322495

ABSTRACT

Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
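An unrestricted WLS meta-regression weights each observation by 1/SE^2 without adding a random-effects variance component or rescaling the standard errors. A minimal one-moderator sketch solved from the weighted normal equations (a real analysis would also report coefficient SEs and use a full design matrix):

```python
def wls_mra(effects, ses, xs):
    """Unrestricted WLS meta-regression of effect on one moderator x,
    with weights 1/SE^2; intercept and slope solved from the 2x2
    weighted normal equations.  A minimal sketch of the estimator."""
    w = [1 / s**2 for s in ses]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, effects))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, effects))
    det = sw * swxx - swx**2
    intercept = (swxx * swy - swx * swxy) / det
    slope = (sw * swxy - swx * swy) / det
    return intercept, slope

# an exact linear relation y = 0.1 + 0.2*x is recovered for any weights
b0, b1 = wls_mra([0.3, 0.5, 0.7], [0.1, 0.2, 0.1], [1.0, 2.0, 3.0])
print(round(b0, 6), round(b1, 6))  # → 0.1 0.2
```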


Subjects
Least-Squares Analysis, Statistical Models, Regression Analysis, Algorithms, Computer Simulation, Humans, Markov Chains, Publication Bias, Publishing, Research Design, Sample Size
20.
J Clin Epidemiol ; 79: 41-45, 2016 11.
Article in English | MEDLINE | ID: mdl-27079846

ABSTRACT

OBJECTIVES: To accommodate and correct identifiable bias and risks of bias among clinical trials of nicotine replacement therapy (NRT). STUDY DESIGN AND SETTING: Meta-regression analysis of a published Cochrane Collaboration systematic review of 122 placebo-controlled clinical trials. RESULTS: Both identified risks of bias and potential publication (or reporting, or small-sample) bias are associated with an increase in the reported effectiveness of NRT. When multiple sources of bias are accommodated by meta-regression, no evidence of a practically notable or statistically significant overall increase in the rate of smoking cessation remains. Our findings are in stark contrast with the 50% to 70% increase in smoking cessation reported by the Cochrane Collaboration systematic review. CONCLUSION: After more than 100 randomized clinical trials, the overall effectiveness of NRT is in doubt. Simple, well-established meta-regression methods can test, accommodate, and correct multiple sources of bias that are often mentioned but dismissed by conventional systematic reviews.


Subjects
Smoking Cessation/methods, Smoking Cessation/statistics & numerical data, Tobacco Use Cessation Devices/statistics & numerical data, Bias, Humans, Randomized Controlled Trials as Topic, Treatment Outcome