Results 1 - 4 of 4
1.
Nat Hum Behav. 2020 Sep;4(9):917-927.
Article in English | MEDLINE | ID: mdl-32747803

ABSTRACT

Standardized classroom experiments provide evidence about how well scientific results reproduce when nearly identical methods are used. We use a sample of around 20,000 observations to test the reproducibility of behaviour in trading and ultimatum bargaining. Double-auction results are highly reproducible and are close to equilibrium predictions about prices and quantities from economic theory. Our sample also shows robust correlations between individual surplus and trading order, and autocorrelation of successive price changes, which test different theories of price dynamics. In ultimatum bargaining, the large dataset provides sufficient power to identify that equal-split offers are accepted more often and more quickly than slightly unequal offers. Our results imply a general consistency across countries and cultures in two of the most commonly used designs in experimental economics.
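
As a rough illustration of the price-dynamics statistic mentioned in the abstract above, the minimal sketch below computes the lag-1 autocorrelation of successive price changes for a single sequence of transaction prices. The data and function name are hypothetical placeholders, not the paper's analysis code.

import numpy as np

def lag1_autocorrelation(prices):
    """Lag-1 autocorrelation of successive price changes within one trading period."""
    changes = np.diff(np.asarray(prices, dtype=float))
    if len(changes) < 2:
        raise ValueError("need at least three prices to form two successive changes")
    # Pearson correlation between each price change and the next one
    return np.corrcoef(changes[:-1], changes[1:])[0, 1]

# Toy usage with made-up transaction prices from one double-auction period
prices = [104, 101, 103, 100, 102, 101, 102]
print(f"lag-1 autocorrelation of price changes: {lag1_autocorrelation(prices):.2f}")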


Subjects
Commerce/economics, Negotiating, Reaction Time/physiology, Games, Experimental, Humans, Reproducibility of Results
2.
PLoS One. 2019;14(12):e0225826.
Article in English | MEDLINE | ID: mdl-31805105

ABSTRACT

We measure how accurately replication of experimental results can be predicted by black-box statistical models. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train predictive models and study which variables drive predictable replication. The models predict binary replication with a cross-validated accuracy rate of 70% (AUC of 0.77) and estimates of relative effect sizes with a Spearman ρ of 0.38. The accuracy level is similar to market-aggregated beliefs of peer scientists [1, 2]. The predictive power is validated in a pre-registered out-of-sample test of the outcome of [3], where 71% (AUC of 0.73) of replications are predicted correctly and effect-size correlations amount to ρ = 0.25. Basic features such as the sample and effect sizes in original papers, and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The models presented in this paper are simple tools for producing cheap, prognostic replicability metrics. These models could be useful in institutionalizing the evaluation of new findings and guiding resources to those direct replications that are likely to be most informative.
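
A minimal sketch of the kind of pipeline the abstract describes: a classifier trained on simple paper-level features, evaluated with cross-validated accuracy and AUC, plus a Spearman correlation against relative effect sizes. The features, simulated data, and model choice here are hypothetical placeholders, not the study's variables or code.

import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical paper-level features: original sample size, original effect
# size, and an indicator for a two-variable interaction effect.
X = np.column_stack([
    rng.integers(20, 500, n),          # original sample size
    rng.uniform(0.05, 0.8, n),         # original effect size
    rng.integers(0, 2, n),             # 1 = interaction effect, 0 = main effect
])
y = rng.integers(0, 2, n)              # 1 = replicated, 0 = did not replicate
rel_effect = rng.uniform(0.0, 1.2, n)  # relative effect size of the replication

clf = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

print("cross-validated accuracy:", accuracy_score(y, proba > 0.5))
print("cross-validated AUC:     ", roc_auc_score(y, proba))
print("Spearman rho vs relative effect size:", spearmanr(proba, rel_effect).correlation)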


Subjects
Laboratories, Research, Social Sciences, Algorithms, Models, Statistical, ROC Curve, Regression Analysis, Reproducibility of Results
3.
Nat Hum Behav. 2018 Sep;2(9):637-644.
Article in English | MEDLINE | ID: mdl-31346273

ABSTRACT

Being able to replicate scientific findings is crucial for scientific progress [1-15]. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015 [16-36]. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
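
As a toy illustration of a Bayesian update on a replication rate, the sketch below computes a Beta-Binomial posterior for the headline result of 13 successful replications out of 21, assuming a uniform Beta(1, 1) prior. This only illustrates the general idea; the paper's actual Bayesian analysis uses a richer model that separates true positives from false positives.

from scipy.stats import beta

successes, trials = 13, 21
# Posterior for the replication rate under a uniform Beta(1, 1) prior
posterior = beta(1 + successes, 1 + (trials - successes))

print(f"posterior mean replication rate: {posterior.mean():.2f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: [{lo:.2f}, {hi:.2f}]")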


Subjects
Reproducibility of Results, Research/statistics & numerical data, Social Sciences/statistics & numerical data, Bayes Theorem, Humans, Periodicals as Topic/statistics & numerical data, Sample Size, Social Sciences/methods
4.
Science. 2016 Mar 25;351(6280):1433-6.
Article in English | MEDLINE | ID: mdl-26940865

ABSTRACT

The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all had a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
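
To make the design criterion concrete, here is a minimal sketch of a standard power calculation: the per-group sample size needed for 90% power to detect a given effect size in a two-sided two-sample t-test at the 5% level. The effect size is a hypothetical placeholder, and the actual replications used tests matched to each original study.

from statsmodels.stats.power import TTestIndPower

original_effect_size = 0.4  # hypothetical Cohen's d from an original study
# Sample size per group for 90% power at a 5% two-sided significance level
n_per_group = TTestIndPower().solve_power(
    effect_size=original_effect_size,
    power=0.90,
    alpha=0.05,
    alternative="two-sided",
)
print(f"required sample size per group: {n_per_group:.0f}")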
