Results 1 - 8 of 8
1.
Behav Brain Sci; 47: e37, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38311437

ABSTRACT

Demonstrating the limitations of the one-at-a-time approach, crowd initiatives reveal the surprisingly powerful role of analytic and design choices in shaping scientific results. At the same time, cross-cultural variability in effects is far below the levels initially expected. This highlights the value of "medium" science, leveraging diverse stimulus sets and extensive robustness checks to achieve integrative tests of competing theories.

2.
Behav Brain Sci; 46: e317, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37789543

ABSTRACT

Contradicting our earlier claims of American moral exceptionalism, recent self-replication evidence from our laboratory indicates that implicit puritanism characterizes the judgments of people across cultures. Implicit cultural evolution may lag behind explicit change, such that differences between traditional and non-traditional cultures are greater at a deliberative than an intuitive level. Not too deep down, perhaps we are all implicit puritans.


Subject(s)
Judgment, Morals, Humans, United States
3.
Behav Brain Sci; 45: e8, 2022 Feb 10.
Article in English | MEDLINE | ID: mdl-35139965

ABSTRACT

By organizing crowds of scientists to independently tackle the same research questions, we can collectively overcome the generalizability crisis. Strategies for drawing inferences from a heterogeneous set of research approaches include aggregation (for instance, meta-analyzing the effect sizes obtained by different investigators) and parsing (attempting to identify theoretically meaningful moderators that explain the variability in results).
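
As a concrete illustration of the aggregation strategy, the sketch below pools effect sizes from several teams with a standard DerSimonian-Laird random-effects model. This is a minimal sketch: the input values and names are illustrative assumptions, not data or code from the article.

    import numpy as np

    def random_effects_meta(d, v):
        """DerSimonian-Laird random-effects pooling of effect sizes.

        d: effect size estimate from each team (e.g., Cohen's d)
        v: sampling variance of each estimate
        """
        d, v = np.asarray(d, float), np.asarray(v, float)
        w = 1.0 / v                              # fixed-effect weights
        d_fixed = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - d_fixed) ** 2)       # heterogeneity statistic Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(d) - 1)) / c)  # between-team variance
        w_star = 1.0 / (v + tau2)                # random-effects weights
        pooled = np.sum(w_star * d) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Hypothetical estimates from five independent teams testing one claim
    pooled, se, tau2 = random_effects_meta(
        d=[0.30, 0.12, -0.05, 0.41, 0.18],
        v=[0.02, 0.03, 0.02, 0.04, 0.03],
    )
    print(f"pooled d = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")

A nonzero tau^2 is the signal that parsing may be worthwhile: it quantifies variability across teams beyond sampling error, which is exactly the variability that theoretically meaningful moderators would need to explain.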


Subject(s)
Crowding, Humans
4.
Perspect Psychol Sci; 16(6): 1255-1269, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33645334

ABSTRACT

Science is often perceived to be a self-correcting enterprise. In principle, the assessment of scientific claims is supposed to proceed in a cumulative fashion, with the reigning theories of the day progressively approximating truth more accurately over time. In practice, however, cumulative self-correction tends to proceed less efficiently than one might naively suppose. Far from evaluating new evidence dispassionately and impartially, individual scientists often cling stubbornly to prior findings. Here we explore the dynamics of scientific self-correction at an individual rather than collective level. In 13 written statements, researchers from diverse branches of psychology share why and how they have lost confidence in one of their own published findings. We qualitatively characterize these disclosures and explore their implications. A cross-disciplinary survey suggests that such loss-of-confidence sentiments are surprisingly common among members of the broader scientific population yet rarely become part of the public record. We argue that removing barriers to self-correction at the individual level is imperative if the scientific community as a whole is to achieve the ideal of efficient self-correction.


Subject(s)
Publications, Research Personnel, Attitude, Humans, Mental Processes, Writing
5.
Behav Brain Sci; 43: e50, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-32292136

ABSTRACT

Critical aspects of the "rationality of rationalizations" thesis are open empirical questions. These include the frequency with which past behavior determines attitudes (as opposed to attitudes causing future behaviors), the extent to which post hoc justifications take on a life of their own and shape future actions, and whether rationalizers experience benefits in well-being, social influence, performance, or other desirable outcomes.


Subject(s)
Rationalization, Sexual Behavior, Attitude, Prevalence
6.
Psychol Bull; 146(5): 451-479, 2020 May.
Article in English | MEDLINE | ID: mdl-31944796

ABSTRACT

To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
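
The d values reported above are standardized mean differences. As a point of reference, here is a minimal sketch of how such an effect size is computed for two independent groups; the data and variable names are hypothetical, not taken from the study.

    import numpy as np

    def cohens_d(x, y):
        """Standardized mean difference between two independent groups."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        # Pooled standard deviation from the two unbiased sample variances
        sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
        return (x.mean() - y.mean()) / sp

    # Hypothetical ratings from the two conditions of one study version
    treatment = [5.2, 4.8, 6.1, 5.5, 4.9]
    control = [4.1, 4.6, 5.0, 3.8, 4.4]
    print(f"d = {cohens_d(treatment, control):.2f}")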


Subject(s)
Crowdsourcing, Psychology/methods, Research Design, Adult, Humans, Random Allocation
7.
Behav Brain Sci; 41: e153, 2018 Jan.
Article in English | MEDLINE | ID: mdl-31064608

ABSTRACT

The widespread replication of research findings in independent laboratories prior to publication is suggested as a complement to traditional replication approaches. The pre-publication independent replication approach further addresses three key concerns raised by replication skeptics: it systematically takes context into account, reduces reputational costs for original authors and replicators, and increases the theoretical value of failed replications.


Subject(s)
Research, Reproducibility of Results
8.
Sci Data; 3: 160082, 2016 Oct 11.
Article in English | MEDLINE | ID: mdl-27727246

ABSTRACT

We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory's research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes data not only from the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.
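
A corpus combining original and replication data lends itself to a long-format layout in which each observation is tagged with its effect, lab, and origin. The schema below is an illustrative assumption; the dataset's actual variable names and effects are documented in the data descriptor itself.

    import pandas as pd

    # Hypothetical long-format rows; every column name here is illustrative.
    corpus = pd.DataFrame({
        "effect_id":   ["effect_01", "effect_01", "effect_01", "effect_02"],
        "lab":         ["original", "replication_07", "replication_12", "original"],
        "is_original": [True, False, False, True],
        "condition":   ["vignette_a", "vignette_a", "vignette_b", "vignette_a"],
        "rating":      [5.1, 4.8, 3.9, 2.6],
    })

    # Mean rating per effect, lab, and condition, so each replication can be
    # compared directly against the original study's result
    print(corpus.groupby(["effect_id", "lab", "condition"])["rating"].mean())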


Subject(s)
Morals, Reproducibility of Results, Humans