ABSTRACT
Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration and methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (P < 0.05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect sizes and sample sizes. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% of that in the original study. This high replication rate justifies confidence in rigour-enhancing methods to increase the replicability of new discoveries.
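The "maximum expected replicability" benchmark is, in essence, statistical power: the probability that a test of a given sample size detects an effect of the originally observed size. A minimal sketch, assuming a two-sided, two-sample t-test (the paper's designs varied; this is illustrative only):

```python
# Minimal sketch (not the paper's own analysis): "expected replicability"
# as the power of a two-sided, two-sample t-test, given the originally
# observed effect size and the replication's per-group sample size.
from statsmodels.stats.power import TTestIndPower

def expected_replicability(effect_size: float, n_per_group: int,
                           alpha: float = 0.05) -> float:
    """Probability that a replication of this size reaches P < alpha,
    assuming the originally observed effect size is the true one."""
    return TTestIndPower().power(effect_size=effect_size,
                                 nobs1=n_per_group,
                                 ratio=1.0, alpha=alpha)

# e.g. d = 0.5 with 100 participants per group replicates ~94% of the time
print(expected_replicability(0.5, 100))
```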
Subject(s)
Research Design, Social Behavior, Humans, Prospective Studies, Reproducibility of Results, Sample Size
ABSTRACT
The COVID-19 pandemic (and its aftermath) highlights a critical need to communicate health information effectively to the global public. Given that subtle differences in information framing can have meaningful effects on behavior, behavioral science research highlights a pressing question: Is it more effective to frame COVID-19 health messages in terms of potential losses (e.g., "If you do not practice these steps, you can endanger yourself and others") or potential gains (e.g., "If you practice these steps, you can protect yourself and others")? Collecting data in 48 languages from 15,929 participants in 84 countries, we experimentally tested the effects of message framing on COVID-19-related judgments, intentions, and feelings. Loss- (vs. gain-) framed messages increased self-reported anxiety among participants cross-nationally with little-to-no impact on policy attitudes, behavioral intentions, or information seeking relevant to pandemic risks. These results were consistent across 84 countries, three variations of the message framing wording, and 560 data processing and analytic choices. Thus, results provide an empirical answer to a global communication question and highlight the emotional toll of loss-framed messages. Critically, this work demonstrates the importance of considering unintended affective consequences when evaluating nudge-style interventions.
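The "560 data processing and analytic choices" describe a multiverse-style robustness check: the same comparison is re-run under every defensible combination of processing decisions. A toy sketch of that logic on simulated data (the variable names, exclusion rules, and cutoffs below are invented for illustration, not taken from the study):

```python
# Toy multiverse sketch on simulated data. Variable names, exclusion
# rules, and cutoffs are invented for illustration; the study's actual
# pipeline spanned 560 specifications.
import itertools
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
data = pd.DataFrame({
    "frame": rng.choice(["loss", "gain"], n),
    "age": rng.integers(18, 80, n),
    "rt_seconds": rng.exponential(5.0, n),
    "anxiety": rng.normal(0.0, 1.0, n),
})
# Simulate the reported pattern: loss framing nudges anxiety upward.
data.loc[data["frame"] == "loss", "anxiety"] += 0.15

def framing_effect(df, exclude_fast, min_age):
    """One specification: Welch t-test of anxiety, loss vs. gain frame."""
    d = df[df["age"] >= min_age]
    if exclude_fast:
        d = d[d["rt_seconds"] >= 2.0]  # drop implausibly fast responders
    loss = d.loc[d["frame"] == "loss", "anxiety"]
    gain = d.loc[d["frame"] == "gain", "anxiety"]
    return stats.ttest_ind(loss, gain, equal_var=False)

# Re-run the same comparison under every combination of choices.
for exclude_fast, min_age in itertools.product([True, False], [18, 25]):
    res = framing_effect(data, exclude_fast, min_age)
    print(f"exclude_fast={exclude_fast} min_age={min_age} "
          f"t={res.statistic:.2f} p={res.pvalue:.3f}")
```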
ABSTRACT
Question-and-answer (Q&A) sessions following research talks provide key opportunities for the audience to engage in scientific discourse. Gender inequities persist in academia, where women are underrepresented as faculty and their contributions are less valued than men's. In the present research, we tested how this gender difference translates to face-to-face Q&A-session participation and its psychological correlates. Across two studies examining participation in three conferences, men disproportionately participated in Q&A sessions in a live, recorded conference (N = 189 Q&A interactions), and women were less comfortable participating in Q&A sessions and more likely to fear backlash for their participation (N = 234 conference attendees). Additionally, women were more likely to hold back questions because of anxiety, whereas men were more likely to hold back questions to make space for others to participate. To the extent that men engage more than women in Q&A sessions, men may continue to have more influence over the direction of science.
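Disproportionate participation of the kind reported here is commonly checked with a binomial test against the audience's gender composition. A hypothetical sketch (the question count by men and the 60% male audience share below are assumptions, not figures from the paper):

```python
# Hypothetical sketch: is the male share of questions higher than the
# male share of the audience? The counts below are assumptions, not
# figures reported in the paper.
from scipy.stats import binomtest

questions_total = 189       # Q&A interactions (from the abstract)
questions_by_men = 130      # assumed count, for illustration only
male_audience_share = 0.60  # assumed base rate

result = binomtest(questions_by_men, questions_total,
                   p=male_audience_share, alternative="greater")
print(f"p = {result.pvalue:.4f}")
```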
Subject(s)
Men, Male, Humans, Female, Sex Factors
ABSTRACT
Transparency of research methods is vital to science, though incentives are variable, with only some journals and funders adopting transparency policies. Clearinghouses are also important stakeholders; however, to date none have implemented formal procedures that facilitate transparent research. Using data from the longest standing clearinghouse, we examine transparency practices for preventive interventions to explore the role of online clearinghouses in incentivizing researchers to make their research more transparent. We conducted a descriptive analysis of 88 evaluation reports reviewed in 2018-2019 by Blueprints for Healthy Youth Development, when the clearinghouse began checking for trial registrations, and expanded on these efforts by applying broader transparency standards to interventions eligible for an endorsement on the Blueprints website during the study period. Reports were recent, with 84% published between 2010 and 2019. We found that few reports had data, code, or research materials that were publicly available. Meanwhile, 40% had protocols that were registered, but only 8% were registered prospectively, while one-quarter were registered before conducting analyses. About one-third included details in a registered protocol describing the treatment contrast and planned inclusions, and less than 5% had a registered statistical analysis plan (e.g., planned analytical methods, pre-specified covariates). Confirmatory research was distinguished from exploratory work in roughly 40% of reports. Reports published more recently (after 2015) had higher rates of transparency. Preventive intervention research needs to be more transparent. Since clearinghouses rely on robust findings to make well-informed decisions and researchers are incentivized to meet clearinghouse standards, clearinghouses should consider policies that encourage transparency to improve the credibility of evidence-based interventions.
Subject(s)
Research Design, Research Report, Adolescent, Humans
ABSTRACT
To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
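Aggregating the teams' divergent estimates calls for a random-effects meta-analysis, which separates sampling noise from genuine between-team variability. A minimal DerSimonian-Laird sketch on invented effect sizes spanning the reported range (the paper's actual meta-analytic model may differ):

```python
# Minimal DerSimonian-Laird random-effects meta-analysis. Effect sizes
# below are invented to span the reported range; they are not the
# paper's data.
import numpy as np

def random_effects_meta(d, v):
    """Pool effect sizes d with within-study variances v."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

ds = [-0.37, -0.10, 0.05, 0.15, 0.26]  # one estimate per design team
vs = [0.02] * 5
pooled, se, tau2 = random_effects_meta(ds, vs)
print(f"pooled d = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```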
Subject(s)
Crowdsourcing, Psychology/methods, Research Design, Adult, Humans, Random Allocation
ABSTRACT
What kind of life do people want? In psychology, a good life has typically been conceptualized in terms of either hedonic or eudaimonic well-being. We propose that psychological richness is a further, neglected aspect of what people consider a good life. In study 1 (a 9-nation cross-cultural study), we asked participants whether they ideally wanted a happy, a meaningful, or a psychologically rich life. Roughly 7 to 17% of participants chose the psychologically rich life. In study 2, we asked 1611 Americans and 680 Koreans what they regret most in their lives, and then whether, had they been able to undo or reverse the regretful event, their lives would have been happier, more meaningful, or psychologically richer as a result. Roughly 28% of Americans and 35% of Koreans reported their lives would have been psychologically richer. Together, this work provides a foundation for the study of psychological richness as another dimension of a good life.
ABSTRACT
Most scientific research is conducted by small teams of investigators who together formulate hypotheses, collect data, conduct analyses, and report novel findings. These teams operate independently as vertically integrated silos. Here we argue that scientific research that is horizontally distributed can provide substantial complementary value, aiming to maximize available resources, promote inclusiveness and transparency, and increase rigor and reliability. This alternative approach enables researchers to tackle ambitious projects that would not be possible under the standard model. Crowdsourced scientific initiatives vary in the degree of communication between project members from largely independent work curated by a coordination team to crowd collaboration on shared activities. The potential benefits and challenges of large-scale collaboration span the entire research process: ideation, study design, data collection, data analysis, reporting, and peer review. Complementing traditional small science with crowdsourced approaches can accelerate the progress of science and improve the quality of scientific research.
Subject(s)
Crowdsourcing/methods, Interprofessional Relations, Science/methods, Utopias, Cooperative Behavior, Data Analysis, Data Collection/methods, Humans, Peer Review, Publishing, Research Design, Writing
ABSTRACT
Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures' effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior.
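Network meta-analysis lets procedures that were never compared head-to-head be ranked through indirect comparisons along shared comparators. A toy contrast-based, fixed-effect sketch (procedure names, effects, and variances are illustrative; the paper fit a far larger network with a full NMA model):

```python
# Toy contrast-based, fixed-effect network meta-analysis. Procedure
# names, effects, and variances are illustrative, not the paper's data.
import numpy as np

# Each study: (procedure, comparator, observed change in the implicit
# measure relative to the comparator, sampling variance).
studies = [
    ("associative", "control", -0.28, 0.01),
    ("goal",        "control", -0.22, 0.01),
    ("threat",      "control", -0.05, 0.01),
    ("associative", "goal",    -0.07, 0.02),  # indirect link in the network
]
arms = ["associative", "goal", "threat"]  # "control" is the reference

X = np.zeros((len(studies), len(arms)))
y = np.zeros(len(studies))
w = np.zeros(len(studies))
for i, (t, c, diff, var) in enumerate(studies):
    if t in arms:
        X[i, arms.index(t)] += 1.0
    if c in arms:
        X[i, arms.index(c)] -= 1.0
    y[i], w[i] = diff, 1.0 / var

# Weighted least squares recovers each procedure's effect vs. control,
# combining direct and indirect evidence.
beta = np.linalg.solve((X.T * w) @ X, X.T @ (w * y))
print(dict(zip(arms, beta.round(3))))
```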
Subject(s)
Network Meta-Analysis, Psychological Tests, Social Psychology, Social Perception, Humans
ABSTRACT
Progress in science relies in part on generating hypotheses with existing observations and testing hypotheses with new observations. This distinction between postdiction and prediction is appreciated conceptually but is not respected in practice. Mistaking the generation of postdictions for the testing of predictions reduces the credibility of research findings. However, ordinary biases in human reasoning, such as hindsight bias, make it hard to avoid this mistake. An effective solution is to define the research questions and analysis plan before observing the research outcomes, a process called preregistration. Preregistration distinguishes analyses and outcomes that result from predictions from those that result from postdictions. A variety of practical strategies are available to make the best possible use of preregistration in circumstances that fall short of the ideal application, such as when the data are preexisting. Services are now available for preregistration across all disciplines, facilitating a rapid increase in the practice. Widespread adoption of preregistration will sharpen the distinction between hypothesis generation and hypothesis testing and will improve the credibility of research findings.
Subject(s)
Research/standards, Science/standards, Humans, Laboratory Personnel/standards, Predictive Value of Tests, Workforce
ABSTRACT
Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA's mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
ABSTRACT
Replication is vital for increasing the precision and accuracy of scientific claims. However, when replications "succeed" or "fail," they can have reputational consequences for the claim's originators. Surveys of United States adults (N = 4,786), undergraduates (N = 428), and researchers (N = 313) showed that reputational assessments of scientists were based more on how they pursue knowledge and respond to replication evidence than on whether their initial results were true. When comparing one scientist who produced boring but certain results with another who produced exciting but uncertain results, opinion favored the former, despite researchers' belief that the latter would be more rewarded. Considering idealized views of scientific practices offers an opportunity to address incentives so as to reward both innovation and verification.
Subject(s)
Researchers, Science/statistics & numerical data, Humans, Inventions, Motivation, Public Opinion, Reproducibility of Results, Researchers/psychology, Researchers/statistics & numerical data, Reward, United States
ABSTRACT
The social world is stratified. Social hierarchies are known but often disavowed as anachronisms or unjust. Nonetheless, hierarchies may persist in social memory. In three studies (total N > 200,000), we found evidence of social hierarchies in implicit evaluation by race, religion, and age. Participants implicitly evaluated their own racial group most positively and the remaining racial groups in accordance with the following hierarchy: Whites > Asians > Blacks > Hispanics. Similarly, participants implicitly evaluated their own religion most positively and the remaining religions in accordance with the following hierarchy: Christianity > Judaism > Hinduism or Buddhism > Islam. In a final study, participants of all ages implicitly evaluated age groups following this rule: children > young adults > middle-aged adults > older adults. These results suggest that the rules of social evaluation are pervasively embedded in culture and mind.
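Implicit evaluations of this kind are typically measured with the Implicit Association Test and scored with the conventional D measure, the standardized latency difference between compatible and incompatible pairings. A minimal sketch of that scoring, assuming IAT-style reaction-time data (the code is illustrative, not the paper's):

```python
# Minimal sketch of the conventional IAT D score, assuming IAT-style
# reaction-time data; illustrative only, not the paper's code.
import numpy as np

def iat_d_score(compatible_rts, incompatible_rts):
    """D = (mean incompatible RT - mean compatible RT) / pooled SD.
    Positive D indicates slower responding in the incompatible block."""
    comp = np.asarray(compatible_rts, float)
    incomp = np.asarray(incompatible_rts, float)
    pooled_sd = np.concatenate([comp, incomp]).std(ddof=1)
    return (incomp.mean() - comp.mean()) / pooled_sd

# Illustrative latencies in milliseconds.
print(iat_d_score([650, 700, 720, 680], [820, 880, 900, 840]))
```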