ABSTRACT
How low is the ideal first offer? Prior to any negotiation, decision-makers must balance a crucial tradeoff between two opposing effects. While lower first offers benefit buyers by anchoring the price in their favor, an overly ambitious offer increases the impasse risk, thus potentially precluding an agreement altogether. Past research with simulated laboratory or classroom exercises has demonstrated either a first offer's anchoring benefits or its impasse-risk detriments, while largely ignoring the other effect. In short, there is no empirical answer to the conundrum of how low an ideal first offer should be. Our results from over 26 million incentivized real-world negotiations on eBay document (a) a linear anchoring effect of buyer offers on sale price, (b) a nonlinear, quartic effect on impasse risk, and (c) specific offer values with particularly low impasse risks but high anchoring benefits. Integrating these findings suggests that the ideal buyer offer lies at 80% of the seller's list price across all products, although this value ranges from 33% to 95% depending on the type of product, demand, and buyers' weighting of price versus impasse risk. We empirically amend the well-known midpoint bias, the assumption that buyer and seller eventually meet in the middle of their opening offers, and find evidence for a "buyer bias." Product demand moderates the (non)linear effects, the ideal buyer offer, and the buyer bias. Finally, we apply machine-learning analyses to predict impasses and present a website with customizable first-offer advice configured to different products, prices, and buyers' risk preferences.
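As a rough illustration of the price-versus-impasse tradeoff described above, the sketch below combines an assumed linear anchoring model with an assumed quartic impasse-risk curve and grid-searches for the first offer that maximizes a buyer's expected surplus. Every function and coefficient is an illustrative placeholder, not an estimate from the paper.

```python
import numpy as np

def expected_sale_price(offer_ratio):
    # Assumed linear anchoring: lower first offers pull the final price
    # down (both quantities are fractions of the seller's list price).
    return 0.55 + 0.40 * offer_ratio  # placeholder intercept and slope

def impasse_risk(offer_ratio, risk_weight=1.0):
    # Assumed quartic impasse curve: risk climbs steeply as offers become
    # more aggressive; risk_weight mimics how heavily a buyer weights
    # impasse risk against price.
    return min(1.0, risk_weight * 30.0 * (1.0 - offer_ratio) ** 4)

def expected_surplus(offer_ratio, valuation=1.0):
    # Expected buyer surplus: (valuation - price) times deal probability.
    price = expected_sale_price(offer_ratio)
    return (valuation - price) * (1.0 - impasse_risk(offer_ratio))

# Grid search over first offers from 30% to 100% of the list price.
grid = np.linspace(0.30, 1.00, 141)
best = max(grid, key=expected_surplus)
print(f"ideal first offer under these placeholders: {best:.0%} of list price")
```

With different placeholder curves or a larger risk_weight, the optimum shifts, which mirrors the paper's point that the ideal offer depends on product, demand, and the buyer's weighting of price versus impasse risk.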
Subject(s)
Commerce, Negotiating
ABSTRACT
By organizing crowds of scientists to independently tackle the same research questions, we can collectively overcome the generalizability crisis. Strategies for drawing inferences from a heterogeneous set of research approaches include aggregation (e.g., meta-analyzing the effect sizes obtained by different investigators) and parsing (attempting to identify theoretically meaningful moderators that explain the variability in results).
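A minimal numerical sketch of the two strategies, using invented effect sizes from hypothetical independent teams (the moderator and all values are made up for illustration):

```python
import numpy as np

effects = np.array([0.42, 0.10, 0.35, -0.05, 0.28])   # per-team Cohen's d
variances = np.array([0.02, 0.03, 0.02, 0.04, 0.03])  # sampling variances
lab_setting = np.array([1, 0, 1, 0, 1])               # hypothetical moderator

# Aggregation: fixed-effect inverse-variance meta-analysis.
weights = 1.0 / variances
pooled_d = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled d = {pooled_d:.2f} (SE = {pooled_se:.2f})")

# Parsing: does a study-level moderator explain the variability?
for label, mask in [("lab", lab_setting == 1), ("online", lab_setting == 0)]:
    w = weights[mask]
    print(f"{label}: d = {np.sum(w * effects[mask]) / np.sum(w):.2f}")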
Subject(s)
Crowding, Humans
ABSTRACT
The widespread replication of research findings in independent laboratories prior to publication is suggested as a complement to traditional replication approaches. The pre-publication independent replication approach further addresses three key concerns from replication skeptics by systematically taking context into account, reducing reputational costs for original authors and replicators, and increasing the theoretical value of failed replications.
Subject(s)
Research, Reproducibility of Results
ABSTRACT
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.
ABSTRACT
Four validity types evaluate the approximate truth of inferences communicated by primary research. However, current validity frameworks ignore the truthfulness of empirical inferences that are central to research-problem statements. Problem statements contrast a review of past research with other knowledge that extends, contradicts, or calls into question specific features of past research. Authors communicate empirical inferences, or quantitative judgments, about the frequency (e.g., "few," "most") and variability (e.g., "on the one hand," "on the other hand") in their reviews of existing theories, measures, samples, or results. We code a random sample of primary research articles and show that 83% of quantitative judgments in our sample are vague and do not have a transparent origin, making it difficult to assess their validity. We review validity threats of current practices. We propose that documenting the literature search, reporting how the search was coded, and quantifying the search results facilitate more precise judgments and make their origin transparent. This practice enables research questions that are more closely tied to the existing body of knowledge and allows for more informed evaluations of the contribution of primary research articles, their design choices, and how they advance knowledge. We discuss potential limitations of our proposed framework.
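For instance, a documented and coded search lets a vague quantifier such as "few studies" be replaced by a transparent quantity. A toy sketch, where the records and the coded feature are entirely hypothetical:

```python
# Hypothetical coded literature search: each record notes whether a
# study used incentivized (vs. hypothetical) negotiation tasks.
coded_search = [
    {"study": "A", "incentivized": False},
    {"study": "B", "incentivized": False},
    {"study": "C", "incentivized": True},
    {"study": "D", "incentivized": False},
    {"study": "E", "incentivized": False},
]
k = sum(record["incentivized"] for record in coded_search)
n = len(coded_search)
# A precise, transparent judgment instead of a vague "few":
print(f"{k} of {n} coded studies ({k / n:.0%}) used incentivized tasks")
```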
Subject(s)
Research Design, Humans
ABSTRACT
Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
Subject(s)
Consensus, Data Analysis, Datasets as Topic, Research
ABSTRACT
To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
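The variance-decomposition logic behind the skill-versus-hypothesis finding can be sketched with a two-way layout of simulated effect sizes; all values below are invented, not the study's data:

```python
import numpy as np

# Simulated 15-teams x 5-hypotheses grid of effect sizes: hypotheses
# differ systematically, while teams contribute only unsystematic noise.
rng = np.random.default_rng(0)
hypothesis_means = np.array([0.30, 0.05, -0.10, 0.20, 0.00])  # invented
d = hypothesis_means + rng.normal(0.0, 0.15, size=(15, 5))

# Method-of-moments shares from a two-way ANOVA-style decomposition.
grand = d.mean()
ss_hypothesis = 15 * np.sum((d.mean(axis=0) - grand) ** 2)
ss_team = 5 * np.sum((d.mean(axis=1) - grand) ** 2)
ss_total = np.sum((d - grand) ** 2)
print(f"share due to hypothesis: {ss_hypothesis / ss_total:.0%}")
print(f"share due to team:       {ss_team / ss_total:.0%}")
```

Because the simulated teams add only noise, the team share comes out near zero while the hypothesis share dominates, mirroring the pattern the abstract reports.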
Subject(s)
Crowdsourcing, Psychology/methods, Research Design, Adult, Humans, Random Allocation
ABSTRACT
This research demonstrates that people can act more powerfully without having power. Researchers and practitioners advise people to obtain alternatives in social exchange relationships to enhance their power. However, alternatives are not always readily available, often forcing people to interact without having much power. Building on research suggesting that subjective power and objective outcomes are disconnected and that mental simulation can improve aspirations, we show that the mental imagery of a strong alternative can provide some of the benefits that real alternatives provide. We tested this hypothesis in one context of social exchange, negotiations, and demonstrate that imagining strong alternatives (vs. not) causes powerless individuals to negotiate more ambitiously. Negotiators reached more profitable agreements when they had a stronger tendency to simulate alternatives (Study 1) or when they were instructed to simulate an alternative (Studies 3-6). Mediation analyses suggest that mental simulation enhanced performance because it boosted negotiators' aspirations and subsequent first offers (Studies 2-6), but only when the simulated alternative was attractive (Study 5). We used various negotiation contexts, which also allowed us to identify important boundary conditions of mental simulation in interdependent settings: mental simulation no longer helped when negotiators did not make the first offer or when their opponents simultaneously engaged in mental simulation (Study 6), and it even backfired in settings where negotiators' positions were difficult to reconcile (Study 7). An internal meta-analysis of the file drawer produces conservative effect-size estimates and demonstrates the robustness of the effect. We contribute to research on social power, negotiations, and mental simulation.
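The indirect-effect logic of the mediation claim (simulation -> aspirations -> performance) can be illustrated with a bootstrap sketch on simulated placeholder data; the path coefficients and sample below are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
simulate = rng.integers(0, 2, n).astype(float)     # 0 = control, 1 = imagery
aspiration = 0.5 * simulate + rng.normal(0, 1, n)  # invented path a
profit = 0.4 * aspiration + rng.normal(0, 1, n)    # invented path b

def indirect_effect(idx):
    # a*b indirect effect from OLS fits on one bootstrap resample.
    x, m, y = simulate[idx], aspiration[idx], profit[idx]
    a = np.polyfit(x, m, 1)[0]                        # path a: X -> M
    design = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # path b: M -> Y | X
    return a * b

boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{lo:.2f}, {hi:.2f}]")
```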
Subject(s)
Psychological Aspirations, Choice Behavior, Employment, Goals, Imagination, Negotiating/psychology, Psychological Power, Adult, Emotions, Female, Humans, Job Application, Male, Theoretical Models
ABSTRACT
Researchers agree that replicability and reproducibility are key aspects of science. A collection of Data Descriptors published in Scientific Data presents data obtained in the process of attempting to replicate previously published research. These new replication data describe published and unpublished projects. The different papers in this collection highlight the many ways that scientific replications can be conducted, and they reveal the benefits and challenges of crucial replication research. The organizers of this collection encourage scientists to reuse the data contained in the collection for their own work, and also believe that these replication examples can serve as educational resources for students, early-career researchers, and experienced scientists alike who are interested in learning more about the process of replication.
Subject(s)
Data Collection, Researchers, Humans, Reproducibility of Results
ABSTRACT
We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory's research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes the data from not only the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.