Results 1 - 6 of 6
1.
Cortex ; 171: 330-346, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38070388

ABSTRACT

Replication of published results is crucial for ensuring the robustness and self-correction of research, yet replications are scarce in many fields. Replicating researchers will therefore often have to decide which of several relevant candidates to target for replication. Formal strategies for efficient study selection have been proposed, but none have been explored for practical feasibility - a prerequisite for validation. Here we move one step closer to efficient replication study selection by exploring the feasibility of a particular selection strategy that estimates replication value as a function of citation impact and sample size (Isager, van 't Veer, & Lakens, 2021). We tested our strategy on a sample of fMRI studies in social neuroscience. We first report our efforts to generate a representative candidate set of replication targets. We then explore the feasibility and reliability of estimating replication value for the targets in our set, resulting in a dataset of 1358 studies ranked by the estimated value of prioritising them for replication. In addition, we carefully examine possible measures, test auxiliary assumptions, and identify boundary conditions of measuring value and uncertainty. We end our report by discussing how future validation studies might be designed. Our study demonstrates the importance of investigating how to implement study selection strategies in practice. Our sample and study design can be extended to explore the feasibility of other formal study selection strategies that have been proposed.
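The core idea of the selection strategy above can be sketched in a few lines: replication value rises with citation impact (the finding matters to many) and falls with sample size (small studies give imprecise evidence). The simple ratio below is a hypothetical illustration of that logic, not the exact quantity proposed by Isager, van 't Veer, & Lakens (2021); the field names and numbers are invented for the example.

```python
# Hypothetical sketch: rank candidate studies by a "replication value"
# that increases with citation impact and decreases with sample size.
def replication_value(citations, sample_size, years_since_pub):
    citations_per_year = citations / max(years_since_pub, 1)
    return citations_per_year / sample_size  # assumption: simple ratio for illustration

studies = [
    {"id": "A", "citations": 300, "n": 20,  "years": 5},  # high impact, small sample
    {"id": "B", "citations": 300, "n": 200, "years": 5},  # high impact, large sample
    {"id": "C", "citations": 30,  "n": 50,  "years": 5},  # low impact
]
ranked = sorted(
    studies,
    key=lambda s: replication_value(s["citations"], s["n"], s["years"]),
    reverse=True,
)
```

Under this toy metric, the highly cited small-sample study ranks first, since a replication there buys the most evidential value per participant.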


Subjects
Cognitive Neuroscience , Humans , Feasibility Studies , Reproducibility of Results , Uncertainty , Research Design
2.
R Soc Open Sci ; 10(2): 210586, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756069

ABSTRACT

Increased execution of replication studies contributes to the effort to restore credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets is at a serious mismatch with available resources. Given limited resources, replication target selection should be well-justified, systematic and transparently communicated. At present the discussion on what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who previously selected a replication target with regards to their considerations. Third, we incorporated the results into the preliminary list of considerations and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target. The resulting checklist can be used for transparently communicating the rationale for selecting studies for replication.

3.
Perspect Psychol Sci ; 16(4): 744-755, 2021 07.
Article in English | MEDLINE | ID: mdl-33326363

ABSTRACT

For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable. In response to the replication crisis, reforms in psychology have focused on formalizing procedures for testing hypotheses. These reforms were necessary and influential. However, as an unexpected consequence, psychological scientists have begun to realize that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound "derivation chain" between test and theory is counterproductive. Instead, various nonconfirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative. Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and the functional form of those relationships, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognized and incentivized as a crucial goal in itself. In this article, we discuss how shifting the focus to nonconfirmatory research can tie together many loose ends of psychology's reform movement and help us to develop strong, testable theories, as Paul Meehl urged.


Subjects
Psychological Theory , Psychology/methods , Causality , Humans
4.
J Gerontol B Psychol Sci Soc Sci ; 75(1): 45-57, 2020 01 01.
Article in English | MEDLINE | ID: mdl-29878211

ABSTRACT

Researchers often conclude an effect is absent when a null-hypothesis significance test yields a nonsignificant p value. However, it is neither logically nor statistically correct to conclude an effect is absent when a hypothesis test is not significant. We present two methods to evaluate the presence or absence of effects: Equivalence testing (based on frequentist statistics) and Bayes factors (based on Bayesian statistics). In four examples from the gerontology literature, we illustrate different ways to specify alternative models that can be used to reject the presence of a meaningful or predicted effect in hypothesis tests. We provide detailed explanations of how to calculate, report, and interpret Bayes factors and equivalence tests. We also discuss how to design informative studies that can provide support for a null model or for the absence of a meaningful effect. The conceptual differences between Bayes factors and equivalence tests are discussed, and we also note when and why they might lead to similar or different inferences in practice. It is important that researchers are able to falsify predictions or can quantify the support for predicted null effects. Bayes factors and equivalence tests provide useful statistical tools to improve inferences about null effects.
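The equivalence-testing approach described above can be made concrete with a two one-sided tests (TOST) procedure: an effect is rejected as meaningful only if the estimate is significantly above the lower bound and significantly below the upper bound. The following is a minimal one-sample sketch; the data and the equivalence bounds of ±0.5 are hypothetical and chosen purely for illustration.

```python
# Minimal TOST sketch: two one-sided t tests against hypothetical
# equivalence bounds. Small p values support equivalence, i.e. the
# population mean lies inside (low, high).
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high):
    x = np.asarray(x, dtype=float)
    n = x.size
    se = x.std(ddof=1) / np.sqrt(n)
    t_low = (x.mean() - low) / se    # H0: mean <= low
    t_high = (x.mean() - high) / se  # H0: mean >= high
    p_low = stats.t.sf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)        # both one-sided tests must reject

scores = [0.1, -0.2, 0.05, 0.0, 0.15, -0.1, 0.08, -0.05, 0.12, -0.03]
p_equiv = tost_one_sample(scores, low=-0.5, high=0.5)
```

If `p_equiv` falls below the alpha level, effects at least as large as the bounds can be rejected. As the abstract stresses, the bounds must be justified substantively (what counts as a meaningful effect), not chosen after seeing the data.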


Subjects
Aging/physiology , Biomedical Research/methods , Data Interpretation, Statistical , Geriatrics/methods , Models, Statistical , Psychology/methods , Research Design , Adult , Aged , Bayes Theorem , Chronic Pain/physiopathology , Emotional Regulation/physiology , Humans , Male , Memory/physiology , Personality/physiology
5.
PLoS One ; 13(9): e0203263, 2018.
Article in English | MEDLINE | ID: mdl-30192800

ABSTRACT

Social status is often metaphorically construed in terms of spatial relations such as height, size, and numerosity. This has led to the idea that social status might partially be represented by an analogue magnitude system, responsible for processing the magnitude of various physical and abstract dimensions. Accordingly, processing of social status should obey Weber's law. We conducted three studies to investigate whether social status comparisons would indicate behavioral outcomes derived from Weber's law: the distance effect and the size effect. The dependent variable was the latency of status comparisons for a variety of both learned and familiar hierarchies. As predicted and in line with previous findings, we observed a clear distance effect. However, the effect of size variation differed from the size effect hypothesized a priori, and an unexpected interaction between the two effects was observed. In conclusion, we provide a robust confirmation of previous observations of the distance effect in social status comparisons, but the shape of the size effect requires new theorizing.
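The two predictions tested above follow from the ratio dependence in Weber's law: discriminating two magnitudes is harder the closer their ratio is to 1, which yields both a distance effect and a size effect. This is an illustrative sketch of that prediction only, not the authors' model or data; the rank values are invented.

```python
# Illustrative sketch: under Weber's law, comparison difficulty depends
# on the ratio of the two magnitudes (ratio near 1 => harder, slower).
def comparison_difficulty(a, b):
    return min(a, b) / max(a, b)

distance_far = comparison_difficulty(2, 8)    # ranks far apart: easy
distance_near = comparison_difficulty(4, 6)   # ranks close together: hard
size_small = comparison_difficulty(2, 4)      # same distance, low ranks
size_large = comparison_difficulty(6, 8)      # same distance, high ranks: harder
```

The distance effect (far pairs easier than near pairs) was confirmed in the studies; the size effect (low-rank pairs easier than high-rank pairs at equal distance) is what the abstract reports deviated from this a priori prediction.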


Subjects
Social Hierarchy , Models, Psychological , Female , Humans , Linear Models , Male , Social Psychology , Young Adult
6.
Behav Brain Sci ; 41: e124, 2018 01.
Article in English | MEDLINE | ID: mdl-31064512

ABSTRACT

The debate about whether replication studies should become mainstream is essentially driven by disagreements about their costs and benefits and the best ways to allocate limited resources. Determining when replications are worthwhile requires quantifying their expected utility. We argue that a formalized framework for such evaluations can be useful for both individual decision-making and collective discussions about replication.


Subjects
Decision Making , Cost-Benefit Analysis