Results 1 - 20 of 59
1.
Psychometrika ; 89(2): 542-568, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38664342

ABSTRACT

When analyzing data, researchers make choices that are either arbitrary, based on subjective beliefs about the data-generating process, or for which equally justifiable alternatives exist. This wide range of data-analytic choices can be abused and has been one of the underlying causes of the replication crisis in several fields. The recent introduction of multiverse analysis gives researchers a method to evaluate the stability of results across the reasonable choices that could be made when analyzing data. However, multiverse analysis is confined to a descriptive role and lacks a proper, comprehensive inferential procedure. Specification curve analysis adds an inferential procedure to multiverse analysis, but it is limited to simple cases related to the linear model, and it only allows researchers to infer whether at least one specification rejects the null hypothesis, not which specifications should be selected. In this paper, we present a Post-selection Inference approach to Multiverse Analysis (PIMA), a flexible and general inferential approach that accounts for all possible models, i.e., the multiverse of reasonable analyses. The approach accommodates a wide range of data specifications (i.e., preprocessing choices) and any generalized linear model; it tests the null hypothesis that a given predictor is not associated with the outcome by combining information from all reasonable models in the multiverse, and it provides strong control of the family-wise error rate, allowing researchers to claim that the null hypothesis can be rejected for any specification that shows a significant effect. The inferential proposal is based on a conditional resampling procedure. We formally prove that the Type I error rate is controlled, and we compute the statistical power of the test through a simulation study.
Finally, we apply the PIMA procedure to the analysis of a real dataset on the self-reported hesitancy for the COronaVIrus Disease 2019 (COVID-19) vaccine before and after the 2020 lockdown in Italy. We conclude with practical recommendations to be considered when implementing the proposed procedure.
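The combine-across-specifications idea in this abstract can be sketched in a few lines. The sketch below is an illustration only, not the authors' PIMA implementation: PIMA uses sign-flip score tests with conditional resampling, whereas this sketch permutes the outcome, which is valid only under a global null with exchangeable errors. The data, covariates, and specification list are all hypothetical.

```python
# Illustrative sketch: compute the test statistic for the predictor of interest
# under every reasonable specification, then use a resampling-based
# max-statistic to control the family-wise error rate across the multiverse.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)                                # predictor of interest
covs = {"z1": rng.normal(size=n), "z2": rng.normal(size=n)}
y = 0.3 * covs["z1"] + rng.normal(size=n)             # x truly has no effect

# The "multiverse": four model specifications differing in covariates included
specs = [[], ["z1"], ["z2"], ["z1", "z2"]]

def t_stat(y, x, names):
    """t statistic for x in an OLS fit of y on (intercept, x, covariates)."""
    X = np.column_stack([np.ones(len(y)), x] + [covs[c] for c in names])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

obs = np.array([t_stat(y, x, s) for s in specs])

B = 2000
null_max = np.empty(B)
for b in range(B):
    yp = rng.permutation(y)                           # resample under the null
    null_max[b] = max(abs(t_stat(yp, x, s)) for s in specs)

# max-T adjusted p-values: each specification can be judged on its own while
# the family-wise error rate stays controlled across the whole multiverse
p_adj = [(np.sum(null_max >= abs(t)) + 1) / (B + 1) for t in obs]
```

Any specification whose adjusted p-value falls below alpha can then be individually claimed significant, which is the post-selection property the abstract describes.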


Subject(s)
Psychometrics , Humans , Psychometrics/methods , Models, Statistical , Data Interpretation, Statistical , COVID-19/epidemiology , Linear Models , Computer Simulation
2.
Behav Brain Sci ; 47: e56, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38311446

ABSTRACT

We expect that consensus meetings, where researchers come together to discuss their theoretical viewpoints, prioritize the factors they agree are important to study, standardize their measures, and determine a smallest effect size of interest, will prove to be a more efficient solution to the lack of coordination and integration of claims in science than integrative experiments.


Subject(s)
Consensus
3.
Nat Hum Behav ; 8(4): 609-610, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38326564
5.
Cortex ; 171: 330-346, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38070388

ABSTRACT

Replication of published results is crucial for ensuring the robustness and self-correction of research, yet replications are scarce in many fields. Replicating researchers will therefore often have to decide which of several relevant candidates to target for replication. Formal strategies for efficient study selection have been proposed, but none have been explored for practical feasibility - a prerequisite for validation. Here we move one step closer to efficient replication study selection by exploring the feasibility of a particular selection strategy that estimates replication value as a function of citation impact and sample size (Isager, van 't Veer, & Lakens, 2021). We tested our strategy on a sample of fMRI studies in social neuroscience. We first report our efforts to generate a representative candidate set of replication targets. We then explore the feasibility and reliability of estimating replication value for the targets in our set, resulting in a dataset of 1358 studies ranked by the value of prioritising them for replication. In addition, we carefully examine possible measures, test auxiliary assumptions, and identify boundary conditions of measuring value and uncertainty. We end our report by discussing how future validation studies might be designed. Our study demonstrates the importance of investigating how to implement study selection strategies in practice. Our sample and study design can be extended to explore the feasibility of other formal study selection strategies that have been proposed.
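The selection strategy referenced above defines replication value as a function of citation impact and sample size. One simple illustrative instantiation (value proportional to yearly citation rate, remaining uncertainty proportional to 1/n) might look like the sketch below; the exact estimator in Isager, van 't Veer, & Lakens (2021) may differ, and the study names and numbers here are hypothetical.

```python
# Hedged sketch of a replication-value ranking: higher citation impact raises
# the value of being certain about a claim; larger samples mean less remaining
# uncertainty, and hence less to gain from replicating.
def replication_value(citations, years_since_pub, n):
    citation_rate = citations / max(years_since_pub, 1)
    uncertainty = 1.0 / n            # illustrative: uncertainty shrinks with n
    return citation_rate * uncertainty

candidates = {                       # hypothetical (citations, years, n)
    "study_A": (120, 4, 25),
    "study_B": (300, 10, 400),
    "study_C": (45, 2, 18),
}
ranked = sorted(candidates,
                key=lambda k: replication_value(*candidates[k]),
                reverse=True)
# A heavily cited but already large study (study_B) ranks below small,
# recently impactful studies (study_C, study_A).
```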


Subject(s)
Cognitive Neuroscience , Humans , Feasibility Studies , Reproducibility of Results , Uncertainty , Research Design
6.
J Sports Sci ; 41(16): 1507-1517, 2023 Sep.
Article in English | MEDLINE | ID: mdl-38018365

ABSTRACT

Two factors that decrease the replicability of studies in the scientific literature are publication bias and underpowered study designs. One way to ensure that studies have adequate statistical power to detect the effect size of interest is to conduct a-priori power analyses. Yet, a previous editorial published in the Journal of Sports Sciences reported a median sample size of 19 and scarce usage of a-priori power analyses. We meta-analysed 89 studies from the same journal to assess the presence and extent of publication bias, as well as the average statistical power, by conducting a z-curve analysis. In a larger sample of 174 studies, we also examined a) the usage, reporting practices and reproducibility of a-priori power analyses; and b) the prevalence of reporting practices of t-statistics or F-ratios, degrees of freedom, exact p-values, effect sizes and confidence intervals. Our results revealed some indication of publication bias, and the average observed power was low (53% for significant and non-significant findings combined, and 61% for significant findings only). Finally, the usage and reporting practices of a-priori power analyses, as well as of statistical results including test statistics, effect sizes and confidence intervals, were suboptimal.
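A full z-curve analysis fits a mixture model to the distribution of significant z-scores; the more basic notion of observed power behind averages like those reported above can be illustrated by converting reported two-sided p-values to z-scores and treating the observed z as the true noncentrality. The p-values below are hypothetical, and this is only the first step of such an audit, not a z-curve implementation.

```python
# Convert a two-sided p-value to |z|, then compute the power a two-sided
# alpha = .05 test would have if the observed z were the true effect.
from scipy.stats import norm

def observed_power(p, alpha=0.05):
    z = norm.isf(p / 2)              # two-sided p -> |z|
    crit = norm.isf(alpha / 2)       # critical value, ~1.96 for alpha = .05
    return norm.cdf(z - crit) + norm.cdf(-z - crit)

p_values = [0.001, 0.02, 0.049, 0.20]        # hypothetical reported p-values
powers = [observed_power(p) for p in p_values]
# A just-significant p of .049 implies observed power barely above 50%,
# which is why literatures full of p-values near .05 yield low estimates.
```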


Subject(s)
Research Design , Humans , Publication Bias , Reproducibility of Results , Sample Size , Bias
7.
Perspect Psychol Sci ; : 17456916231182568, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37526118

ABSTRACT

Criteria for recognizing and rewarding scientists primarily focus on individual contributions. This creates a conflict between what is best for scientists' careers and what is best for science. In this article, we show how the theory of multilevel selection provides conceptual tools for modifying incentives to better align individual and collective interests. A core principle is the need to account for indirect effects by shifting the level at which selection operates from individuals to the groups in which individuals are embedded. This principle is used in several fields to improve collective outcomes, including animal husbandry, team sports, and professional organizations. Shifting the level of selection has the potential to ameliorate several problems in contemporary science, including accounting for scientists' diverse contributions to knowledge generation, reducing individual-level competition, and promoting specialization and team science. We discuss the difficulties associated with shifting the level of selection and outline directions for future development in this domain.

9.
Psychol Methods ; 28(2): 438-451, 2023 Apr.
Article in English | MEDLINE | ID: mdl-34928679

ABSTRACT

Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources, and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which out of all candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal researchers wish to achieve is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and is a function of (a) the value of being certain about the claim, and (b) uncertainty about the claim based on current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Knowledge , Models, Theoretical , Humans , Uncertainty
10.
Perspect Psychol Sci ; 18(2): 508-512, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36126652

ABSTRACT

In the January 2022 issue of Perspectives, Götz et al. argued that small effects are "the indispensable foundation for a cumulative psychological science." They supported their argument by claiming that (a) psychology, like genetics, consists of complex phenomena explained by additive small effects; (b) psychological-research culture rewards large effects, which means small effects are being ignored; and (c) small effects become meaningful at scale and over time. We rebut these claims with three objections: First, the analogy between genetics and psychology is misleading; second, p values are the main currency for publication in psychology, meaning that any biases in the literature are (currently) caused by pressure to publish statistically significant results and not large effects; and third, claims regarding small effects as important and consequential must be supported by empirical evidence or, at least, a falsifiable line of reasoning. If accepted uncritically, we believe the arguments of Götz et al. could be used as a blanket justification for the importance of any and all "small" effects, thereby undermining best practices in effect-size interpretation. We end with guidance on evaluating effect sizes in relative, not absolute, terms.


Subject(s)
Psychology , Humans
11.
Perspect Psychol Sci ; 18(2): 503-507, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35994751

ABSTRACT

To help move researchers away from heuristically dismissing "small" effects as unimportant, recent articles have revisited arguments to defend why seemingly small effect sizes in psychological science matter. One argument is based on the idea that an observed effect size may increase in impact when generalized to a new context because of processes of accumulation over time or application to large populations. However, the field is now in danger of heuristically accepting all effects as potentially important. We aim to encourage researchers to think thoroughly about the various mechanisms that may both amplify and counteract the importance of an observed effect size. Researchers should draw on the multiple amplifying and counteracting mechanisms that are likely to simultaneously apply to the effect when that effect is being generalized to a new and likely more dynamic context. In this way, researchers should aim to transparently provide verifiable lines of reasoning to justify their claims about an effect's importance or unimportance. This transparency can help move psychological science toward a more rigorous assessment of when psychological findings matter for the contexts that researchers want to generalize to.


Subject(s)
Dissent and Disputes , Problem Solving , Humans
12.
R Soc Open Sci ; 9(12): 220946, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36533197

ABSTRACT

Methodological issues such as publication bias, questionable research practices and underpowered study designs are known to decrease the replicability of study findings. The presence of such issues has been widely established across different research fields, especially in psychology, and it raised the first concerns that the replicability of study findings could be low, leading researchers to conduct large replication projects. These replication projects revealed that a substantial portion of original study findings could not be replicated, giving rise to the conceptualization of the replication crisis. Although previous research in the field of sports and exercise science has identified the first warning signs, such as an overwhelming proportion of significant findings, small sample sizes and a lack of data availability, their possible consequences for the replicability of the field have been overlooked. We discuss the consequences of the above issues for the replicability of our field and offer potential solutions to improve replicability.

13.
PLoS One ; 17(10): e0274976, 2022.
Article in English | MEDLINE | ID: mdl-36197884

ABSTRACT

This study investigates PhD candidates' (N = 391) perceptions of their research environment at a Dutch university in terms of the research climate, (un)ethical supervisory practices, and questionable research practices, and assesses whether these perceptions are related to career considerations. We gathered quantitative self-report data on PhD candidates' perceptions using an online survey tool and then conducted descriptive and within-subject correlation analyses. While most PhD candidates experience fair evaluation processes, openness, integrity, trust, and freedom in their research climate, many report a lack of time and support, insufficient supervision, and witnessing questionable research practices. Results based on Spearman correlations indicate that those who experience a less healthy research environment (including experiences with unethical supervision, questionable practices, and barriers to responsible research) more often consider leaving academia and their current PhD position.


Subject(s)
Surveys and Questionnaires , Humans , Self Report
14.
User Model User-adapt Interact ; 32(3): 389-415, 2022.
Article in English | MEDLINE | ID: mdl-35669126

ABSTRACT

Psychological theories of habit posit that when a strong habit is formed through behavioral repetition, it can trigger behavior automatically in the same environment. Given the reciprocal relationship between habit and behavior, changing lifestyle behaviors is largely a task of breaking old habits and creating new and healthy ones. Thus, representing users' habit strengths can be very useful for behavior change support systems, for example, to predict behavior or to decide when an intervention reaches its intended effect. However, habit strength is not directly observable and existing self-report measures are taxing for users. In this paper, building on recent computational models of habit formation, we propose a method to enable intelligent systems to compute habit strength based on observable behavior. The hypothesized advantage of using computed habit strength for behavior prediction was tested using data from two intervention studies on dental behavior change (N = 36 and N = 75), where we instructed participants to brush their teeth twice a day for three weeks and monitored their behaviors using accelerometers. The results showed that for the task of predicting future brushing behavior, the theory-based model that computed habit strength achieved an accuracy of 68.6% (Study 1) and 76.1% (Study 2), which outperformed the model that relied on self-reported behavioral determinants but showed no advantage over models that relied on past behavior. We discuss the implications of our results for research on behavior change support systems and habit formation.
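Computational habit models of the kind this abstract builds on often take the form of an exponentially weighted accumulation of repetitions. The update rule below, H_t = H_{t-1} + alpha * (b_t - H_{t-1}), is one common form and an assumption here, not necessarily the authors' exact model; the behavior log, learning rate, and prediction threshold are all hypothetical.

```python
# Hedged sketch: habit strength as an exponentially weighted accumulation of
# observed repetitions, usable by a system that only sees behavior logs.
import random

def habit_strength(behaviors, alpha=0.1):
    """Return the habit-strength trajectory over a sequence of occasions."""
    h, trajectory = 0.0, []
    for b in behaviors:              # b = 1 if performed, 0 if skipped
        h += alpha * (b - h)         # move h a fraction alpha toward b
        trajectory.append(h)
    return trajectory

# Three weeks of twice-daily brushing opportunities, mostly performed
random.seed(0)
log = [1 if random.random() < 0.8 else 0 for _ in range(42)]
traj = habit_strength(log)

def predict_next(h, threshold=0.5):
    """Predict the next occasion's behavior from current habit strength."""
    return h >= threshold

prediction = predict_next(traj[-1])
```

Because the update is a convex combination of the previous strength and the latest observation, the trajectory stays in [0, 1] and approaches the recent performance rate, which is what makes it usable as a behavior predictor.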

15.
J Physiother ; 68(3): 213-214, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35760725

Subject(s)
Reward , Humans
16.
Health Psychol ; 41(7): 463-473, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35727323

ABSTRACT

OBJECTIVES: Two longitudinal studies were conducted to examine how habits and goal-related constructs determine toothbrushing behavior from a dual-process perspective. We aimed to describe the variations of habit strength, intention, and attitude and to test their associations with actual behavior at both inter- and intraindividual levels. In addition, toothbrushing behavior was measured both by self-report and by sensors, with the goal of comparing these measures. METHOD: In Study 1, 40 young adults were instructed to brush their teeth twice a day, and their behaviors were measured by accelerometers for 3 weeks. Participants also self-reported their instrumental and affective attitude, habit strength, and behavior frequency weekly. Effects of interest were estimated using structural equation modeling. Study 2 replicated Study 1 with a larger and more diverse sample (N = 79), adding a measure of behavioral intention. RESULTS: Supporting the dual-process account, habit strength predicted future behavior in addition to goal-related constructs. Habit strength also attenuated the influences of goal-related constructs on behavior, but this pattern only emerged interindividually and for self-reported behavior. In addition, toothbrushing behavior was more strongly driven by affective rather than instrumental attitude. In both studies, associations among variables were weaker within-person and when sensor-measured behavior was modeled. CONCLUSIONS: The partial support for the dual-process account suggests the need to use habit-based interventions to complement intention-based interventions when attempting to change oral health routines. Our findings also highlight the importance of affective aspects of toothbrushing behavior and the potential to incorporate sensor-based objective measures in research and interventions. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Health Behavior , Toothbrushing , Goals , Habits , Humans , Intention , Longitudinal Studies , Toothbrushing/psychology , Young Adult
17.
Behav Brain Sci ; 45: e25, 2022 Feb 10.
Article in English | MEDLINE | ID: mdl-35139969

ABSTRACT

Falsificationist and confirmationist approaches provide two well-established ways of evaluating generalizability. Yarkoni rejects both and invents a third approach we call neo-operationalism. His proposal cannot work for the hypothetical concepts psychologists use, because the universe of operationalizations is impossible to define, and hypothetical concepts cannot be reduced to their operationalizations. We conclude that he is wrong in his generalizability-crisis diagnosis.

18.
Exp Physiol ; 107(3): 201-212, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35041233

ABSTRACT

Exercise physiology and sport science have traditionally made use of the null hypothesis of no difference to make decisions about experimental interventions. In this article, we aim to review current statistical approaches typically used by exercise physiologists and sport scientists for the design and analysis of experimental interventions and to highlight the importance of including equivalence and non-inferiority studies, which address different research questions from deciding whether an effect is present. Initially, we briefly describe the most common approaches, along with their rationale, to investigate the effects of different interventions. We then discuss the main steps involved in the design and analysis of equivalence and non-inferiority studies, commonly performed in other research fields, with worked examples from exercise physiology and sport science scenarios. Finally, we provide recommendations to exercise physiologists and sport scientists who would like to apply the different approaches in future research. We hope this work will promote the correct use of equivalence and non-inferiority designs in exercise physiology and sport science whenever the research context, conditions, applications, researchers' interests or reasonable beliefs justify these approaches.
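The standard machinery behind equivalence testing is the two one-sided tests (TOST) procedure: equivalence is claimed only if the observed effect is significantly above the lower equivalence bound and significantly below the upper one. The sketch below is an illustrative two-sample version with pooled degrees of freedom; the equivalence bounds and the data are hypothetical, not a worked example from the article.

```python
# Hedged sketch of two-sample equivalence testing via TOST.
import numpy as np
from scipy import stats

def tost_two_sample(a, b, low, high, alpha=0.05):
    """True if mean(a) - mean(b) is equivalent within (low, high) at level alpha."""
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    df = len(a) + len(b) - 2               # pooled df; assumes equal variances
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    return max(p_lower, p_upper) < alpha   # both one-sided tests must reject

rng = np.random.default_rng(42)
control = rng.normal(50.0, 5.0, size=60)    # e.g., outcome under protocol A
treatment = rng.normal(50.5, 5.0, size=60)  # protocol B, trivially different
equivalent = tost_two_sample(treatment, control, low=-2.0, high=2.0)
```

Note that the equivalence bounds (here a hypothetical plus or minus 2 units) must be justified from the research context before the data are seen, which is exactly the kind of design decision the article discusses.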


Subject(s)
Sports , Exercise , Humans , Research Design
19.
Trends Ecol Evol ; 37(4): 289-290, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35027226
20.
Psychon Bull Rev ; 29(2): 613-626, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34755319

ABSTRACT

The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., Art gave you the pen describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.


Subject(s)
Comprehension , Language , Humans , Movement , Reaction Time