Results 1 - 9 of 9
1.
Behav Res Methods ; 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37749423

ABSTRACT

With the recent development of easy-to-use tools for Bayesian analysis, psychologists have started to embrace Bayesian hierarchical modeling. Bayesian hierarchical models provide an intuitive account of inter- and intraindividual variability and are particularly suited for the evaluation of repeated-measures designs. Here, we provide guidance for model specification and interpretation in Bayesian hierarchical modeling and describe common pitfalls that can arise in the process of model fitting and evaluation. Our introduction places particular emphasis on prior specification and prior sensitivity, as well as on the calculation of Bayes factors for model comparisons. We illustrate the use of the state-of-the-art software packages Stan and brms. The result is an overview of best practices that we hope will aid psychologists in making the best use of Bayesian hierarchical modeling.
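
A minimal sketch of the kind of model the tutorial covers, via brms. The data frame, effect sizes, and priors below are illustrative placeholders, not the paper's recommendations:

```r
library(brms)
set.seed(1)

# Hypothetical repeated-measures data: 30 subjects x 2 conditions x 10 trials
df <- expand.grid(subject = factor(1:30), condition = c("a", "b"), trial = 1:10)
u  <- rnorm(30, 0, 0.3)  # subject-level intercept variability
df$rt <- 1 + u[df$subject] + 0.2 * (df$condition == "b") + rnorm(nrow(df), 0, 0.5)

# Hierarchical model with by-subject random intercepts and condition slopes;
# explicit priors on every parameter class make prior sensitivity checks easy
fit <- brm(
  rt ~ condition + (condition | subject),
  data   = df,
  family = gaussian(),
  prior  = c(
    prior(normal(0, 0.5), class = "b"),   # fixed effect of condition
    prior(exponential(1), class = "sd"),  # random-effect SDs
    prior(lkj(2), class = "cor")          # random-effect correlation
  ),
  save_pars = save_pars(all = TRUE)  # retain draws for bridge-sampling BFs
)

# Bayes factor against a null model without the fixed condition effect
fit0 <- update(fit, formula. = rt ~ 1 + (condition | subject))
bayes_factor(fit, fit0)
```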

2.
Behav Res Methods ; 54(6): 3100-3117, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35233752

ABSTRACT

In a sequential hypothesis test, the analyst checks at multiple steps during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: the Sequential Probability Ratio Test (SPRT; Psychological Methods, 25(2), 206-226, 2020) and the Sequential Bayes Factor Test (SBFT; Psychological Methods, 22(2), 322-339, 2017). We show that although the two methods have different philosophical roots, they share many similarities and can even be mathematically regarded as two instances of an overarching hypothesis testing framework. We demonstrate that the two methods use the same mechanisms for evidence monitoring and error control, and that differences in efficiency between the methods depend on the exact specification of the statistical models involved, as well as on the population truth. Our simulations indicate that when deciding on a sequential design within a unified sequential testing framework, researchers need to balance the needs of test efficiency, robustness against model misspecification, and appropriate uncertainty quantification. We provide guidance for navigating these design decisions based on individual preferences and simulation-based design analyses.
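
The shared mechanism (monitor an evidence ratio after every observation and stop at a threshold) can be sketched in a single loop, assuming a toy known-variance normal model. The SPRT tracks the likelihood ratio of two point hypotheses against Wald's thresholds; the SBFT tracks a Bayes factor with a N(0, 1) prior on the effect. All settings are illustrative:

```r
set.seed(1)
delta1 <- 0.5                 # point alternative for the SPRT
alpha  <- 0.05; beta <- 0.05
upper  <- (1 - beta) / alpha  # Wald's SPRT thresholds
lower  <- beta / (1 - alpha)
bound  <- 10                  # SBFT evidence threshold

y <- c()
repeat {
  y <- c(y, rnorm(1, mean = delta1))  # data generated under H1
  n <- length(y); ybar <- mean(y); se <- 1 / sqrt(n)
  # SPRT: likelihood ratio of two point hypotheses (variance known)
  lr <- dnorm(ybar, delta1, se) / dnorm(ybar, 0, se)
  # SBFT: marginal likelihood under delta ~ N(0, 1) versus the point null
  bf <- dnorm(ybar, 0, sqrt(1 + se^2)) / dnorm(ybar, 0, se)
  if (lr > upper || lr < lower || bf > bound || bf < 1 / bound) break
}
c(n = n, LR = lr, BF = bf)
```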


Subject(s)
Research Design, Humans, Bayes Theorem
3.
Behav Res Methods ; 51(3): 1042-1058, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30719688

ABSTRACT

Well-designed experiments are likely to yield compelling evidence with efficient sample sizes. Bayes Factor Design Analysis (BFDA) is a recently developed methodology that allows researchers to balance the informativeness and efficiency of their experiment (Schönbrodt & Wagenmakers, Psychonomic Bulletin & Review, 25(1), 128-142, 2018). With BFDA, researchers can control the rate of misleading evidence but, in addition, they can plan for a target strength of evidence. BFDA can be applied to fixed-N and sequential designs. In this tutorial paper, we provide an introduction to BFDA and analyze how the use of informed prior distributions affects the results of the BFDA. We also present a user-friendly web-based BFDA application that allows researchers to conduct BFDAs with ease. Two practical examples highlight how researchers can use a BFDA to plan for informative and efficient research designs.
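
A from-scratch Monte Carlo sketch of a fixed-N BFDA, using a known-variance normal model with a closed-form Bayes factor rather than the paper's web application or the BFDA R package; the informed prior delta ~ N(0.35, 0.3^2), the sample size, and the evidence threshold are all illustrative:

```r
set.seed(1)
# Closed-form BF10 for a known-variance normal mean,
# H1: delta ~ N(mu1, tau^2) versus H0: delta = 0
bf10 <- function(ybar, n, mu1 = 0.35, tau = 0.3) {
  se <- 1 / sqrt(n)
  dnorm(ybar, mu1, sqrt(tau^2 + se^2)) / dnorm(ybar, 0, se)
}

n <- 50; reps <- 5000
bf_h1 <- replicate(reps, bf10(mean(rnorm(n, 0.35)), n))  # design under H1
bf_h0 <- replicate(reps, bf10(mean(rnorm(n, 0.00)), n))  # design under H0

# Rates of compelling and misleading evidence at a target threshold of 6
c(true_pos   = mean(bf_h1 > 6),    # compelling evidence for a true effect
  misleading = mean(bf_h1 < 1/6),  # misleading evidence under H1
  false_pos  = mean(bf_h0 > 6))    # misleading evidence under H0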


Subject(s)
Bayes Theorem, Factor Analysis, Research Design, Sample Size
4.
Psychol Methods ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38330340

ABSTRACT

A fundamental part of experimental design is to determine the sample size of a study. However, sparse information about population parameters and effect sizes before data collection renders effective sample size planning challenging. Specifically, sparse information may lead research designs to be based on inaccurate a priori assumptions, causing studies to use resources inefficiently or to produce inconclusive results. Despite its deleterious impact on sample size planning, many prominent methods for experimental design fail to adequately address the challenge of sparse a priori information. Here we propose a Bayesian Monte Carlo methodology for interim design analyses that allows researchers to analyze and adapt their sampling plans throughout the course of a study. At any point in time, the methodology uses the best available knowledge about parameters to make projections about expected evidence trajectories. Two simulated application examples demonstrate how interim design analyses can be integrated into common designs to inform sampling plans on the fly. The proposed methodology addresses the problem of sample size planning with sparse a priori information and yields research designs that are efficient, informative, and flexible.
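
A toy version of the general mechanism, not the paper's exact methodology: assuming a known-variance normal model with a N(0, 1) prior on the effect, draw plausible effect sizes from the posterior at an interim look, simulate the remaining data, and estimate the probability that the Bayes factor will be compelling at candidate final sample sizes:

```r
set.seed(1)
y_obs <- rnorm(20, mean = 0.3); n_obs <- length(y_obs)  # interim data so far

# Closed-form BF10 for a known-variance normal mean, H1: delta ~ N(0, 1)
bf10 <- function(ybar, n) {
  se <- 1 / sqrt(n)
  dnorm(ybar, 0, sqrt(1 + se^2)) / dnorm(ybar, 0, se)
}

# Conjugate posterior of delta given the interim data (prior N(0, 1))
post_var  <- 1 / (1 + n_obs)
post_mean <- post_var * sum(y_obs)

# Probability of compelling evidence (BF > 6) at candidate final sample
# sizes, projected by simulating the remaining data from the posterior
project <- function(n_final, reps = 2000) {
  mean(replicate(reps, {
    delta <- rnorm(1, post_mean, sqrt(post_var))  # plausible effect size
    y_new <- rnorm(n_final - n_obs, delta)        # simulated remaining data
    bf10(mean(c(y_obs, y_new)), n_final) > 6
  }))
}
sapply(setNames(c(40, 80, 160), c("n=40", "n=80", "n=160")), project)
```

If the projected probabilities are low even at the largest feasible N, the interim analysis signals that the study is unlikely to be conclusive and the sampling plan should be adapted.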

5.
R Soc Open Sci ; 10(2): 220346, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36778954

ABSTRACT

In many research fields, the widespread use of questionable research practices has jeopardized the credibility of scientific results. One of the most prominent questionable research practices is p-hacking. Typically, p-hacking is defined as a collection of strategies aimed at rendering non-significant hypothesis testing results significant. However, a comprehensive overview of these p-hacking strategies is missing, and current meta-scientific research often ignores the heterogeneity of strategies. Here, we compile a list of 12 p-hacking strategies based on an extensive literature review, identify factors that control their level of severity, and demonstrate their impact on false-positive rates using simulation studies. We also use our simulation results to evaluate several approaches that have been proposed to mitigate the influence of questionable research practices. Our results show that investigating p-hacking at the level of individual strategies can provide a better understanding of the process of p-hacking, as well as a broader basis for developing effective countermeasures. By making our analyses available through a Shiny app and R package, we facilitate future meta-scientific research aimed at investigating the ramifications of p-hacking across multiple strategies, and we hope to start a broader discussion about the different manifestations of p-hacking in practice.
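
One such strategy, optional stopping on p values, is easy to simulate; the starting sample size, step size, and cap below are illustrative, not the paper's settings. Under the null, naive peeking after every batch inflates the false-positive rate well above the nominal 5%:

```r
set.seed(1)
# Optional stopping: start with n = 20, add 5 observations at a time
# (up to n = 100), and stop as soon as p < .05. Data follow the null.
hacked_p <- function() {
  y <- rnorm(20)
  repeat {
    p <- t.test(y)$p.value
    if (p < .05 || length(y) >= 100) return(p)
    y <- c(y, rnorm(5))
  }
}
mean(replicate(10000, hacked_p()) < .05)  # empirical false-positive rate
```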

6.
Comput Brain Behav ; 6(1): 127-139, 2023.
Article in English | MEDLINE | ID: mdl-36879767

ABSTRACT

In van Doorn et al. (2021), we outlined a series of open questions concerning Bayes factors for mixed effects model comparison, with an emphasis on the impact of aggregation, the effect of measurement error, the choice of prior distributions, and the detection of interactions. Seven expert commentaries (partially) addressed these initial questions. Surprisingly perhaps, the experts disagreed (often strongly) on what constitutes best practice, a testament to the intricacy of conducting a mixed effects model comparison. Here, we provide our perspective on these comments and highlight topics that warrant further discussion. In general, we agree with many of the commentaries that in order to take full advantage of Bayesian mixed model comparison, it is important to be aware of the specific assumptions that underlie the to-be-compared models.

7.
Psychol Methods ; 27(2): 177-197, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32940511

ABSTRACT

The Bayesian statistical framework requires the specification of prior distributions, which reflect pre-data knowledge about the relative plausibility of different parameter values. As prior distributions influence the results of Bayesian analyses, it is important to specify them with care. Prior elicitation has frequently been proposed as a principled method for deriving prior distributions based on expert knowledge. Although prior elicitation provides a theoretically satisfactory method of specifying prior distributions, there are several implicit decisions that researchers need to make at different stages of the elicitation process, each of them constituting important researcher degrees of freedom. Here, we discuss some of these decisions and group them into three categories: decisions about (a) the setup of the prior elicitation; (b) the core elicitation process; and (c) the combination of elicited prior distributions from different experts. Importantly, different decision paths could result in greatly varying priors elicited from the same experts. Hence, researchers who wish to perform prior elicitation are advised to carefully consider each of the practical decisions before, during, and after the elicitation process. By explicitly outlining the consequences of these practical decisions, we hope to raise awareness of methodological flexibility in prior elicitation and provide researchers with a more structured approach to navigating the decision paths in prior elicitation. Making the decisions explicit also provides the foundation for further research that can identify evidence-based best practices that may eventually reduce the methodological flexibility in prior elicitation.
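
As an example of a category (c) decision, a minimal sketch of linear pooling for two hypothetical elicited priors; both the pooling weights and the choice of linear over logarithmic pooling are themselves researcher degrees of freedom:

```r
# Two hypothetical elicited priors for an effect size delta,
# combined by linear pooling with equal weights
expert_a <- function(x) dnorm(x, mean = 0.3, sd = 0.10)
expert_b <- function(x) dnorm(x, mean = 0.6, sd = 0.25)
pooled   <- function(x) 0.5 * expert_a(x) + 0.5 * expert_b(x)

curve(pooled, from = -0.5, to = 1.5, xlab = expression(delta), ylab = "density")
curve(expert_a, add = TRUE, lty = 2)
curve(expert_b, add = TRUE, lty = 3)
```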


Subject(s)
Research Design, Bayes Theorem, Humans
8.
Psychon Bull Rev ; 29(5): 1776-1794, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35378671

ABSTRACT

Bayesian inference requires the specification of prior distributions that quantify the pre-data uncertainty about parameter values. One way to specify prior distributions is through prior elicitation, an interview method guiding field experts through the process of expressing their knowledge in the form of a probability distribution. However, prior distributions elicited from experts can be subject to idiosyncrasies of experts and elicitation procedures, raising the spectre of subjectivity and prejudice. Here, we investigate the effect of interpersonal variation in elicited prior distributions on the Bayes factor hypothesis test. We elicited prior distributions from six academic experts with backgrounds in different fields of psychology and applied the elicited prior distributions, as well as commonly used default priors, in a re-analysis of 1710 studies in psychology. The degree to which the Bayes factors vary as a function of the different prior distributions is quantified by three measures of concordance of evidence: we assess whether the prior distributions change the Bayes factor direction, whether they cause a switch in the category of evidence strength, and how much influence they have on the value of the Bayes factor. Our results show that although the Bayes factor is sensitive to changes in the prior distribution, these changes do not necessarily affect the qualitative conclusions of a hypothesis test. We hope that these results help researchers gauge the influence of interpersonal variation in elicited prior distributions in future psychological studies. Additionally, our sensitivity analyses can be used as a template for Bayesian robustness analyses that involve prior elicitation from multiple experts.
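
A compact illustration of this kind of sensitivity analysis, using a known-variance normal model with a closed-form Bayes factor instead of the paper's t-test re-analyses; the "default" and "expert" priors and the study result below are hypothetical:

```r
# BF10 for a known-variance normal mean,
# H1: delta ~ N(mu, tau^2) versus H0: delta = 0
bf10 <- function(ybar, n, mu, tau) {
  se <- 1 / sqrt(n)
  dnorm(ybar, mu, sqrt(tau^2 + se^2)) / dnorm(ybar, 0, se)
}
priors <- list(default  = c(mu = 0.0, tau = 1.00),
               expert_1 = c(mu = 0.3, tau = 0.10),
               expert_2 = c(mu = 0.6, tau = 0.25))
# Hypothetical study result: observed mean effect 0.25 with n = 40
sapply(priors, function(p) bf10(ybar = 0.25, n = 40, mu = p["mu"], tau = p["tau"]))
```

Comparing the resulting Bayes factors across priors shows directly whether the direction, the evidence category, or only the numerical value changes.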


Subject(s)
Research Design, Bayes Theorem, Humans, Probability, Uncertainty
9.
Br J Math Stat Psychol ; 73 Suppl 1: 180-193, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31691267

ABSTRACT

Longitudinal studies are the gold standard for research on time-dependent phenomena in the social sciences. However, they often entail high costs due to multiple measurement occasions and a long overall study duration. It is therefore useful to optimize these design factors while maintaining a high informativeness of the design. Von Oertzen and Brandmaier (2013, Psychology and Aging, 28, 414) applied power equivalence to show that Latent Growth Curve Models (LGCMs) with different design factors can have the same power for likelihood-ratio tests on the latent structure. In this paper, we show that the notion of power equivalence can be extended to Bayesian hypothesis tests of the latent structure constants. Specifically, we show that the results of a Bayes factor design analysis (BFDA; Schönbrodt & Wagenmakers, 2018, Psychonomic Bulletin and Review, 25, 128) of two power-equivalent LGCMs are equivalent. This will be useful for researchers who aim to plan for compelling evidence instead of frequentist power and provides a contribution towards more efficient procedures for BFDA.
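
A BFDA-style simulation for a growth design can be sketched directly, assuming a simple random-intercept, random-slope growth model fitted with lme4 and a BIC approximation to the Bayes factor (Wagenmakers, 2007) in place of the paper's LGCM power-equivalence machinery; all design values are illustrative:

```r
library(lme4)
set.seed(1)
# Simulate a linear growth design, fit the growth model with lmer, and
# approximate BF10 for the mean slope via exp((BIC0 - BIC1) / 2)
sim_bf <- function(n_persons = 60, n_occasions = 4, slope = 0.3) {
  d  <- expand.grid(id = seq_len(n_persons), time = seq_len(n_occasions) - 1)
  u0 <- rnorm(n_persons, 0, 1)        # latent intercepts
  u1 <- rnorm(n_persons, slope, 0.3)  # latent slopes
  d$y <- u0[d$id] + u1[d$id] * d$time + rnorm(nrow(d), 0, 1)
  fit1 <- lmer(y ~ time + (time | id), data = d, REML = FALSE)
  fit0 <- lmer(y ~ 1    + (time | id), data = d, REML = FALSE)
  exp((BIC(fit0) - BIC(fit1)) / 2)
}
bfs <- replicate(200, sim_bf())
mean(bfs > 6)  # probability of compelling evidence for this design
```

Rerunning the simulation with different numbers of measurement occasions or study durations compares the informativeness of alternative designs, which is the question power equivalence answers analytically.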


Subject(s)
Bayes Theorem, Models, Statistical, Computer Simulation, Factor Analysis, Humans, Likelihood Functions, Linear Models, Longitudinal Studies, Mindfulness/methods, Mindfulness/statistics & numerical data