Results 1 - 20 of 203
1.
Multivariate Behav Res ; : 1-21, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38733319

ABSTRACT

Network psychometrics uses graphical models to assess the network structure of psychological variables. An important task in their analysis is determining which variables are unrelated in the network, i.e., are independent given the rest of the network variables. This conditional independence structure is a gateway to understanding the causal structure underlying psychological processes. Thus, it is crucial to have an appropriate method for evaluating conditional independence and dependence hypotheses. Bayesian approaches to testing such hypotheses allow researchers to differentiate between absence of evidence and evidence of absence of connections (edges) between pairs of variables in a network. Three Bayesian approaches to assessing conditional independence have been proposed in the network psychometrics literature. We believe that their theoretical foundations are not widely known, and therefore we provide a conceptual review of the proposed methods and highlight their strengths and limitations through a simulation study. We also illustrate the methods using an empirical example with data on Dark Triad Personality. Finally, we provide recommendations on how to choose the optimal method and discuss the current gaps in the literature on this important topic.
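
As a concrete illustration of such an edge test (a sketch added here, not taken from the article): a Savage-Dickey Bayes factor for a single partial correlation under Fisher's z approximation. The Uniform(-1, 1) prior and the approximation are illustrative assumptions; the reviewed methods are more refined.

    # Savage-Dickey Bayes factor for conditional independence of two
    # variables given k others (i.e., absence of an edge), using Fisher's
    # z approximation: atanh(r) ~ Normal(atanh(rho), 1 / (n - k - 3)).
    # The Uniform(-1, 1) prior on rho is an illustrative assumption.
    import numpy as np
    from scipy import integrate, stats

    def bf01_partial_cor(r, n, k):
        """BF01: evidence for rho = 0 (no edge) over rho != 0."""
        sd = 1.0 / np.sqrt(n - k - 3)
        z = np.arctanh(r)
        like = lambda rho: stats.norm.pdf(z, np.arctanh(rho), sd)
        marginal, _ = integrate.quad(lambda rho: like(rho) * 0.5, -0.999, 0.999)
        return (like(0.0) * 0.5 / marginal) / 0.5     # posterior / prior at 0

    # e.g., sample partial correlation r = .05 between two symptoms,
    # n = 200, conditioning on k = 3 other nodes
    print("BF01 = %.2f" % bf01_partial_cor(0.05, n=200, k=3))

A BF01 well above 1 here quantifies evidence of absence of the edge, rather than mere absence of evidence.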

2.
Test (Madr) ; 33(1): 127-154, 2024.
Article in English | MEDLINE | ID: mdl-38585622

ABSTRACT

The ongoing replication crisis in science has increased interest in the methodology of replication studies. We propose a novel Bayesian analysis approach using power priors: the likelihood of the original study's data is raised to the power of α and then used as the prior distribution in the analysis of the replication data. The posterior distribution and Bayes factor hypothesis tests related to the power parameter α quantify the degree of compatibility between the original and replication study. Inferences for other parameters, such as effect sizes, dynamically borrow information from the original study; the degree of borrowing depends on the conflict between the two studies. The practical value of the approach is illustrated on data from three replication studies, and the connection to hierarchical modeling approaches is explored. We generalize the known connection between normal power priors and normal hierarchical models for fixed parameters and show that normal power prior inferences with a beta prior on the power parameter α align with normal hierarchical model inferences using a generalized beta prior on the relative heterogeneity variance I². This connection illustrates that power prior modeling is unnatural from the perspective of hierarchical modeling, since it corresponds to specifying priors on a relative rather than an absolute heterogeneity scale.
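
A minimal sketch of the power-prior machinery under normal approximations (illustrative numbers, not the paper's analyses): given the original estimate and standard error, the marginal density of the replication estimate given α has closed form, so the posterior of α can be computed on a grid.

    # Power-prior sketch under normal approximations (illustrative
    # numbers): with prior theta ~ Normal(theta_o, so^2 / alpha), the
    # replication estimate marginally satisfies
    #   theta_r | alpha ~ Normal(theta_o, sr^2 + so^2 / alpha),
    # so a Beta prior on alpha yields a simple grid posterior.
    import numpy as np
    from scipy import stats

    theta_o, so = 0.41, 0.12     # original study: estimate, standard error
    theta_r, sr = 0.09, 0.10     # replication study

    alpha = np.linspace(1e-4, 1.0, 1000)     # grid for the power parameter
    sd = np.sqrt(sr**2 + so**2 / alpha)
    post = stats.norm.pdf(theta_r, theta_o, sd) * stats.beta.pdf(alpha, 1, 1)
    post /= post.sum()                       # normalize over the grid

    print("posterior mean of alpha: %.2f" % (alpha * post).sum())

A posterior concentrated near 0 indicates conflict between the studies (little borrowing); a posterior near 1 indicates compatibility.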

3.
Res Synth Methods ; 15(3): 500-511, 2024 May.
Article in English | MEDLINE | ID: mdl-38327122

ABSTRACT

Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563), and economics (327/91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.


Subjects
Economics, Meta-Analysis as Topic, Psychology, Publication Bias, Humans, Ecology, Research Design, Selection Bias, Probability, Medicine
4.
Psychol Methods ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38330340

ABSTRACT

A fundamental part of experimental design is to determine the sample size of a study. However, sparse information about population parameters and effect sizes before data collection renders effective sample size planning challenging. Specifically, sparse information may lead research designs to be based on inaccurate a priori assumptions, causing studies to use resources inefficiently or to produce inconclusive results. Despite its deleterious impact on sample size planning, many prominent methods for experimental design fail to adequately address the challenge of sparse a priori information. Here we propose a Bayesian Monte Carlo methodology for interim design analyses that allows researchers to analyze and adapt their sampling plans throughout the course of a study. At any point in time, the methodology uses the best available knowledge about parameters to make projections about expected evidence trajectories. Two simulated application examples demonstrate how interim design analyses can be integrated into common designs to inform sampling plans on the fly. The proposed methodology addresses the problem of sample size planning with sparse a priori information and yields research designs that are efficient, informative, and flexible. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
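
A sketch of the underlying idea, assuming a simple binomial test for which the Bayes factor has closed form (illustrative, not the authors' general methodology): draw parameter values from the interim posterior, simulate the remaining data, and estimate the probability that the target evidence level is reached by the planned maximum sample size.

    # Sketch of an interim design analysis: draw plausible parameter
    # values from the interim posterior, simulate the remaining
    # observations, and estimate how likely the study is to reach the
    # target evidence level by the planned maximum sample size.
    # Example test: theta = 0.5 versus theta ~ Beta(1, 1), for which
    #   BF01 = Binomial(k; n, 0.5) * (n + 1)   (closed form).
    # All numbers are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def bf01(k, n):
        return stats.binom.pmf(k, n, 0.5) * (n + 1)

    k_obs, n_obs = 34, 50            # interim data
    n_max, n_sims = 300, 2000        # planned maximum n; simulation count
    threshold = 10.0                 # target: BF10 > 10

    hits = 0
    for _ in range(n_sims):
        theta = rng.beta(1 + k_obs, 1 + n_obs - k_obs)       # posterior draw
        k_future = rng.binomial(n_max - n_obs, theta)        # projected data
        if 1.0 / bf01(k_obs + k_future, n_max) > threshold:  # BF10 = 1/BF01
            hits += 1

    print("P(BF10 > 10 by n = %d): %.2f" % (n_max, hits / n_sims))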

5.
R Soc Open Sci ; 11(2): 231486, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38384774

ABSTRACT

In their book 'Nudge: Improving Decisions About Health, Wealth and Happiness', Thaler & Sunstein (2009) argue that choice architectures are promising public policy interventions. This research programme motivated the creation of 'nudge units', government agencies that aim to apply insights from behavioural science to improve public policy. We closely examine a meta-analysis of the evidence gathered by two of the largest and most influential nudge units (DellaVigna & Linos (2022), Econometrica 90, 81-116, doi:10.3982/ECTA18709) and use statistical techniques to detect reporting biases. Our analysis shows evidence suggestive of selective reporting. We additionally evaluate the public pre-analysis plans from one of the two nudge units (Office of Evaluation Sciences). We identify several instances of excellent practice; however, we also find that the analysis plans and reporting often lack sufficient detail to evaluate (unintentional) reporting biases. We highlight several improvements that would enhance the effectiveness of the pre-analysis plans and reports as a means to combat reporting biases. Our findings and suggestions can further improve the evidence base for policy decisions.
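
One standard reporting-bias diagnostic, sketched here for concreteness (this abstract does not specify the paper's exact techniques): Egger's regression test for funnel-plot asymmetry, applied to crudely simulated selectively reported effects.

    # Egger's regression test for funnel-plot asymmetry, one standard
    # reporting-bias diagnostic (this abstract does not specify the
    # paper's exact techniques). Selective reporting is crudely simulated
    # by making effects track their standard errors.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    se = rng.uniform(0.05, 0.40, 60)                 # study standard errors
    y = (np.abs(rng.normal(size=60)) + 1.7) * se     # "selected" effects

    x, z = 1.0 / se, y / se                          # precision, z-scores
    b, a = np.polyfit(x, z, 1)                       # slope b, intercept a
    resid = z - (a + b * x)
    s2 = resid @ resid / (len(z) - 2)
    se_a = np.sqrt(s2 * (1 / len(z) + x.mean()**2 / ((x - x.mean())**2).sum()))
    p = 2 * stats.t.sf(abs(a / se_a), len(z) - 2)
    print("Egger intercept %.2f (p = %.4f); far from zero suggests asymmetry"
          % (a, p))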

6.
Behav Res Methods ; 56(3): 1260-1282, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37099263

ABSTRACT

Researchers conduct meta-analyses in order to synthesize information across different studies. Compared to standard meta-analytic methods, Bayesian model-averaged meta-analysis offers several practical advantages including the ability to quantify evidence in favor of the absence of an effect, the ability to monitor evidence as individual studies accumulate indefinitely, and the ability to draw inferences based on multiple models simultaneously. This tutorial introduces the concepts and logic underlying Bayesian model-averaged meta-analysis and illustrates its application using the open-source software JASP. As a running example, we perform a Bayesian meta-analysis on language development in children. We show how to conduct a Bayesian model-averaged meta-analysis and how to interpret the results.
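
A two-model simplification of the approach (a sketch, not JASP's implementation, which also averages over random-effects models): compare H0: δ = 0 against H1: δ ~ Normal(0, g) under a fixed-effect normal likelihood, where both marginal likelihoods are available in closed form.

    # Two-model sketch of Bayesian meta-analysis: H0: delta = 0 versus
    # H1: delta ~ Normal(0, g), with fixed-effect likelihood
    # y_i ~ Normal(delta, se_i^2). Under H1 the y_i are jointly normal
    # with covariance diag(se^2) + g, so both marginal likelihoods are
    # closed form. Illustrative data; g = 0.5^2 is an assumed prior scale.
    import numpy as np
    from scipy import stats

    y  = np.array([0.31, 0.12, 0.45, 0.08])   # study effect sizes
    se = np.array([0.15, 0.11, 0.20, 0.09])   # standard errors
    g  = 0.5**2                               # prior variance of delta under H1

    log_m0 = stats.norm.logpdf(y, 0.0, se).sum()
    log_m1 = stats.multivariate_normal.logpdf(y, np.zeros(len(y)),
                                              np.diag(se**2) + g)
    bf10 = np.exp(log_m1 - log_m0)
    print("BF10 = %.2f, P(H1 | data) = %.2f" % (bf10, bf10 / (1 + bf10)))

Because the Bayes factor is a ratio of marginal likelihoods, it can be updated as each new study arrives, which is what permits indefinite monitoring of the evidence.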


Subjects
Research Design, Software, Child, Humans, Bayes Theorem
7.
Psychon Bull Rev ; 31(1): 242-248, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37542014

ABSTRACT

Huisman (Psychonomic Bulletin & Review, 1-10, 2022) argued that a valid measure of evidence should indicate more support in favor of a true alternative hypothesis when sample size is large than when it is small. Bayes factors may violate this pattern and hence Huisman concluded that Bayes factors are invalid as a measure of evidence. In this brief comment we call attention to the following: (1) Huisman's purported anomaly is in fact dictated by probability theory; (2) Huisman's anomaly has been discussed and explained in the statistical literature since 1939; the anomaly was also highlighted in the Psychonomic Bulletin & Review article by Rouder et al. (2009), who interpreted the anomaly as "ideal": an interpretation diametrically opposed to that of Huisman. We conclude that when intuition clashes with probability theory, chances are that it is intuition that needs schooling.


Subjects
Bayes Theorem, Humans, Probability, Sample Size
8.
R Soc Open Sci ; 10(7): 230224, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37416830

ABSTRACT

Adjusting for publication bias is essential when drawing meta-analytic inferences. However, most methods that adjust for publication bias do not perform well across a range of research conditions, such as the degree of heterogeneity in effect sizes across studies. Sladekova et al. (2022; Estimating the change in meta-analytic effect size estimates after the application of publication bias adjustment methods. Psychol. Methods) tried to circumvent this complication by selecting the methods that are most appropriate for a given set of conditions, and concluded that publication bias on average causes only minimal over-estimation of effect sizes in psychology. However, this approach suffers from a 'Catch-22' problem: to know the underlying research conditions, one needs to have adjusted for publication bias correctly, but to correctly adjust for publication bias, one needs to know the underlying research conditions. To alleviate this problem, we conduct an alternative analysis, robust Bayesian meta-analysis (RoBMA), which is not based on model selection but on model averaging. In RoBMA, models that predict the observed results better are given correspondingly larger weights. A RoBMA reanalysis of Sladekova et al.'s dataset reveals that more than 60% of meta-analyses in psychology notably overestimate the evidence for the presence of the meta-analytic effect and more than 50% overestimate its magnitude.
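
The model-averaging step in miniature (placeholder numbers, not RoBMA's actual model ensemble): posterior model probabilities are proportional to prior probability times marginal likelihood, and inferences are weighted accordingly.

    # The model-averaging step in miniature: each model m gets weight
    # proportional to prior(m) * marginal_likelihood(m); any quantity of
    # interest is then averaged with these weights. Placeholder numbers,
    # not RoBMA's actual ensemble of effect/heterogeneity/bias models.
    import numpy as np

    log_marglik = np.array([-42.1, -40.3, -41.7, -39.9])  # one per model
    prior       = np.full(4, 0.25)                        # equal prior odds
    effect      = np.array([0.00, 0.21, 0.00, 0.14])      # per-model estimate

    w = np.exp(log_marglik - log_marglik.max()) * prior
    w /= w.sum()                             # posterior model probabilities
    print("weights:", np.round(w, 3))
    print("model-averaged effect: %.3f" % (w @ effect))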

9.
Behav Res Methods ; 55(8): 4343-4368, 2023 12.
Article in English | MEDLINE | ID: mdl-37277644

ABSTRACT

The multibridge R package allows a Bayesian evaluation of informed hypotheses Hr applied to frequency data from an independent binomial or multinomial distribution. multibridge uses bridge sampling to efficiently compute Bayes factors for the following hypotheses concerning the latent category proportions 𝜃: (a) hypotheses that postulate equality constraints (e.g., 𝜃1 = 𝜃2 = 𝜃3); (b) hypotheses that postulate inequality constraints (e.g., 𝜃1 < 𝜃2 < 𝜃3 or 𝜃1 > 𝜃2 > 𝜃3); (c) hypotheses that postulate combinations of inequality constraints and equality constraints (e.g., 𝜃1 < 𝜃2 = 𝜃3); and (d) hypotheses that postulate combinations of (a)-(c) (e.g., 𝜃1 < (𝜃2 = 𝜃3), 𝜃4). Any informed hypothesis Hr may be compared against the encompassing hypothesis He that all category proportions vary freely, or against the null hypothesis H0 that all category proportions are equal. multibridge facilitates the fast and accurate comparison of large models with many constraints and of models for which relatively little posterior mass falls in the restricted parameter space. This paper describes the underlying methodology and illustrates the use of multibridge through fully reproducible examples.
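
For a pure inequality constraint, the quantity multibridge computes by bridge sampling can also be approximated with the classic encompassing-prior Monte Carlo ratio; a sketch with illustrative counts (not the package's API):

    # Encompassing-prior Monte Carlo approximation (Klugkist & Hoijtink)
    # of the Bayes factor for an inequality constraint; multibridge
    # computes the same target far more efficiently by bridge sampling.
    # H_r: theta1 < theta2 < theta3; Dirichlet(1, 1, 1) prior;
    # illustrative counts.
    import numpy as np

    rng = np.random.default_rng(7)
    counts = np.array([18, 31, 51])
    a = np.ones(3)

    prior_draws = rng.dirichlet(a, 200_000)
    post_draws  = rng.dirichlet(a + counts, 200_000)

    ordered = lambda t: np.all(t[:, :-1] < t[:, 1:], axis=1).mean()
    print("BF_re (H_r vs. H_e): %.2f" %
          (ordered(post_draws) / ordered(prior_draws)))

The Monte Carlo ratio degrades exactly where bridge sampling shines: when little posterior mass falls inside the restricted space, few draws satisfy the constraint.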


Subjects
Bayes Theorem, Humans, Statistical Distributions
10.
Nature ; 617(7962): 669-670, 2023 05.
Article in English | MEDLINE | ID: mdl-37217667

Subjects
Thinking
11.
Psychol Methods ; 2023 May 11.
Article in English | MEDLINE | ID: mdl-37166854

ABSTRACT

Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the nonlinearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work, we show how these challenges can be addressed through a combination of Bayesian hierarchical modeling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study. (PsycInfo Database Record (c) 2023 APA, all rights reserved).

12.
Comput Brain Behav ; 6(1): 127-139, 2023.
Article in English | MEDLINE | ID: mdl-36879767

ABSTRACT

In van Doorn et al. (2021), we outlined a series of open questions concerning Bayes factors for mixed effects model comparison, with an emphasis on the impact of aggregation, the effect of measurement error, the choice of prior distributions, and the detection of interactions. Seven expert commentaries (partially) addressed these initial questions. Surprisingly perhaps, the experts disagreed (often strongly) on what is best practice: a testament to the intricacy of conducting a mixed effects model comparison. Here, we provide our perspective on these comments and highlight topics that warrant further discussion. In general, we agree with many of the commentaries that in order to take full advantage of Bayesian mixed model comparison, it is important to be aware of the specific assumptions that underlie the to-be-compared models.

13.
Nat Hum Behav ; 7(1): 15-26, 2023 01.
Article in English | MEDLINE | ID: mdl-36707644

ABSTRACT

Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.


Subjects
Mental Processes, Research Design, Humans, Registries
14.
Psychol Methods ; 28(2): 322-338, 2023 Apr.
Article in English | MEDLINE | ID: mdl-34914473

ABSTRACT

Hypotheses concerning the distribution of multinomial proportions typically entail exact equality constraints that can be evaluated using standard tests. Whenever researchers formulate inequality constrained hypotheses, however, they must rely on sampling-based methods that are relatively inefficient and computationally expensive. To address this problem we developed a bridge sampling routine that allows an efficient evaluation of multinomial inequality constraints. An empirical application showcases that bridge sampling outperforms current Bayesian methods, especially when relatively little posterior mass falls in the restricted parameter space. The method is extended to mixtures between equality and inequality constrained hypotheses. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
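
For the pure equality constraint, the Bayes factor against the encompassing model is available in closed form; this is the baseline the bridge-sampling routine extends to inequality and mixed constraints. A sketch assuming a Dirichlet(1, ..., 1) encompassing prior and illustrative counts:

    # Closed-form Bayes factor for the pure equality constraint
    # H0: theta_1 = ... = theta_K against the encompassing model with a
    # Dirichlet(alpha) prior (the multinomial coefficients cancel):
    #   BF_0e = K^(-N) / [B(alpha + counts) / B(alpha)],
    # where B is the multivariate beta function. Illustrative counts.
    import numpy as np
    from scipy.special import gammaln

    def log_bf_0e(counts, alpha):
        counts = np.asarray(counts, float)
        alpha = np.asarray(alpha, float)
        N, K = counts.sum(), len(counts)
        log_m0 = -N * np.log(K)
        log_me = (gammaln(alpha + counts).sum() - gammaln(alpha.sum() + N)
                  - gammaln(alpha).sum() + gammaln(alpha.sum()))
        return log_m0 - log_me

    print("BF_0e: %.3f" % np.exp(log_bf_0e([18, 31, 51], [1, 1, 1])))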


Subjects
Bayes Theorem, Humans
15.
Psychol Methods ; 28(3): 558-579, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35298215

ABSTRACT

The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Bayes Theorem, Behavioral Research, Psychology, Humans, Behavioral Research/methods, Psychology/methods, Software, Research Design
16.
Behav Res Methods ; 55(3): 1069-1078, 2023 04.
Article in English | MEDLINE | ID: mdl-35581436

ABSTRACT

The current practice of reliability analysis is both uniform and troublesome: most reports consider only Cronbach's α, and almost all reports focus exclusively on a point estimate, disregarding the impact of sampling error. In an attempt to improve the status quo we have implemented Bayesian estimation routines for five popular single-test reliability coefficients in the open-source statistical software program JASP. Using JASP, researchers can easily obtain Bayesian credible intervals to indicate a range of plausible values and thereby quantify the precision of the point estimate. In addition, researchers may use the posterior distribution of the reliability coefficients to address practically relevant questions such as "What is the probability that the reliability of my test is larger than a threshold value of .80?". In this tutorial article, we outline how to conduct a Bayesian reliability analysis in JASP and correctly interpret the results. By making available a computationally complex procedure in an easy-to-use software package, we hope to motivate researchers to include uncertainty estimates whenever reporting the results of a single-test reliability analysis.
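
One way to obtain such a posterior, sketched here under a noninformative prior (a standard route, not necessarily JASP's exact routine): draw covariance matrices from their inverse-Wishart posterior and compute Cronbach's α for each draw.

    # Posterior for Cronbach's alpha via inverse-Wishart draws of the
    # item covariance matrix (a standard route; not necessarily JASP's
    # exact routine). Under a noninformative prior the posterior of the
    # covariance is Inverse-Wishart(n - 1, S). Simulated 4-item data.
    import numpy as np
    from scipy.stats import invwishart

    rng = np.random.default_rng(3)
    n, k = 250, 4
    true_cov = np.full((k, k), 0.5) + 0.5 * np.eye(k)
    data = rng.multivariate_normal(np.zeros(k), true_cov, n)

    centered = data - data.mean(axis=0)
    S = centered.T @ centered                        # scatter matrix
    draws = invwishart(df=n - 1, scale=S).rvs(4000)  # (4000, k, k)

    alpha = k / (k - 1) * (1 - np.trace(draws, axis1=1, axis2=2)
                           / draws.sum(axis=(1, 2)))
    lo, hi = np.percentile(alpha, [2.5, 97.5])
    print("95%% credible interval: [%.3f, %.3f]" % (lo, hi))
    print("P(alpha > .80) = %.2f" % (alpha > 0.80).mean())

The last line answers exactly the kind of question quoted in the abstract, which a point estimate alone cannot.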


Subjects
Software, Humans, Bayes Theorem, Reproducibility of Results, Uncertainty
17.
Psychol Methods ; 28(1): 107-122, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35588075

ABSTRACT

Meta-analysis is an important quantitative tool for cumulative science, but its application is frustrated by publication bias. In order to test and adjust for publication bias, we extend model-averaged Bayesian meta-analysis with selection models. The resulting robust Bayesian meta-analysis (RoBMA) methodology does not require all-or-none decisions about the presence of publication bias, can quantify evidence in favor of the absence of publication bias, and performs well under high heterogeneity. By model-averaging over a set of 12 models, RoBMA is relatively robust to model misspecification and simulations show that it outperforms existing methods. We demonstrate that RoBMA finds evidence for the absence of publication bias in Registered Replication Reports and reliably avoids false positives. We provide an implementation in R so that researchers can easily use the new methodology in practice. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
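
RoBMA model-averages over Bayesian selection models, among other components. A maximum-likelihood sketch of a single one-cutpoint selection model (illustrative data, far simpler than the paper's 12-model ensemble) conveys the key ingredient: densities of published effects are reweighted and renormalized.

    # Maximum-likelihood sketch of a one-cutpoint selection model, the
    # key ingredient that RoBMA model-averages over in Bayesian form:
    # studies with one-sided p >= .05 are published with relative
    # probability omega, so each study's density is reweighted and
    # renormalized. Illustrative data, not the reanalyzed dataset.
    import numpy as np
    from scipy import stats, optimize

    y  = np.array([0.42, 0.35, 0.51, 0.29, 0.44])   # observed effects
    se = np.array([0.20, 0.16, 0.24, 0.14, 0.21])   # standard errors
    crit = stats.norm.ppf(0.95) * se                # y above this => p < .05

    def neg_loglik(par):
        theta, omega = par
        dens = stats.norm.pdf(y, theta, se)
        w = np.where(y > crit, 1.0, omega)                  # publication weight
        norm = (stats.norm.sf(crit, theta, se)              # P(p < .05 | theta)
                + omega * stats.norm.cdf(crit, theta, se))  # + omega * rest
        return -np.sum(np.log(dens * w / norm))

    fit = optimize.minimize(neg_loglik, x0=[0.2, 0.5],
                            bounds=[(-2.0, 2.0), (1e-3, 1.0)])
    print("bias-adjusted effect: %.3f, omega: %.3f" % tuple(fit.x))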


Subjects
Publication Bias, Humans, Bayes Theorem
18.
Psychon Bull Rev ; 30(2): 516-533, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35969359

ABSTRACT

A tradition that goes back to Sir Karl R. Popper assesses the value of a statistical test primarily by its severity: was there an honest and stringent attempt to prove the tested hypothesis wrong? For "error statisticians" such as Mayo (1996, 2018), and frequentists more generally, severity is a key virtue in hypothesis tests. Conversely, failure to incorporate severity into statistical inference, as allegedly happens in Bayesian inference, counts as a major methodological shortcoming. Our paper pursues a double goal: First, we argue that the error-statistical explication of severity has substantive drawbacks; specifically, the neglect of research context and the specificity of the predictions of the hypothesis. Second, we argue that severity matters for Bayesian inference via the value of specific, risky predictions: severity boosts the expected evidential value of a Bayesian hypothesis test. We illustrate severity-based reasoning in Bayesian statistics by means of a practical example and discuss its advantages and potential drawbacks.


Subjects
Bayes Theorem, Humans
19.
Perspect Psychol Sci ; 18(3): 607-623, 2023 05.
Article in English | MEDLINE | ID: mdl-36190899

ABSTRACT

Progress in psychology has been frustrated by challenges concerning replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility. Although often discussed separately, these five challenges may share a common cause: insufficient investment of intellectual and nonintellectual resources into the typical psychology study. We suggest that the emerging emphasis on big-team science can help address these challenges by allowing researchers to pool their resources together to increase the amount available for a single study. However, the current incentives, infrastructure, and institutions in academic science have all developed under the assumption that science is conducted by solo principal investigators and their dependent trainees, an assumption that creates barriers to sustainable big-team science. We also anticipate that big-team science carries unique risks, such as the potential for big-team-science organizations to be co-opted by unaccountable leaders, become overly conservative, and make mistakes at a grand scale. Big-team-science organizations must also acquire personnel who are properly compensated and have clear roles. Not doing so raises risks related to mismanagement and a lack of financial sustainability. If researchers can manage its unique barriers and risks, big-team science has the potential to spur great progress in psychology and beyond.


Subjects
Interdisciplinary Research, Humans, Reproducibility of Results
20.
Psychol Methods ; 28(3): 740-755, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34735173

ABSTRACT

Some important research questions require the ability to find evidence for two conditions being practically equivalent. This is impossible to accomplish within the traditional frequentist null hypothesis significance testing framework; hence, other methodologies must be utilized. We explain and illustrate three approaches for finding evidence for equivalence: The frequentist two one-sided tests procedure, the Bayesian highest density interval region of practical equivalence procedure, and the Bayes factor interval null procedure. We compare the classification performances of these three approaches for various plausible scenarios. The results indicate that the Bayes factor interval null approach compares favorably to the other two approaches in terms of statistical power. Critically, compared with the Bayes factor interval null procedure, the two one-sided tests and the highest density interval region of practical equivalence procedures have limited discrimination capabilities when the sample size is relatively small: Specifically, in order to be practically useful, these two methods generally require over 250 cases within each condition when rather large equivalence margins of approximately .2 or .3 are used; for smaller equivalence margins even more cases are required. Because of these results, we recommend that researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence, especially for studies that are constrained on sample size. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
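
The core computations of two of the procedures, sketched for a single data set under a normal approximation (illustrative numbers, not the paper's simulation design): the TOST p-value, and the interval-null Bayes factor as the ratio of posterior to prior odds of the effect lying inside the equivalence margin.

    # TOST versus the Bayes factor interval null, for one data set under
    # a normal approximation (margin m = 0.2 standardized; illustrative
    # numbers, not the paper's simulation design).
    import numpy as np
    from scipy import stats

    m = 0.2                        # equivalence margin
    d, se, df = 0.05, 0.09, 480    # observed effect, its SE, degrees of freedom

    # Two one-sided tests: equivalence if both one-sided p-values are small
    p_tost = max(stats.t.sf((d + m) / se, df),    # H0: delta <= -m
                 stats.t.cdf((d - m) / se, df))   # H0: delta >= +m

    # Interval-null Bayes factor with an assumed Normal(0, 1) prior on
    # delta: posterior odds of |delta| < m divided by the prior odds.
    post_var = 1.0 / (1.0 + 1.0 / se**2)
    post_mean = post_var * d / se**2
    inside = lambda mu, sd: (stats.norm.cdf(m, mu, sd)
                             - stats.norm.cdf(-m, mu, sd))
    post_in, prior_in = inside(post_mean, np.sqrt(post_var)), inside(0.0, 1.0)
    bf = (post_in / (1 - post_in)) / (prior_in / (1 - prior_in))

    print("TOST p = %.3f, BF(interval null vs. outside) = %.2f" % (p_tost, bf))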


Subjects
Research Design, Humans, Bayes Theorem, Sample Size