Results 1 - 9 of 9
1.
Assessment ; 10731911241234118, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486349

ABSTRACT

Replication provides a confrontation of psychological theory, not only in experimental research, but also in model-based research. Goodness of fit (GOF) of the original model to the replication data is routinely provided as meaningful evidence of replication. We demonstrate, however, that GOF obscures important differences between the original and replication studies. As an alternative, we present Bayesian prior predictive similarity checking: a tool for rigorously evaluating the degree to which the data patterns and parameter estimates of a model replication study resemble those of the original study. We apply this method to original and replication data from the National Comorbidity Survey. Both data sets yielded excellent GOF, but the similarity checks often failed to support close or approximate empirical replication, especially when examining covariance patterns and indicator thresholds. We conclude with recommendations for applied research, including registered reports of model-based research, and provide extensive annotated R code to facilitate future applications of prior predictive similarity checking.
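The paper's annotated code is in R and works with SEM parameter estimates, covariance patterns, and indicator thresholds. Purely as a hypothetical illustration of the underlying idea (simulate data sets from a prior predictive distribution built around the original study, then ask how plausible the replication's summary statistic is), a minimal Python sketch might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def prior_predictive_stats(n_draws, n_obs, prior_mean=0.0, prior_sd=1.0):
    """Simulate data sets from a normal prior predictive distribution
    and record a summary statistic (here, the sample mean)."""
    stats = np.empty(n_draws)
    for i in range(n_draws):
        mu = rng.normal(prior_mean, prior_sd)  # draw a parameter from the prior
        y = rng.normal(mu, 1.0, size=n_obs)    # simulate data given that parameter
        stats[i] = y.mean()
    return stats

def similarity_tail_prob(ppd_stats, stat_replication, stat_original):
    """Proportion of prior predictive draws at least as far from the
    original statistic as the replication statistic is."""
    dist = abs(stat_replication - stat_original)
    return np.mean(np.abs(ppd_stats - stat_original) >= dist)

ppd = prior_predictive_stats(n_draws=5000, n_obs=200)
# A very small proportion would flag the replication as dissimilar
# to what the prior (centered on the original study) predicts.
print(similarity_tail_prob(ppd, stat_replication=0.25, stat_original=0.0))
```

This toy compares a single mean; the article's method applies the same logic to whole model-implied data patterns.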

2.
Front Psychol ; 12: 621547, 2021.
Article in English | MEDLINE | ID: mdl-34912255

ABSTRACT

The popularity and use of Bayesian methods have increased across many research domains. The current article demonstrates how some less familiar Bayesian methods can be used. Specifically, we applied expert elicitation, testing for prior-data conflicts, the Bayesian Truth Serum, and testing for replication effects via Bayes factors in a series of four studies investigating the use of questionable research practices (QRPs). Scientifically fraudulent or unethical research practices have caused quite a stir in academia and beyond. Improving science starts with educating Ph.D. candidates: the scholars of tomorrow. In four studies comprising 765 Ph.D. candidates, we investigated whether Ph.D. candidates can differentiate between ethical and unethical or even fraudulent research practices. We probed the candidates' willingness to publish research from such practices and tested whether this is influenced by (un)ethical pressure from supervisors or peers. Furthermore, 36 academic leaders (deans, vice-deans, and heads of research) were interviewed and asked to predict how Ph.D. candidates would answer different vignettes. Our study shows, and replicates, that some Ph.D. candidates are willing to publish results deriving from even blatantly fraudulent behavior: data fabrication. Additionally, some academic leaders underestimated this behavior, which is alarming. Academic leaders should keep in mind that Ph.D. candidates can be under more pressure than they realize and might be susceptible to using QRPs. As an inspiring example, and to encourage others to make their Bayesian work reproducible, we published the data, annotated scripts, and detailed output on the Open Science Framework (OSF).

3.
Front Psychol ; 11: 608045, 2020.
Article in English | MEDLINE | ID: mdl-33324306

ABSTRACT

The current paper highlights a new, interactive Shiny app that can aid in understanding and teaching the important task of conducting a prior sensitivity analysis when implementing Bayesian estimation methods. We discuss the importance of examining prior distributions through a sensitivity analysis and argue that doing so is just as important when so-called diffuse priors are implemented as when subjective priors are used. As a proof of concept, we conducted a small simulation study illustrating the impact of priors on final model estimates; its findings underscore the importance of conducting a sensitivity analysis of priors. This concept is extended through the interactive Shiny app we developed, which allows users to explore the impact of various forms of priors using empirical data. We introduce the app and thoroughly detail an example using a simple multiple regression model that users at all levels can understand. We highlight how to determine the different settings for a prior sensitivity analysis, how to compare the results visually and statistically, and how to display and write up disparate findings obtained across the sensitivity analysis. The goal is that novice users can follow the process outlined here and work within the interactive app to gain a deeper understanding of the role of prior distributions and the importance of a sensitivity analysis when implementing Bayesian methods. The intended audience is broad (e.g., undergraduate and graduate students, faculty, and other researchers) and includes those with limited exposure to Bayesian methods or the specific model presented here.
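To make concrete what a prior sensitivity analysis does, here is a hypothetical toy example (not the app's multiple-regression setup): a conjugate-prior slope estimate refit under priors of increasing width, so the pull of the prior on the estimate becomes visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a simple regression y = 2x + noise
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=1.0, size=100)

def posterior_slope(x, y, prior_sd, noise_sd=1.0, prior_mean=0.0):
    """Posterior mean of the slope under a normal prior,
    assuming the noise standard deviation is known."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = np.sum(x**2) / noise_sd**2
    post_prec = prior_prec + data_prec
    # Precision-weighted average of prior mean and data information
    return (prior_prec * prior_mean + (x @ y) / noise_sd**2) / post_prec

# Sensitivity analysis: vary the prior sd and compare posterior means.
for prior_sd in (0.1, 1.0, 10.0, 100.0):
    print(prior_sd, round(posterior_slope(x, y, prior_sd), 3))
```

A tight prior (sd = 0.1) pulls the estimate toward zero, while diffuse priors leave it near the least-squares slope; tabulating such shifts across prior settings is the core of the write-up the paper describes.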

4.
Front Psychol ; 11: 611963, 2020.
Article in English | MEDLINE | ID: mdl-33362673

ABSTRACT

When Bayesian estimation is used to analyze structural equation models (SEMs), prior distributions need to be specified for all parameters in the model. Many popular software programs offer default prior distributions, which is helpful for novice users and makes Bayesian SEM accessible to a broad audience. However, when the sample size is small, those default priors are not always suitable and can lead to untrustworthy results. In this tutorial, we provide a non-technical discussion of the risks associated with the use of default priors in small-sample contexts and show how default priors can unintentionally behave as highly informative priors when samples are small. We also demonstrate an online educational Shiny app in which users can explore the impact of varying prior distributions and sample sizes on model results. We discuss how the app can be used in teaching, provide a reading list on how to specify suitable prior distributions, and discuss guidelines for recognizing (mis)behaving priors. It is our hope that this tutorial helps spread awareness of the importance of specifying suitable priors when Bayesian SEM is used with small samples.

5.
Multivariate Behav Res ; 54(6): 795-821, 2019.
Article in English | MEDLINE | ID: mdl-31012738

ABSTRACT

Recent advances have allowed for modeling mixture components within latent growth modeling using robust, skewed mixture distributions rather than normal distributions. This feature adds flexibility in handling non-normality in longitudinal data, through manifest or latent variables, by directly modeling skewed or heavy-tailed latent classes rather than assuming a mixture of normal distributions. The aim of this study was to assess through simulation the potential under- or over-extraction of latent classes in a growth mixture model when underlying data follow either normal, skewed-normal, or skewed-t distributions. In order to assess this, we implement skewed-t, skewed-normal, and conventional normal (i.e., not skewed) forms of the growth mixture model. The skewed-t and skewed-normal versions of this model have only recently been implemented, and relatively little is known about their performance. Model comparison, fit, and classification of correctly specified and mis-specified models were assessed through various indices. Findings suggest that the accuracy of model comparison and fit measures are dependent on the type of (mis)specification, as well as the amount of class separation between the latent classes. A secondary simulation exposed computation and accuracy difficulties under some skewed modeling contexts. Implications of findings, recommendations for applied researchers, and future directions are discussed; a motivating example is presented using education data.


Subjects
Latent Class Analysis; Models, Statistical; Statistical Distributions; Computer Simulation; Humans; Likelihood Functions; Statistics as Topic
6.
Article in English | MEDLINE | ID: mdl-30042732

ABSTRACT

Background: Cultural factors influence how individuals define, evaluate, and approach their quality of life (QoL). The CushingQoL is a widely used disease-specific questionnaire for assessing QoL in patients with Cushing's syndrome. However, there is no information about potential cross-country differences in how patients interpret the items on the CushingQoL. Thus, the current study examined whether the CushingQoL is interpreted in the same way across nationalities. Methods: Patients from the U.S. (n = 260) and the Netherlands (n = 103) were asked to fill out the CushingQoL and a short demographic survey. Measurement invariance testing was used to explore whether the U.S. and Dutch patient samples interpreted items on the CushingQoL in the same way. Results: A two-subscale scoring approach was used for the CushingQoL. Model fit was good for the U.S. sample (e.g., CFI = 0.983; TLI = 0.979) as well as the Dutch sample (e.g., CFI = 0.971; TLI = 0.964). Invariance testing revealed that 3 of the 12 items on the CushingQoL were interpreted differently across the groups. These items all relate to psychosocial issues (e.g., irritable mood and worrying about one's health). Items assessing physical aspects of QoL did not vary across the U.S. and Dutch samples. Conclusions: Interpreting results from the CushingQoL requires careful consideration of country of residence, as this appears to affect how the questionnaire is interpreted.

7.
PLoS One ; 13(4): e0195229, 2018.
Article in English | MEDLINE | ID: mdl-29614117

ABSTRACT

Violent acts on university campuses are becoming more frequent, and enrollment rates of Latinos at universities are increasing. Research has indicated that youths, particularly Latino youths, are more susceptible to trauma. Thus, it is imperative to evaluate the validity of commonly used posttraumatic stress measures among Latino college students. The Impact of Event Scale-Revised (IES-R) is one of the most commonly used measures of posttraumatic stress disorder symptomatology. However, it is largely unknown whether the IES-R measures the same construct across different sub-samples (e.g., Latino versus non-Latino). The current study aimed to assess measurement invariance of the IES-R between Latino and non-Latino participants. A total of 545 participants completed the IES-R. One- and three-factor scoring solutions were compared using confirmatory factor analyses. Measurement invariance was then evaluated by estimating several multiple-group confirmatory factor analytic models: four models with an increasing degree of invariance across groups were compared, with a significant χ2 difference test indicating a significant change in model fit between nested models. The three-factor scoring solution could not be used for the invariance testing process because the subscale correlations were too high for estimation (rs = 0.92-1.00), so the one-factor model was used. Invariance was met at each level: configural, metric, scalar, and strict. All results indicated that the one-factor solution for the IES-R was equivalent for the Latino and non-Latino participants.
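The nested-model comparison underlying these invariance steps reduces to testing the change in chi-square against a chi-square distribution with the change in degrees of freedom. A minimal sketch, using hypothetical fit statistics (the actual values come from fitted multiple-group CFA models):

```python
from scipy.stats import chi2

def chi2_difference_test(chisq_restricted, df_restricted,
                         chisq_free, df_free, alpha=0.05):
    """Likelihood-ratio test between nested models (e.g., metric vs. configural).

    The more restricted model (more invariance constraints) has the larger
    chi-square and the larger degrees of freedom.
    """
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    # A non-significant result means the added equality constraints do not
    # significantly worsen fit, so that level of invariance is supported.
    return delta_chisq, delta_df, p, p > alpha

# Hypothetical configural vs. metric comparison:
print(chi2_difference_test(chisq_restricted=310.4, df_restricted=208,
                           chisq_free=298.1, df_free=196))
```

In practice, SEM software reports these statistics directly; the sketch only shows the arithmetic behind the "significant χ2 difference test" criterion in the abstract.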


Subjects
Ethnicity/psychology; Ethnicity/statistics & numerical data; Hispanic or Latino/psychology; Hispanic or Latino/statistics & numerical data; Psychometrics/methods; Adolescent; Adult; California/epidemiology; Humans; Population Surveillance; Reproducibility of Results; Stress Disorders, Post-Traumatic/epidemiology; Stress Disorders, Post-Traumatic/psychology; Students; Universities; Young Adult
8.
Multivariate Behav Res ; 53(2): 267-291, 2018.
Article in English | MEDLINE | ID: mdl-29324055

ABSTRACT

There has been a recent increase in interest in Bayesian analysis. However, little effort has been made thus far to directly incorporate background knowledge into analyses via the prior distribution. This process might be especially useful in the context of latent growth mixture modeling when one or more of the latent groups are expected to be relatively small, a situation we refer to as limited data. We argue that the use of Bayesian statistics has great advantages in limited-data situations, but only if background knowledge can be incorporated into the analysis via prior distributions. We highlight these advantages using a data set of patients with burn injuries, analyzing trajectories of posttraumatic stress symptoms in the Bayesian framework while following the steps of the WAMBS checklist. In the included example, we illustrate how to obtain background information from previous literature via a systematic literature search and from expert knowledge. Finally, we show how to translate this knowledge into prior distributions, and we illustrate the importance of conducting a prior sensitivity analysis. Although our example is from the trauma field, the techniques we illustrate can be applied to any field.


Subjects
Bayes Theorem; Models, Statistical; Stress Disorders, Post-Traumatic; Burn Units; Humans; Review Literature as Topic; Wounds and Injuries
9.
Psychol Methods ; 22(2): 217-239, 2017 06.
Article in English | MEDLINE | ID: mdl-28594224

ABSTRACT

Although the statistical tools most often used by researchers in the field of psychology over the last 25 years are based on frequentist statistics, it is often claimed that the alternative Bayesian approach is gaining in popularity. In the current article, we investigated this claim by performing the first systematic review of Bayesian psychological articles published between 1990 and 2015 (n = 1,579). We aim to provide a thorough presentation of the role Bayesian statistics plays in psychology. This historical assessment allows us to identify trends and see how Bayesian methods have been integrated into psychological research across different statistical frameworks (e.g., hypothesis testing, cognitive models, IRT, SEM). We also describe take-home messages and provide "big-picture" recommendations to the field as Bayesian statistics becomes more popular. Our review indicated that Bayesian statistics is used in a variety of contexts across subfields of psychology and related disciplines, and that there are many different reasons to choose Bayes (e.g., the use of priors, estimating otherwise intractable models, and modeling uncertainty). We found that the use of Bayes has increased and broadened in the sense that this methodology can be applied flexibly to many different forms of questions. We hope this presentation opens the door for a larger discussion regarding the current state of Bayesian statistics, as well as future trends.


Subjects
Bayes Theorem; Periodicals as Topic; Psychology; Research Design; Humans; Models, Theoretical; Uncertainty