Results 1 - 8 of 8
1.
Behav Res Methods ; 55(4): 1942-1964, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35798918

ABSTRACT

Multilevel models are used ubiquitously in the social and behavioral sciences, and effect sizes are critical for contextualizing results. A general framework of R-squared effect size measures for multilevel models has only recently been developed: Rights and Sterba (2019) distinguished each source of explained variance for each possible kind of outcome variance. Though researchers have long desired a comprehensive and coherent approach to computing R-squared measures for multilevel models, this framework has a steep learning curve. The purpose of this tutorial is to introduce and demonstrate a new R package, r2mlm, that automates the intensive computations involved in implementing the framework and provides accompanying graphics to visualize all multilevel R-squared measures together. We use accessible illustrations with open data and code to demonstrate how to use and interpret the R package output.
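For orientation, a minimal sketch of the workflow this abstract describes, in R. The dataset (school_data) and variable names (math, ses, school) are hypothetical placeholders, and it is assumed from the abstract's description that the package's main entry point, r2mlm(), accepts a fitted lme4 model:

```r
# Hedged sketch, not the tutorial's own code. Assumes the CRAN packages
# lme4 and r2mlm are installed; data and variables are hypothetical.
library(lme4)
library(r2mlm)

# Fit a random-intercept, random-slope multilevel model: students
# (rows) nested in schools, predicting math from ses
fit <- lmer(math ~ ses + (1 + ses | school), data = school_data)

# r2mlm() is assumed to decompose the outcome variance and return the
# full set of R-squared measures from the Rights & Sterba (2019)
# framework, with an accompanying graphic of the decomposition
r2mlm(fit)
```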


Subject(s)
Behavioral Sciences, Humans, Multilevel Analysis
2.
Behav Brain Sci ; 45: e14, 2022 02 10.
Article in English | MEDLINE | ID: mdl-35139945

ABSTRACT

Because of the misspecification of models and the specificity of operationalizations, many studies produce claims of limited utility. We suggest a path forward that requires taking a few steps back: researchers can retool large-scale replications to conduct the descriptive research that assesses the generalizability of constructs. Large-scale construct validation is feasible and a necessary next step in addressing the generalizability crisis.


Subject(s)
Research Design, Research Personnel, Humans
4.
Can J Exp Psychol ; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052343

ABSTRACT

A Registered Report is a type of research journal article in which the introduction, methods, and analysis plan are proposed and peer-reviewed prior to the execution of the study. The goal is to limit publication bias based on study findings by conducting peer review on the merits of the study before the results are known. First introduced in 2012 (Chambers, 2013; Chambers & Tzavella, 2022), this format of journal article publication has become more commonplace. Here we provide an overview of the format as well as eight core lessons we learned while preparing Registered Reports. We integrate guidelines from the literature with our experience to provide insight into the process of preparing and publishing a Registered Report for those who have not yet tried it. Though Registered Reports require researchers to invest more effort at the earlier stages of idea generation, design, and analysis planning, researchers benefit from reviewer feedback when it is most useful and can leave behind the fear of rejection due to unanticipated study limitations or null results. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

5.
Psychol Methods ; 28(4): 905-924, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35588078

ABSTRACT

Measurement invariance, the notion that the measurement properties of a scale are equal across groups, contexts, or time, is an important assumption underlying much psychological research. The traditional approach for evaluating measurement invariance is to fit a series of nested measurement models using multiple-group confirmatory factor analyses. However, traditional approaches are strict, vary across the field in implementation, and present multiplicity challenges, even in the simplest case of two groups under study. The alignment method was recently proposed as an alternative approach. This method is more automated, requires fewer decisions from researchers, and accommodates two or more groups. However, it has different assumptions, estimation techniques, and limitations from traditional approaches. To address the lack of accessible resources that explain the methodological differences and complexities between the two approaches, we introduce and illustrate both, comparing them side by side. First, we overview the concepts, assumptions, advantages, and limitations of each approach. Based on this overview, we propose a list of four key considerations to help researchers decide which approach to choose and how to document their analytical decisions in a preregistration or analysis plan. We then demonstrate our key considerations on an illustrative research question using an open dataset and provide an example of a completed preregistration. Our illustrative example is accompanied by an annotated analysis report that shows readers, step-by-step, how to conduct measurement invariance tests using R and Mplus. Finally, we provide recommendations for how to decide between and use each approach and next steps for methodological research. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
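As context for the traditional approach this abstract contrasts with alignment, here is a hedged sketch of the nested multiple-group CFAs in R with lavaan (not the paper's annotated analysis report). The one-factor model, dataset (dat), items (x1-x4), and grouping variable (country) are hypothetical:

```r
# Traditional measurement-invariance testing via nested multiple-group
# CFAs in lavaan; all data and variable names are placeholders.
library(lavaan)

model <- "f =~ x1 + x2 + x3 + x4"

# Configural: same factor structure, all parameters free across groups
configural <- cfa(model, data = dat, group = "country")

# Metric: factor loadings constrained equal across groups
metric <- cfa(model, data = dat, group = "country",
              group.equal = "loadings")

# Scalar: loadings and intercepts constrained equal across groups
scalar <- cfa(model, data = dat, group = "country",
              group.equal = c("loadings", "intercepts"))

# Compare the nested models; nonsignificant chi-square differences
# (alongside small changes in fit indices) support invariance at each level
lavTestLRT(configural, metric, scalar)
```

Each model tightens the equality constraints of the previous one, which is what makes the chi-square difference tests in lavTestLRT() valid comparisons of nested models.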


Subject(s)
Research Design, Humans, Factor Analysis
6.
Am Psychol ; 77(4): 576-588, 2022.
Article in English | MEDLINE | ID: mdl-35482669

ABSTRACT

Currently, there is little guidance for navigating measurement challenges that threaten construct validity in replication research. To identify common challenges and ultimately strengthen replication research, we conducted a systematic review of the measures used in the 100 original and replication studies from the Reproducibility Project: Psychology (Open Science Collaboration, 2015). Results indicate that it was common for scales used in the original studies to have little or no validity evidence. Our systematic review demonstrates and corroborates evidence that issues of construct validity are sorely neglected in original and replication research. We identify four measurement challenges replicators are likely to face: a lack of essential measurement information, a lack of validity evidence, measurement differences, and translation. Next, we offer solutions for addressing these challenges that will improve measurement practices in original and replication research. Finally, we close with a discussion of the need to develop measurement methodologies for the next generation of replication research. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Reproducibility of Results
7.
Psychol Methods ; 26(3): 273-294, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32673042

ABSTRACT

In this article, we propose integrated generalized structured component analysis (IGSCA), a general statistical approach for simultaneously analyzing data with both components and factors in the same model. This approach combines generalized structured component analysis (GSCA) and generalized structured component analysis with measurement errors incorporated (GSCAM) in a unified manner and can estimate both factor- and component-model parameters, including component and factor loadings, component and factor path coefficients, and path coefficients connecting factors and components. We conduct two simulation studies to investigate the performance of IGSCA under models with both factors and components. The first simulation study assesses how existing approaches for structural equation modeling and IGSCA recover parameters. This study shows that only consistent partial least squares (PLSc) and IGSCA yield unbiased estimates of all parameters, whereas the other approaches provide biased estimates of several parameters. We therefore conduct a second, extensive simulation study to evaluate the relative performance of the two competitors (PLSc and IGSCA), considering a variety of experimental factors (model specification, sample size, the number of indicators per factor/component, and exogenous factor/component correlation). IGSCA exhibits better performance than PLSc under most conditions. We also present a real-data application of IGSCA to the study of genes and their influence on depression. Finally, we discuss the implications and limitations of this approach and offer recommendations for future research. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
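The abstract does not include the authors' code, and IGSCA itself is beyond a short sketch. Below is a generic skeleton of the kind of parameter-recovery simulation it describes, using a first-principal-component loading as a deliberately simple stand-in estimator (a hypothetical placeholder, not IGSCA or PLSc):

```r
# Generic parameter-recovery simulation skeleton in base R; the
# estimator here is a stand-in, not the abstract's IGSCA/PLSc code.
set.seed(1)

true_loading <- 0.7   # population loading of each indicator
n_reps <- 1000        # number of simulation replications
n <- 200              # sample size per replication
estimates <- numeric(n_reps)

for (r in seq_len(n_reps)) {
  # Generate four standardized indicators from a one-factor model
  f <- rnorm(n)
  x <- outer(f, rep(true_loading, 4)) +
       matrix(rnorm(n * 4, sd = sqrt(1 - true_loading^2)), n, 4)

  # Stand-in estimator: first principal-component loading of item 1
  e <- eigen(cor(x))
  estimates[r] <- abs(e$vectors[1, 1]) * sqrt(e$values[1])
}

# Relative performance is summarized across replications by bias and RMSE
c(bias = mean(estimates) - true_loading,
  rmse = sqrt(mean((estimates - true_loading)^2)))
```

The stand-in's upward bias relative to the true loading (principal components absorb unique variance that a factor model would exclude) mirrors the abstract's point that some estimators recover factor-model parameters with systematic bias.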


Subject(s)
Latent Class Analysis, Computer Simulation, Humans, Least-Squares Analysis, Sample Size
8.
Adv Methods Pract Psychol Sci ; 1(4): 501-515, 2018 Dec.
Article in English | MEDLINE | ID: mdl-31886452

ABSTRACT

Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA's mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
