Results 1 - 3 of 3
1.
Behav Res Methods ; 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38030923

ABSTRACT

Measurement invariance (MI) of a psychometric scale is a prerequisite for valid group comparisons of the measured construct. While the invariance of loadings and intercepts (i.e., scalar invariance) supports comparisons of factor means and observed means with continuous items, a general belief is that the same holds with ordered-categorical (i.e., ordered-polytomous and dichotomous) items. However, as this paper shows, this belief is only partially true: factor mean comparison is permissible in the correctly specified scalar invariance model with ordered-polytomous items but not with dichotomous items. Furthermore, rather than scalar invariance, full strict invariance (invariance of loadings, thresholds, intercepts, and unique factor variances in all items) is needed when comparing observed means with both ordered-polytomous and dichotomous items. In a Monte Carlo simulation study, we found that unique factor noninvariance led to biased estimates and inferences (e.g., with type I error rates inflated to 19.52%) of (a) the observed mean difference for both ordered-polytomous and dichotomous items and (b) the factor mean difference for dichotomous items in the scalar invariance model. We provide a tutorial on invariance testing with ordered-categorical items as well as suggestions on mean comparisons when strict invariance is violated. In general, we recommend testing strict invariance prior to comparing observed means with ordered-categorical items, and adjusting for partial invariance to compare factor means if strict invariance fails.
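The point about dichotomous items can be illustrated with a minimal numerical sketch (the loading, threshold, and unique-variance values below are arbitrary illustrative choices, not the paper's simulation design): when two groups share the factor distribution, loading, and threshold but differ in unique factor variance, the item endorsement probabilities, and hence the observed means, differ even though the factor means are identical.

```python
import numpy as np

# Sketch: strict invariance violated only in the unique factor variance.
rng = np.random.default_rng(42)
n = 200_000                     # large n so the gap reflects bias, not sampling noise
lam, tau = 0.8, 0.3             # loading and threshold, invariant across groups
theta_g1, theta_g2 = 1.0, 2.0   # unique factor variances (the noninvariant part)

def observed_mean(theta):
    eta = rng.normal(0.0, 1.0, n)                        # identical factor means
    y_star = lam * eta + rng.normal(0.0, np.sqrt(theta), n)
    return (y_star > tau).mean()                         # dichotomous item score

m1, m2 = observed_mean(theta_g1), observed_mean(theta_g2)
# Theoretical endorsement rate per group: Phi((0 - tau) / sqrt(lam**2 + theta))
print(f"group 1: {m1:.3f}, group 2: {m2:.3f}")  # differ despite equal factor means
```

The same mechanism leaves ordered-polytomous observed means biased as well; the dichotomous case is just the simplest to display.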

2.
Behav Res Methods ; 54(1): 414-434, 2022 02.
Article in English | MEDLINE | ID: mdl-34236670

ABSTRACT

Measurement invariance is the condition that an instrument measures a target construct in the same way across subgroups, settings, and time. In psychological measurement, usually only partial, not full, invariance is achieved, which potentially biases subsequent parameter estimates and statistical inferences. Although the existing literature shows that a correctly specified partial invariance model can remove such biases, it ignores the model uncertainty in the specification search step: flagging the wrong items may introduce additional bias and variability into subsequent inferences. On the other hand, several new approaches, including Bayesian approximate invariance and alignment optimization methods, have been proposed; these methods use an approximate invariance model to adjust for partial measurement invariance without the need to directly identify noninvariant items. However, there has been limited research on these methods in situations with a small number of groups. In this paper, we conducted three systematic simulation studies to compare five methods for adjusting for partial invariance. While specification search performed reasonably well when the proportion of noninvariant parameters was no more than one-third, alignment optimization performed best overall across conditions in terms of efficiency of parameter estimates, confidence interval coverage, and type I error rates. In addition, the Bayesian version of alignment optimization performed best for estimating latent means and variances in small-sample and low-reliability conditions. We thus recommend the alignment optimization methods for adjusting for partial invariance when comparing latent constructs across a few groups.
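The bias that motivates these adjustment methods can be seen in a toy two-group example (a hedged sketch with made-up parameter values; it implements none of the five compared methods): if one item's intercept is shifted in group 2 and the latent mean difference is estimated under a wrongly assumed full scalar invariance model, the estimate is pulled away from the true value of zero.

```python
import numpy as np

# Toy model: x_j = lam * eta + eps_j, equal latent means (true difference = 0),
# but item 0's intercept is shifted by delta in group 2 (partial noninvariance).
rng = np.random.default_rng(1)
n, p = 50_000, 4
lam = 0.7                      # common loading, invariant across groups
delta = 0.5                    # intercept shift on item 0 in group 2 only

def item_means(intercept_shift):
    eta = rng.normal(0.0, 1.0, n)            # same latent mean in both groups
    x = lam * eta[:, None] + rng.normal(0.0, 0.5, (n, p))
    x[:, 0] += intercept_shift               # the noninvariant intercept
    return x.mean(axis=0)

m1, m2 = item_means(0.0), item_means(delta)
# Naive latent-mean-difference estimate that (wrongly) assumes full invariance:
naive = ((m2 - m1) / lam).mean()
print(f"estimated latent mean difference: {naive:.3f}")  # near delta/(p*lam), not 0
```

Specification search tries to find and free the shifted intercept before estimating; alignment optimization instead relaxes exact equality and minimizes total noninvariance across groups.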


Subject(s)
Bayes Theorem, Bias, Computer Simulation, Factor Analysis, Humans, Reproducibility of Results
3.
Front Psychol ; 10: 1286, 2019.
Article in English | MEDLINE | ID: mdl-31214090

ABSTRACT

Previous research by Zhang and Savalei (2015) proposed an alternative to the Likert scale format: the Expanded format. Scale items in the Expanded format present both positively worded and negatively worded sentences as response options for each item; as a result, they are less affected by the acquiescence bias and method effects that often occur with Likert scale items. The major goal of the current study is to further demonstrate the superiority of the Expanded format over the Likert format across different psychological scales. Specifically, we aim to replicate the findings of Zhang and Savalei and to determine whether an order effect exists in Expanded format scales. Six psychological scales were examined in the study: the five subscales of the Big Five Inventory (BFI) and the Rosenberg Self-Esteem (RSE) scale. Four versions were created for each psychological scale. One version was the original scale in the Likert format; the other three were in different Expanded formats that varied in the order of the response options. For each scale, participants were randomly assigned to complete one scale version. Across the different versions of each scale, we compared the factor structures and the distributions of the response options. Our results successfully replicated the findings of Zhang and Savalei, and also showed that the order effect was generally absent in the Expanded format scales. Based on these promising findings, we encourage researchers to use the Expanded format for these and other scales in their substantive research.
