Results 1 - 20 of 34
1.
Behav Res Methods ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565742

ABSTRACT

Structural equation models are used to model the relationships between latent constructs and observable behaviors such as survey responses. Researchers are often interested in testing nested models to determine whether additional constraints that create a more parsimonious model are also supported by the data. A popular statistical tool for nested model comparison is the chi-square difference test. However, there is some evidence that this test performs suboptimally when the unrestricted model is misspecified. In this paper, we examine the type I error rate of the difference test within the context of single-group confirmatory factor analyses when the less restricted model is misspecified but the constraints imposed by the restricted model are correct. Using empirical simulations and analytic approximations, we find that the chi-square difference test is robust to many but not all forms of realistically sized misspecification in the unrestricted model.
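
As a concrete illustration, the sketch below fits two nested single-group CFA models in R with lavaan and runs the chi-square difference test; the data frame `dat` and items x1-x6 are hypothetical placeholders, not materials from the article.

```r
library(lavaan)

# Hypothetical two-factor setup; `dat` and x1-x6 are placeholders.
m_free <- ' f1 =~ x1 + x2 + x3
            f2 =~ x4 + x5 + x6 '                  # unrestricted: f1, f2 covary freely
m_orth <- paste(m_free, 'f1 ~~ 0*f2', sep = '\n') # restricted: covariance fixed to 0

fit_free <- cfa(m_free, data = dat)
fit_orth <- cfa(m_orth, data = dat)

lavTestLRT(fit_orth, fit_free)   # chi-square difference test (equivalent to anova())
```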

2.
Multivariate Behav Res ; 58(6): 1134-1159, 2023.
Article in English | MEDLINE | ID: mdl-37039444

ABSTRACT

The use of modern missing data techniques has become more prevalent with their increasing accessibility in statistical software. These techniques focus on handling data that are missing at random (MAR). Although all MAR mechanisms are routinely treated the same, they are not equal. The impact of missing data on the efficiency of parameter estimates can differ across MAR variations, even when the amount of missing data is held constant; yet, in current practice, only the rate of missing data is reported. The impact of MAR on the loss of efficiency can instead be measured more directly by the fraction of missing information (FMI). In this article, we explore this impact using FMIs in regression models with one and two predictors. With the help of a Shiny application, we demonstrate that efficiency loss due to missing data can be highly complex and is not always intuitive. We recommend that substantive researchers who work with missing data report estimates of FMIs in addition to the rate of missingness. We also encourage methodologists to examine FMIs when designing simulation studies with missing data, and to explore the behavior of efficiency loss under MAR using FMIs in more complex models.
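
For readers who want to report FMIs, the following minimal sketch shows one common way to obtain them, from multiply imputed data with the mice package; the incomplete data set `dat` and variables y, x1, x2 are hypothetical, and the article's own Shiny application is not reproduced here.

```r
library(mice)

# Impute the hypothetical incomplete data; a larger m stabilizes FMI estimates.
imp  <- mice(dat, m = 50, printFlag = FALSE)
fits <- with(imp, lm(y ~ x1 + x2))   # fit the analysis model in each imputed data set
pooled <- pool(fits)                 # combine with Rubin's rules

summary(pooled)                      # pooled estimates and standard errors
pooled$pooled$fmi                    # fraction of missing information per coefficient
```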


Subjects
Statistical Models, Software, Statistical Data Interpretation, Computer Simulation
4.
Multivariate Behav Res ; 56(3): 390-407, 2021.
Article in English | MEDLINE | ID: mdl-32054327

ABSTRACT

Current computations of commonly used fit indices in structural equation modeling (SEM), such as RMSEA and CFI, indicate much better fit when the data are categorical than if the same data had not been categorized. As a result, researchers may be led to accept poorly fitting models with greater frequency when data are categorical. In this article, I first explain why the current computations of categorical fit indices lead to this problematic behavior. I then propose and evaluate alternative ways to compute fit indices with categorical data. The proposed computations approximate what the fit index values would have been had the data not been categorized. The developments in this article are for the DWLS (diagonally weighted least squares) estimator, a popular limited information categorical estimation method. I report on the results of a simulation comparing existing and newly proposed categorical fit indices. The results confirmed the theoretical expectation that the new indices better match the corresponding values with continuous data. The new fit indices performed well across all studied conditions, with the exception of binary data at the smallest studied sample size (N = 200), when all categorical fit indices performed poorly.
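
A minimal sketch of the estimation setting discussed here, an ordinal CFA fit with the DWLS-based WLSMV estimator in lavaan, is shown below; `dat` and items x1-x5 are hypothetical, and whether the corrected indices proposed in the article are available (for example as robust variants) depends on the lavaan version.

```r
library(lavaan)

# One-factor CFA on hypothetical ordinal items x1-x5.
model <- ' f =~ x1 + x2 + x3 + x4 + x5 '
fit <- cfa(model, data = dat,
           ordered = c("x1", "x2", "x3", "x4", "x5"),
           estimator = "WLSMV")      # DWLS point estimates with a corrected test

# Conventional categorical fit indices (the computations critiqued above);
# recent lavaan versions may additionally report corrected/robust variants.
fitMeasures(fit, c("chisq.scaled", "df", "rmsea.scaled", "cfi.scaled"))
```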


Subjects
Statistical Models, Statistical Data Interpretation, Latent Class Analysis, Least-Squares Analysis, Sample Size
5.
Behav Res Methods ; 52(6): 2306-2323, 2020 12.
Article in English | MEDLINE | ID: mdl-32333330

ABSTRACT

Psychologists use scales composed of multiple items to measure underlying constructs. Missing data on such scales often occur at the item level, whereas the model of interest to the researcher is at the composite (scale score) level. Existing analytic approaches cannot easily accommodate item-level missing data when models involve composites. A very common practice in psychology is to average all available items to produce scale scores. This approach, referred to as available-case maximum likelihood (ACML), may produce biased parameter estimates. Another approach researchers use to deal with item-level missing data is scale-level full information maximum likelihood (SL-FIML), which treats the whole scale as missing if any item is missing. SL-FIML is inefficient, and it may also exhibit bias. Multiple imputation (MI) produces the correct results using a simulation-based approach. We study a new analytic alternative for item-level missingness, called two-stage maximum likelihood (TSML; Savalei & Rhemtulla, Journal of Educational and Behavioral Statistics, 42(4), 405-431, 2017). The original work showed the method outperforming ACML and SL-FIML in structural equation models with parcels. The current simulation study examined the performance of ACML, SL-FIML, MI, and TSML in the context of univariate regression. We demonstrated performance issues encountered by ACML and SL-FIML when estimating regression coefficients, under both MCAR and MAR conditions. Aside from convergence issues with small sample sizes and high missingness, TSML performed similarly to MI in all conditions, showing negligible bias, high efficiency, and good coverage. This fast analytic approach is therefore recommended whenever it achieves convergence. R code and a Shiny app to perform TSML are provided.
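
The sketch below illustrates the two-stage idea in lavaan, assuming hypothetical items x1, x2 (composite X) and y1, y2 (composite Y) in a data frame `dat`: stage 1 obtains FIML estimates of the saturated item moments, and stage 2 fits the composite-level regression to the aggregated moments. It shows point estimation only; the published TSML method additionally corrects the stage-2 standard errors, which is not done here.

```r
library(lavaan)

## Stage 1: FIML estimates of the saturated item means and covariances.
## For a saturated model, the implied moments equal the FIML/EM item moments.
sat <- ' x1 ~~ x2 + y1 + y2
         x2 ~~ y1 + y2
         y1 ~~ y2 '
sat_fit <- sem(sat, data = dat, missing = "ml", meanstructure = TRUE)
mom   <- lavInspect(sat_fit, "implied")
ord   <- c("x1", "x2", "y1", "y2")
Sigma <- mom$cov[ord, ord]
mu    <- mom$mean[ord]

## Stage 2: aggregate to scale scores (averages) and fit the composite model
## to the resulting summary statistics.
A <- rbind(X = c(.5, .5, 0, 0),
           Y = c(0, 0, .5, .5))
Sigma_c <- A %*% Sigma %*% t(A)
mu_c    <- as.numeric(A %*% mu)
dimnames(Sigma_c) <- list(c("X", "Y"), c("X", "Y")); names(mu_c) <- c("X", "Y")

fit <- sem('Y ~ X', sample.cov = Sigma_c, sample.mean = mu_c,
           sample.nobs = nrow(dat), meanstructure = TRUE,
           sample.cov.rescale = FALSE)   # FIML moments already use n in the denominator
summary(fit)   # point estimates only; proper TSML adjusts these standard errors
```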


Subjects
Research Design, Bias, Statistical Data Interpretation, Humans, Likelihood Functions, Sample Size
6.
Multivariate Behav Res ; 53(3): 419-429, 2018.
Article in English | MEDLINE | ID: mdl-29624085

ABSTRACT

A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow-up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.
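
In lavaan, the corresponding quantities can be requested roughly as sketched below, assuming a hypothetical model string `model` and data frame `dat`; the exact fitMeasures keywords and their availability may vary by lavaan version.

```r
library(lavaan)

# ML estimation with a mean-and-variance corrected test statistic.
fit <- cfa(model, data = dat, estimator = "MLMV")

# "Robust" versions adjust the sample estimate for nonnormality while keeping
# the population value on the usual ML metric; keyword names may differ by version.
fitMeasures(fit, c("rmsea.robust", "rmsea.ci.lower.robust",
                   "rmsea.ci.upper.robust", "cfi.robust"))
```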


Subjects
Statistical Models, Algorithms, Statistical Data Interpretation
7.
J Educ Behav Stat ; 42(4): 405-431, 2017 Aug.
Article in English | MEDLINE | ID: mdl-29276371

ABSTRACT

In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and that its performance is similar to that of item-level MI. We recommend its implementation in popular software and its further study.
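
For comparison, here is a minimal sketch of item-level MI, the benchmark method discussed here, using the mice package: items are imputed, scale scores are formed within each imputed data set, the composite-level model is fit, and results are pooled. The data frame `dat` and items x1, x2, y1, y2 are hypothetical.

```r
library(mice)

imp  <- mice(dat, m = 20, printFlag = FALSE)        # impute at the item level
fits <- lapply(complete(imp, action = "all"), function(d) {
  d$X <- rowMeans(d[, c("x1", "x2")])               # form scale scores per imputation
  d$Y <- rowMeans(d[, c("y1", "y2")])
  lm(Y ~ X, data = d)                               # composite-level model
})
summary(pool(as.mira(fits)))                        # pool with Rubin's rules
```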

8.
Educ Psychol Meas ; 76(3): 357-386, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27182074

ABSTRACT

Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format, which replaces each response option in the Likert scale with a full sentence. We hypothesized that this format would result in a cleaner factor structure as compared with the Likert format. We tested this hypothesis on three popular psychological scales: the Rosenberg Self-Esteem scale, the Conscientiousness subscale of the Big Five Inventory, and the Beck Depression Inventory II. Scales in both formats showed comparable reliabilities. However, scales in the Expanded format had better (i.e., lower and more theoretically defensible) dimensionalities than scales in the Likert format, as assessed by both exploratory factor analyses and confirmatory factor analyses. We encourage further study and wider use of the Expanded format, particularly when a scale's dimensionality is of theoretical interest.

9.
Multivariate Behav Res ; 49(5): 407-24, 2014.
Article in English | MEDLINE | ID: mdl-26732356

ABSTRACT

Researchers are often advised to write balanced scales (containing an equal number of positively and negatively worded items) when measuring psychological attributes. This practice is recommended to control for acquiescence bias (ACQ). However, little advice has been given on what to do with such data if the researcher subsequently wants to evaluate a 1-factor model for the scale. This article compares 3 approaches for dealing with the presence of ACQ bias, which make different assumptions: an ipsatization approach based on the work of Chan and Bentler (CB; 1993), a confirmatory factor analysis (CFA) approach that includes an ACQ factor with equal loadings (Billiet & McClendon, 2000; Mirowsky & Ross, 1991), and an exploratory factor analysis (EFA) approach with a target rotation (Ferrando, Lorenzo-Seva, & Chico, 2003). We also examine the "do nothing" approach, which fits the 1-factor model to the data while ignoring the presence of ACQ bias. Our main findings are that the CFA method performs best overall and is robust to violations of its assumptions, that the EFA and CB approaches work well when their assumptions are strictly met, and that the "do nothing" approach can be surprisingly robust when the ACQ factor is not very strong.
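
A minimal lavaan sketch of the CFA approach with an ACQ method factor with equal loadings, orthogonal to the content factor, is given below; items p1-p3 (positively worded) and n1-n3 (negatively worded, in their original scoring) and the data frame `dat` are hypothetical.

```r
library(lavaan)

model <- '
  # Content factor: loadings on negatively worded items are expected to be negative.
  content =~ p1 + p2 + p3 + n1 + n2 + n3
  # Acquiescence method factor: equal (unit) loadings on all items, variance free.
  acq     =~ 1*p1 + 1*p2 + 1*p3 + 1*n1 + 1*n2 + 1*n3
  # Method factor orthogonal to the content factor.
  content ~~ 0*acq
'
fit <- cfa(model, data = dat)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```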

10.
Multivariate Behav Res ; 49(5): 460-70, 2014.
Article in English | MEDLINE | ID: mdl-26732359

ABSTRACT

A variety of indices are commonly used to assess model fit in structural equation modeling. However, fit indices obtained from the normal theory maximum likelihood fit function are affected by the presence of nonnormality in the data. We present a nonnormality correction for 2 commonly used incremental fit indices, the comparative fit index and the Tucker-Lewis index. This correction uses the Satorra-Bentler scaling constant to modify the sample estimate of these fit indices but does not affect the population value. We argue that this type of nonnormality correction is superior to the correction that changes the population value of the fit index implemented in some software programs. In a simulation study, we demonstrate that our correction performs well across a variety of sample sizes, model types, and misspecification types.
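
In lavaan terms, the distinction corresponds roughly to the "scaled" versus "robust" versions of these indices, as sketched below under the assumption of a hypothetical model string `model` and data frame `dat`; keyword names may differ across lavaan versions.

```r
library(lavaan)

# MLM requests the Satorra-Bentler mean-corrected test statistic.
fit <- cfa(model, data = dat, estimator = "MLM")

fitMeasures(fit, c("cfi", "tli",                 # naive ML-based values
                   "cfi.scaled", "tli.scaled",   # correction that alters the population value
                   "cfi.robust", "tli.robust"))  # correction of the sample estimate instead
```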

12.
Psychol Methods ; 28(2): 263-283, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35007107

ABSTRACT

Full information maximum likelihood (FIML) is a popular estimation method for missing data in structural equation modeling (SEM). However, previous research has shown that SEM approximate fit indices (AFIs) such as the root mean square error of approximation (RMSEA) and the comparative fit index (CFI) can be distorted relative to their complete data counterparts when they are computed following FIML estimation. The main goal of the current paper is to propose and examine an alternative approach for computing AFIs following FIML estimation, which we refer to as the FIML-corrected or FIML-C approach. The secondary goal of the article is to examine another existing estimation method, the two-stage (TS) approach, for computing AFIs in the presence of missing data. Both the FIML-C and TS approaches remove the bias due to missing data, so that the resulting incomplete data AFIs estimate the same population values as their complete data counterparts. For both approaches, we also propose a series of small sample corrections to improve the estimates of AFIs. In two simulation studies, we found that the FIML-C and TS approaches, when implemented with small sample corrections, estimated the population complete-data AFIs with little bias across a variety of conditions, although the FIML-C approach can fail in a small number of conditions with a high percentage of missing data and a high degree of model misspecification. In contrast, the FIML AFIs as currently computed often performed poorly. We recommend the FIML-C and TS approaches for computing AFIs in SEM. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
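
A rough sketch of the two estimation routes compared here, FIML and the two-stage (TS) approach, as they can be requested in lavaan; `model` and the incomplete data frame `dat` are hypothetical, and the FIML-C correction itself is not shown (it may require the authors' supplementary code or a sufficiently recent lavaan version).

```r
library(lavaan)

fit_fiml <- cfa(model, data = dat, missing = "ml")          # FIML estimation
fit_ts   <- cfa(model, data = dat, missing = "two.stage")   # two-stage (TS) estimation

# Compare the approximate fit indices produced by the two routes.
fitMeasures(fit_fiml, c("rmsea", "cfi"))
fitMeasures(fit_ts,   c("rmsea", "cfi"))
```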


Subjects
Statistical Data Interpretation, Humans, Computer Simulation, Latent Class Analysis, Bias
13.
Educ Psychol Meas ; 83(4): 649-683, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37398842

ABSTRACT

Zhang and Savalei proposed an alternative scale format to the Likert format, called the Expanded format. In this format, response options are presented in complete sentences, which can reduce acquiescence bias and method effects. The goal of the current study was to compare the psychometric properties of the Rosenberg Self-Esteem Scale (RSES) in the Expanded format and in two other alternative formats, relative to several versions of the traditional Likert format. We conducted two studies to compare the psychometric properties of the RSES across the different formats. We found that compared with the Likert format, the alternative formats tend to have a unidimensional factor structure, less response inconsistency, and comparable validity. In addition, we found that the Expanded format resulted in the best factor structure among the three alternative formats. Researchers should consider the Expanded format, especially when creating short psychological scales such as the RSES.

14.
Psychol Methods ; 2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36622720

ABSTRACT

Comparison of nested models is common in applications of structural equation modeling (SEM). When two models are nested, model comparison can be done via a chi-square difference test or by comparing indices of approximate fit. The advantage of fit indices is that they permit some amount of misspecification in the additional constraints imposed on the model, which is a more realistic scenario. The most popular index of approximate fit is the root mean square error of approximation (RMSEA). In this article, we argue that the dominant way of comparing RMSEA values for two nested models, which is simply taking their difference, is problematic and will often mask misfit, particularly in model comparisons with large initial degrees of freedom. We instead advocate computing the RMSEA associated with the chi-square difference test, which we call RMSEA_D. We are not the first to propose this index, and we review numerous methodological articles that have suggested it. Nonetheless, these articles appear to have had little impact on actual practice. The modification of current practice that we call for may be particularly needed in the context of measurement invariance assessment. We illustrate the difference between the current approach and our advocated approach on three examples, where two involve multiple-group and longitudinal measurement invariance assessment and the third involves comparisons of models with different numbers of factors. We conclude with a discussion of recommendations and future research directions. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
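
A minimal sketch of computing such an index from two nested lavaan fits (here assumed to exist as `fit0`, nested in `fit1`) is shown below; conventions differ on whether N or N - 1 appears in the denominator, so treat the exact formula as one common variant rather than the article's definition.

```r
library(lavaan)

# fit0: the more constrained model; fit1: the less constrained model (both fitted).
lrt  <- lavTestLRT(fit0, fit1)
dchi <- lrt[2, "Chisq diff"]            # chi-square difference
ddf  <- lrt[2, "Df diff"]               # difference in degrees of freedom
N    <- lavInspect(fit1, "ntotal")      # total sample size

rmsea_d <- sqrt(max((dchi - ddf) / (ddf * (N - 1)), 0))
rmsea_d
```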

15.
J Pers Assess ; 93(5): 445-53, 2011.
Article in English | MEDLINE | ID: mdl-21859284

ABSTRACT

Popular computer programs print 2 versions of Cronbach's alpha: unstandardized alpha, α_Σ, based on the covariance matrix, and standardized alpha, α_R, based on the correlation matrix. Sources that accurately describe the theoretical distinction between the 2 coefficients are lacking, which can lead to the misconception that the differences between α_R and α_Σ are unimportant and to the temptation to report the larger coefficient. We explore the relationship between α_R and α_Σ and the reliability of the standardized and unstandardized composite under 3 popular measurement models; we clarify the theoretical meaning of each coefficient and conclude that researchers should choose an appropriate reliability coefficient based on theoretical considerations. We also illustrate that α_R and α_Σ estimate the reliability of different composite scores, and in most cases cannot be substituted for one another.
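
The two coefficients can be computed from the same items as sketched below (hypothetical item data in `dat`); the psych package's alpha() reports the same pair as raw_alpha and std.alpha.

```r
# Cronbach's alpha from a covariance (unstandardized) or correlation (standardized) matrix.
cronbach <- function(M) {
  k <- ncol(M)
  (k / (k - 1)) * (1 - sum(diag(M)) / sum(M))
}

S <- cov(dat, use = "pairwise.complete.obs")
alpha_sigma <- cronbach(S)            # alpha_Sigma: based on item covariances
alpha_r     <- cronbach(cov2cor(S))   # alpha_R: based on item correlations
c(alpha_sigma = alpha_sigma, alpha_r = alpha_r)
```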


Subjects
Research Design, Statistics as Topic, Statistical Models
16.
Front Psychol ; 12: 667802, 2021.
Article in English | MEDLINE | ID: mdl-34512436

ABSTRACT

In missing data analysis, reporting missing data rates is insufficient for readers to determine the impact of missing data on the efficiency of parameter estimates. A more diagnostic measure, the fraction of missing information (FMI), shows how much the standard errors of parameter estimates increase because of the information loss due to ignorable missing data. FMI is well known in the multiple imputation literature (Rubin, 1987), but it has been developed for full information maximum likelihood (FIML) only more recently (Savalei and Rhemtulla, 2012). Sample FMI estimates using this approach have since been made accessible as part of the lavaan package (Rosseel, 2012) in the R statistical programming language. However, the properties of FMI estimates at finite sample sizes have not been the subject of comprehensive investigation. In this paper, we present a simulation study on the properties of three sample FMI estimates from FIML in two common models in psychology: regression and two-factor analysis. We summarize the performance of these FMI estimates and make recommendations on their application.
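
A minimal sketch of obtaining sample FMI estimates from FIML in lavaan, assuming a hypothetical incomplete data set `dat` with variables y, x1, x2; the specific FMI variants compared in the article are not distinguished here.

```r
library(lavaan)

model <- ' y ~ x1 + x2 '
fit <- sem(model, data = dat, missing = "ml",
           fixed.x = FALSE)          # bring predictors into the likelihood so
                                     # their missingness is handled by FIML
lavInspect(fit, "fmi")               # fraction of missing information per parameter
```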

17.
Pers Soc Psychol Bull ; 47(6): 969-984, 2021 06.
Article in English | MEDLINE | ID: mdl-32865124

ABSTRACT

Researchers' subjective judgments may affect the statistical results they obtain. This possibility is particularly stark in Bayesian hypothesis testing: To use this increasingly popular approach, researchers specify the effect size they are expecting (the "prior mean"), which is then incorporated into the final statistical results. Because the prior mean represents an expression of confidence that one is studying a large effect, we reasoned that scientists who are more confident in their research skills may be inclined to select larger prior means. Across two preregistered studies with more than 900 active researchers in psychology, we showed that more self-confident researchers selected larger prior means. We also found suggestive but somewhat inconsistent evidence that men may choose larger prior means than women, due in part to gender differences in researcher self-confidence. Our findings provide the first evidence that researchers' personal characteristics might shape the statistical results they obtain with Bayesian hypothesis testing.


Subjects
Research Design, Research Personnel, Bayes Theorem, Female, Humans
18.
Psychol Assess ; 32(7): 698-704, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32271061

ABSTRACT

The unique objectives of the current investigation were: (a) to assess the fit of a multi-informant 2-factor measurement model of friendship quality in a clinical sample of children with attention-deficit/hyperactivity disorder (ADHD); and (b) to use a multiple indicators multiple causes approach to evaluate whether comorbid externalizing and internalizing disorders incrementally predict levels of positive and negative friendship quality. Our sample included 165 target children diagnosed with ADHD (33% girls; aged 6-11 years). Target children, their parents, their friends, and the parents of their friends independently completed a self-report measure of friendship quality about the reciprocated friendship between the target child and the friend. Results indicated that a multi-informant 2-factor measurement model with correlated positive friendship quality and negative friendship quality factors had good fit. The friendships of children with ADHD and a comorbid externalizing disorder were characterized by less positive friendship quality and more negative friendship quality than the friendships of children with ADHD and no externalizing disorder, after controlling for the presence of a comorbid internalizing disorder. However, the presence of a comorbid internalizing disorder did not predict positive or negative friendship quality. These findings suggest that soliciting reports from parents in addition to children and friends, and measuring comorbid externalizing disorders, may be valuable evidence-based strategies when assessing friendship quality in ADHD populations. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
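
A rough lavaan sketch of a MIMIC-style specification of this kind, with two correlated friendship-quality factors regressed on comorbidity covariates; all variable names are hypothetical and do not correspond to the study's actual measures.

```r
library(lavaan)

model <- '
  # Measurement part: multi-informant indicators of each friendship-quality factor.
  pos =~ pos_child + pos_parent + pos_friend + pos_friendparent
  neg =~ neg_child + neg_parent + neg_friend + neg_friendparent
  # MIMIC part: covariates as causes of the latent factors.
  pos + neg ~ externalizing + internalizing
  # Correlated factor residuals.
  pos ~~ neg
'
fit <- sem(model, data = dat)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```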


Subjects
Attention Deficit Disorder with Hyperactivity/psychology, Comorbidity, Friends/psychology, Interpersonal Relations, Psychological Models, Child, Female, Humans, Male, Parents
19.
Psychol Methods ; 24(3): 352-370, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29781637

ABSTRACT

It is well known that methods that fail to account for measurement error in observed variables, such as regression and path analysis (PA), can result in poor estimates and incorrect inference. On the other hand, methods that fully account for measurement error, such as structural equation modeling with latent variables and multiple indicators, can produce highly variable estimates in small samples. This article advocates a family of intermediate models for small samples (N < 200), referred to as single indicator (SI) models. In these models, each latent variable has a single composite indicator, with its reliability fixed to a plausible value. A simulation study compared three versions of the SI method with PA and with a multiple-indicator structural equation model (SEM) in small samples (N = 30 to 200). Two of the SI models fixed the reliability of each construct to a value chosen a priori (either .7 or .8). The third SI model (referred to as "SIα") estimated the reliability of each construct from the data via coefficient alpha. The results showed that PA and fixed-reliability SI methods that overestimated reliability slightly resulted in the most accurate estimates as well as in the highest power. Fixed-reliability SI methods also maintained good coverage and Type I error rates. The SIα and SEM methods had intermediate performance. In small samples, use of a fixed-reliability SI method is recommended. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
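
A minimal lavaan sketch of a fixed-reliability SI model, fixing each composite's error variance to (1 - reliability) times its observed variance; the composites X and Y, the data frame `dat`, and the reliability values are hypothetical.

```r
library(lavaan)

rel_X <- 0.8                                  # assumed reliability of composite X
rel_Y <- 0.8                                  # assumed reliability of composite Y
evar_X <- (1 - rel_X) * var(dat$X, na.rm = TRUE)
evar_Y <- (1 - rel_Y) * var(dat$Y, na.rm = TRUE)

model <- sprintf('
  fx =~ 1*X
  fy =~ 1*Y
  X ~~ %.6f*X      # error variance fixed from the assumed reliability
  Y ~~ %.6f*Y
  fy ~ fx
', evar_X, evar_Y)

fit <- sem(model, data = dat)
summary(fit, standardized = TRUE)
```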


Subjects
Biostatistics/methods, Statistical Data Interpretation, Latent Class Analysis, Statistical Models, Humans, Reproducibility of Results, Sample Size
20.
Front Psychol ; 10: 1286, 2019.
Article in English | MEDLINE | ID: mdl-31214090

ABSTRACT

Previous research by Zhang and Savalei (2015) proposed an alternative to the Likert scale format: the Expanded format. Scale items in the Expanded format present both positively worded and negatively worded sentences as response options for each scale item; as a result, they are less affected by the acquiescence bias and method effects that often occur with Likert scale items. The major goal of the current study is to further demonstrate the superiority of the Expanded format to the Likert format across different psychological scales. Specifically, we aim to replicate the findings of Zhang and Savalei and to determine whether an order effect exists in the Expanded format scales. Six psychological scales were examined in the study, including the five subscales of the big five inventory (BFI) and the Rosenberg self-esteem (RSE) scale. Four versions were created for each psychological scale. One version was the original scale in the Likert format. The other three versions were in different Expanded formats that varied in the order of the response options. For each scale, each participant was randomly assigned to complete one scale version. Across the different versions of each scale, we compared the factor structures and the distributions of the response options. Our results successfully replicated the findings of Zhang and Savalei and also showed that order effects were generally absent in the Expanded format scales. Based on these promising findings, we encourage researchers to use the Expanded format for these and other scales in their substantive research.
