Results 1 - 20 of 45
1.
Am J Kidney Dis ; 71(4): 461-468, 2018 04.
Article in English | MEDLINE | ID: mdl-29128411

ABSTRACT

BACKGROUND: The Centers for Medicare & Medicaid Services require that dialysis patients' health-related quality of life be assessed annually. The primary instrument used for this purpose is the Kidney Disease Quality of Life 36-Item Short-Form Survey (KDQOL-36), which includes the SF-12 as its generic core and 3 kidney disease-targeted scales: Burden of Kidney Disease, Symptoms and Problems of Kidney Disease, and Effects of Kidney Disease. Despite its broad use, there has been limited evaluation of KDQOL-36's psychometric properties. STUDY DESIGN: Secondary analyses of data collected by the Medical Education Institute to evaluate the reliability and factor structure of the KDQOL-36 scales. SETTINGS & PARTICIPANTS: KDQOL-36 responses from 70,786 dialysis patients in 1,381 US dialysis facilities that permitted data analysis were collected from June 1, 2015, through May 31, 2016, as part of routine clinical assessment. MEASUREMENTS & OUTCOMES: We assessed the KDQOL-36 scales' internal consistency reliability and dialysis facility-level reliability using coefficient alpha and 1-way analysis of variance. We evaluated the KDQOL-36's factor structure using item-to-total scale correlations and confirmatory factor analysis. Construct validity was examined using correlations between SF-12 and KDQOL-36 scales and "known groups" analyses. RESULTS: Each of the KDQOL-36's kidney disease-targeted scales had acceptable internal consistency reliability (α=0.83-0.85) and facility-level reliability (r=0.75-0.83). Item-scale correlations and a confirmatory factor analysis model evidenced the KDQOL-36's original factor structure. 
Construct validity was supported by large correlations between the SF-12 Physical Component Summary and Mental Component Summary (r=0.40-0.52) and the KDQOL-36 scale scores, as well as significant differences on the scale scores between patients receiving different types of dialysis, diabetic and nondiabetic patients, and patients who were employed full-time versus not. LIMITATIONS: Use of secondary data from a clinical registry. CONCLUSIONS: The study provides support for the reliability and construct validity of the KDQOL-36 scales for assessment of health-related quality of life among dialysis patients.
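The internal-consistency figures above (α = 0.83-0.85) are coefficient alpha. A minimal pure-Python sketch of that computation, on made-up item scores rather than the study's data:

```python
# Coefficient alpha for k items: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
from statistics import variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Two perfectly correlated items give alpha = 1.0 (illustrative data only).
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

In the study, alpha is computed per scale from each scale's items; values near 0.85 indicate that the items covary strongly relative to their individual noise.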


Subjects
Kidney Diseases/psychology , Psychometrics/methods , Quality of Life , Registries , Surveys and Questionnaires , Adolescent , Adult , Aged , Aged 80 and over , Female , Follow-Up Studies , Humans , Kidney Diseases/epidemiology , Kidney Diseases/therapy , Male , Middle Aged , Morbidity , Renal Dialysis , Reproducibility of Results , Retrospective Studies , United States/epidemiology , Young Adult
2.
Qual Life Res ; 27(10): 2699-2707, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29761347

ABSTRACT

PURPOSE: Black dialysis patients report better health-related quality of life (HRQOL) than White patients, which may be explained if Black and White patients respond systematically differently to HRQOL survey items. METHODS: We examined differential item functioning (DIF) of the Kidney Disease Quality of Life 36-item (KDQOL™-36) Burden of Kidney Disease, Symptoms and Problems with Kidney Disease, and Effects of Kidney Disease scales between Black (n = 18,404) and White (n = 21,439) dialysis patients. We fit multiple group confirmatory factor analysis models with increasing invariance: a Configural model (invariant factor structure), a Metric model (invariant factor loadings), and a Scalar model (invariant intercepts). Criteria for invariance included non-significant χ2 tests, < 0.002 difference in the models' CFI, and < 0.015 difference in RMSEA and SRMR. Next, starting with a fully invariant model, we freed loadings and intercepts item-by-item to determine if DIF impacted estimated KDQOL™-36 scale means. RESULTS: ΔCFI was 0.006 between the metric and scalar models but was reduced to 0.001 when we freed intercepts for the burdens and symptoms and problems of kidney disease scales. In comparison to standardized means of 0 in the White group, those for the Black group on the Burdens, Symptoms and Problems, and Effects of Kidney Disease scales were 0.218, 0.061, and 0.161, respectively. When loadings and thresholds were released sequentially, differences in means between models ranged between 0.001 and 0.048. CONCLUSION: Despite some DIF, impacts on KDQOL™-36 responses appear to be minimal. We conclude that the KDQOL™-36 is appropriate for making substantive comparisons of HRQOL between Black and White dialysis patients.
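The invariance decision rule described above reduces to comparing fit-index changes against conventional cutoffs (invariance retained when ΔCFI stays below 0.002 and ΔRMSEA/ΔSRMR below 0.015). A schematic sketch of that rule, with the abstract's own ΔCFI values as inputs:

```python
# Decision rule: the more constrained model is retained when it fits essentially
# as well as the freer model (small changes in CFI, RMSEA, and SRMR).
def invariance_retained(d_cfi, d_rmsea=0.0, d_srmr=0.0):
    return d_cfi < 0.002 and d_rmsea < 0.015 and d_srmr < 0.015

print(invariance_retained(0.006))  # False: full scalar invariance rejected
print(invariance_retained(0.001))  # True: retained after freeing the flagged intercepts
```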


Subjects
Black or African American/psychology , Kidney Diseases/psychology , Quality of Life/psychology , Renal Dialysis/psychology , Surveys and Questionnaires , White People/psychology , Adult , Aged , Aged 80 and over , Factor Analysis , Female , Humans , Male , Middle Aged , Young Adult
3.
Long Range Plann ; 47(3): 138-145, 2014 Jun 01.
Article in English | MEDLINE | ID: mdl-24926106

ABSTRACT

Rigdon (2012) suggests that partial least squares (PLS) can be improved by killing it, that is, by making it into a different methodology based on components. We provide some history on problems with component-type methods and develop some implications of Rigdon's suggestion. It seems more appropriate to maintain and improve PLS as far as possible, but also to freely utilize alternative models and methods when those are more relevant in certain data analytic situations. Huang's (2013) new consistent and efficient PLSe2 methodology is suggested as a candidate for an improved PLS.

4.
Stat Med ; 32(24): 4229-39, 2013 Oct 30.
Article in English | MEDLINE | ID: mdl-23640746

ABSTRACT

High-dimensional longitudinal data involving latent variables such as depression and anxiety that cannot be quantified directly are often encountered in biomedical and social sciences. Multiple responses are used to characterize these latent quantities, and repeated measures are collected to capture their trends over time. Furthermore, substantive research questions may concern issues such as interrelated trends among latent variables that can only be addressed by modeling them jointly. Although statistical analysis of univariate longitudinal data has been well developed, methods for modeling multivariate high-dimensional longitudinal data are still under development. In this paper, we propose a latent factor linear mixed model (LFLMM) for analyzing this type of data. This model is a combination of the factor analysis and multivariate linear mixed models. Under this modeling framework, we reduced the high-dimensional responses to low-dimensional latent factors by the factor analysis model, and then we used the multivariate linear mixed model to study the longitudinal trends of these latent factors. We developed an expectation-maximization algorithm to estimate the model. We used simulation studies to investigate the computational properties of the expectation-maximization algorithm and compare the LFLMM model with other approaches for high-dimensional longitudinal data analysis. We used a real data example to illustrate the practical usefulness of the model.
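The two-step idea described above (reduce many responses to a few latent factors, then model the factors' trends over time) can be caricatured in a few lines. This is an illustrative stand-in, not the authors' LFLMM or their EM algorithm: composites replace factor scores, and a single OLS slope replaces the mixed model.

```python
# Caricature of the LFLMM idea on synthetic data: three hypothetical "depression"
# items share one upward latent trend; average them, then fit the time slope.
import random

random.seed(1)
T = 10                                     # measurement occasions
latent = [0.5 * t for t in range(T)]       # true latent trend, slope 0.5
items = [[latent[t] + random.gauss(0, 0.2) for t in range(T)] for _ in range(3)]

# Step 1 (stand-in for the factor-analysis step): composite score per occasion.
composite = [sum(it[t] for it in items) / len(items) for t in range(T)]

# Step 2 (stand-in for the mixed model): closed-form OLS slope on time.
tbar = sum(range(T)) / T
ybar = sum(composite) / T
slope = sum((t - tbar) * (y - ybar) for t, y in zip(range(T), composite)) \
        / sum((t - tbar) ** 2 for t in range(T))
print(round(slope, 2))  # close to the true slope 0.5
```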


Subjects
Statistical Data Interpretation , Factor Analysis , Linear Models , Longitudinal Studies , Aged , Algorithms , Cognition/physiology , Female , Humans , Male , Physical Fitness/physiology , Physical Fitness/psychology
5.
Comput Stat Data Anal ; 57(1): 392-403, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22904587

ABSTRACT

Based on the Bayes modal estimate of factor scores in binary latent variable models, this paper proposes two new limited-information estimators for the factor analysis model with a logistic link function for binary data. The estimators are based on Bernoulli distributions up to the second and the third order, with maximum likelihood estimation and Laplace approximations to the required integrals. These estimators and two existing limited-information weighted least squares estimators are studied empirically. The limited-information estimators compare favorably to full-information estimators based on marginal maximum likelihood, MCMC, and a multinomial distribution with a Laplace approximation methodology. Among the various estimators, Maydeu-Olivares and Joe's (2005) weighted least squares limited-information estimators, implemented with Laplace approximations for probabilities, are shown in a simulation to have the best root mean square errors.

6.
J Stat Comput Simul ; 83(1): 25-36, 2013.
Article in English | MEDLINE | ID: mdl-23329857

ABSTRACT

The item factor analysis model for investigating multidimensional latent spaces has proved to be useful. Parameter estimation in this model requires computationally demanding high-dimensional integrations. While several approaches to approximate such integrations have been proposed, they suffer various computational difficulties. This paper proposes a Nesting Monte Carlo Expectation-Maximization (MCEM) algorithm for item factor analysis with binary data. Simulation studies and a real data example suggest that the Nesting MCEM approach can significantly improve computational efficiency while also enjoying the good properties of stable convergence and easy implementation.

7.
Sci Rep ; 13(1): 5536, 2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37015939

ABSTRACT

Climate change is a critical issue of our time, and its causes, pathways, and forecasts remain a topic of broader discussion. In this paper, we present a novel data-driven pathway analysis framework to identify the key processes behind mean global temperature and sea level rise, and to forecast the magnitude of their increase from the present to 2100. Based on historical data and dynamic statistical modeling alone, we have established the causal pathways that connect increasing greenhouse gas emissions to increasing global mean temperature and sea level, with intermediate links encompassing humidity, sea ice coverage, and glacier mass, but not sunspot numbers. Our results indicate that if no action is taken to curb anthropogenic greenhouse gas emissions, the global average temperature would rise to an estimated 3.28 °C (2.46-4.10 °C) above its pre-industrial level while the global sea level would be an estimated 573 mm (474-671 mm) above its 2021 mean by 2100. However, if countries adhere to the greenhouse gas emission regulations outlined in the 2021 United Nations Conference on Climate Change (COP26), the rise in global temperature would lessen to an average increase of 1.88 °C (1.43-2.33 °C) above its pre-industrial level, albeit still higher than the targeted 1.5 °C, while the sea level increase would reduce to 449 mm (389-509 mm) above its 2021 mean by 2100.

8.
Sociol Methods Res ; 41(4): 598-629, 2012 Nov.
Article in English | MEDLINE | ID: mdl-24764604

ABSTRACT

Normal-distribution-based maximum likelihood (ML) and multiple imputation (MI) are the two major procedures for missing data analysis. This article compares the two procedures with respect to bias and efficiency of parameter estimates. It also compares formula-based standard errors (SEs) for each procedure against the corresponding empirical SEs. The results indicate that parameter estimates by MI tend to be less efficient than those by ML, and the estimates of variance-covariance parameters by MI are also more biased. In particular, when the population for the observed variables possesses heavy tails, estimates of variance-covariance parameters by MI may contain severe bias even at relatively large sample sizes. Although performing much better, ML parameter estimates may also contain substantial bias at smaller sample sizes. The results also indicate that, when the underlying population is close to normally distributed, SEs based on the sandwich-type covariance matrix and those based on the observed information matrix are very comparable to empirical SEs with either ML or MI. When the underlying distribution has heavier tails, SEs based on the sandwich-type covariance matrix for ML estimates are more reliable than those based on the observed information matrix. Both empirical results and analysis show that neither SEs based on the observed information matrix nor those based on the sandwich-type covariance matrix can provide consistent SEs in MI. Thus, ML is preferable to MI in practice, although parameter estimates by MI might still be consistent.
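The ML-versus-MI contrast above can be illustrated in miniature for a single mean under MCAR missingness. This is a schematic sketch only: the article studies full SEM parameter vectors, and this toy MI step (resampling observed values) is far cruder than model-based imputation.

```python
# Toy comparison: ML vs. a naive MI sketch for estimating a mean under MCAR.
import random
from statistics import mean

random.seed(7)
full = [random.gauss(10, 2) for _ in range(500)]
observed = [x for x in full if random.random() > 0.3]   # roughly 30% MCAR missing

# Under MCAR, the observed-data MLE of the mean is simply the observed mean.
ml_est = mean(observed)

# Naive MI: fill each missing value by resampling the observed data, then pool
# the m completed-data means (Rubin's rule for the point estimate).
m, n_missing = 5, len(full) - len(observed)
mi_means = []
for _ in range(m):
    imputed = observed + [random.choice(observed) for _ in range(n_missing)]
    mi_means.append(mean(imputed))
mi_est = mean(mi_means)
```

Both estimates land near the true mean of 10; MI's extra between-imputation randomness is one source of the efficiency loss the abstract describes.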

9.
Multivariate Behav Res ; 47(3): 448-462, 2012 Jan 01.
Article in English | MEDLINE | ID: mdl-23144511

ABSTRACT

Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra and Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

10.
Multivariate Behav Res ; 47(3): 442-447, 2012 Jan 01.
Article in English | MEDLINE | ID: mdl-23180888

ABSTRACT

Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory. This paper verifies the Houdini transformation on a general latent variable model using algebraic methods. The results show that the Houdini transformation is illusory, in the sense that the Houdini transformed model remains a latent variable model. Contrary to common knowledge, a model that is a path model with only observed variables and residual errors may, in fact, be a latent variable model.

11.
Psychometrika ; 77(3): 442-54, 2012 Jul.
Article in English | MEDLINE | ID: mdl-27519775

ABSTRACT

Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (Psychometrika 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (Psychometrika 76:537-549, 2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading matrix that has an approximate bi-factor structure. Among other things this can be used as an aid in finding an explicit bi-factor structure for use in a confirmatory bi-factor analysis. They considered only orthogonal rotation. The purpose of this paper is to consider oblique rotation and to compare it to orthogonal rotation. Because there are many more oblique rotations of an initial loading matrix than orthogonal rotations, one expects the oblique results to approximate a bi-factor structure better than orthogonal rotations and this is indeed the case. A surprising result arises when oblique bi-factor rotation methods are applied to ideal data.

12.
Psychol Methods ; 27(4): 519-540, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34166048

ABSTRACT

In real data analysis with structural equation modeling, data are unlikely to be exactly normally distributed. If we ignore the non-normality reality, the parameter estimates, standard error estimates, and model fit statistics from normal theory based methods such as maximum likelihood (ML) and normal theory based generalized least squares estimation (GLS) are unreliable. On the other hand, the asymptotically distribution free (ADF) estimator does not rely on any distribution assumption but cannot demonstrate its efficiency advantage with small and modest sample sizes. The methods which adopt misspecified loss functions, including ridge GLS (RGLS), can provide better estimates and inferences than the normal theory based methods and the ADF estimator in some cases. We propose a distributionally weighted least squares (DLS) estimator, and expect that it can perform better than the existing generalized least squares, because it combines normal theory based and ADF based generalized least squares estimation. Computer simulation results suggest that model-implied covariance based DLS (DLSM) provided relatively accurate and efficient estimates in terms of RMSE. In addition, the empirical standard errors, the relative biases of standard error estimates, and the Type I error rates of the Jiang-Yuan rank adjusted model fit test statistic (TJY) in DLSM were competitive with the classical methods including ML, GLS, and RGLS. The performance of DLSM depends on its tuning parameter a. We illustrate how to implement DLSM and select the optimal a by a bootstrap procedure in a real data example. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Least-Squares Analysis , Bias , Computer Simulation , Humans , Latent Class Analysis , Sample Size
13.
Stat Med ; 30(21): 2634-47, 2011 Sep 20.
Article in English | MEDLINE | ID: mdl-21786282

ABSTRACT

Finite mixture factor analysis provides a parsimonious model to explore latent group structures of high-dimensional data. In this modeling framework, we can explore latent structures for continuous responses. However, dichotomous items are often used to define latent domains in practice. This paper proposes an extended finite mixture factor analysis model with covariates to model mixed continuous and binary responses. We use a Monte Carlo expectation-maximization (MCEM) algorithm to estimate the model. In the E step, closed-form solutions are not available for the conditional expectation of complete data log likelihood, so it is approximated by sample means, which are in turn generated by the Gibbs sampler from the joint conditional distribution of latent variables. To monitor the convergence of the MCEM algorithm, we use bridge sampling to calculate the log likelihood ratio of two successive iterations. We adopt a diagnostic plot of the log likelihood ratio against iterations for monitoring the convergence of the MCEM algorithm. We compare different models based on BIC, in which we approximate the observed data log likelihood by using a Monte Carlo method. We investigate the computational properties of the MCEM algorithm by simulation studies. We use a real data example to illustrate the practical usefulness of the model. Finally, we discuss limitations and possible extensions.


Subjects
Biological Models , Statistical Models , Algorithms , Computer Simulation/statistics & numerical data , Factor Analysis , Homeless Persons/psychology , Homeless Persons/statistics & numerical data , Humans , Male , Safe Sex/psychology , Safe Sex/statistics & numerical data , Social Support , Substance-Related Disorders/epidemiology , Substance-Related Disorders/psychology
14.
Educ Psychol Meas ; 71(2): 325-345, 2011 Mar 22.
Article in English | MEDLINE | ID: mdl-21544234

ABSTRACT

Maximum likelihood is commonly used for estimation of model parameters in analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in maximum likelihood analysis. Nonlinear constraints could be encountered in complicated applications. In this paper we develop an EM-type algorithm for estimating model parameters with both linear and nonlinear constraints. The empirical performance of the algorithm is demonstrated by a Monte Carlo study. Application of the algorithm for linear constraints is illustrated by setting up a two-level mean and covariance structure model for a real two-level data set and running an EQS program.

15.
Br J Math Stat Psychol ; 64(Pt 1): 107-33, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21506947

ABSTRACT

This paper develops a ridge procedure for structural equation modelling (SEM) with ordinal and continuous data by modelling the polychoric/polyserial/product-moment correlation matrix R. Rather than directly fitting R, the procedure fits a structural model to R(a) = R + aI by minimizing the normal distribution-based discrepancy function, where a > 0. Statistical properties of the parameter estimates are obtained. Four statistics for overall model evaluation are proposed. Empirical results indicate that the ridge procedure for SEM with ordinal data has better convergence rate, smaller bias, smaller mean square error, and better overall model evaluation than the widely used maximum likelihood procedure.
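The ridge move R(a) = R + aI shifts every eigenvalue of R up by a, which is why it can rescue a polychoric correlation matrix that is not positive definite. A small sketch with a hypothetical 2x2 matrix (eigenvalues computed analytically to keep it dependency-free):

```python
# R + a*I adds a to each eigenvalue of R, restoring positive definiteness.
import math

def eigvals_2x2(m):
    """Eigenvalues of a symmetric 2x2 matrix [[p, q], [q, r]], smallest first."""
    p, q, r = m[0][0], m[0][1], m[1][1]
    root = math.sqrt(((p - r) / 2) ** 2 + q ** 2)
    mid = (p + r) / 2
    return mid - root, mid + root

R = [[1.0, 1.2], [1.2, 1.0]]    # off-diagonal > 1 can arise from estimated polychorics
Ra = [[1.3, 1.2], [1.2, 1.3]]   # R + 0.3*I
print(eigvals_2x2(R))            # (-0.2, 2.2): R is not positive definite
print(eigvals_2x2(Ra))           # (0.1, 2.5): the ridge term fixes it
```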


Subjects
Data Collection/statistics & numerical data , Statistical Models , Psychometrics/statistics & numerical data , Social Sciences/statistics & numerical data , Statistics as Topic , Extraversion (Psychology) , Humans , Introversion (Psychology) , Likelihood Functions , Mathematical Computing , Neurotic Disorders/diagnosis , Neurotic Disorders/psychology , Personality Inventory/statistics & numerical data , Psychotic Disorders/diagnosis , Psychotic Disorders/psychology , Software , Surveys and Questionnaires
16.
Psychometrika ; 76(4): 537-49, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22232562

ABSTRACT

Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger. The bi-factor model has a general factor and a number of group factors. The purpose of this paper is to introduce an exploratory form of bi-factor analysis. An advantage of using exploratory bi-factor analysis is that one need not provide a specific bi-factor model a priori. The result of an exploratory bi-factor analysis, however, can be used as an aid in defining a specific bi-factor model. Our exploratory bi-factor analysis is simply exploratory factor analysis using a bi-factor rotation criterion. This is a criterion designed to produce perfect cluster structure in all but the first column of a rotated loading matrix. Examples are given to show how exploratory bi-factor analysis can be used with ideal and real data. The relation of exploratory bi-factor analysis to the Schmid-Leiman method is discussed.
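The target of the rotation criterion above, "perfect cluster structure in all but the first column," has a simple operational meaning: every item loads on the general factor (first column) and on exactly one group factor. A sketch of a checker for that pattern, with a hypothetical ideal loading matrix:

```python
# Check bi-factor structure: column 0 (general factor) nonzero in every row,
# and exactly one nonzero group-factor loading per row in the remaining columns.
def is_bifactor_pattern(loadings, tol=1e-8):
    return all(abs(row[0]) > tol and
               sum(abs(x) > tol for x in row[1:]) == 1
               for row in loadings)

ideal = [[0.7, 0.5, 0.0],
         [0.6, 0.4, 0.0],
         [0.8, 0.0, 0.5],
         [0.5, 0.0, 0.6]]
print(is_bifactor_pattern(ideal))  # True
```

In an exploratory analysis the rotated loadings only approximate this pattern; small cross-loadings are what the confirmatory follow-up model then sets to zero.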

17.
Psychometrika ; 86(4): 861-868, 2021 12.
Article in English | MEDLINE | ID: mdl-34401978

ABSTRACT

Sijtsma and Pfadt (Psychometrika, 2021) provide a wide-ranging defense for the use of coefficient alpha. Alpha is practical and useful when its limitations are acceptable. This paper discusses several methodologies for reliability, some new here, that go beyond alpha and were not emphasized by Sijtsma and Pfadt. Bentler's (Psychometrika 33:335-345, 1968. https://doi.org/10.1007/BF02289328) combined factor analysis (FA) and classical test theory (CTT) model, FACTT, provides a key conceptual foundation.


Subjects
Reproducibility of Results , Factor Analysis , Psychometrics
18.
Br J Math Stat Psychol ; 63(Pt 2): 273-91, 2010 May.
Article in English | MEDLINE | ID: mdl-19793410

ABSTRACT

Many test statistics are asymptotically equivalent to quadratic forms of normal variables, which are further equivalent to T = Σ_{i=1}^{d} λ_i z_i², with the z_i being independent and following N(0,1). Two approximations to the distribution of T have been implemented in popular software and are widely used in evaluating various models. It is important to know how accurate these approximations are when compared to each other and to the exact distribution of T. The paper systematically studies the quality of the two approximations and examines the effect of the λ_i and the degrees of freedom d by analysis and Monte Carlo. The results imply that the adjusted distribution for T can be as good as knowing its exact distribution. When the coefficient of variation of the λ_i is small, the rescaled statistic T_R = dT/Σ_{i=1}^{d} λ_i is also adequate for practical model inference. But comparing T_R against χ²(d) will inflate Type I errors when substantial differences exist among the λ_i, especially when d is also large.
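The rescaling is a mean-matching device: T = Σ λ_i z_i² has expectation Σ λ_i, so d·T/Σ λ_i has the same mean as χ²(d), though not the same higher moments when the λ_i differ, which is the source of the Type I error inflation noted above. A Monte Carlo check of the means, with hypothetical eigenvalues:

```python
# Simulate T = sum_i lambda_i * z_i^2 and verify E[T] = sum(lambda) and
# E[T_R] = d, the chi-square(d) mean, for a deliberately spread-out lambda.
import random

random.seed(0)
lam = [3.0, 1.0, 0.5, 0.5]     # hypothetical eigenvalues, d = 4, large spread
d, s = len(lam), sum(lam)
reps = 20000
ts = [sum(l * random.gauss(0, 1) ** 2 for l in lam) for _ in range(reps)]
mean_T = sum(ts) / reps         # should be near s = 5.0
mean_TR = d * mean_T / s        # should be near d = 4.0
```

Matching the mean does not match the variance (Var T = 2 Σ λ_i², versus 2d for χ²(d)), so with unequal λ_i the tail of T_R is heavier than the reference distribution.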


Subjects
Likelihood Functions , Psychological Models , Psychometrics/statistics & numerical data , Statistical Distributions , Chi-Square Distribution , Humans , Mathematical Computing , Monte Carlo Method , Software
19.
J Fam Issues ; 30(10): 1339-1355, 2009.
Article in English | MEDLINE | ID: mdl-19774203

ABSTRACT

This short-term longitudinal study investigated whether maternal educational attainment, maternal employment status, and family income affect African-American children's behavioral and cognitive functioning over time through their impacts on mothers' psychological functioning and parenting efficacy in a sample of 100 poor and near-poor single black mothers and their 3- and 4-year-old focal children. Results indicate that education, working status, and earnings display statistically significant, negative, indirect relations with behavior problems and, with the exception of earnings, statistically significant, positive, indirect relationships with teacher-rated adaptive language skills over time. Findings suggest further that parenting efficacy may mediate the link between poor and near-poor single black mothers' depressive symptoms and their preschoolers' subsequent school adjustment. Implications of these findings for policy and program interventions are discussed.

20.
J Prim Prev ; 30(3-4): 265-92, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19415497

ABSTRACT

A structural equations model examined the influence of three cultural variables of ethnic pride, traditional family values and acculturation, along with the mediating variables of avoidance self-efficacy and perceptions of the "benefits" of cigarette smoking, on cigarette and alcohol use in a sample of Latino middle school students in the Southwest. Girls (N = 585) and boys (N = 360) were analyzed separately. In both groups, higher ethnic pride and traditional family values exerted indirect effects on less cigarette smoking and alcohol use when mediated through greater self-efficacy and less endorsement of the "benefits" of cigarette smoking. Among the girls, greater ethnic pride also had a direct effect on less cigarette and alcohol use. Also, greater acculturation directly predicted more cigarette and alcohol use among the girls, but not among the boys. However, differences between the boys and girls were generally nonsignificant as revealed by multiple group latent variable models. These results offer implications for incorporating cultural variables into the design of culturally relevant prevention interventions that discourage cigarette and alcohol use among Latino adolescents.


Subjects
Acculturation , Alcohol Drinking/epidemiology , Hispanic or Latino/psychology , Self Concept , Smoking/ethnology , Social Identification , Adolescent , Child , Cross-Sectional Studies , Female , Humans , Male , Risk-Taking , Smoking/epidemiology , Southwestern United States/epidemiology