Results 1 - 20 of 805
1.
Biostatistics ; 25(2): 306-322, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-37230469

ABSTRACT

Measurement error is common in environmental epidemiologic studies, but methods for correcting measurement error in regression models with multiple environmental exposures as covariates have not been well investigated. We consider a multiple imputation approach, combining external or internal calibration samples that contain information on both true and error-prone exposures with the main study data of multiple exposures measured with error. We propose a constrained chained equations multiple imputation (CEMI) algorithm that places constraints on the imputation model parameters in the chained equations imputation based on the assumptions of strong nondifferential measurement error. We also extend the constrained CEMI method to accommodate nondetects in the error-prone exposures in the main study data. We estimate the variance of the regression coefficients using the bootstrap with two imputations of each bootstrapped sample. The constrained CEMI method is shown by simulations to outperform existing methods, namely the method that ignores measurement error, classical calibration, and regression prediction, yielding estimated regression coefficients with smaller bias and confidence intervals with coverage close to the nominal level. We apply the proposed method to the Neighborhood Asthma and Allergy Study to investigate the associations between the concentrations of multiple indoor allergens and the fractional exhaled nitric oxide level among asthmatic children in New York City. The constrained CEMI method can be implemented by imposing constraints on the imputation matrix using the mice and bootImpute packages in R.
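As a rough illustration of the chained-equations machinery this entry builds on (not the constrained CEMI algorithm itself, nor the R mice/bootImpute implementation it cites), the Python sketch below restricts the predictors used to impute an error-prone exposure and pools regression estimates across imputations; the data, variable names, and missingness mechanism are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Simulated stand-in for a main study: two correlated exposures and an outcome,
# with one exposure partly missing (loosely mimicking nondetects).
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=0.8, size=n)
y = 1.0 + 0.6 * x1 - 0.4 * x2 + rng.normal(size=n)
df = pd.DataFrame({"x1": x1, "x2": x2, "y": y})
df.loc[rng.random(n) < 0.25, "x2"] = np.nan

imp = mice.MICEData(df)
# Restrict which variables enter the imputation model for x2 -- a crude analogue of
# constraining the chained-equations step (the paper constrains the model
# *parameters*, which is not reproduced here).
imp.set_imputer("x2", formula="x1 + y")

estimates = []
for _ in range(20):                      # 20 imputed datasets
    imp.update_all(5)                    # a few chained-equations passes between draws
    completed = imp.data
    fit = sm.OLS(completed["y"], sm.add_constant(completed[["x1", "x2"]])).fit()
    estimates.append(fit.params)

# Pooled point estimates (the Rubin's-rules mean across imputations).
print(pd.concat(estimates, axis=1).mean(axis=1))
```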


Subjects
Algorithms , Environmental Exposure , Child , Humans , Animals , Mice , Environmental Exposure/adverse effects , Epidemiologic Studies , Calibration , Bias
2.
Am J Epidemiol ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38863120

ABSTRACT

In epidemiology and social sciences, propensity score methods are popular for estimating treatment effects using observational data, and multiple imputation is popular for handling covariate missingness. However, how to appropriately use multiple imputation for propensity score analysis is not completely clear. This paper aims to bring clarity on the consistency (or lack thereof) of methods that have been proposed, focusing on the within approach (where the effect is estimated separately in each imputed dataset and then the multiple estimates are combined) and the across approach (where typically propensity scores are averaged across imputed datasets before being used for effect estimation). We show that the within method is valid and can be used with any causal effect estimator that is consistent in the full-data setting. Existing across methods are inconsistent, but a different across method that averages the inverse probability weights across imputed datasets is consistent for propensity score weighting. We also comment on methods that rely on imputing a function of the missing covariate rather than the covariate itself, including imputation of the propensity score and of the probability weight. Based on consistency results and practical flexibility, we recommend generally using the standard within method. Throughout, we provide intuition to make the results meaningful to the broad audience of applied researchers.
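As a sketch of the "within" approach recommended above (generic inverse-probability weighting on simulated data, not the paper's estimators), each imputed dataset gets its own propensity score model and effect estimate, and the estimates are then averaged:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 2))                                               # confounders
a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1]))))   # treatment
y = 1.0 * a + x[:, 0] + x[:, 1] + rng.normal(size=n)                      # true effect = 1
x_obs = x.copy()
x_obs[rng.random(n) < 0.3, 1] = np.nan                                    # covariate missingness

estimates = []
for m in range(10):                                          # M = 10 imputations
    # Impute the covariate using treatment and outcome as well (standard MI advice).
    imputed = IterativeImputer(sample_posterior=True, random_state=m).fit_transform(
        np.column_stack([x_obs, a, y]))
    covs = imputed[:, :2]
    # "Within" approach: fit the propensity score model inside each imputed dataset.
    ps = LogisticRegression(max_iter=1000).fit(covs, a).predict_proba(covs)[:, 1]
    w = np.where(a == 1, 1 / ps, 1 / (1 - ps))
    est = (np.average(y[a == 1], weights=w[a == 1])
           - np.average(y[a == 0], weights=w[a == 0]))
    estimates.append(est)

print(np.mean(estimates))   # pooled estimate; Rubin's rules would also combine variances
```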

3.
Am J Epidemiol ; 193(6): 908-916, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38422371

ABSTRACT

Routinely collected testing data have been a vital resource for public health response during the COVID-19 pandemic and have revealed the extent to which Black and Hispanic persons have borne a disproportionate burden of SARS-CoV-2 infections and hospitalizations in the United States. However, missing race and ethnicity data and missed infections due to testing disparities limit the interpretation of testing data and obscure the true toll of the pandemic. We investigated potential bias arising from these 2 types of missing data through a case study carried out in Holyoke, Massachusetts, during the prevaccination phase of the pandemic. First, we estimated SARS-CoV-2 testing and case rates by race and ethnicity, imputing missing data using a joint modeling approach. We then investigated disparities in SARS-CoV-2 reported case rates and missed infections by comparing case rate estimates with estimates derived from a COVID-19 seroprevalence survey. Compared with the non-Hispanic White population, we found that the Hispanic population had similar testing rates (476 tested per 1000 vs 480 per 1000) but twice the case rate (8.1% vs 3.7%). We found evidence of inequitable testing, with a higher rate of missed infections in the Hispanic population than in the non-Hispanic White population (79 infections missed per 1000 vs 60 missed per 1000).


Subjects
COVID-19 Testing , COVID-19 , Hispanic or Latino , SARS-CoV-2 , Humans , COVID-19/ethnology , COVID-19/epidemiology , COVID-19/diagnosis , Massachusetts/epidemiology , COVID-19 Testing/statistics & numerical data , Hispanic or Latino/statistics & numerical data , Male , Female , Middle Aged , Healthcare Disparities/ethnology , Healthcare Disparities/statistics & numerical data , Adult , Health Status Disparities , Black or African American/statistics & numerical data , Ethnicity/statistics & numerical data , Aged , Missed Diagnosis/statistics & numerical data
4.
Am J Epidemiol ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38751323

ABSTRACT

In 2023, Martinez et al. examined trends in the inclusion, conceptualization, operationalization, and analysis of race and ethnicity among studies published in US epidemiology journals. Based on a random sample of papers (N=1,050) published from 1995 to 2018, the authors describe the treatment of race, ethnicity, and ethnorace in the analytic sample (N=414, 39% of baseline sample) over time. Across time strata, between 32% and 19% of studies lacked race data, and between 61% and 34% lacked ethnicity data. The review supplies stark evidence of the routine omission and variability of measures of race and ethnicity in epidemiologic research. Informed by public health critical race praxis (PHCRP), this commentary discusses the implications of four problems the findings suggest pervade epidemiology: 1) a general lack of clarity about what race and ethnicity are; 2) the limited use of critical race or other theory; 3) an ironic lack of rigor in measuring race and ethnicity; and 4) the ordinariness of racism and white supremacy in epidemiology. The identified practices reflect neither current publication guidelines nor the state of knowledge on race, ethnicity, and racism; therefore, we conclude by offering recommendations to move epidemiology toward more rigorous research in an increasingly diverse society.

5.
Am J Epidemiol ; 193(7): 1019-1030, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38400653

ABSTRACT

Targeted maximum likelihood estimation (TMLE) is increasingly used for doubly robust causal inference, but how missing data should be handled when using TMLE with data-adaptive approaches is unclear. Based on data (1992-1998) from the Victorian Adolescent Health Cohort Study, we conducted a simulation study to evaluate 8 missing-data methods in this context: complete-case analysis, extended TMLE incorporating an outcome-missingness model, the missing covariate missing indicator method, and 5 multiple imputation (MI) approaches using parametric or machine-learning models. We considered 6 scenarios that varied in terms of exposure/outcome generation models (presence of confounder-confounder interactions) and missingness mechanisms (whether outcome influenced missingness in other variables and presence of interaction/nonlinear terms in missingness models). Complete-case analysis and extended TMLE had small biases when outcome did not influence missingness in other variables. Parametric MI without interactions had large bias when exposure/outcome generation models included interactions. Parametric MI including interactions performed best in bias and variance reduction across all settings, except when missingness models included a nonlinear term. When choosing a method for handling missing data in the context of TMLE, researchers must consider the missingness mechanism and, for MI, compatibility with the analysis method. In many settings, a parametric MI approach that incorporates interactions and nonlinearities is expected to perform well.
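For reference, combining any within-imputation estimator (TMLE included) across M imputed datasets is usually done with Rubin's rules; these are the standard formulas, not anything specific to this study:

$$
\bar{\theta} = \frac{1}{M}\sum_{m=1}^{M}\hat{\theta}_m,
\qquad
T = \frac{1}{M}\sum_{m=1}^{M}\widehat{\operatorname{Var}}\!\left(\hat{\theta}_m\right)
  + \left(1+\frac{1}{M}\right)\frac{1}{M-1}\sum_{m=1}^{M}\left(\hat{\theta}_m-\bar{\theta}\right)^{2},
$$

where the first term of T is the within-imputation variance and the second the between-imputation variance.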


Subjects
Causality , Humans , Likelihood Functions , Adolescent , Data Interpretation, Statistical , Bias , Models, Statistical , Computer Simulation
6.
Am J Epidemiol ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38960664

ABSTRACT

It is unclear how the risk of post-COVID symptoms evolved during the pandemic, especially before the spread of Severe Acute Respiratory Syndrome Coronavirus 2 variants and the availability of vaccines. We used modified Poisson regressions to compare the risk of six-month post-COVID symptoms and their associated risk factors according to the period of the first acute COVID-19 episode: during the French first (March-May 2020) or second (September-November 2020) wave. Non-response weights and multiple imputation were used to handle missing data. Among participants aged 15 years or older in a national population-based cohort, the risk of post-COVID symptoms was 14.6% (95% CI: 13.9%, 15.3%) in March-May 2020, versus 7.0% (95% CI: 6.3%, 7.7%) in September-November 2020 (adjusted RR: 1.36, 95% CI: 1.20, 1.55). For both periods, the risk was higher in the presence of baseline physical condition(s), and it increased with the number of acute symptoms. During the first wave, the risk was also higher for women and in the presence of baseline mental condition(s), and it varied with educational level. In France in 2020, the risk of six-month post-COVID symptoms was higher during the first wave than during the second. This difference was observed before the spread of variants and the availability of vaccines.
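For readers unfamiliar with "modified Poisson regression", the sketch below shows the usual idea (Zou, 2004): a Poisson model for a binary outcome with a robust sandwich variance yields adjusted risk ratios. It is a generic illustration on simulated data, not this study's weighted, multiply imputed analysis, and the variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
wave2 = rng.binomial(1, 0.5, n)                    # 1 = first infection in Sep-Nov 2020
female = rng.binomial(1, 0.5, n)
risk = 0.15 * np.exp(-0.3 * wave2 + 0.2 * female)  # true risk of persistent symptoms
persistent = rng.binomial(1, risk)
df = pd.DataFrame({"persistent": persistent, "wave2": wave2, "female": female})

# Poisson regression with a robust (sandwich) variance for a binary outcome:
# exponentiated coefficients are adjusted risk ratios.
fit = smf.glm("persistent ~ wave2 + female", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))      # adjusted risk ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the RR scale
```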

7.
Biostatistics ; 24(3): 743-759, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-35579386

ABSTRACT

Understanding associations between injury severity and postacute care recovery for patients with traumatic brain injury (TBI) is crucial to improving care. Estimating these associations requires information on patients' injury, demographics, and healthcare utilization, which are dispersed across multiple data sets. Because of privacy regulations, unique identifiers are not available to link records across these data sets. Record linkage methods identify records that represent the same patient across data sets in the absence of unique identifiers. With a large number of records, these methods may result in many false links. Health providers are a natural grouping scheme for patients, because only records that receive care from the same provider can represent the same patient. In some cases, providers are defined within each data set, but they are not uniquely identified across data sets. We propose a Bayesian record linkage procedure that simultaneously links providers and patients. The procedure improves the accuracy of the estimated links compared to current methods. We use this procedure to merge a trauma registry with Medicare claims to estimate the association between TBI patients' injury severity and postacute care recovery.
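A toy deterministic sketch of the blocking idea (comparing patient records only within a shared provider) is shown below. It assumes providers are already harmonized across files, which is precisely what the paper's Bayesian procedure does not assume, and the records and scoring rule are invented.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Record:
    provider: str      # grouping variable (assumed harmonized in this toy example)
    last_name: str
    birth_year: int

registry = [Record("Hospital A", "Smith", 1948), Record("Hospital B", "Jones", 1950)]
claims = [Record("Hospital A", "Smith", 1948), Record("Hospital B", "Jones", 1951)]

def agreement(a: Record, b: Record) -> int:
    """Crude count of agreeing, non-unique identifiers."""
    return int(a.birth_year == b.birth_year) + int(a.last_name == b.last_name)

links = []
for a, b in product(registry, claims):
    if a.provider != b.provider:   # blocking: only within-provider pairs are compared
        continue
    if agreement(a, b) == 2:       # arbitrary all-fields-agree rule for the toy example
        links.append((a, b))

print(links)   # one plausible link found within the "Hospital A" block
```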


Subjects
Brain Injuries, Traumatic , Subacute Care , Aged , Humans , United States , Medicare , Bayes Theorem , Registries , Brain Injuries, Traumatic/therapy
8.
Stat Med ; 43(3): 514-533, 2024 02 10.
Article in English | MEDLINE | ID: mdl-38073512

ABSTRACT

Missing data are a common problem in medical research and are commonly addressed using multiple imputation. Although traditional imputation methods allow for valid statistical inference when data are missing at random (MAR), their implementation is problematic when the presence of missingness depends on unobserved variables, that is, when the data are missing not at random (MNAR). Unfortunately, this MNAR situation is rather common in observational studies, registries, and other sources of real-world data. While several imputation methods have been proposed for addressing individual studies when data are MNAR, their application and validity in large datasets with a multilevel structure remain unclear. We therefore explored in depth the consequences of MNAR data in hierarchical settings and proposed a novel multilevel imputation method for common missingness patterns in clustered datasets. This method is based on the principles of Heckman selection models and adopts a two-stage meta-analysis approach to impute binary and continuous variables that may be outcomes or predictors and that are systematically or sporadically missing. After evaluating the proposed imputation model in simulated scenarios, we illustrate its use in a cross-sectional community survey to estimate the prevalence of malaria parasitemia in children aged 2-10 years in five regions of Uganda.


Subjects
Biomedical Research , Child , Humans , Cross-Sectional Studies , Uganda/epidemiology
9.
Stat Med ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890124

ABSTRACT

Policymakers often require information on programs' long-term impacts that is not available when decisions are made. For example, while rigorous evidence from the Oregon Health Insurance Experiment (OHIE) shows that having health insurance influences short-term health and financial measures, the impact on long-term outcomes, such as mortality, will not be known for many years following the program's implementation. We demonstrate how data fusion methods may be used to address the problem of missing final outcomes and predict long-run impacts of interventions before the requisite data are available. We implement this method by concatenating data on an intervention (such as the OHIE) with auxiliary long-term data and then imputing missing long-term outcomes using short-term surrogate outcomes while approximating uncertainty with replication methods. We use simulations to examine the performance of the methodology and apply the method in a case study. Specifically, we fuse data on the OHIE with data from the National Longitudinal Mortality Study and estimate that being eligible to apply for subsidized health insurance will lead to a statistically significant improvement in long-term mortality.
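A minimal sketch of the concatenate-and-impute idea follows, under the assumption that the auxiliary data carry both the short-term surrogate and the long-term outcome; it omits the replication-based uncertainty step and is not the OHIE/NLMS analysis (all data and names are simulated).

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)

# Auxiliary data: short-term surrogate s and long-term outcome y both observed.
n_aux = 4000
s_aux = rng.normal(size=n_aux)
aux = pd.DataFrame({"treat": np.nan, "s": s_aux,
                    "y": 0.7 * s_aux + rng.normal(scale=0.7, size=n_aux)})

# Intervention data: treatment and surrogate observed, long-term outcome not yet available.
n_trial = 1000
treat = rng.binomial(1, 0.5, n_trial)
trial = pd.DataFrame({"treat": treat,
                      "s": 0.4 * treat + rng.normal(size=n_trial),
                      "y": np.nan})

# Concatenate, impute the missing long-term outcome from the surrogate, keep trial rows.
stacked = pd.concat([trial, aux], ignore_index=True)
completed = IterativeImputer(sample_posterior=True, random_state=0).fit_transform(stacked)
y_hat = completed[:n_trial, list(stacked.columns).index("y")]

print(y_hat[treat == 1].mean() - y_hat[treat == 0].mean())
# Predicted long-run effect; roughly 0.4 * 0.7 = 0.28 under this simulated mechanism.
```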

10.
Stat Med ; 43(6): 1238-1255, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38258282

ABSTRACT

In clinical studies, multi-state model (MSM) analysis is often used to describe the sequence of events that patients experience, enabling better understanding of disease progression. A complicating factor in many MSM studies is that the exact event times may not be known. Motivated by a real dataset of patients who received stem cell transplants, we considered the setting in which some event times were exactly observed and some were missing. In our setting, there was little information about the time intervals in which the missing event times occurred and missingness depended on the event type, given the analysis model covariates. These additional challenges limited the usefulness of some missing data methods (maximum likelihood, complete case analysis, and inverse probability weighting). We show that multiple imputation (MI) of event times can perform well in this setting. MI is a flexible method that can be used with any complete data analysis model. Through an extensive simulation study, we show that MI by predictive mean matching (PMM), in which sampling is from a set of observed times without reliance on a specific parametric distribution, has little bias when event times are missing at random, conditional on the observed data. Applying PMM separately for each sub-group of patients with a different pathway through the MSM tends to further reduce bias and improve precision. We recommend MI using PMM methods when performing MSM analysis with Markov models and partially observed event times.
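Because predictive mean matching is central here, a bare-bones single-variable sketch may help: fit a simple linear predictor on complete cases, and for each missing value draw an observed donor value from the k cases with the closest predicted means. This omits the parameter-perturbation step that proper MI implementations (e.g., mice) add, and the data are invented.

```python
import numpy as np

def pmm_impute(x_obs, y_obs, x_mis, k=5, rng=None):
    """Single predictive-mean-matching draw: for each missing case, sample one
    observed value from the k donors whose predicted means are closest."""
    rng = np.random.default_rng() if rng is None else rng
    X_obs = np.column_stack([np.ones_like(x_obs), x_obs])
    beta, *_ = np.linalg.lstsq(X_obs, y_obs, rcond=None)   # simple linear predictor
    pred_obs = X_obs @ beta
    pred_mis = np.column_stack([np.ones_like(x_mis), x_mis]) @ beta
    imputed = np.empty_like(pred_mis)
    for i, p in enumerate(pred_mis):
        donors = np.argsort(np.abs(pred_obs - p))[:k]      # k nearest predicted means
        imputed[i] = y_obs[rng.choice(donors)]             # draw an observed donor value
    return imputed

# Toy use: event times (y) missing for some patients, predicted from a covariate (x).
rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x + rng.normal(size=200)
miss = rng.random(200) < 0.3
print(pmm_impute(x[~miss], y[~miss], x[miss], rng=rng)[:5])
```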


Subjects
Research Design , Humans , Data Interpretation, Statistical , Computer Simulation , Probability , Bias
11.
Stat Med ; 43(2): 379-394, 2024 01 30.
Article in English | MEDLINE | ID: mdl-37987515

ABSTRACT

Validation studies are often used to obtain more reliable information in settings with error-prone data. Validated data on a subsample of subjects can be used together with error-prone data on all subjects to improve estimation. In practice, more than one round of data validation may be required, and direct application of standard approaches for combining validation data into analyses may lead to inefficient estimators since the information available from intermediate validation steps is only partially considered or even completely ignored. In this paper, we present two novel extensions of multiple imputation and generalized raking estimators that make full use of all available data. We show through simulations that incorporating information from intermediate steps can lead to substantial gains in efficiency. This work is motivated by and illustrated in a study of contraceptive effectiveness among 83 671 women living with HIV, whose data were originally extracted from electronic medical records, of whom 4732 had their charts reviewed, and a subsequent 1210 also had a telephone interview to validate key study variables.


Subjects
Data Accuracy , Electronic Health Records , Female , Humans , HIV Infections
12.
Stat Med ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38897921

ABSTRACT

Biomarkers are often measured in bulk to diagnose patients, monitor patient conditions, and research novel drug pathways. The measurement of these biomarkers often suffers from detection limits that result in missing and untrustworthy measurements. Frequently, missing biomarkers are imputed so that downstream analysis can be conducted with modern statistical methods that cannot normally handle data subject to informative censoring. This work develops an empirical Bayes g-modeling method for imputing and denoising biomarker measurements. We establish superior estimation properties compared to popular methods in simulations and with real data, providing useful biomarker measurement estimates for downstream analysis.

13.
Stat Med ; 43(6): 1170-1193, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38386367

ABSTRACT

This research introduces a multivariate τ-inflated beta regression (τ-IBR) modeling approach for the analysis of censored recurrent event data that is particularly useful when there is a mixture of (a) individuals who are generally less susceptible to recurrent events and (b) heterogeneity in the duration of event-free periods amongst those who experience events. The modeling approach is applied to a restructured version of the recurrent event data that consists of censored longitudinal times-to-first-event in τ-length follow-up windows that potentially overlap. Multiple imputation (MI) and expectation-solution (ES) approaches appropriate for censored data are developed as part of the model fitting process. A suite of useful analysis outputs is provided from the τ-IBR model, including parameter estimates to help interpret the (a) and (b) mixture of event times in the data, estimates of the mean τ-restricted event-free duration in a τ-length follow-up window based on a patient's covariate profile, and heat maps of raw τ-restricted event-free durations observed in the data, with censored observations augmented via averages across MI datasets. Simulations indicate good statistical performance of the proposed τ-IBR approach to modeling censored recurrent event data. An example is given based on the Azithromycin for Prevention of COPD Exacerbations Trial.


Subjects
Azithromycin , Pulmonary Disease, Chronic Obstructive , Humans
14.
BMC Med Res Methodol ; 24(1): 32, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341552

ABSTRACT

BACKGROUND: When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution and generate a covariate-adjusted estimate of the marginal treatment effect. METHODS: The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. RESULTS: We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model, and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to those of the standard approach to model-based standardization. CONCLUSION: We demonstrate that multiple imputation can be used to marginalize over a target covariate distribution, providing appropriate inference with a correctly specified parametric outcome model and offering statistical performance comparable to that of the standard approach to model-based standardization.
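As context for what MIM is compared against, here is a minimal sketch of the standard approach to model-based standardization for a binary outcome (fit the outcome model, predict everyone under each treatment level, average, and contrast on the marginal log-odds-ratio scale); the data are simulated, and the bootstrap inference step is only noted in a comment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)                                    # covariate
a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))           # treatment depends on x
p_y = 1 / (1 + np.exp(-(-1.0 + 0.8 * a + 0.6 * x)))
y = rng.binomial(1, p_y)
df = pd.DataFrame({"y": y, "a": a, "x": x})

# Outcome model -> predict everyone under a=1 and a=0 -> average -> marginal log OR.
fit = smf.logit("y ~ a + x", data=df).fit(disp=False)
p1 = fit.predict(df.assign(a=1)).mean()
p0 = fit.predict(df.assign(a=0)).mean()
marginal_log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
print(marginal_log_or)
# The standard approach bootstraps this whole procedure for inference; the paper's
# MIM alternative instead generates and analyzes synthetic datasets via multiple
# imputation within a Bayesian framework.
```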


Subjects
Models, Statistical , Humans , Bayes Theorem , Linear Models , Computer Simulation , Logistic Models , Reference Standards
15.
Multivariate Behav Res ; 59(3): 411-433, 2024.
Article in English | MEDLINE | ID: mdl-38379305

ABSTRACT

Propensity score (PS) analyses are increasingly popular in the behavioral sciences. Two issues often add complexity to PS analyses: missing data in observed covariates and a clustered data structure. Previous research has examined methods for conducting PS analyses that address either issue alone. In practice, the two issues often co-occur, but the performance of methods for PS analyses in the presence of both issues has not been evaluated previously. In this study, we consider PS weighting analysis when data are clustered and observed covariates have missing values. A simulation study is conducted to evaluate the performance of different missing data handling methods (complete-case analysis, single-level imputation, or multilevel imputation) combined with different multilevel PS weighting methods (fixed- or random-effects PS models, inverse propensity weighting or clustered weighting, and weighted single-level or multilevel outcome models). The results suggest that bias in average treatment effect estimation can be reduced by better accounting for clustering in both the missing data handling stage (such as with multilevel imputation) and the PS analysis stage (such as with the fixed-effects PS model, clustered weighting, and a weighted multilevel outcome model). A real-data example is provided for illustration.


Subjects
Computer Simulation , Propensity Score , Humans , Cluster Analysis , Data Interpretation, Statistical , Computer Simulation/statistics & numerical data , Models, Statistical , Multilevel Analysis/methods , Bias
16.
Pharm Stat ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38581166

ABSTRACT

The combination of propensity score analysis and multiple imputation has been prominent in epidemiological research in recent years. However, studies on the evaluation of balance in this combination are limited. In this paper, we propose a new method for assessing balance in propensity score analysis following multiple imputation. A simulation study was conducted to evaluate the performance of balance assessment methods (Leyrat's, Leite's, and the new combined method). Simulated scenarios varied regarding whether missing data occurred in the control group only or in both the treatment and control groups, and whether the imputation model included the outcome. Leyrat's method was more biased in all the studied scenarios. Leite's method and the new combined method yielded balanced results with a lower mean absolute difference, regardless of whether the outcome was included in the imputation model. Leyrat's method had a higher false-positive ratio, whereas Leite's method and the combined method had higher specificity and accuracy, especially when the outcome was not included in the imputation model. According to the simulation results, Leyrat's and Leite's methods often contradict each other when appraising balance. This discrepancy can be resolved using the new combined method.
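Balance assessment in this setting usually rests on weighted standardized mean differences computed within each imputed dataset; the sketch below shows that generic diagnostic only, not Leyrat's, Leite's, or the proposed combined rule, and the data are simulated.

```python
import numpy as np

def weighted_smd(x, treat, w):
    """Weighted standardized mean difference for one covariate (generic diagnostic;
    decision thresholds and pooling across imputations are not implemented here)."""
    x1, w1 = x[treat == 1], w[treat == 1]
    x0, w0 = x[treat == 0], w[treat == 0]
    m1, m0 = np.average(x1, weights=w1), np.average(x0, weights=w0)
    v1 = np.average((x1 - m1) ** 2, weights=w1)
    v0 = np.average((x0 - m0) ** 2, weights=w0)
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

rng = np.random.default_rng(6)
x = rng.normal(size=1000)
ps = 1 / (1 + np.exp(-x))                       # true propensity score, known here
treat = rng.binomial(1, ps)
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))  # inverse probability of treatment weights
print(weighted_smd(x, treat, np.ones_like(x)),  # imbalance before weighting
      weighted_smd(x, treat, w))                # near zero after weighting
```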

17.
Pharm Stat ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631678

ABSTRACT

Accurate frequentist performance of a method is desirable in confirmatory clinical trials, but it is not sufficient on its own to justify the use of a missing data method. Reference-based conditional mean imputation, with variance estimation justified solely by its frequentist performance, has the surprising and undesirable property that the estimated variance becomes smaller as the number of missing observations grows; under jump-to-reference, it effectively forces the true treatment effect to be exactly zero for patients with missing data.

18.
Biom J ; 66(3): e2200326, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637322

ABSTRACT

In the context of missing data, the identifiability or "recoverability" of the average causal effect (ACE) depends not only on the usual causal assumptions but also on missingness assumptions that can be depicted by adding variable-specific missingness indicators to causal diagrams, creating missingness directed acyclic graphs (m-DAGs). Previous research described canonical m-DAGs, representing typical multivariable missingness mechanisms in epidemiological studies, and examined mathematically the recoverability of the ACE in each case. However, this work assumed no effect modification and did not investigate methods for estimation across such scenarios. Here, we extend this research by determining the recoverability of the ACE in settings with effect modification and conducting a simulation study to evaluate the performance of widely used missing data methods when estimating the ACE using correctly specified g-computation. Methods assessed were complete case analysis (CCA) and various implementations of multiple imputation (MI) with varying degrees of compatibility with the outcome model used in g-computation. Simulations were based on an example from the Victorian Adolescent Health Cohort Study (VAHCS), where interest was in estimating the ACE of adolescent cannabis use on mental health in young adulthood. We found that the ACE is recoverable when no incomplete variable (exposure, outcome, or confounder) causes its own missingness, and nonrecoverable otherwise, in simplified versions of 10 canonical m-DAGs that excluded unmeasured common causes of missingness indicators. Despite this nonrecoverability, simulations showed that MI approaches that are compatible with the outcome model in g-computation may enable approximately unbiased estimation across all canonical m-DAGs considered, except when the outcome causes its own missingness or causes the missingness of a variable that causes its own missingness. In the latter settings, researchers may need to consider sensitivity analysis methods incorporating external information (e.g., delta-adjustment methods). The VAHCS case study illustrates the practical implications of these findings.


Subjects
Cohort Studies , Humans , Young Adult , Adult , Adolescent , Data Interpretation, Statistical , Causality , Computer Simulation
19.
Behav Res Methods ; 56(3): 1715-1737, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37540467

ABSTRACT

Multiple Imputation (MI) is one of the most popular approaches to addressing missing values in questionnaires and surveys. MI with multivariate imputation by chained equations (MICE) allows flexible imputation of many types of data. In MICE, for each variable under imputation, the imputer needs to specify which variables should act as predictors in the imputation model. The selection of these predictors is a difficult, but fundamental, step in the MI procedure, especially when there are many variables in a data set. In this project, we explore the use of principal component regression (PCR) as a univariate imputation method in the MICE algorithm to automatically address the many-variables problem that arises when imputing large social science data. We compare different implementations of PCR-based MICE with a correlation-thresholding strategy through two Monte Carlo simulation studies and a case study. We find that using PCR on a variable-by-variable basis performs best and that it can perform nearly as well as expertly designed imputation procedures.
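A rough sketch of a single PCR-based imputation step follows (regress the incomplete variable on the leading principal components of the other items, then impute prediction plus noise); the full MICE cycling over variables and the proper Bayesian parameter draws are omitted, and the data are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pcr_impute_one(target, predictors, n_components=5, rng=None):
    """One stochastic PCR imputation step for a single incomplete variable:
    replace the many raw predictors with their leading principal components,
    regress the target on those components, and impute missing values as
    prediction + normal noise. (MICE would cycle such steps over all
    incomplete variables; that loop is omitted here.)"""
    rng = np.random.default_rng() if rng is None else rng
    obs = ~np.isnan(target)
    pca = PCA(n_components=n_components).fit(predictors[obs])
    z_obs, z_mis = pca.transform(predictors[obs]), pca.transform(predictors[~obs])
    model = LinearRegression().fit(z_obs, target[obs])
    resid_sd = np.std(target[obs] - model.predict(z_obs))
    out = target.copy()
    out[~obs] = model.predict(z_mis) + rng.normal(scale=resid_sd, size=(~obs).sum())
    return out

# Toy use with 40 correlated survey items and one incomplete variable.
rng = np.random.default_rng(7)
latent = rng.normal(size=(500, 1))
items = latent + rng.normal(scale=0.5, size=(500, 40))
y = items @ rng.normal(size=40) / 40 + rng.normal(scale=0.2, size=500)
y[rng.random(500) < 0.2] = np.nan
print(np.isnan(pcr_impute_one(y, items, rng=rng)).sum())   # 0 -> all values filled in
```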


Subjects
Algorithms , Humans , Computer Simulation , Surveys and Questionnaires , Monte Carlo Method
20.
Behav Res Methods ; 56(3): 1229-1243, 2024 Mar.
Article in English | MEDLINE | ID: mdl-36973636

ABSTRACT

In structural equation modeling, when multiple imputation is used for handling missing data, model fit evaluation involves pooling likelihood-ratio test statistics across imputations. Under the normality assumption, the two most popular pooling approaches were proposed by Li et al. (Statistica Sinica, 1(1), 65-92, 1991) and Meng and Rubin (Biometrika, 79(1), 103-111, 1992). When the assumption of normality is violated, it is not clear how well these pooling approaches work with the test statistics generated from various robust estimators and multiple imputation methods. Jorgensen and colleagues (2021) implemented these pooling approaches in their R package semTools; however, no systematic evaluation has been conducted. In this simulation study, we examine the performance of these approaches in working with different imputation methods and robust estimators under nonnormality. We found that the naïve pooling approach based on Meng and Rubin (Biometrika, 79(1), 103-111, 1992; D3SN) worked best when combined with normal-theory-based imputation and either the MLM or MLMV estimator.


Subjects
Models, Statistical , Humans , Data Interpretation, Statistical , Computer Simulation , Latent Class Analysis