Results 1 - 20 of 806
1.
Biostatistics ; 25(2): 306-322, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-37230469

ABSTRACT

Measurement error is common in environmental epidemiologic studies, but methods for correcting measurement error in regression models with multiple environmental exposures as covariates have not been well investigated. We consider a multiple imputation approach, combining external or internal calibration samples that contain information on both true and error-prone exposures with the main study data of multiple exposures measured with error. We propose a constrained chained equations multiple imputation (CEMI) algorithm that places constraints on the imputation model parameters in the chained equations imputation based on the assumptions of strong nondifferential measurement error. We also extend the constrained CEMI method to accommodate nondetects in the error-prone exposures in the main study data. We estimate the variance of the regression coefficients using the bootstrap with two imputations of each bootstrapped sample. The constrained CEMI method is shown by simulations to outperform existing methods, namely the method that ignores measurement error, classical calibration, and regression prediction, yielding estimated regression coefficients with smaller bias and confidence intervals with coverage close to the nominal level. We apply the proposed method to the Neighborhood Asthma and Allergy Study to investigate the associations between the concentrations of multiple indoor allergens and the fractional exhaled nitric oxide level among asthmatic children in New York City. The constrained CEMI method can be implemented by imposing constraints on the imputation matrix using the mice and bootImpute packages in R.
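The bootstrap-with-two-imputations variance idea mentioned above can be sketched in Python. This is not the authors' constrained CEMI algorithm or the mice/bootImpute implementation: the imputation step below is a toy normal draw and the pooling is a naive bootstrap variance, shown only to illustrate the resample-then-impute mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_mean_noise(x):
    """Toy single imputation: replace NaNs with draws around the
    observed mean (a stand-in for a real chained-equations model)."""
    x = x.copy()
    obs = x[~np.isnan(x)]
    miss = np.isnan(x)
    x[miss] = rng.normal(obs.mean(), obs.std(), miss.sum())
    return x

def boot_mi(data, estimate, impute, n_boot=200, n_imp=2):
    """Resample the data, impute each bootstrap replicate n_imp times,
    and summarise the replicate-level estimates (naive pooling)."""
    n = len(data)
    per_boot = np.empty(n_boot)
    for b in range(n_boot):
        boot = data[rng.integers(0, n, n)]
        per_boot[b] = np.mean([estimate(impute(boot)) for _ in range(n_imp)])
    return per_boot.mean(), per_boot.var(ddof=1)

x = rng.normal(5.0, 1.0, 300)
x[rng.random(300) < 0.3] = np.nan   # 30% missing completely at random
theta, var = boot_mi(x, np.mean, impute_mean_noise)
print(theta, var)
```

In practice the imputation step would be a chained-equations model with the measurement-error constraints the abstract describes, and the pooling would follow the bootImpute rules rather than a raw bootstrap variance.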


Subject(s)
Algorithms; Environmental Exposure; Child; Humans; Animals; Mice; Environmental Exposure/adverse effects; Epidemiologic Studies; Calibration; Bias
2.
Am J Epidemiol ; 193(6): 908-916, 2024 06 03.
Article in English | MEDLINE | ID: mdl-38422371

ABSTRACT

Routinely collected testing data have been a vital resource for public health response during the COVID-19 pandemic and have revealed the extent to which Black and Hispanic persons have borne a disproportionate burden of SARS-CoV-2 infections and hospitalizations in the United States. However, missing race and ethnicity data and missed infections due to testing disparities limit the interpretation of testing data and obscure the true toll of the pandemic. We investigated potential bias arising from these 2 types of missing data through a case study carried out in Holyoke, Massachusetts, during the prevaccination phase of the pandemic. First, we estimated SARS-CoV-2 testing and case rates by race and ethnicity, imputing missing data using a joint modeling approach. We then investigated disparities in SARS-CoV-2 reported case rates and missed infections by comparing case rate estimates with estimates derived from a COVID-19 seroprevalence survey. Compared with the non-Hispanic White population, we found that the Hispanic population had similar testing rates (476 tested per 1000 vs 480 per 1000) but twice the case rate (8.1% vs 3.7%). We found evidence of inequitable testing, with a higher rate of missed infections in the Hispanic population than in the non-Hispanic White population (79 infections missed per 1000 vs 60 missed per 1000).


Subject(s)
COVID-19 Testing; COVID-19; Hispanic or Latino; SARS-CoV-2; Humans; COVID-19/ethnology; COVID-19/epidemiology; COVID-19/diagnosis; Massachusetts/epidemiology; COVID-19 Testing/statistics & numerical data; Hispanic or Latino/statistics & numerical data; Male; Female; Middle Aged; Healthcare Disparities/ethnology; Healthcare Disparities/statistics & numerical data; Adult; Health Status Disparities; Black or African American/statistics & numerical data; Ethnicity/statistics & numerical data; Aged; Diagnostic Errors/statistics & numerical data
3.
Am J Epidemiol ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38751323

ABSTRACT

In 2023, Martinez et al. examined trends in the inclusion, conceptualization, operationalization, and analysis of race and ethnicity among studies published in US epidemiology journals. Based on a random sample of papers (N=1,050) published from 1995-2018, the authors describe the treatment of race, ethnicity, and ethnorace in the analytic sample (N=414, 39% of the baseline sample) over time. Depending on the time stratum, 19% to 32% of studies lacked race data, and 34% to 61% lacked ethnicity data. The review supplies stark evidence of the routine omission and variable measurement of race and ethnicity in epidemiologic research. Informed by public health critical race praxis (PHCRP), this commentary discusses the implications of four problems the findings suggest pervade epidemiology: 1) a general lack of clarity about what race and ethnicity are; 2) the limited use of critical race or other theory; 3) an ironic lack of rigor in measuring race and ethnicity; and 4) the ordinariness of racism and white supremacy in epidemiology. The identified practices reflect neither current publication guidelines nor the state of knowledge on race, ethnicity, and racism; we therefore conclude by offering recommendations to move epidemiology toward more rigorous research in an increasingly diverse society.

4.
Am J Epidemiol ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38863120

ABSTRACT

In epidemiology and social sciences, propensity score methods are popular for estimating treatment effects using observational data, and multiple imputation is popular for handling covariate missingness. However, how to appropriately use multiple imputation for propensity score analysis is not completely clear. This paper aims to bring clarity on the consistency (or lack thereof) of methods that have been proposed, focusing on the within approach (where the effect is estimated separately in each imputed dataset and then the multiple estimates are combined) and the across approach (where typically propensity scores are averaged across imputed datasets before being used for effect estimation). We show that the within method is valid and can be used with any causal effect estimator that is consistent in the full-data setting. Existing across methods are inconsistent, but a different across method that averages the inverse probability weights across imputed datasets is consistent for propensity score weighting. We also comment on methods that rely on imputing a function of the missing covariate rather than the covariate itself, including imputation of the propensity score and of the probability weight. Based on consistency results and practical flexibility, we recommend generally using the standard within method. Throughout, we provide intuition to make the results meaningful to the broad audience of applied researchers.
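The "within" approach described above can be sketched with numpy. Everything here is hypothetical and simplified: the confounder is a single binary variable and the imputation model is a crude marginal draw (a proper model would condition on treatment and outcome), so the sketch shows only the impute-then-estimate-then-average mechanics, not a recommended imputation model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ipw_ate(z, a, y):
    """IPW estimate of the ATE, with the propensity score estimated
    by stratifying on a single binary confounder z."""
    ps = np.array([a[z == v].mean() for v in (0, 1)])[z]
    w1, w0 = a / ps, (1 - a) / (1 - ps)
    return (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()

n = 2000
z = rng.integers(0, 2, n)                 # binary confounder
a = rng.binomial(1, 0.3 + 0.4 * z)        # treatment depends on z
y = 2.0 * a + z + rng.normal(0, 1, n)     # true ATE = 2
z_mis = z.astype(float)
z_mis[rng.random(n) < 0.25] = np.nan      # 25% of z missing

ates = []
for _ in range(20):                       # "within": impute, then
    zi = z_mis.copy()                     # estimate in each completed
    miss = np.isnan(zi)                   # dataset, then average
    zi[miss] = rng.binomial(1, z_mis[~miss].mean(), miss.sum())
    ates.append(ipw_ate(zi.astype(int), a, y))
ate_pooled = float(np.mean(ates))
print(ate_pooled)
```

For interval estimation, Rubin's rules would additionally combine the within- and between-imputation variances of the per-dataset estimates.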

5.
Am J Epidemiol ; 193(7): 1019-1030, 2024 07 08.
Article in English | MEDLINE | ID: mdl-38400653

ABSTRACT

Targeted maximum likelihood estimation (TMLE) is increasingly used for doubly robust causal inference, but how missing data should be handled when using TMLE with data-adaptive approaches is unclear. Based on data (1992-1998) from the Victorian Adolescent Health Cohort Study, we conducted a simulation study to evaluate 8 missing-data methods in this context: complete-case analysis, extended TMLE incorporating an outcome-missingness model, the missing covariate missing indicator method, and 5 multiple imputation (MI) approaches using parametric or machine-learning models. We considered 6 scenarios that varied in terms of exposure/outcome generation models (presence of confounder-confounder interactions) and missingness mechanisms (whether outcome influenced missingness in other variables and presence of interaction/nonlinear terms in missingness models). Complete-case analysis and extended TMLE had small biases when outcome did not influence missingness in other variables. Parametric MI without interactions had large bias when exposure/outcome generation models included interactions. Parametric MI including interactions performed best in bias and variance reduction across all settings, except when missingness models included a nonlinear term. When choosing a method for handling missing data in the context of TMLE, researchers must consider the missingness mechanism and, for MI, compatibility with the analysis method. In many settings, a parametric MI approach that incorporates interactions and nonlinearities is expected to perform well.


Subject(s)
Causality; Humans; Likelihood Functions; Adolescent; Data Interpretation, Statistical; Bias; Models, Statistical; Computer Simulation
6.
Am J Epidemiol ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38960664

ABSTRACT

It is unclear how the risk of post-COVID symptoms evolved during the pandemic, especially before the spread of SARS-CoV-2 variants and the availability of vaccines. We used modified Poisson regressions to compare the risk of six-month post-COVID symptoms, and their associated risk factors, according to the period of the first acute COVID-19 episode: during the French first wave (March-May 2020) or second wave (September-November 2020). Non-response weights and multiple imputation were used to handle missing data. Among participants aged 15 years or older in a national population-based cohort, the risk of post-COVID symptoms was 14.6% (95% CI: 13.9%, 15.3%) in March-May 2020, versus 7.0% (95% CI: 6.3%, 7.7%) in September-November 2020 (adjusted RR: 1.36, 95% CI: 1.20, 1.55). For both periods, the risk was higher in the presence of baseline physical condition(s) and increased with the number of acute symptoms. During the first wave, the risk was also higher for women and in the presence of baseline mental condition(s), and it varied with educational level. In France in 2020, the risk of six-month post-COVID symptoms was thus higher during the first wave than during the second, a difference observed before the spread of variants and the availability of vaccines.
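"Modified Poisson regression" is commonly understood as Zou's approach: a log-link Poisson model fit to a binary outcome with a robust sandwich variance, so that exp(beta) estimates a risk ratio. A minimal numpy sketch on synthetic data (not the study's data, design, or covariates; the single binary predictor and the true risk ratio of 2 are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def modified_poisson(X, y, iters=25):
    """Poisson regression for a binary outcome with a robust
    (sandwich) variance; exp(beta) estimates risk ratios."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):                    # Newton-Raphson
        mu = np.exp(X @ beta)
        beta += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    bread = np.linalg.inv((X.T * mu) @ X)
    meat = (X.T * (y - mu) ** 2) @ X
    robust_var = bread @ meat @ bread         # HC0 sandwich estimator
    return beta, np.sqrt(np.diag(robust_var))

n = 5000
wave1 = rng.integers(0, 2, n)                 # 1 = first-wave infection
p = np.where(wave1 == 1, 0.14, 0.07)          # true risk ratio = 2.0
y = rng.binomial(1, p)
X = np.column_stack([np.ones(n), wave1])
beta, se = modified_poisson(X, y)
rr = float(np.exp(beta[1]))
print(rr)
```

The robust variance is what makes the Poisson model valid for binary outcomes, since the Poisson mean-variance relationship does not hold for 0/1 data.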

7.
Biostatistics ; 24(3): 743-759, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-35579386

ABSTRACT

Understanding associations between injury severity and postacute care recovery for patients with traumatic brain injury (TBI) is crucial to improving care. Estimating these associations requires information on patients' injury, demographics, and healthcare utilization, which are dispersed across multiple data sets. Because of privacy regulations, unique identifiers are not available to link records across these data sets. Record linkage methods identify records that represent the same patient across data sets in the absence of unique identifiers. With a large number of records, these methods may result in many false links. Health providers are a natural grouping scheme for patients, because only records that receive care from the same provider can represent the same patient. In some cases, providers are defined within each data set, but they are not uniquely identified across data sets. We propose a Bayesian record linkage procedure that simultaneously links providers and patients. The procedure improves the accuracy of the estimated links compared to current methods. We use this procedure to merge a trauma registry with Medicare claims to estimate the association between TBI patients' injury severity and postacute care recovery.


Subject(s)
Brain Injuries, Traumatic; Subacute Care; Aged; Humans; United States; Medicare; Bayes Theorem; Registries; Brain Injuries, Traumatic/therapy
8.
Stat Med ; 43(3): 514-533, 2024 02 10.
Article in English | MEDLINE | ID: mdl-38073512

ABSTRACT

Missing data are a common problem in medical research and are often addressed using multiple imputation. Although traditional imputation methods allow for valid statistical inference when data are missing at random (MAR), their implementation is problematic when the presence of missingness depends on unobserved variables, that is, when data are missing not at random (MNAR). Unfortunately, the MNAR situation is rather common in observational studies, registries, and other sources of real-world data. While several imputation methods have been proposed for addressing MNAR data in individual studies, their application and validity in large datasets with a multilevel structure remain unclear. We therefore explored in depth the consequences of MNAR data in hierarchical settings and propose a novel multilevel imputation method for common missingness patterns in clustered datasets. The method is based on the principles of Heckman selection models and adopts a two-stage meta-analysis approach to impute binary and continuous variables that may be outcomes or predictors and that are systematically or sporadically missing. After evaluating the proposed imputation model in simulated scenarios, we illustrate its use in a cross-sectional community survey to estimate the prevalence of malaria parasitemia in children aged 2-10 years in five regions of Uganda.


Subject(s)
Biomedical Research; Child; Humans; Cross-Sectional Studies; Uganda/epidemiology
9.
Stat Med ; 43(19): 3702-3722, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-38890124

ABSTRACT

Policymakers often require information on programs' long-term impacts that is not available when decisions are made. For example, while rigorous evidence from the Oregon Health Insurance Experiment (OHIE) shows that having health insurance influences short-term health and financial measures, the impact on long-term outcomes, such as mortality, will not be known for many years following the program's implementation. We demonstrate how data fusion methods may be used to address the problem of missing final outcomes and predict long-run impacts of interventions before the requisite data are available. We implement this method by concatenating data on an intervention (such as the OHIE) with auxiliary long-term data, imputing the missing long-term outcomes using short-term surrogate outcomes, and approximating uncertainty with replication methods. We use simulations to examine the performance of the methodology and apply the method in a case study. Specifically, we fuse data on the OHIE with data from the National Longitudinal Mortality Study and estimate that being eligible to apply for subsidized health insurance leads to a statistically significant improvement in long-term mortality.


Subject(s)
Insurance, Health; Humans; Oregon; Insurance, Health/statistics & numerical data; Computer Simulation; Mortality; Longitudinal Studies; United States; Models, Statistical
10.
Stat Med ; 43(6): 1238-1255, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38258282

ABSTRACT

In clinical studies, multi-state model (MSM) analysis is often used to describe the sequence of events that patients experience, enabling better understanding of disease progression. A complicating factor in many MSM studies is that the exact event times may not be known. Motivated by a real dataset of patients who received stem cell transplants, we considered the setting in which some event times were exactly observed and some were missing. In our setting, there was little information about the time intervals in which the missing event times occurred and missingness depended on the event type, given the analysis model covariates. These additional challenges limited the usefulness of some missing data methods (maximum likelihood, complete case analysis, and inverse probability weighting). We show that multiple imputation (MI) of event times can perform well in this setting. MI is a flexible method that can be used with any complete data analysis model. Through an extensive simulation study, we show that MI by predictive mean matching (PMM), in which sampling is from a set of observed times without reliance on a specific parametric distribution, has little bias when event times are missing at random, conditional on the observed data. Applying PMM separately for each sub-group of patients with a different pathway through the MSM tends to further reduce bias and improve precision. We recommend MI using PMM methods when performing MSM analysis with Markov models and partially observed event times.
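Predictive mean matching as described above can be sketched briefly: imputed values are drawn from the observed outcomes of the k nearest "donors" in terms of predicted values, so every imputation is a real, previously seen value and no parametric residual distribution is assumed. A minimal sketch with a hypothetical linear predictor (mice-style PMM also perturbs the regression coefficients between imputations, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)

def pmm_impute(pred_obs, pred_mis, y_obs, k=5):
    """For each missing case, find the k observed cases with the
    closest predicted values and draw the imputation from their
    *observed* outcomes."""
    out = np.empty(len(pred_mis))
    for i, p in enumerate(pred_mis):
        donors = np.argsort(np.abs(pred_obs - p))[:k]
        out[i] = y_obs[rng.choice(donors)]
    return out

n = 400
x = rng.uniform(0, 1, n)
y = 3 + 2 * x + rng.normal(0, 0.5, n)
miss = rng.random(n) < 0.3                 # 30% of y missing
# fit the linear predictor on the observed cases only
A = np.column_stack([np.ones((~miss).sum()), x[~miss]])
coef, *_ = np.linalg.lstsq(A, y[~miss], rcond=None)
pred = coef[0] + coef[1] * x
y_imp = pmm_impute(pred[~miss], pred[miss], y[~miss])
print(y_imp[:5])
```

Because donors are sampled rather than averaged, repeating the draw yields distinct completed datasets, which is what makes PMM usable inside multiple imputation.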


Subject(s)
Research Design; Humans; Data Interpretation, Statistical; Computer Simulation; Probability; Bias
11.
Stat Med ; 43(2): 379-394, 2024 01 30.
Article in English | MEDLINE | ID: mdl-37987515

ABSTRACT

Validation studies are often used to obtain more reliable information in settings with error-prone data. Validated data on a subsample of subjects can be used together with error-prone data on all subjects to improve estimation. In practice, more than one round of data validation may be required, and direct application of standard approaches for combining validation data into analyses may lead to inefficient estimators since the information available from intermediate validation steps is only partially considered or even completely ignored. In this paper, we present two novel extensions of multiple imputation and generalized raking estimators that make full use of all available data. We show through simulations that incorporating information from intermediate steps can lead to substantial gains in efficiency. This work is motivated by and illustrated in a study of contraceptive effectiveness among 83 671 women living with HIV, whose data were originally extracted from electronic medical records, of whom 4732 had their charts reviewed, and a subsequent 1210 also had a telephone interview to validate key study variables.


Subject(s)
Data Accuracy; Electronic Health Records; Female; Humans; HIV Infections
12.
Stat Med ; 43(19): 3742-3758, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-38897921

ABSTRACT

Biomarkers are often measured in bulk to diagnose patients, monitor patient conditions, and research novel drug pathways. These measurements often suffer from detection limits that result in missing and untrustworthy values. Frequently, missing biomarkers are imputed so that downstream analysis can be conducted with modern statistical methods that cannot normally handle data subject to informative censoring. This work develops an empirical Bayes g-modeling method for imputing and denoising biomarker measurements. We establish superior estimation properties compared to popular methods in simulations and with real data, providing useful biomarker estimates for downstream analysis.


Subject(s)
Bayes Theorem; Biomarkers; Computer Simulation; Humans; Biomarkers/analysis; Models, Statistical; Statistics, Nonparametric; Data Interpretation, Statistical
13.
Stat Med ; 43(6): 1170-1193, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38386367

ABSTRACT

This research introduces a multivariate τ-inflated beta regression (τ-IBR) modeling approach for the analysis of censored recurrent event data that is particularly useful when there is a mixture of (a) individuals who are generally less susceptible to recurrent events and (b) heterogeneity in the duration of event-free periods among those who experience events. The modeling approach is applied to a restructured version of the recurrent event data consisting of censored longitudinal times-to-first-event in τ-length follow-up windows that potentially overlap. Multiple imputation (MI) and expectation-solution (ES) approaches appropriate for censored data are developed as part of the model-fitting process. The τ-IBR model provides a suite of useful analysis outputs, including parameter estimates that help interpret the (a) and (b) mixture of event times in the data, estimates of the mean τ-restricted event-free duration in a τ-length follow-up window based on a patient's covariate profile, and heat maps of the raw τ-restricted event-free durations observed in the data, with censored observations augmented via averages across MI datasets. Simulations indicate good statistical performance of the proposed τ-IBR approach to modeling censored recurrent event data. An example is given based on the Azithromycin for Prevention of COPD Exacerbations Trial.


Subject(s)
Azithromycin; Pulmonary Disease, Chronic Obstructive; Humans
14.
BMC Med Res Methodol ; 24(1): 32, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341552

ABSTRACT

BACKGROUND: When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution and generate a covariate-adjusted estimate of the marginal treatment effect. METHODS: The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. RESULTS: We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model, and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to those of the standard approach to model-based standardization.
CONCLUSION: We demonstrate that multiple imputation can be used to marginalize over a target covariate distribution, providing appropriate inference with a correctly specified parametric outcome model and offering statistical performance comparable to that of the standard approach to model-based standardization.
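The marginalization step at the core of this abstract (average the outcome model's predictions over a target covariate distribution, then contrast the averages) can be isolated in a few lines. The logistic coefficients below are hypothetical stand-ins for a fitted model; the multiple-imputation and Bayesian machinery of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

def marginal_log_or(pred, x_target):
    """Model-based standardization: average predicted outcome
    probabilities under treatment and control over the *target*
    covariate distribution, then contrast the averages."""
    p1 = expit(pred(1, x_target)).mean()
    p0 = expit(pred(0, x_target)).mean()
    return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

# hypothetical fitted logistic outcome model: logit P(Y=1 | a, x)
pred = lambda a, x: -1.0 + 0.8 * a + 1.2 * x
x_target = rng.normal(0.5, 1.0, 100_000)   # target covariate distribution
mor = float(marginal_log_or(pred, x_target))
print(mor)
```

With these numbers the marginal log odds ratio comes out below the conditional log odds ratio of 0.8, a reminder that the odds ratio is non-collapsible: marginal and conditional estimands differ even without confounding.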


Subject(s)
Models, Statistical; Humans; Bayes Theorem; Linear Models; Computer Simulation; Logistic Models; Reference Standards
15.
Multivariate Behav Res ; : 1-29, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997153

ABSTRACT

Missingness in intensive longitudinal data triggered by latent factors constitutes one type of nonignorable missingness that can generate simultaneous missingness across multiple items on each measurement occasion. To address this issue, we propose a multiple imputation (MI) strategy called MI-FS, which incorporates factor scores, lag/lead variables, and missing data indicators into the imputation model. In the context of process factor analysis (PFA), we conducted a Monte Carlo simulation study to compare the performance of MI-FS to listwise deletion (LD), MI with manifest variables (MI-MV, which implements MI on both dependent variables and covariates), and partial MI with MVs (PMI-MV, which implements MI on covariates and handles missing dependent variables via full-information maximum likelihood) under different conditions. Across conditions, we found that the MI-based methods overall outperformed LD; the MI-FS approach yielded lower root mean square errors (RMSEs) and higher coverage rates for auto-regression (AR) parameters compared to MI-MV; and the PMI-MV and MI-MV approaches yielded higher coverage rates than MI-FS for most parameters other than the AR parameters. These approaches were also compared using an empirical example investigating the relationships between negative affect and perceived stress over time. Recommendations on when and how to incorporate factor scores into MI processes are discussed.

16.
Multivariate Behav Res ; 59(3): 411-433, 2024.
Article in English | MEDLINE | ID: mdl-38379305

ABSTRACT

Propensity score (PS) analyses are increasingly popular in the behavioral sciences. Two issues often add complexity to PS analyses: missing data in the observed covariates and a clustered data structure. Previous research has examined methods for conducting PS analyses that address either issue alone. In practice, the two issues often co-occur, but the performance of methods for PS analyses in the presence of both issues has not been evaluated previously. In this study, we consider PS weighting analysis when data are clustered and observed covariates have missing values. A simulation study is conducted to evaluate the performance of different missing data handling methods (complete-case analysis, single-level imputation, or multilevel imputation) combined with different multilevel PS weighting methods (fixed- or random-effects PS models, inverse-propensity weighting or clustered weighting, and weighted single-level or multilevel outcome models). The results suggest that bias in the average treatment effect estimate can be reduced by better accounting for clustering in both the missing data handling stage (such as with multilevel imputation) and the PS analysis stage (such as with the fixed-effects PS model, clustered weighting, and a weighted multilevel outcome model). A real-data example is provided for illustration.


Subject(s)
Computer Simulation; Propensity Score; Humans; Cluster Analysis; Data Interpretation, Statistical; Computer Simulation/statistics & numerical data; Models, Statistical; Multilevel Analysis/methods; Bias
17.
Pharm Stat ; 2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39099192

ABSTRACT

The estimands framework outlined in ICH E9 (R1) describes the components needed to precisely define the effects to be estimated in clinical trials, including how post-baseline 'intercurrent' events (IEs) are to be handled. In late-stage clinical trials, it is common to handle IEs like 'treatment discontinuation' using the treatment policy strategy and to target the treatment effect on outcomes regardless of treatment discontinuation. For continuous repeated measures, this type of effect is often estimated using all observed data before and after discontinuation, with either a mixed model for repeated measures (MMRM) or multiple imputation (MI) used to handle any missing data. In their basic form, both estimation methods ignore treatment discontinuation in the analysis and may therefore be biased if patient outcomes after treatment discontinuation differ from those of patients still assigned to treatment, and if missing data are more common for patients who have discontinued treatment. We therefore propose and evaluate a set of MI models that can accommodate differences between outcomes before and after treatment discontinuation. The models are evaluated in the context of planning a Phase 3 trial for a respiratory disease. We show that analyses ignoring treatment discontinuation can introduce substantial bias and can sometimes underestimate variability. We also show that some of the proposed MI models can successfully correct the bias but inevitably lead to increases in variance. We conclude that some of the proposed MI models are preferable to the traditional analysis that ignores treatment discontinuation, but the precise choice of MI model will likely depend on the trial design, the disease of interest, and the amount of observed and missing data following treatment discontinuation.

18.
Pharm Stat ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013479

ABSTRACT

The ICH E9(R1) Addendum (International Council for Harmonization 2019) suggests treatment-policy as one of several strategies for addressing intercurrent events such as treatment withdrawal when defining an estimand. This strategy requires the monitoring of patients and collection of primary outcome data following termination of randomised treatment. However, when patients withdraw from a study early before completion this creates true missing data complicating the analysis. One possible way forward uses multiple imputation to replace the missing data based on a model for outcome on- and off-treatment prior to study withdrawal, often referred to as retrieved dropout multiple imputation. This article introduces a novel approach to parameterising this imputation model so that those parameters which may be difficult to estimate have mildly informative Bayesian priors applied during the imputation stage. A core reference-based model is combined with a retrieved dropout compliance model, using both on- and off-treatment data, to form an extended model for the purposes of imputation. This alleviates the problem of specifying a complex set of analysis rules to accommodate situations where parameters which influence the estimated value are not estimable, or are poorly estimated leading to unrealistically large standard errors in the resulting analysis. We refer to this new approach as retrieved dropout reference-base centred multiple imputation.

19.
Pharm Stat ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38581166

ABSTRACT

The combination of propensity score analysis and multiple imputation has become prominent in epidemiological research in recent years. However, studies evaluating balance under this combination are limited. In this paper, we propose a new method for assessing balance in propensity score analysis following multiple imputation. A simulation study was conducted to evaluate the performance of the balance assessment methods (Leyrat's, Leite's, and the new combined method). Simulated scenarios varied with regard to the presence of missing data in the control group or in both the treatment and control groups, and whether the imputation model included the outcome. Leyrat's method was more biased in all the studied scenarios. Leite's method and the combined method yielded balanced results with a lower mean absolute difference, regardless of whether the outcome was included in the imputation model. Leyrat's method had a higher false positive ratio, and Leite's method and the combined method had higher specificity and accuracy, especially when the outcome was not included in the imputation model. According to the simulation results, Leyrat's method and Leite's method most of the time contradict each other when appraising balance. This discrepancy can be resolved using the new combined method.

20.
Pharm Stat ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631678

ABSTRACT

Accurate frequentist performance of a method is desirable in confirmatory clinical trials, but is not sufficient on its own to justify the use of a missing data method. Reference-based conditional mean imputation, with variance estimation justified solely by its frequentist performance, has the surprising and undesirable property that the estimated variance becomes smaller as the number of missing observations grows; as explained for the jump-to-reference approach, it effectively forces the true treatment effect to be exactly zero for patients with missing data.
