Results 1 - 20 of 31
1.
Stat Med ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39291682

ABSTRACT

We consider evaluating biomarkers for treatment selection under assay modification. Survival outcome, treatment, and Affymetrix gene expression data were obtained from cancer patients. Consider migrating a gene expression biomarker to the Illumina platform. A recent novel approach allows a quick evaluation of the migrated biomarker with only a reproducibility study needed to compare the two platforms, achieved by treating the original biomarker as an error-contaminated observation of the migrated biomarker. However, its assumptions of a classical measurement error model and a linear predictor for the outcome may not hold. Ignoring such model deviations may lead to sub-optimal treatment selection or failure to identify effective biomarkers. To overcome such limitations, we adopt a nonparametric logistic regression to model the relationship between the event rate and the biomarker, and the deduced marker-based treatment selection is optimal. We further assume a nonparametric relationship between the migrated and original biomarkers and show that the error-contaminated biomarker leads to sub-optimal treatment selection compared to the error-free biomarker. Estimation is carried out via B-spline approximation. The approach is assessed by simulation studies and demonstrated through application to lung cancer data.

2.
Biostatistics ; 23(1): 173-188, 2022 01 13.
Article in English | MEDLINE | ID: mdl-32424421

ABSTRACT

We consider evaluating new or more accurately measured predictive biomarkers for treatment selection based on a previous clinical trial involving standard biomarkers. Instead of rerunning the clinical trial with the new biomarkers, we propose a more efficient approach which requires only either conducting a reproducibility study in which the new biomarkers and standard biomarkers are both measured on a set of patient samples, or adopting replicated measures of the error-contaminated standard biomarkers in the original study. This approach is easier to conduct and much less expensive than studies that require new samples from patients randomized to the intervention. In addition, it makes it possible to estimate clinical performance quickly, since there will be no requirement to wait for events to occur as would be the case with prospective validation. The treatment selection is assessed via a working model, but the proposed estimator of the mean restricted lifetime is valid even if the working model is misspecified. The proposed approach is assessed through simulation studies and applied to a cancer study.


Subject(s)
Research Design , Biomarkers , Computer Simulation , Humans , Reproducibility of Results
3.
Biometrics ; 79(4): 3929-3940, 2023 12.
Article in English | MEDLINE | ID: mdl-37458679

ABSTRACT

In this paper, we analyze length-biased and partly interval-censored data, whose challenges primarily come from biased sampling and the interference induced by interval censoring. Unlike existing methods that focus on low-dimensional data and assume the covariates to be precisely measured, researchers may encounter high-dimensional data subject to measurement error, which is ubiquitous in applications and makes estimation unreliable. To address those challenges, we explore a valid inference method for handling high-dimensional length-biased and interval-censored survival data with measurement error in covariates under the accelerated failure time model. We primarily employ the SIMEX method to correct for measurement error effects and propose a boosting procedure for variable selection and estimation. The proposed method is able to handle the case where the dimension of covariates is larger than the sample size and enjoys the appealing feature that the distributions of the covariates are left unspecified.


Subject(s)
Sample Size , Survival Analysis , Computer Simulation
4.
BMC Emerg Med ; 23(1): 52, 2023 05 24.
Article in English | MEDLINE | ID: mdl-37226121

ABSTRACT

INTRODUCTION: A simulation exercise (SimEx) simulates an emergency to which a planned or described response is applied. The purpose of these exercises is to validate and improve plans, procedures, and systems for responding to all hazards. The purpose of this study was to review disaster preparation exercises conducted by various national, non-government, and academic institutions. METHODOLOGY: Several databases, including PubMed (Medline), Cumulative Index to Nursing and Allied Health Literature (CINAHL), BioMed Central, and Google Scholar, were used to review the literature. Information was retrieved using Medical Subject Headings (MeSH) and documents were selected according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). To assess the quality of the selected articles, the Newcastle-Ottawa Scale (NOS) technique was utilized. RESULTS: A total of 29 papers were selected for final review based on PRISMA guidelines and the NOS quality assessment. Studies have shown that the forms of SimEx commonly used in disaster management, including tabletop exercises, functional exercises, and full-scale exercises, have their benefits and limitations. There is no doubt that SimEx is an excellent tool for improving disaster planning and response. It is still necessary to give SimEx programs a more rigorous evaluation and to standardize the processes more thoroughly. CONCLUSIONS: Drills and training can be improved for disaster management, which will enable medical professionals to face the challenges of disaster management in the 21st century.


Subject(s)
Disaster Planning , Disasters , Humans , Databases, Factual , Emotions , Schools
5.
Stat Med ; 41(1): 163-179, 2022 01 15.
Article in English | MEDLINE | ID: mdl-34655089

ABSTRACT

Control risk regression is a widely used approach for meta-analysis of the effectiveness of a treatment, relating the measure of risk with which the outcome occurs in the treated group to that in the control group. The severity of illness is a source of between-study heterogeneity that can be difficult to measure. It can be approximated by the rate of events in the control group. Since the estimate is a surrogate for the underlying risk, it is prone to measurement error. Correction methods are necessary to provide reliable inference. This article illustrates the extent of measurement error effects under different scenarios, including departures from the classical normality assumption for the control risk distribution. The performance of different measurement error corrections is examined. Attention is paid to likelihood-based structural methods assuming a distribution for the control risk measure and to functional methods avoiding the assumption, namely, a simulation-based method and two score function methods. Advantages and limits of the approaches are evaluated through simulation. In case of large heterogeneity, structural approaches are preferable to score methods, while score methods perform better for small heterogeneity and small sample size. The simulation-based approach behaves satisfactorily in every examined scenario, with no convergence issues. The methods are applied to a meta-analysis about the association between diabetes and risk of Parkinson disease. The study intends to make researchers aware of the measurement error problem occurring in control risk regression and guide them toward appropriate correction techniques to prevent fallacious conclusions.


Subject(s)
Research Design , Computer Simulation , Humans , Likelihood Functions , Meta-Analysis as Topic
6.
Pharm Stat ; 21(3): 584-598, 2022 05.
Article in English | MEDLINE | ID: mdl-34935280

ABSTRACT

New technologies for novel biomarkers have transformed the field of precision medicine. However, in applications such as liquid biopsy for early tumor detection, the misclassification rates of next generation sequencing and other technologies have become an unavoidable feature of biomarker development. Because initial experiments are usually confined to specific technology choices and application settings, a statistical method that can project the performance metrics of other scenarios with different misclassification rates would be very helpful for planning further biomarker development and future trials. In this article, we describe an approach based on an extended version of simulation extrapolation (SIMEX) to project the performance of biomarkers measured with varying misclassification rates due to different technological or application settings when experimental results are only available from one specific setting. Through simulation studies for logistic regression and proportional hazards models, we show that our proposed method can be used to project the biomarker performance with good precision when switching from one technology or application setting to another. Similar to the original SIMEX model, the proposed method can be implemented with existing software in a straightforward manner. A data analysis example is also presented using a lung cancer data set and performance metrics for two gene-panel-based biomarkers. Results demonstrate that it is feasible to infer the potential implications of using a range of technologies or application scenarios for biomarkers with limited human trial data.


Subject(s)
Precision Medicine , Research Design , Biomarkers , Computer Simulation , Humans , Proportional Hazards Models
7.
Stat Med ; 40(28): 6309-6320, 2021 12 10.
Article in English | MEDLINE | ID: mdl-34474502

ABSTRACT

Current status data arise when each subject is observed only once and the failure time of interest is known only to be either smaller or larger than the observation time rather than observed exactly. In this situation, imperfect diagnostic tests can misclassify the failure status, which may result in severely biased estimation if the misclassification is not taken into account. In this article, we discuss regression analysis of such misclassified current status data arising from the additive hazards model, and a simulation-extrapolation (SIMEX) approach is developed for the estimation. Furthermore, the asymptotic properties of the proposed estimators are established, and a simulation study is conducted to assess the empirical performance of the method, which indicates that the proposed procedure performs well. In particular, it can correct the estimation bias given by the naive method that ignores the existence of misclassification. An application to a medical study on gonorrhea is also provided.


Subject(s)
Computer Simulation , Proportional Hazards Models , Bias , Humans , Regression Analysis
8.
Stat Med ; 39(16): 2197-2231, 2020 07 20.
Article in English | MEDLINE | ID: mdl-32246539

ABSTRACT

Measurement error and misclassification of variables frequently occur in epidemiology and involve variables important to public health. Their presence can impact strongly on results of statistical analyses involving such variables. However, investigators commonly fail to pay attention to biases resulting from such mismeasurement. We provide, in two parts, an overview of the types of error that occur, their impacts on analytic results, and statistical methods to mitigate the biases that they cause. In this first part, we review different types of measurement error and misclassification, emphasizing the classical, linear, and Berkson models and the concepts of nondifferential and differential error. We describe the impacts of these types of error in covariates and in outcome variables on various analyses, including estimation and testing in regression models and estimating distributions. We outline types of ancillary studies required to provide information about such errors and discuss the implications of covariate measurement error for study design. Methods for ascertaining sample size requirements are outlined, both for ancillary studies designed to provide information about measurement error and for main studies where the exposure of interest is measured with error. We describe two of the simpler methods, regression calibration and simulation extrapolation (SIMEX), that adjust for bias in regression coefficients caused by measurement error in continuous covariates, and illustrate their use through examples drawn from the Observing Protein and Energy (OPEN) dietary validation study. Finally, we review software available for implementing these methods. The second part of the article deals with more advanced topics.
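The SIMEX idea described in this review can be sketched in a few lines: refit the naive estimator on data with successively inflated measurement error, then extrapolate the trend back to the no-error case. Below is a minimal self-contained Python illustration for a linear slope attenuated by classical additive error; the synthetic data and the quadratic extrapolant are illustrative assumptions, not the OPEN study's setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate y = b0 + b1*x with classical additive measurement error:
# we observe w = x + u, u ~ N(0, sigma_u^2), instead of the true x.
n, b0, b1, sigma_u = 2000, 1.0, 2.0, 0.8
x = rng.normal(0.0, 1.0, n)
y = b0 + b1 * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, sigma_u, n)

def ols_slope(pred, resp):
    # Naive least-squares slope of resp on pred.
    return np.cov(pred, resp)[0, 1] / np.var(pred, ddof=1)

# SIMEX simulation step: add extra error with variance lambda*sigma_u^2
# and average the naive slope over B replicates at each lambda.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
means = []
for lam in lambdas:
    est = [ols_slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
           for _ in range(B)]
    means.append(np.mean(est))

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1,
# the hypothetical error-free case.
coef = np.polyfit(lambdas, means, 2)
simex_slope = np.polyval(coef, -1.0)
naive_slope = ols_slope(w, y)
```

Here the naive slope is attenuated toward zero by roughly the factor var(x)/(var(x) + sigma_u^2), and the extrapolated SIMEX estimate recovers much of the lost magnitude; a quadratic extrapolant typically still undercorrects slightly, which is a known trade-off of the method.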


Subject(s)
Models, Statistical , Research Design , Bias , Calibration , Causality , Computer Simulation , Humans
9.
Biom J ; 62(5): 1139-1163, 2020 09.
Article in English | MEDLINE | ID: mdl-32003495

ABSTRACT

The Cox regression model is a popular model for analyzing the relationship between a covariate vector and a survival endpoint. The standard Cox model assumes a constant covariate effect across the entire covariate domain. However, in many epidemiological and other applications, the covariate of main interest is subject to a threshold effect: a change in the slope at a certain point within the covariate domain. Often, the covariate of interest is subject to some degree of measurement error. In this paper, we study measurement error correction in the case where the threshold is known. Several bias correction methods are examined: two versions of regression calibration (RC1 and RC2, the latter of which is new), two methods based on the induced relative risk under a rare event assumption (RR1 and RR2, the latter of which is new), a maximum pseudo-partial likelihood estimator (MPPLE), and simulation-extrapolation (SIMEX). We develop the theory, present simulations comparing the methods, and illustrate their use on data concerning the relationship between chronic air pollution exposure to particulate matter PM10 and fatal myocardial infarction (Nurses Health Study (NHS)), and on data concerning the effect of a subject's long-term underlying systolic blood pressure level on the risk of cardiovascular disease death (Framingham Heart Study (FHS)). The simulations indicate that the best methods are RR2 and MPPLE.


Subject(s)
Proportional Hazards Models , Survival Analysis , Bias , Calibration , Cardiovascular Diseases/mortality , Computer Simulation , Humans , Longitudinal Studies , Myocardial Infarction/mortality
10.
Biometrics ; 75(4): 1133-1144, 2019 12.
Article in English | MEDLINE | ID: mdl-31260084

ABSTRACT

Errors-in-variables models in high-dimensional settings pose two challenges in application. First, the number of observed covariates is larger than the sample size, while only a small number of covariates are true predictors under an assumption of model sparsity. Second, the presence of measurement error can result in severely biased parameter estimates, and also affects the ability of penalized methods such as the lasso to recover the true sparsity pattern. A new estimation procedure called SIMulation-SELection-EXtrapolation (SIMSELEX) is proposed. This procedure makes double use of lasso methodology. First, the lasso is used to estimate sparse solutions in the simulation step, after which a group lasso is implemented to do variable selection. The SIMSELEX estimator is shown to perform well in variable selection, and has significantly lower estimation error than naive estimators that ignore measurement error. SIMSELEX can be applied in a variety of errors-in-variables settings, including linear models, generalized linear models, and Cox survival models. It is furthermore shown in the Supporting Information how SIMSELEX can be applied to spline-based regression models. A simulation study is conducted to compare the SIMSELEX estimators to existing methods in the linear and logistic model settings, and to evaluate performance compared to naive methods in the Cox and spline models. Finally, the method is used to analyze a microarray dataset that contains gene expression measurements of favorable histology Wilms tumors.


Subject(s)
Models, Statistical , Scientific Experimental Error , Gene Expression Profiling , Humans , Linear Models , Logistic Models , Methods , Microarray Analysis/statistics & numerical data , Proportional Hazards Models , Sample Size , Wilms Tumor/genetics
11.
Biostatistics ; 18(2): 325-337, 2017 04 01.
Article in English | MEDLINE | ID: mdl-27993763

ABSTRACT

One of the main limitations of causal inference methods is that they rely on the assumption that all variables are measured without error. A popular approach for handling measurement error is simulation-extrapolation (SIMEX). However, its use for estimating causal effects has been examined only in the context of an additive, non-differential, and homoscedastic classical measurement error structure. In this article we extend the SIMEX methodology, in the context of a mean reverting measurement error structure, to a doubly robust estimator of the average treatment effect when a single covariate is measured with error but the outcome and treatment indicator are not. Throughout this article we assume that an independent validation sample is available. Simulation studies suggest that our method performs better than a naive approach that simply uses the covariate measured with error.


Subject(s)
Computer Simulation , Data Interpretation, Statistical , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/standards , Adolescent , Humans , Propensity Score
12.
Magn Reson Med ; 80(4): 1666-1675, 2018 10.
Article in English | MEDLINE | ID: mdl-29411435

ABSTRACT

PURPOSE: The bias and variance of high angular resolution diffusion imaging metrics have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques may be used to estimate them. METHODS: The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. RESULTS: The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. CONCLUSION: Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics.
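The bootstrap variance estimation used above can be sketched generically: resample a single scan's observations with replacement, recompute the statistic on each replicate, and take the SD of the replicates. The Gaussian stand-in data and the mean statistic below are illustrative assumptions, not actual generalized fractional anisotropy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for voxel-wise measurements from a single scan
# (illustrative Gaussian data, not a real diffusion metric).
data = rng.normal(1.0, 0.3, 500)

# Nonparametric bootstrap: resample with replacement, recompute the
# statistic, and use the spread of the replicates as its SD estimate.
B = 2000
boot = np.array([np.mean(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
sd_boot = boot.std(ddof=1)

# For the mean, the analytic standard error sigma/sqrt(n) is a benchmark
# the bootstrap estimate should approach.
sd_analytic = data.std(ddof=1) / np.sqrt(data.size)
```

The appeal in the imaging setting is that the same recipe applies to statistics with no closed-form standard error, which is exactly the situation for most Q-ball metrics.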


Subject(s)
Brain/diagnostic imaging , Diffusion Tensor Imaging/methods , Image Processing, Computer-Assisted/methods , Bias , Cluster Analysis , Humans , Models, Statistical , Reproducibility of Results
13.
Clin Proteomics ; 15: 23, 2018.
Article in English | MEDLINE | ID: mdl-30065622

ABSTRACT

BACKGROUND: Lower urinary tract symptoms (LUTS) and prostate specific antigen-based parameters seem to have only a limited utility for the differential diagnosis of prostate cancer (PCa). MALDI-TOF/MS peptidomic profiling could be a useful diagnostic tool for biomarker discovery, although reproducibility issues have limited its applicability until now. The current study aimed to evaluate a new MALDI-TOF/MS candidate biomarker. METHODS: Within- and between-subject variability of MALDI-TOF/MS-based peptidomic urine and serum analyses were evaluated in 20 and 15 healthy donors, respectively. Normalizations and approaches for accounting for below-limit-of-detection (LOD) values were utilized to enhance reproducibility, while Monte Carlo experiments were performed to verify whether measurement error can be handled in the presence of LOD data. Post-prostatic massage urine and serum samples from 148 LUTS patients were analysed using MALDI-TOF/MS. Regression-calibration and simulation and extrapolation methods were used to derive the unbiased association between peptidomic features and PCa. RESULTS: Although the median normalized peptidomic variability was 24.9%, the within- and between-subject variability showed that median normalization, LOD adjustment, and log2 data transformation were the best combination in terms of reliability; under measurement error conditions, the intraclass correlation coefficient was a reliable estimate when LOD/2 was substituted for below-LOD values. In the patients studied, 43 peptides were shared by the urine and serum, and several features were found to be associated with PCa. Only a few serum features, however, showed statistical significance after the multiple testing procedures were completed. Two serum fragmentation patterns corresponded to the complement C4-A. CONCLUSIONS: MALDI-TOF/MS serum peptidome profiling was more efficacious than post-prostatic massage urine analysis in discriminating PCa.

14.
Stat Med ; 37(8): 1276-1289, 2018 04 15.
Article in English | MEDLINE | ID: mdl-29193180

ABSTRACT

For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact of, or addressing, errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.


Subject(s)
Bias , Data Interpretation, Statistical , Nonlinear Dynamics , Proportional Hazards Models , Regression Analysis , Algorithms , Computer Simulation , Disease Progression , Humans , Linear Models , Time Factors
15.
Biostatistics ; 17(3): 422-36, 2016 07.
Article in English | MEDLINE | ID: mdl-26795191

ABSTRACT

In this paper, the influence of measurement errors in exposure doses in a regression model with binary response is studied. Recently, it has been recognized that uncertainty in exposure dose is characterized by errors of two types: classical additive errors and Berkson multiplicative errors. The combination of classical additive and Berkson multiplicative errors has not previously been considered in the literature. In a simulation study based on data from radio-epidemiological research of thyroid cancer in Ukraine caused by the Chornobyl accident, it is shown that ignoring measurement errors in doses leads to overestimation of background prevalence and underestimation of excess relative risk. In this work, several methods to reduce these biases are proposed: a new regression calibration, an additive version of efficient SIMEX, and novel corrected score methods.


Subject(s)
Chernobyl Nuclear Accident , Dose-Response Relationship, Radiation , Models, Theoretical , Radiation Exposure/statistics & numerical data , Risk Assessment/methods , Computer Simulation , Humans
16.
BMC Med Res Methodol ; 17(1): 6, 2017 01 11.
Article in English | MEDLINE | ID: mdl-28077079

ABSTRACT

BACKGROUND: Bivariate random-effects models represent a widely accepted and recommended approach for meta-analysis of test accuracy studies. Standard likelihood methods routinely used for inference are prone to several drawbacks. Small sample size can give rise to unreliable inferential conclusions and convergence issues make the approach unappealing. This paper suggests a different methodology to address such difficulties. METHODS: A SIMEX methodology is proposed. The method is a simulation-based technique originally developed as a correction strategy within the measurement error literature. It suits the meta-analysis framework as the diagnostic accuracy measures provided by each study are prone to measurement error. SIMEX can be straightforwardly adapted to cover different measurement error structures and to deal with covariates. The effortless implementation with standard software is an interesting feature of the method. RESULTS: Extensive simulation studies highlight the improvement provided by SIMEX over the likelihood approach in terms of empirical coverage probabilities of confidence intervals under different scenarios, independently of the sample size and the values of the correlation between sensitivity and specificity. A remarkable improvement is obtained in case of deviations from the normality assumption for the random-effects distribution. From a computational point of view, the application of SIMEX is shown to be neither involved nor subject to the convergence issues affecting likelihood-based alternatives. Application of the method to a diagnostic review of the performance of transesophageal echocardiography for assessing ascending aorta atherosclerosis enables overcoming limitations of the likelihood procedure. CONCLUSIONS: The SIMEX methodology represents an interesting alternative to likelihood-based procedures for inference in meta-analysis of diagnostic accuracy studies. The approach can provide more accurate inferential conclusions, while avoiding convergence failure and numerical instabilities. The application of the method in the R programming language is possible through the code which is made available and illustrated using a real data example.


Subject(s)
Algorithms , Diagnostic Tests, Routine/methods , Meta-Analysis as Topic , Models, Theoretical , Computer Simulation , Echocardiography, Transesophageal/methods , Humans , Likelihood Functions , Reproducibility of Results , Sensitivity and Specificity
17.
BMC Health Serv Res ; 17(1): 578, 2017 Aug 22.
Article in English | MEDLINE | ID: mdl-28830422

ABSTRACT

BACKGROUND: Results of associations between process and mortality indicators, both used for external assessment of hospital care quality and public reporting, differ strongly across studies. However, most of those studies were conducted in North America or the United Kingdom. Providing new evidence based on French data could fuel the international debate on quality of care indicators and help inform French policy-makers. The objective of our study was to explore whether optimal care delivery in French hospitals as assessed by their Hospital Process Indicators (HPIs) is associated with low Hospital Standardized Mortality Ratios (HSMRs). METHODS: The French National Authority for Health (HAS) routinely collects, for each hospital located in France, a set of mandatory HPIs. Five HPIs were selected among the process indicators collected by the HAS in 2009. They were measured using random samples of 60 to 80 medical records from inpatients admitted between January 1st, 2009 and December 31, 2009 in accordance with selection criteria. HSMRs were estimated at 30, 60 and 90 days post-admission (dpa) using administrative health data extracted from the national health insurance information system (SNIIR-AM), which covers 77% of the French population. Associations between HPIs and HSMRs were assessed by Poisson regression models corrected for measurement errors with a simulation-extrapolation (SIMEX) method. RESULTS: Most associations studied were not statistically significant. Only two process indicators were found associated with HSMRs. Completeness and quality of anesthetic records was negatively associated with 30 dpa HSMR (0.72 [0.52-0.99]). Early detection of nutritional disorders was negatively associated with all HSMRs: 30 dpa HSMR (0.71 [0.54-0.95]), 60 dpa HSMR (0.51 [0.39-0.67]) and 90 dpa HSMR (0.52 [0.40-0.68]). CONCLUSION: In the absence of a gold standard for measuring quality of care, the limited number of associations found suggests that in-depth improvements are needed to better determine associations between process and mortality indicators. A judicious use of both process and outcome indicators is necessary to capture the complexity of hospital quality of care.


Subject(s)
Hospital Mortality , Quality Indicators, Health Care , France/epidemiology , Hospitalization , Hospitals/standards , Hospitals/statistics & numerical data , Humans , Quality Indicators, Health Care/standards , Quality of Health Care
18.
Am J Epidemiol ; 184(3): 249-58, 2016 08 01.
Article in English | MEDLINE | ID: mdl-27416840

ABSTRACT

Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.


Subject(s)
Bias , Computer Simulation , Epidemiologic Measurements , Epidemiologic Research Design , Models, Statistical , Canada/epidemiology , Coinfection/epidemiology , Confounding Factors, Epidemiologic , Data Interpretation, Statistical , Disease Progression , Female , HIV Infections/complications , HIV Infections/drug therapy , HIV Infections/epidemiology , Hepatitis C/complications , Hepatitis C/drug therapy , Hepatitis C/epidemiology , Humans , Liver Diseases/epidemiology , Liver Diseases/etiology , Logistic Models , Longitudinal Studies , Male , Outcome Assessment, Health Care/statistics & numerical data , Proportional Hazards Models , Time Factors
19.
Stat Med ; 35(22): 3987-4007, 2016 09 30.
Article in English | MEDLINE | ID: mdl-27264206

ABSTRACT

Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
DNA Methylation , Regression Analysis , Sulfites , Humans , Linear Models , Models, Statistical
20.
Stat Med ; 33(12): 2062-76, 2014 May 30.
Article in English | MEDLINE | ID: mdl-24339017

ABSTRACT

This paper investigates the use of SIMEX, a simulation-based measurement error correction technique, for meta-analysis of studies involving the baseline risk of subjects in the control group as explanatory variable. The approach accounts for the measurement error affecting the information about either the outcome in the treatment group or the baseline risk available from each study, while requiring no assumption about the distribution of the true unobserved baseline risk. This robustness property, together with the feasibility of computation, makes SIMEX very attractive. The approach is suggested as an alternative to the usual likelihood analysis, which can provide misleading inferential results when the commonly assumed normal distribution for the baseline risk is violated. The performance of SIMEX is compared to the likelihood method and to the moment-based correction through an extensive simulation study and the analysis of two datasets from the medical literature.


Subject(s)
Bias , Data Interpretation, Statistical , Meta-Analysis as Topic , Algorithms , Clinical Trials as Topic , Likelihood Functions , Risk , Treatment Outcome