1.
Pharm Stat ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631678

ABSTRACT

Accurate frequentist performance of a method is desirable in confirmatory clinical trials, but is not sufficient on its own to justify the use of a missing data method. Reference-based conditional mean imputation, with variance estimation justified solely by its frequentist performance, has the surprising and undesirable property that the estimated variance becomes smaller the greater the number of missing observations; as we explain, under jump-to-reference it effectively forces the true treatment effect to be exactly zero for patients with missing data.
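The variance property described above can be illustrated with a toy calculation (an illustrative assumption of a simple mean estimator with unit-variance outcomes, not the paper's derivation): once missing outcomes are replaced by a deterministic conditional mean, only the observed values contribute variability.

```python
def naive_variance_of_mean(n, n_obs, sigma2=1.0):
    """Sampling variance of the overall mean when n - n_obs missing
    outcomes are replaced by a deterministic imputed mean: only the
    n_obs observed values contribute variability, so the naive variance
    is (n_obs * sigma2) / n**2 and shrinks as missingness grows."""
    return n_obs * sigma2 / n ** 2

# More missing data -> smaller estimated variance (the undesirable property).
variances = [naive_variance_of_mean(100, n_obs) for n_obs in (100, 70, 40)]
```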

2.
Biom J ; 66(1): e2300085, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37823668

ABSTRACT

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.


Subject(s)
Research , Data Interpretation, Statistical , Computer Simulation
3.
Stat Med ; 42(7): 1082-1095, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36695043

ABSTRACT

One of the main challenges when using observational data for causal inference is the presence of confounding. A classic approach to account for confounding is the use of propensity score techniques that provide consistent estimators of the causal treatment effect under four common identifiability assumptions for causal effects, including that of no unmeasured confounding. Propensity score matching is a very popular approach which, in its simplest form, involves matching each treated patient to an untreated patient with a similar estimated propensity score, that is, probability of receiving the treatment. The treatment effect can then be estimated by comparing treated and untreated patients within the matched dataset. When missing data arise, a popular approach is to apply multiple imputation to handle the missingness. The combination of propensity score matching and multiple imputation is increasingly applied in practice. However, in this article we demonstrate that combining multiple imputation and propensity score matching can lead to over-coverage of the confidence interval for the treatment effect estimate. We explore the cause of this over-coverage and evaluate, in this context, the performance of a previously proposed correction to Rubin's rules for multiple imputation, finding that this correction removes the over-coverage.
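For context, the uncorrected Rubin's rules that such a correction modifies can be sketched as follows (a generic sketch of the standard pooling rules; the specific correction is not reproduced here):

```python
def rubin_pool(estimates, variances):
    """Pool M multiply-imputed estimates with (uncorrected) Rubin's rules:
    total variance = within-imputation + (1 + 1/M) * between-imputation."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled point estimate
    ubar = sum(variances) / m                              # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    return qbar, ubar + (1 + 1 / m) * b

est, total_var = rubin_pool([1.0, 1.2, 0.9], [0.040, 0.050, 0.045])
```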


Subject(s)
Propensity Score , Humans , Data Interpretation, Statistical , Causality
4.
Stat Med ; 42(27): 4917-4930, 2023 11 30.
Article in English | MEDLINE | ID: mdl-37767752

ABSTRACT

In network meta-analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta-analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks. Because inconsistency is a property of a loop of treatments in the network meta-analysis, we locate the local test in a loop. We define a model with one inconsistency parameter that can be interpreted as loop inconsistency. The model builds on the existing ideas of node-splitting and side-splitting in network meta-analysis. To provide a global test for inconsistency, we extend the model across multiple independent loops with one degree of freedom per loop. We develop a new algorithm for identifying independent loops within a network meta-analysis. Our proposed models handle treatments symmetrically, locate inconsistency in loops rather than in nodes or treatment comparisons, and are invariant to choice of reference treatment, making the results less dependent on model parameterization. For testing global inconsistency in network meta-analysis, our global model uses fewer degrees of freedom than the existing design-by-treatment interaction approach and has the potential to increase power. To illustrate our methods, we fit the models to three network meta-analyses varying in size and complexity. Local and global tests for inconsistency are performed and we demonstrate that the global model is invariant to choice of independent loops.
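The notion of one degree of freedom per independent loop can be illustrated with a standard spanning-tree cycle-basis construction (a hedged sketch, not necessarily the authors' new algorithm): every edge left out of a spanning forest closes exactly one independent loop, giving E - V + C loops in total.

```python
from collections import defaultdict

def loop_closing_edges(edges):
    """Build a spanning forest of the (undirected) treatment network;
    each non-tree edge closes exactly one independent loop."""
    graph = defaultdict(set)
    nodes = set()
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
        nodes.update((u, v))
    seen, tree = set(), set()
    for root in sorted(nodes):          # sorted for deterministic traversal
        if root in seen:
            continue
        seen.add(root)
        stack = [root]
        while stack:
            u = stack.pop()
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    tree.add(frozenset((u, v)))
                    stack.append(v)
    return [(u, v) for u, v in edges if frozenset((u, v)) not in tree]

# Triangle A-B-C with a pendant comparison C-D: exactly one independent loop.
closing = loop_closing_edges([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
```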


Subject(s)
Algorithms , Research Design , Humans , Network Meta-Analysis
5.
Clin Trials ; 20(5): 497-506, 2023 10.
Article in English | MEDLINE | ID: mdl-37277978

ABSTRACT

INTRODUCTION: The ICH E9 addendum outlining the estimand framework for clinical trials was published in 2019 but provides limited guidance around how to handle intercurrent events for non-inferiority studies. Once an estimand is defined, it is also unclear how to deal with missing values using principled analyses for non-inferiority studies. METHODS: Using a tuberculosis clinical trial as a case study, we propose a primary estimand, and an additional estimand suitable for non-inferiority studies. For estimation, multiple imputation methods that align with the estimands for both primary and sensitivity analysis are proposed. We demonstrate estimation methods using the twofold fully conditional specification multiple imputation algorithm and then extend and use reference-based multiple imputation for a binary outcome to target the relevant estimands, proposing sensitivity analyses under each. We compare the results from using these multiple imputation methods with those from the original study. RESULTS: Consistent with the ICH E9 addendum, estimands can be constructed for a non-inferiority trial which improves on the per-protocol/intention-to-treat-type analysis population previously advocated, involving respectively a hypothetical or treatment policy strategy to handle relevant intercurrent events. Results from using the 'twofold' multiple imputation approach to estimate the primary hypothetical estimand, and using reference-based methods for an additional treatment policy estimand, including sensitivity analyses to handle the missing data, were consistent with the original study's reported per-protocol and intention-to-treat analysis in failing to demonstrate non-inferiority. CONCLUSIONS: Using carefully constructed estimands and appropriate primary and sensitivity estimators, using all the information available, results in a more principled and statistically rigorous approach to analysis. Doing so provides an accurate interpretation of the estimand.
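The reference-based idea for a binary outcome can be sketched as a toy jump-to-reference draw (assuming an estimated control-arm response probability; this is not the trial's twofold fully conditional specification implementation):

```python
import random

def j2r_binary_impute(n_missing, p_reference, rng):
    """Jump-to-reference imputation for a missing binary outcome: draw
    the missing responses using the reference (control) arm's estimated
    success probability, so unobserved patients on the experimental arm
    are assumed to behave like the reference arm."""
    return [int(rng.random() < p_reference) for _ in range(n_missing)]

rng = random.Random(2024)
imputed = j2r_binary_impute(5, p_reference=0.8, rng=rng)
```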


Subject(s)
Models, Statistical , Research Design , Humans , Algorithms , Data Interpretation, Statistical , Equivalence Trials as Topic
6.
Stat Med ; 41(25): 5000-5015, 2022 11 10.
Article in English | MEDLINE | ID: mdl-35959539

ABSTRACT

BACKGROUND: Substantive model compatible multiple imputation (SMC-MI) is a relatively novel imputation method that is particularly useful when the analyst's model includes interactions, non-linearities, and/or partially observed random slope variables. METHODS: Here we thoroughly investigate an SMC-MI strategy based on joint modeling (SMC-JM) of the covariates of the analysis model. We provide code to apply the proposed strategy and we perform an extensive simulation work to test it in various circumstances. We explore the impact on the results of various factors, including whether the missing data are at the individual or cluster level, whether there are non-linearities and whether the imputation model is correctly specified. Finally, we apply the imputation methods to the motivating example data. RESULTS: SMC-JM appears to be superior to standard JM imputation, particularly in presence of large variation in random slopes, non-linearities, and interactions. Results seem to be robust to slight mis-specification of the imputation model for the covariates. When imputing level 2 data, enough clusters have to be observed in order to obtain unbiased estimates of the level 2 parameters. CONCLUSIONS: SMC-JM is preferable to standard JM imputation in presence of complexities in the analysis model of interest, such as non-linearities or random slopes.
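The core substantive-model-compatible idea can be sketched with rejection sampling (a stylised sketch of the SMC principle for one missing covariate under a linear substantive model; not the paper's joint-modelling implementation):

```python
import math
import random

def smc_impute_x(y, beta0, beta1, sigma, donors, rng):
    """Impute a missing covariate x compatibly with the substantive model
    y = beta0 + beta1 * x + error: propose x from observed donor values,
    then accept with probability proportional to the analysis-model
    likelihood f(y | x), so imputations respect the analysis model."""
    while True:
        x = rng.choice(donors)
        lik = math.exp(-0.5 * ((y - (beta0 + beta1 * x)) / sigma) ** 2)
        if rng.random() < lik:   # lik <= 1, so this is a valid acceptance step
            return x

rng = random.Random(42)
x_imp = smc_impute_x(y=2.0, beta0=0.0, beta1=1.0, sigma=1.0,
                     donors=[0.5, 1.5, 2.5], rng=rng)
```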


Subject(s)
Models, Statistical , Research Design , Humans , Computer Simulation
7.
Stat Med ; 41(5): 838-844, 2022 02 28.
Article in English | MEDLINE | ID: mdl-35146786

ABSTRACT

Since its inception in 1969, the MSc in medical statistics program has placed a high priority on training students from Africa. In this article, we review how the program has shaped, and in turn been shaped by, two substantial capacity building initiatives: (a) a fellowship program, funded by the UK Medical Research Council, and run through the International Statistical Epidemiology Group at the LSHTM, and (b) the Sub-Saharan capacity building in Biostatistics (SSACAB) initiative, administered through the Developing Excellence in Leadership, Training and Science in Africa (DELTAS) program of the African Academy of Sciences. We reflect on the impact of both initiatives, and the implications for future work in this area.


Subject(s)
Capacity Building , Tropical Medicine , Africa South of the Sahara/epidemiology , Humans , Hygiene , London , Public Health , Tropical Medicine/education
8.
Clin Trials ; 19(5): 522-533, 2022 10.
Article in English | MEDLINE | ID: mdl-35850542

ABSTRACT

BACKGROUND/AIMS: Tuberculosis remains one of the leading causes of death from an infectious disease globally. Both choices of outcome definitions and approaches to handling events happening post-randomisation can change the treatment effect being estimated, but these are often inconsistently described, thus inhibiting clear interpretation and comparison across trials. METHODS: Starting from the ICH E9(R1) addendum's definition of an estimand, we use our experience of conducting large Phase III tuberculosis treatment trials and our understanding of the estimand framework to identify the key decisions regarding how different event types are handled in the primary outcome definition, and the important points that should be considered in making such decisions. A key issue is the handling of intercurrent (i.e. post-randomisation) events (ICEs) which affect interpretation of or preclude measurement of the intended final outcome. We consider common ICEs including treatment changes and treatment extension, poor adherence to randomised treatment, re-infection with a new strain of tuberculosis which is different from the original infection, and death. We use two completed tuberculosis trials (REMoxTB and STREAM Stage 1) as illustrative examples. These trials tested non-inferiority of new tuberculosis treatment regimens versus a control regimen. The primary outcome was a binary composite endpoint, 'favourable' or 'unfavourable', which was constructed from several components. RESULTS: We propose the following improvements in handling the above-mentioned ICEs and loss to follow-up (a post-randomisation event that is not in itself an ICE). First, changes to allocated regimens should not necessarily be viewed as an unfavourable outcome; from the patient perspective, the potential harms associated with a change in the regimen should instead be directly quantified. 
Second, handling poor adherence to randomised treatment using a per-protocol analysis does not necessarily target a clear estimand; instead, it would be desirable to develop ways to estimate the treatment effects more relevant to programmatic settings. Third, re-infection with a new strain of tuberculosis could be handled with different strategies, depending on whether the outcome of interest is the ability to attain culture negativity from infection with any strain of tuberculosis, or specifically the presenting strain of tuberculosis. Fourth, where possible, death could be separated into tuberculosis-related and non-tuberculosis-related and handled using appropriate strategies. Finally, although some losses to follow-up would result in early treatment discontinuation, patients lost to follow-up before the end of the trial should not always be classified as having an unfavourable outcome. Instead, loss to follow-up should be separated from not completing the treatment, which is an ICE and may be considered as an unfavourable outcome. CONCLUSION: The estimand framework clarifies many issues in tuberculosis trials but also challenges trialists to justify and improve their outcome definitions. Future trialists should consider all the above points in defining their outcomes.


Subject(s)
Reinfection , Research Design , Causality , Humans
9.
Clin Infect Dis ; 73(2): 195-202, 2021 07 15.
Article in English | MEDLINE | ID: mdl-32448894

ABSTRACT

BACKGROUND: Using data from the COHERE collaboration, we investigated whether primary prophylaxis for pneumocystis pneumonia (PcP) might be withheld in all patients on antiretroviral therapy (ART) with suppressed plasma human immunodeficiency virus (HIV) RNA (≤400 copies/mL), irrespective of CD4 count. METHODS: We implemented an established causal inference approach whereby observational data are used to emulate a randomized trial. Patients taking PcP prophylaxis were eligible for the emulated trial if their CD4 count was ≤200 cells/µL, in line with existing recommendations. We compared the following 2 strategies for stopping prophylaxis: (1) when CD4 count was >200 cells/µL for >3 months or (2) when the patient was virologically suppressed (2 consecutive HIV RNA ≤400 copies/mL). Patients were artificially censored if they did not comply with these stopping rules. We estimated the risk of primary PcP in patients on ART, using the hazard ratio (HR) to compare the stopping strategies by fitting a pooled logistic model, including inverse probability weights to adjust for the selection bias introduced by the artificial censoring. RESULTS: A total of 4813 patients (10 324 person-years) complied with eligibility conditions for the emulated trial. With primary PcP diagnosis as an endpoint, the adjusted HR (aHR) indicated a slightly lower, but not statistically significantly different, risk for the strategy based on viral suppression alone compared with the existing guidelines (aHR, .8; 95% confidence interval, .6-1.1; P = .2). CONCLUSIONS: This study suggests that primary PcP prophylaxis might be safely withheld in confirmed virologically suppressed patients on ART, regardless of their CD4 count.
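The weighting step used to correct for artificial censoring can be sketched generically (a sketch of the standard stabilised inverse-probability-of-censoring construction, not the COHERE analysis code; the per-interval probabilities would come from pooled logistic models):

```python
def stabilized_ipc_weights(p_cond, p_marg):
    """Per-interval stabilised inverse-probability-of-censoring weights.

    p_cond[t]: P(remaining uncensored at interval t | past covariates);
    p_marg[t]: the same probability from a model without covariates.
    The weight at t is the cumulative product of p_marg / p_cond up to t,
    up-weighting person-intervals like those that were censored."""
    weights, w = [], 1.0
    for pc, pm in zip(p_cond, p_marg):
        w *= pm / pc
        weights.append(w)
    return weights

w = stabilized_ipc_weights([0.90, 0.80, 0.95], [0.92, 0.85, 0.90])
```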


Subject(s)
AIDS-Related Opportunistic Infections , HIV Infections , Pneumonia, Pneumocystis , AIDS-Related Opportunistic Infections/prevention & control , CD4 Lymphocyte Count , HIV , HIV Infections/complications , HIV Infections/drug therapy , Humans , Pneumonia, Pneumocystis/prevention & control , Pragmatic Clinical Trials as Topic
10.
Am J Epidemiol ; 190(4): 663-672, 2021 04 06.
Article in English | MEDLINE | ID: mdl-33057574

ABSTRACT

Marginal structural models (MSMs) are commonly used to estimate causal intervention effects in longitudinal nonrandomized studies. A common challenge when using MSMs to analyze observational studies is incomplete confounder data, where a poorly informed analysis method will lead to biased estimates of intervention effects. Despite a number of approaches described in the literature for handling missing data in MSMs, there is little guidance on what works in practice and why. We reviewed existing missing-data methods for MSMs and discussed the plausibility of their underlying assumptions. We also performed realistic simulations to quantify the bias of 5 methods used in practice: complete-case analysis, last observation carried forward, the missingness pattern approach, multiple imputation, and inverse-probability-of-missingness weighting. We considered 3 mechanisms for nonmonotone missing data encountered in research based on electronic health record data. Further illustration of the strengths and limitations of these analysis methods is provided through an application using a cohort of persons with sleep apnea: the research database of the French Observatoire Sommeil de la Fédération de Pneumologie. We recommend careful consideration of 1) the reasons for missingness, 2) whether missingness modifies the existing relationships among observed data, and 3) the scientific context and data source, to inform the choice of the appropriate method(s) for handling partially observed confounders in MSMs.
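Of the five methods compared, inverse-probability-of-missingness weighting is the simplest to sketch (a generic sketch; the fitted missingness model producing the probabilities is assumed, not shown):

```python
def ipmw(complete, p_complete):
    """Inverse-probability-of-missingness weights: each complete record is
    weighted by 1 / P(complete | observed covariates) so the complete
    cases stand in for similar incomplete ones; incomplete records get
    weight 0 and are dropped from the weighted analysis."""
    return [1.0 / p if c else 0.0 for c, p in zip(complete, p_complete)]

w = ipmw([True, True, False], [0.8, 0.5, 0.4])
```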


Subject(s)
Computer Simulation , Electronic Health Records/statistics & numerical data , Models, Statistical , Data Interpretation, Statistical , Humans
11.
Eur Respir J ; 57(3)2021 03.
Article in English | MEDLINE | ID: mdl-33093119

ABSTRACT

Real-world data provide the potential for generating evidence on drug treatment effects in groups excluded from trials, but rigorous, validated methodology for doing so is lacking. We investigated whether non-interventional methods applied to real-world data could reproduce results from the landmark TORCH COPD trial. We performed a historical cohort study (2000-2017) of COPD drug treatment effects in the UK Clinical Practice Research Datalink (CPRD). Two control groups were selected from CPRD by applying TORCH inclusion/exclusion criteria and 1:1 matching to TORCH participants, as follows. Control group 1: people with COPD not prescribed fluticasone propionate (FP)-salmeterol (SAL); control group 2: people with COPD prescribed SAL only. FP-SAL exposed groups were then selected from CPRD by propensity score matching to each control group. Outcomes studied were COPD exacerbations, death from any cause and pneumonia. 2652 FP-SAL exposed people were propensity score matched to 2652 FP-SAL unexposed people while 991 FP-SAL exposed people were propensity score matched to 991 SAL exposed people. Exacerbation rate ratio was comparable to TORCH for FP-SAL versus SAL (0.85, 95% CI 0.74-0.97 versus 0.88, 0.81-0.95) but not for FP-SAL versus no FP-SAL (1.30, 1.19-1.42 versus 0.75, 0.69-0.81). In addition, active comparator results were consistent with TORCH for mortality (hazard ratio 0.93, 0.65-1.32 versus 0.93, 0.77-1.13) and pneumonia (risk ratio 1.39, 1.04-1.87 versus 1.47, 1.25-1.73). We obtained very similar results to the TORCH trial for active comparator analyses, but were unable to reproduce placebo-controlled results. Application of these validated methods for active comparator analyses to groups excluded from randomised controlled trials provides a practical way for contributing to the evidence base and supporting COPD treatment decisions.
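Greedy 1:1 nearest-neighbour matching on the propensity score, the simplest form of the matching used above, can be sketched as follows (a generic sketch, not the study's exact procedure; the 0.05 caliper is an illustrative assumption):

```python
def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour propensity score matching without
    replacement. Returns (treated_index, control_index) pairs; treated
    units are processed in order of propensity score for determinism,
    and a pair is kept only if the score gap is within the caliper."""
    available = dict(enumerate(ps_control))
    pairs = []
    for i, p in sorted(enumerate(ps_treated), key=lambda x: x[1]):
        if not available:
            break
        j, q = min(available.items(), key=lambda kv: abs(kv[1] - p))
        if abs(q - p) <= caliper:
            pairs.append((i, j))
            del available[j]          # matching without replacement
    return pairs

pairs = greedy_match([0.30, 0.55], [0.28, 0.60, 0.90])
```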


Subject(s)
Bronchodilator Agents , Pulmonary Disease, Chronic Obstructive , Administration, Inhalation , Androstadienes , Bronchodilator Agents/therapeutic use , Cohort Studies , Drug Combinations , Fluticasone/therapeutic use , Fluticasone-Salmeterol Drug Combination , Humans , Pulmonary Disease, Chronic Obstructive/drug therapy , Randomized Controlled Trials as Topic , Treatment Outcome
12.
Biom J ; 63(5): 915-947, 2021 06.
Article in English | MEDLINE | ID: mdl-33624862

ABSTRACT

Missing data are ubiquitous in medical research, yet there is still uncertainty over when restricting to the complete records is likely to be acceptable, when more complex methods (e.g. maximum likelihood, multiple imputation and Bayesian methods) should be used, how they relate to each other and the role of sensitivity analysis. This article seeks to address both applied practitioners and researchers interested in a more formal explanation of some of the results. For practitioners, the framework, illustrative examples and code should equip them with a practical approach to address the issues raised by missing data (particularly using multiple imputation), alongside an overview of how the various approaches in the literature relate. In particular, we describe how multiple imputation can be readily used for sensitivity analyses, which are still infrequently performed. For those interested in more formal derivations, we give outline arguments for key results, use simple examples to show how methods relate, and references for full details. The ideas are illustrated with a cohort study, a multi-centre case control study and a randomised clinical trial.


Subject(s)
Case-Control Studies , Bayes Theorem , Cohort Studies , Data Interpretation, Statistical , Humans , Uncertainty
13.
BMC Med ; 18(1): 286, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32900372

ABSTRACT

When designing a clinical trial, explicitly defining the treatment estimands of interest (that which is to be estimated) can help to clarify trial objectives and ensure the questions being addressed by the trial are clinically meaningful. There are several challenges when defining estimands. Here, we discuss a number of these in the context of trials of treatments for patients hospitalised with COVID-19 and make suggestions for how estimands should be defined for key outcomes. We suggest that treatment effects should usually be measured as differences in proportions (or risk or odds ratios) for outcomes such as death and requirement for ventilation, and differences in means for outcomes such as the number of days ventilated. We further recommend that truncation due to death should be handled differently depending on whether a patient- or resource-focused perspective is taken; for the former, a composite approach should be used, while for the latter, a while-alive approach is preferred. Finally, we suggest that discontinuation of randomised treatment should be handled from a treatment policy perspective, where non-adherence is ignored in the analysis (i.e. intention to treat).
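The two strategies for truncation due to death can be sketched with a ventilator-free-days outcome (illustrative values; the 28-day horizon and the worst-value convention are assumptions for the sketch):

```python
def composite_vent_free_days(days_ventilated, died, horizon=28):
    """Patient-focused composite strategy: days free of ventilation up to
    the horizon, with death assigned the worst value (0 free days)."""
    return 0 if died else horizon - days_ventilated

def while_alive_vent_rate(days_ventilated, days_alive):
    """Resource-focused while-alive strategy: ventilated days per day
    alive, so death truncates rather than penalises the outcome."""
    return days_ventilated / days_alive
```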


Subject(s)
Betacoronavirus , Coronavirus Infections/therapy , Pneumonia, Viral/therapy , COVID-19 , Clinical Trials as Topic , Coronavirus Infections/drug therapy , Hospitalization , Humans , Odds Ratio , Pandemics , Research Design , SARS-CoV-2 , COVID-19 Drug Treatment
14.
Stat Med ; 39(21): 2815-2842, 2020 09 20.
Article in English | MEDLINE | ID: mdl-32419182

ABSTRACT

Missing data due to loss to follow-up or intercurrent events are unintended, but unfortunately inevitable in clinical trials. Since the true values of missing data are never known, it is necessary to assess the impact of untestable and unavoidable assumptions about any unobserved data in sensitivity analysis. This tutorial provides an overview of controlled multiple imputation (MI) techniques and a practical guide to their use for sensitivity analysis of trials with missing continuous outcome data. These include δ- and reference-based MI procedures. In δ-based imputation, an offset term, δ, is typically added to the expected value of the missing data to assess the impact of unobserved participants having a worse or better response than those observed. Reference-based imputation draws imputed values with some reference to observed data in other groups of the trial, typically in other treatment arms. We illustrate the accessibility of these methods using data from a pediatric eczema trial and a chronic headache trial and provide Stata code to facilitate adoption. We discuss issues surrounding the choice of δ in δ-based sensitivity analysis. We also review the debate on variance estimation within reference-based analysis and justify the use of Rubin's variance estimator in this setting since, as we elaborate on further within the tutorial, it provides information-anchored inference.
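The δ-based procedure can be sketched in a few lines (a toy stand-in for the MAR imputation model, with a simple resampling draw; the tutorial's Stata implementation is not reproduced):

```python
import random

def delta_impute(observed, n_missing, delta, rng):
    """One δ-adjusted imputation: draw missing continuous outcomes from
    the observed distribution (standing in for a proper MAR imputation
    model), then shift them by the offset delta to represent a worse
    (delta < 0) or better (delta > 0) unobserved response."""
    mar_draws = [rng.choice(observed) for _ in range(n_missing)]
    return [y + delta for y in mar_draws]

rng = random.Random(1)
imputed = delta_impute([2.0, 3.0, 4.0], n_missing=2, delta=-1.0, rng=rng)
```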


Subject(s)
Data Interpretation, Statistical , Child , Humans
15.
BMC Med Res Methodol ; 20(1): 208, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32787782

ABSTRACT

BACKGROUND: The coronavirus pandemic (Covid-19) presents a variety of challenges for ongoing clinical trials, including an inevitably higher rate of missing outcome data, with new and non-standard reasons for missingness. International drug trial guidelines recommend trialists review plans for handling missing data in the conduct and statistical analysis, but clear recommendations are lacking. METHODS: We present a four-step strategy for handling missing outcome data in the analysis of randomised trials that are ongoing during a pandemic. We consider handling missing data arising due to (i) participant infection, (ii) treatment disruptions and (iii) loss to follow-up. We consider both settings where treatment effects for a 'pandemic-free world' and 'world including a pandemic' are of interest. RESULTS: In any trial, investigators should: (1) Clarify the treatment estimand of interest with respect to the occurrence of the pandemic; (2) Establish what data are missing for the chosen estimand; (3) Perform primary analysis under the most plausible missing data assumptions; and (4) Perform sensitivity analysis under alternative plausible assumptions. To obtain an estimate of the treatment effect in a 'pandemic-free world', participant data that are clinically affected by the pandemic (directly due to infection or indirectly via treatment disruptions) are not relevant and can be set to missing. For primary analysis, a missing-at-random assumption that conditions on all observed data that are expected to be associated with both the outcome and missingness may be most plausible. For the treatment effect in the 'world including a pandemic', all participant data are relevant and should be included in the analysis. For primary analysis, a missing-at-random assumption - potentially incorporating a pandemic time-period indicator and participant infection status - or a missing-not-at-random assumption with a poorer response may be most relevant, depending on the setting. 
In all scenarios, sensitivity analysis under credible missing-not-at-random assumptions should be used to evaluate the robustness of results. We highlight controlled multiple imputation as an accessible tool for conducting sensitivity analyses. CONCLUSIONS: Missing data problems will be exacerbated for trials active during the Covid-19 pandemic. This four-step strategy will facilitate clear thinking about the appropriate analysis for relevant questions of interest.


Subject(s)
Outcome Assessment, Health Care/statistics & numerical data , Practice Guidelines as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Betacoronavirus/physiology , COVID-19 , Comorbidity , Coronavirus Infections/epidemiology , Coronavirus Infections/therapy , Coronavirus Infections/virology , Humans , Outcome Assessment, Health Care/methods , Pandemics , Pneumonia, Viral/epidemiology , Pneumonia, Viral/therapy , Pneumonia, Viral/virology , Randomized Controlled Trials as Topic/methods , Reproducibility of Results , SARS-CoV-2
16.
BMC Med Res Methodol ; 20(1): 66, 2020 03 17.
Article in English | MEDLINE | ID: mdl-32183708

ABSTRACT

BACKGROUND: Missing data are an inevitable challenge in Randomised Controlled Trials (RCTs), particularly those with Patient Reported Outcome Measures. Methodological guidance suggests that to avoid incorrect conclusions, studies should undertake sensitivity analyses which recognise that data may be 'missing not at random' (MNAR). A recommended approach is to elicit expert opinion about the likely outcome differences for those with missing versus observed data. However, few published trials plan and undertake these elicitation exercises, and so lack the external information required for these sensitivity analyses. The aim of this paper is to provide a framework that anticipates and allows for MNAR data in the design and analysis of clinical trials. METHODS: We developed a framework for performing and using expert elicitation to frame sensitivity analysis in RCTs with missing outcome data. The framework includes the following steps: first defining the scope of the elicitation exercise, second developing the elicitation tool, third eliciting expert opinion about the missing outcomes, fourth evaluating the elicitation results, and fifth analysing the trial data. We provide guidance on key practical challenges that arise when adopting this approach in trials: the criteria for identifying relevant experts, the outcome scale for presenting data to experts, the appropriate representation of expert opinion, and the evaluation of the elicitation results. The framework was developed within the POPPI trial, which investigated whether a preventive, complex psychological intervention, commenced early in ICU, would reduce the development of patient-reported post-traumatic stress disorder symptom severity, and improve health-related quality of life. We illustrate the key aspects of the proposed framework using the POPPI trial. RESULTS: For the POPPI trial, 113 experts were identified with potentially suitable knowledge and asked to participate in the elicitation exercise. 
The 113 experts provided 59 usable elicitation questionnaires. The sensitivity analysis found that the results from the primary analysis were robust to alternative MNAR mechanisms. CONCLUSIONS: Future studies can adopt this framework to embed expert elicitation within the design of clinical trials. This will provide the information required for MNAR sensitivity analyses that examine the robustness of the trial conclusions to alternative, but realistic assumptions about the missing data.


Subject(s)
Data Analysis , Expert Testimony , Humans , Quality of Life , Surveys and Questionnaires
17.
Health Econ ; 29(2): 171-184, 2020 02.
Article in English | MEDLINE | ID: mdl-31845455

ABSTRACT

Missing data are a common issue in cost-effectiveness analysis (CEA) alongside randomised trials and are often addressed assuming the data are 'missing at random'. However, this assumption is often questionable, and sensitivity analyses are required to assess the implications of departures from missing at random. Reference-based multiple imputation provides an attractive approach for conducting such sensitivity analyses, because missing data assumptions are framed in an intuitive way by making reference to other trial arms. For example, a plausible missing-not-at-random mechanism in a placebo-controlled trial would be to assume that participants in the experimental arm who dropped out stop taking their treatment and have similar outcomes to those in the placebo arm. Drawing on the increasing use of this approach in other areas, this paper aims to extend and illustrate the reference-based multiple imputation approach in CEA. It introduces the principles of reference-based imputation and proposes an extension to the CEA context. The method is illustrated in the CEA of the CoBalT trial evaluating cognitive behavioural therapy for treatment-resistant depression. Stata code is provided. We find that reference-based multiple imputation provides a relevant and accessible framework for assessing the robustness of CEA conclusions to different missing data assumptions.


Subject(s)
Cost-Benefit Analysis , Data Interpretation, Statistical , Models, Statistical , Research Design , Cognitive Behavioral Therapy , Depressive Disorder, Treatment-Resistant/therapy , Humans , Randomized Controlled Trials as Topic
18.
Clin Trials ; 17(6): 644-653, 2020 12.
Article in English | MEDLINE | ID: mdl-33153304

ABSTRACT

BACKGROUND: Designing trials to reduce treatment duration is important in several therapeutic areas, including tuberculosis and bacterial infections. We recently proposed a new randomised trial design to overcome some of the limitations of standard two-arm non-inferiority trials. This DURATIONS design involves randomising patients to a number of duration arms and modelling the so-called 'duration-response curve'. This article investigates the operating characteristics (type-1 and type-2 errors) of different statistical methods of drawing inference from the estimated curve. METHODS: Our first estimation target is the shortest duration non-inferior to the control (maximum) duration within a specific risk difference margin. We compare different methods of estimating this quantity, including using model confidence bands, the delta method and bootstrap. We then explore the generalisability of results to estimation targets which focus on absolute event rates, risk ratio and gradient of the curve. RESULTS: We show through simulations that, in most scenarios and for most of the estimation targets, using the bootstrap to estimate variability around the target duration leads to good results for DURATIONS design-appropriate quantities analogous to power and type-1 error. Using model confidence bands is not recommended, while the delta method leads to inflated type-1 error in some scenarios, particularly when the optimal duration is very close to one of the randomised durations. CONCLUSIONS: Using the bootstrap to estimate the optimal duration in a DURATIONS design has good operating characteristics in a wide range of scenarios and can be used with confidence by researchers wishing to design a DURATIONS trial to reduce treatment duration. Uncertainty around several different targets can be estimated with this bootstrap approach.
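The recommended bootstrap idea can be sketched with arm-wise event rates (a simplification: the DURATIONS design models a fitted duration-response curve rather than raw arm rates, and all numbers below are toy values):

```python
import random

def shortest_noninferior(durations, event_rates, control_rate, margin):
    """Shortest randomised duration whose event rate is within `margin`
    (risk difference) of the control (longest) duration's rate."""
    for d, r in sorted(zip(durations, event_rates)):
        if r - control_rate <= margin:
            return d
    return max(durations)

def bootstrap_target(durations, events, n_per_arm, margin, n_boot=500, seed=7):
    """Percentile-style bootstrap for the target duration: resample each
    arm's binary outcomes, recompute event rates, and re-find the
    shortest non-inferior duration in each replicate."""
    rng = random.Random(seed)
    targets = []
    for _ in range(n_boot):
        rates = [sum(rng.random() < e / n_per_arm for _ in range(n_per_arm)) / n_per_arm
                 for e in events]
        targets.append(shortest_noninferior(durations, rates, rates[-1], margin))
    return targets

durations = [8, 12, 16, 20]   # weeks; 20 is the control (maximum) duration
events = [30, 18, 15, 14]     # events out of 100 per arm (toy data)
targets = bootstrap_target(durations, events, n_per_arm=100, margin=0.05)
```

The spread of `targets` across replicates then quantifies the uncertainty around the estimated optimal duration.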


Subject(s)
Randomized Controlled Trials as Topic/methods , Research Design , Equivalence Trials as Topic , Humans , Models, Statistical , Odds Ratio , ROC Curve , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Statistics as Topic , Time Factors
19.
Stat Med ; 38(29): 5547-5564, 2019 12 20.
Article in English | MEDLINE | ID: mdl-31647136

ABSTRACT

One of the biggest challenges for network meta-analysis is inconsistency, which occurs when the direct and indirect evidence conflict. Inconsistency causes problems for the estimation and interpretation of treatment effects and treatment contrasts. Krahn and colleagues proposed the net heat approach as a graphical tool for identifying and locating inconsistency within a network of randomized controlled trials. For networks with a treatment loop, the net heat plot displays statistics calculated by temporarily removing each design in turn and assessing the contribution of each remaining design to the inconsistency. The net heat plot takes the form of a matrix which is displayed graphically with coloring indicating the degree of inconsistency in the network. Applied to a network of individual participant data assessing overall survival in 7531 patients with lung cancer, we were surprised to find no evidence of important inconsistency from the net heat approach; this contradicted other approaches for assessing inconsistency such as the Bucher approach, Cochran's Q statistic, node-splitting, and the inconsistency parameter approach, which all suggested evidence of inconsistency within the network at the 5% level. Further theoretical work shows that the calculations underlying the net heat plot constitute an arbitrary weighting of the direct and indirect evidence which may be misleading. We illustrate this further using a simulation study and a network meta-analysis of 10 treatments for diabetes. We conclude that the net heat plot does not reliably signal inconsistency or identify designs that cause inconsistency.
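For readers unfamiliar with the comparator methods the abstract mentions, the Bucher approach for a single treatment loop can be sketched in a few lines. The effect estimates below are invented for illustration (they are not from the lung cancer or diabetes networks): the indirect A-vs-B estimate is formed through the common comparator C, and inconsistency is the direct-minus-indirect difference tested with a z-statistic.

```python
import math

# Illustrative numbers only: log hazard ratios and standard errors for
# the three pairwise comparisons in a single treatment loop A-B-C.
d_AB, se_AB = -0.30, 0.10   # direct A vs B
d_AC, se_AC = -0.45, 0.12   # direct A vs C
d_BC, se_BC = -0.05, 0.11   # direct B vs C

# Bucher indirect estimate of A vs B via the common comparator C;
# variances of independent comparisons add.
ind_AB = d_AC - d_BC
se_ind = math.sqrt(se_AC ** 2 + se_BC ** 2)

# Inconsistency = direct minus indirect, tested at the 5% level
# against the standard normal (|z| > 1.96).
incons = d_AB - ind_AB
z = incons / math.sqrt(se_AB ** 2 + se_ind ** 2)
print(f"indirect A vs B = {ind_AB:.2f}, inconsistency z = {z:.2f}")
```

With these made-up inputs the direct and indirect estimates agree well, whereas the paper's point is that such loop-based checks and the net heat plot can disagree on real networks.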


Subject(s)
Network Meta-Analysis , Biostatistics , Computer Graphics , Computer Simulation , Databases, Factual/statistics & numerical data , Diabetes Mellitus, Type 2/drug therapy , Humans , Hypoglycemic Agents/therapeutic use , Lung Neoplasms/mortality , Lung Neoplasms/therapy , Models, Statistical
20.
Stat Med ; 38(5): 792-808, 2019 02 28.
Article in English | MEDLINE | ID: mdl-30328123

ABSTRACT

Multiple imputation (MI) has become popular for analyses with missing data in medical research. The standard implementation of MI is based on the assumption of data being missing at random (MAR). However, for missing data generated by missing not at random mechanisms, MI performed assuming MAR might not be satisfactory. For an incomplete variable in a given data set, its corresponding population marginal distribution might also be available in an external data source. We show how this information can be readily utilised in the imputation model to calibrate inference to the population by incorporating an appropriately calculated offset termed the "calibrated-δ adjustment." We describe the derivation of this offset from the population distribution of the incomplete variable and show how, in applications, it can be used to closely (and often exactly) match the post-imputation distribution to the population level. Through analytic and simulation studies, we show that our proposed calibrated-δ adjustment MI method can give the same inference as standard MI when data are MAR, and can produce more accurate inference under two general missing not at random mechanisms. The method is used to impute missing ethnicity data in a type 2 diabetes prevalence case study using UK primary care electronic health records, where it results in scientifically relevant changes in inference for non-White ethnic groups compared with standard MI. Calibrated-δ adjustment MI represents a pragmatic approach for utilising available population-level information in a sensitivity analysis to explore potential departures from the MAR assumption.
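The core idea of a δ-type offset calibrated to a known population marginal can be sketched for a binary incomplete variable. Everything below is a simplified, hypothetical illustration rather than the paper's actual algorithm or data: a logistic imputation model is fitted to the complete cases, an offset δ is added to its linear predictor, and δ is solved numerically so that the post-imputation marginal prevalence matches the population value.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data set: binary covariate x fully observed; binary y
# (e.g. an ethnicity indicator) missing not at random, since the
# missingness probability depends on y itself.
n = 5000
x = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))
miss = rng.binomial(1, np.where(y == 1, 0.5, 0.2), n).astype(bool)
y_obs = np.where(miss, np.nan, y)

pop_prev = y.mean()  # population marginal prevalence, known by design here

def logit_fit(x, y):
    """Simple logistic regression of y on x via Newton-Raphson."""
    X = np.column_stack([np.ones_like(x, float), x])
    b = np.zeros(2)
    for _ in range(50):
        p = 1 / (1 + np.exp(-np.clip(X @ b, -30, 30)))
        W = p * (1 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

b = logit_fit(x[~miss].astype(float), y[~miss].astype(float))

def expected_prev(delta):
    """Marginal prevalence after imputing with offset delta on the logit."""
    p_imp = 1 / (1 + np.exp(-(b[0] + b[1] * x[miss] + delta)))
    return (np.nansum(y_obs) + p_imp.sum()) / n

# Calibrate delta by bisection: expected_prev is increasing in delta,
# so solve expected_prev(delta) = pop_prev on a wide bracket.
lo_d, hi_d = -5.0, 5.0
for _ in range(60):
    mid = (lo_d + hi_d) / 2
    if expected_prev(mid) < pop_prev:
        lo_d = mid
    else:
        hi_d = mid
delta = (lo_d + hi_d) / 2

# One imputed data set drawn under the calibrated offset.
p_all = 1 / (1 + np.exp(-(b[0] + b[1] * x + delta)))
y_imp = np.where(miss, rng.binomial(1, p_all), y_obs)
print(f"calibrated delta = {delta:.2f}, imputed prevalence = {y_imp.mean():.3f}")
```

Because missingness is higher when y = 1, the complete-case model understates the prevalence among the missing and the calibrated δ comes out positive; in practice the offset derivation, proper MI draws, and Rubin's rules would follow the paper rather than this sketch.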


Subject(s)
Data Interpretation, Statistical , Diabetes Mellitus, Type 2/epidemiology , Ethnicity/statistics & numerical data , Logistic Models , Models, Statistical , Electronic Health Records , Humans , Prevalence , Research Design