ABSTRACT
An assurance calculation is a Bayesian alternative to a power calculation. One may be performed to aid the planning of a clinical trial, specifically to set the sample size, or to support decisions about whether or not to perform a study. Immuno-oncology is a rapidly evolving area in the development of anticancer drugs. A common phenomenon in trials of such drugs is a delayed treatment effect, that is, a delay in the separation of the survival curves. To calculate assurance for a trial in which a delayed treatment effect is likely to be present, uncertainty about key parameters needs to be considered. If uncertainty is not considered, the number of patients recruited may be insufficient to provide adequate statistical power to detect a clinically relevant treatment effect, and the risk of an unsuccessful trial is increased. We present a new elicitation technique for settings where a delayed treatment effect is likely and show how to compute assurance using the elicited prior distributions. We provide an example to illustrate how this can be used in practice and develop open-source software to implement our methods. Our methodology has the potential to improve the success rate and efficiency of Phase III trials in immuno-oncology and for other treatments where a delayed treatment effect is expected to occur.
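The assurance calculation described above can be sketched by Monte Carlo simulation: draw the delay length and post-delay hazard ratio from elicited priors, simulate a trial under that scenario, and average the probability that a log-rank test favours treatment. This is a minimal illustration only, not the authors' software: the piecewise-exponential survival model, the particular Gamma and Beta priors, and the function names are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sim_arm(n, lam, delay, hr):
    # exponential event times with hazard lam; after `delay` the hazard
    # becomes lam * hr (hr = 1 recovers a plain exponential arm)
    t = rng.exponential(1.0 / lam, size=n)
    late = t > delay
    # memoryless property: rescale the residual time to hazard lam * hr
    t[late] = delay + (t[late] - delay) / hr
    return t

def logrank_z(t1, t2):
    # unadjusted two-sample log-rank statistic (no censoring)
    times = np.unique(np.concatenate([t1, t2]))
    o_minus_e, var = 0.0, 0.0
    for t in times:
        n1, n2 = (t1 >= t).sum(), (t2 >= t).sum()
        n = n1 + n2
        d1 = (t1 == t).sum()
        d = d1 + (t2 == t).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def assurance(n_per_arm=100, lam=np.log(2) / 12.0, n_sims=100):
    # average, over the elicited priors, of the probability that the
    # log-rank test favours treatment at the one-sided 2.5% level
    wins = 0
    for _ in range(n_sims):
        delay = rng.gamma(4.0, 1.0)       # illustrative prior on delay (months)
        hr = 0.3 + 0.5 * rng.beta(2, 2)   # illustrative prior on post-delay HR
        trt = sim_arm(n_per_arm, lam, delay, hr)
        ctl = sim_arm(n_per_arm, lam, delay, 1.0)
        wins += logrank_z(trt, ctl) < -1.96  # fewer treated events than expected
    return wins / n_sims
```

Unlike a conventional power calculation at a fixed alternative, the result averages power over the joint prior, so it directly reflects the parameter uncertainty the abstract emphasises.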
Subject(s)
Bayes Theorem , Research Design , Humans , Sample Size , Models, Statistical , Neoplasms/drug therapy , Neoplasms/therapy , Clinical Trials, Phase III as Topic/methods , Clinical Trials, Phase III as Topic/statistics & numerical data , Clinical Trials as Topic/methods , Computer Simulation , Antineoplastic Agents/therapeutic use , Time Factors , Survival Analysis , Treatment Delay
ABSTRACT
There are several steps to confirming the safety and efficacy of a new medicine. A sequence of trials, each with its own objectives, is usually required. Quantitative risk metrics can be useful for informing decisions about whether a medicine should transition from one stage of development to the next. To obtain an estimate of the probability of regulatory approval, pharmaceutical companies may start with industry-wide success rates and then apply subjective adjustments to these to reflect program-specific information. However, this approach lacks transparency and fails to make full use of data from previous clinical trials. We describe a quantitative Bayesian approach for calculating the probability of success (PoS) at the end of phase II which incorporates internal clinical data from one or more phase IIb studies, industry-wide success rates, and expert opinion or external data if needed. Using an example, we illustrate how PoS can be calculated accounting for differences between the phase II data and future phase III trials, and discuss how the methods can be extended to accommodate accelerated drug development pathways.
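The core of such a PoS calculation can be illustrated in its simplest conjugate form: update a normal prior for the treatment effect with the phase IIb estimate, then compute the predictive probability that a future phase III z-test is significant. This sketch assumes normal approximations throughout and omits the industry-wide success rates and phase II/III population adjustments that the full framework incorporates; the function name is invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def prob_of_success(theta2_hat, se2, se3, prior_mean=0.0, prior_sd=10.0):
    # normal-normal update: posterior for the true effect theta given the
    # phase IIb estimate theta2_hat with standard error se2
    post_prec = 1.0 / prior_sd**2 + 1.0 / se2**2
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_sd**2 + theta2_hat / se2**2)
    # predictive probability that the phase III estimate (standard error
    # se3) exceeds its one-sided significance threshold 1.96 * se3
    return norm.cdf((post_mean - 1.96 * se3) / np.sqrt(post_var + se3**2))
```

Because the predictive distribution adds the posterior variance to the phase III sampling variance, PoS is pulled towards 50% relative to a naive power calculation that plugs in the phase II point estimate.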
Subject(s)
Drug Development , Research Design , Bayes Theorem , Drug Development/methods , Humans , Probability
ABSTRACT
Pharmaceutical companies regularly need to make decisions about drug development programs based on the limited knowledge from early stage clinical trials. In this situation, eliciting the judgements of experts is an attractive approach for synthesising evidence on the unknown quantities of interest. When calculating the probability of success for a drug development program, multiple quantities of interest, such as the effect of a drug on different endpoints, should not be treated as unrelated. We discuss two approaches for establishing a multivariate distribution for several related quantities within the SHeffield ELicitation Framework (SHELF). The first approach elicits experts' judgements about a quantity of interest conditional on knowledge of another. For the second approach, we first elicit marginal distributions for each quantity of interest. Then, for each pair of quantities, we elicit the concordance probability that both lie on the same side of their respective elicited medians. This allows us to specify a copula to obtain the joint distribution of the quantities of interest. We show how these approaches were used in an elicitation workshop that was performed to assess the probability of success of the registrational program of an asthma drug. The judgements of the experts, which were obtained prior to completion of the pivotal studies, were well aligned with the final trial results.
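The concordance-probability step can be made concrete for a Gaussian copula: for a bivariate normal with both thresholds at the medians, P(same side) = 1/2 + arcsin(rho)/pi, which inverts in closed form. The sketch below, with illustrative marginals and invented function names, uses that identity to turn an elicited concordance probability into correlated samples from the joint distribution.

```python
import numpy as np
from scipy.stats import norm, gamma

def rho_from_concordance(p_same_side):
    # invert P(both on the same side of their medians) = 1/2 + arcsin(rho)/pi,
    # which holds for the Gaussian copula
    return np.sin(np.pi * (p_same_side - 0.5))

def sample_joint(marginal1, marginal2, p_same_side, n, seed=0):
    # draw correlated uniforms through the Gaussian copula, then map them
    # through the inverse CDFs of the two elicited marginal distributions
    rho = rho_from_concordance(p_same_side)
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = norm.cdf(z)
    return marginal1.ppf(u[:, 0]), marginal2.ppf(u[:, 1])

# e.g. a Gamma marginal for one quantity and a normal for the other
x, y = sample_joint(gamma(4, scale=0.5), norm(0.3, 0.1), 0.75, 20000, seed=1)
```

A concordance probability of 0.5 corresponds to independence (rho = 0) and 1.0 to perfect positive dependence, which gives experts an interpretable scale for the elicitation.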
Subject(s)
Asthma , Drug Development , Asthma/drug therapy , Humans , Pharmaceutical Preparations , Probability
ABSTRACT
BACKGROUND: Magnetic resonance imaging (MRI) of the heart is an established part of the investigation of cardiovascular conditions in children. In young children, sedation is likely to be needed, and multiple controlled periods of apnea are often required to allow image acquisition. Suppression of spontaneous ventilation is possible with remifentanil; however, the dose required is uncertain. AIMS: To establish the dose of remifentanil, by infusion, required to suppress ventilation sufficiently to allow a 30-s apnea during MRI of the heart. METHOD: Patients aged 1-6 years were exposed to different doses of remifentanil, and the success in achieving a 30-s apnea was recorded. A dose recommendation was made for each patient, informed by responses of previous patients using an adaptive Bayesian dose-escalation design. Other aspects of anesthesia were standardized. A final estimate of the dose needed to achieve a successful outcome in 80% of patients (ED80) was made using logistic regression. RESULTS: 38 patients were recruited, and apnea was achieved in 31 patients. The estimate of the ED80 was 0.184 µg/kg/min (95% CI 0.178-0.190). Post hoc analysis revealed that higher doses were required in younger patients. CONCLUSION: The ED80 for this indication was 0.184 µg/kg/min (95% CI 0.178-0.190). This is different from optimal dosing identified for other indications, and dosing of remifentanil should be specific to the clinical context in which it is used.
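The final estimation step described in this abstract — a logistic regression of apnea success on dose, with the ED80 read off the fitted curve — can be sketched as follows. The simulated data, dose-response parameters, and function name are invented for illustration and do not reproduce the trial's adaptive design or its actual data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

def estimate_ed(dose, success, target=0.80):
    # maximum-likelihood logistic fit: P(apnea achieved) = expit(a + b * dose)
    def nll(par):
        eta = par[0] + par[1] * dose
        # negative log-likelihood, written with logaddexp for stability
        return np.sum(np.logaddexp(0.0, eta)) - np.sum(success * eta)
    a, b = minimize(nll, x0=np.array([0.0, 1.0])).x
    return (logit(target) - a) / b  # dose giving the target success probability

# illustrative data: true curve expit(-10 + 60*dose), so true ED80 ~ 0.19
rng = np.random.default_rng(7)
dose = rng.uniform(0.10, 0.30, size=400)
success = rng.uniform(size=400) < expit(-10.0 + 60.0 * dose)
ed80 = estimate_ed(dose, success)
```

The logistic negative log-likelihood is convex, so the optimiser reliably finds the maximum-likelihood estimates from a neutral starting point.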
Subject(s)
Apnea , Propofol , Anesthesia, General , Anesthetics, Intravenous , Apnea/chemically induced , Bayes Theorem , Child , Child, Preschool , Humans , Infant , Magnetic Resonance Imaging , Piperidines , Remifentanil
ABSTRACT
BACKGROUND/AIMS: Dose-escalation studies are essential in the early stages of developing novel treatments, when the aim is to find a safe dose for administration in humans. Despite their great importance, many dose-escalation studies use study designs based on heuristic algorithms with well-documented drawbacks. Bayesian decision procedures provide a design alternative that is conceptually simple and methodologically sound, but very rarely used in practice, at least in part due to their perceived statistical complexity. There are currently very few easily accessible software implementations that would facilitate their application. METHODS: We have created MoDEsT, a free and easy-to-use web application for designing and conducting single-agent dose-escalation studies with a binary toxicity endpoint, where the objective is to estimate the maximum tolerated dose. MoDEsT uses a well-established Bayesian decision procedure based on logistic regression. The software has a user-friendly point-and-click interface, makes changes visible in real time, and automatically generates a range of graphs, tables, and reports. It is aimed at clinicians as well as statisticians with limited expertise in model-based dose-escalation designs, and does not require any statistical programming skills to evaluate the operating characteristics of, or implement, the Bayesian dose-escalation design. RESULTS: MoDEsT comes in two parts: a 'Design' module to explore design options and simulate their operating characteristics, and a 'Conduct' module to guide the dose-finding process throughout the study. We illustrate the practical use of both modules with data from a real phase I study in terminal cancer. CONCLUSION: Enabling both methodologists and clinicians to understand and apply model-based study designs with ease is a key factor towards their routine use in early-phase studies. 
We hope that MoDEsT will enable incorporation of Bayesian decision procedures for dose escalation at the earliest stage of clinical trial design, thus increasing their use in early-phase trials.
Subject(s)
Clinical Trials, Phase I as Topic , Maximum Tolerated Dose , Research Design , Software , Algorithms , Antioxidants/administration & dosage , Bayes Theorem , Dose-Response Relationship, Drug , Humans , Logistic Models , Neoplasms/drug therapy , Quercetin/administration & dosage , User-Computer Interface
ABSTRACT
Leveraging preclinical animal data for a phase I oncology trial is appealing yet challenging. In this paper, we use animal data to improve decision-making in a model-based dose-escalation procedure. We make a proposal for how to measure and address a prior-data conflict in a sequential study with a small sample size. Animal data are incorporated via a robust two-component mixture prior for the parameters of the human dose-toxicity relationship. The weights placed on each component of the prior are chosen empirically and updated dynamically as the trial progresses and more data accrue. After completion of each cohort, we use a Bayesian decision-theoretic approach to evaluate the predictive utility of the animal data for the observed human toxicity outcomes, reflecting the degree of agreement between dose-toxicity relationships in animals and humans. The proposed methodology is illustrated through several data examples and an extensive simulation study.
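The conjugate core of such a robust two-component mixture prior can be sketched for a single dose with a Beta-binomial model: the weight on the animal-informed component is updated by the prior predictive (marginal) likelihood of the observed human toxicities under each component. This is a simplification — the paper works with a dose-toxicity model and a decision-theoretic weighting — and the Beta parameters and function name below are placeholders.

```python
import numpy as np
from scipy.special import betaln

def update_mixture_weight(w, inf_ab, vag_ab, tox, n):
    # prior predictive likelihood of `tox` toxicities in `n` patients under
    # each Beta component; the binomial coefficient cancels in the ratio
    def marg(a, b):
        return np.exp(betaln(a + tox, b + n - tox) - betaln(a, b))
    m_inf, m_vag = marg(*inf_ab), marg(*vag_ab)
    w_new = w * m_inf / (w * m_inf + (1.0 - w) * m_vag)
    post_inf = (inf_ab[0] + tox, inf_ab[1] + n - tox)
    post_vag = (vag_ab[0] + tox, vag_ab[1] + n - tox)
    # posterior is a mixture of the two updated Betas with weight w_new
    return w_new, post_inf, post_vag
```

Data that agree with the animal-informed component increase its weight; a prior-data conflict shifts weight to the vague component, so the prior "robustifies" itself as the trial accrues.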
Subject(s)
Clinical Trials, Phase I as Topic , Drug Evaluation, Preclinical , Neoplasms , Research Design , Animals , Bayes Theorem , Computer Simulation , Humans , Neoplasms/drug therapy , Sample Size
ABSTRACT
BACKGROUND: Performing well-powered randomised controlled trials (RCTs) of new treatments for rare diseases is often infeasible. However, with the increasing availability of historical data, incorporating existing information into trials with small sample sizes is appealing in order to increase the power. Bayesian approaches enable one to incorporate historical data into a trial's analysis through a prior distribution. METHODS: Motivated by an RCT intended to evaluate the impact on event-free survival of mifamurtide in patients with osteosarcoma, we performed a simulation study to evaluate the impact on trial operating characteristics of incorporating historical individual control data and aggregate treatment effect estimates. We used power priors derived from historical individual control data for baseline parameters of Weibull and piecewise exponential models, while we used a mixture prior to summarise aggregate information obtained on the relative treatment effect. The impact of prior-data conflicts, both with respect to the parameters and survival models, was evaluated for a set of pre-specified weights assigned to the historical information in the prior distributions. RESULTS: The operating characteristics varied according to the weights assigned to each source of historical information, the variance of the informative and vague component of the mixture prior and the level of commensurability between the historical and new data. When historical and new controls follow different survival distributions, we did not observe any advantage of choosing a piecewise exponential model compared to a Weibull model for the new trial analysis. However, we think that the piecewise exponential model remains appealing given the uncertainty that will often surround the shape of the survival distribution of the new data.
CONCLUSION: In the setting of the Sarcome-13 trial, and other similar studies in rare diseases, the gains in power and accuracy made possible by incorporating different types of historical information commensurate with the new trial data have to be balanced against the risk of biased estimates and a possible loss in power if the data are not commensurate. The weights allocated to the historical data have to be chosen carefully based on this trade-off. Further simulation studies investigating methods for incorporating historical data are required to generalise the findings.
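The power-prior mechanics are easiest to see in the simplest conjugate case: an exponential event-rate model with a Gamma initial prior, where raising the historical likelihood to a power a0 in [0, 1] simply discounts the historical events and exposure time by a0. The Weibull and piecewise exponential models used in the study require numerical methods, so the sketch below, with an invented function name, is illustrative only.

```python
def power_prior_update(d_hist, t_hist, a0, d_new, t_new, a=0.001, b=0.001):
    # Exponential model: historical likelihood lam^d_hist * exp(-lam * t_hist),
    # raised to the power a0 and combined with an initial Gamma(a, b) prior
    # and the new-trial data (d events over total follow-up t)
    shape = a + a0 * d_hist + d_new
    rate = b + a0 * t_hist + t_new
    return shape, rate  # Gamma(shape, rate) posterior for the hazard lam
```

With a0 = 0 the historical controls are ignored; with a0 = 1 they are pooled with the new data; intermediate weights trade off the power gain against the bias risk discussed in the conclusion.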
Subject(s)
Bayes Theorem , Computer Simulation , Randomized Controlled Trials as Topic/methods , Research Design , Acetylmuramyl-Alanyl-Isoglutamine/analogs & derivatives , Acetylmuramyl-Alanyl-Isoglutamine/therapeutic use , Adjuvants, Immunologic/therapeutic use , Algorithms , Control Groups , Humans , Models, Theoretical , Osteosarcoma/drug therapy , Phosphatidylethanolamines/therapeutic use , Sample Size
ABSTRACT
BACKGROUND: The Notch pathway is frequently activated in cancer. Pathway inhibition by γ-secretase inhibitors has been shown to be effective in pre-clinical models of pancreatic cancer, in combination with gemcitabine. METHODS: A multi-centre, non-randomised Bayesian adaptive design study of MK-0752, administered per os weekly, in combination with gemcitabine administered intravenously on days 1, 8 and 15 (28-day cycle) at 800 or 1000 mg/m², was performed to determine the safety of combination treatment and the recommended phase 2 dose (RP2D). Secondary and tertiary objectives included tumour response, plasma and tumour MK-0752 concentration, and inhibition of the Notch pathway in hair follicles and tumour. RESULTS: Overall, 44 eligible patients (performance status 0 or 1 with adequate organ function) received gemcitabine and MK-0752 as first or second line treatment for pancreatic cancer. RP2Ds of MK-0752 and gemcitabine as single agents could be combined safely. The Bayesian algorithm allowed further dose escalation, but pharmacokinetic analysis showed no increase in MK-0752 AUC (area under the curve) beyond 1800 mg once weekly. Tumour response evaluation was available in 19 patients; 13 achieved stable disease and 1 patient achieved a confirmed partial response. CONCLUSIONS: Gemcitabine and a γ-secretase inhibitor (MK-0752) can be combined at their full, single-agent RP2Ds.
Subject(s)
Antineoplastic Combined Chemotherapy Protocols/administration & dosage , Antineoplastic Combined Chemotherapy Protocols/adverse effects , Carcinoma, Pancreatic Ductal/drug therapy , Pancreatic Neoplasms/drug therapy , Adult , Aged , Amyloid Precursor Protein Secretases/antagonists & inhibitors , Antineoplastic Combined Chemotherapy Protocols/pharmacokinetics , Bayes Theorem , Benzene Derivatives/administration & dosage , Benzene Derivatives/adverse effects , Benzene Derivatives/pharmacokinetics , Carcinoma, Pancreatic Ductal/metabolism , Deoxycytidine/administration & dosage , Deoxycytidine/adverse effects , Deoxycytidine/analogs & derivatives , Deoxycytidine/pharmacokinetics , Drug Administration Schedule , Female , Humans , Infusions, Intravenous , Male , Middle Aged , Pancreatic Neoplasms/metabolism , Propionates/administration & dosage , Propionates/adverse effects , Propionates/pharmacokinetics , Receptors, Notch/antagonists & inhibitors , Receptors, Notch/metabolism , Signal Transduction/drug effects , Sulfones/administration & dosage , Sulfones/adverse effects , Sulfones/pharmacokinetics , Gemcitabine
ABSTRACT
Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial's course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label to refer to certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants. 
Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.
Subject(s)
Clinical Trials as Topic/methods , Research Design/standards , Humans , Reproducibility of Results
ABSTRACT
Extrapolating from information available on one patient group to support conclusions about another is common in clinical research. For example, the findings of clinical trials, often conducted in highly selective patient cohorts, are routinely extrapolated to wider populations by policy makers. Meanwhile, the results of adult trials may be used to support conclusions about the effects of a medicine in children. For example, if the effective concentration of a drug can be assumed to be similar in adults and children, an appropriate paediatric dosing rule may be found by 'bridging', that is, by matching the adult effective concentration. However, this strategy may result in children receiving an ineffective or hazardous dose if, in fact, effective concentrations differ between adults and children. When there is uncertainty about the equality of effective concentrations, some pharmacokinetic-pharmacodynamic data may be needed in children to verify that differences are small. In this paper, we derive optimal group sequential tests that can be used to verify this assumption efficiently. Asymmetric inner wedge tests are constructed that permit early stopping to accept or reject an assumption of similar effective drug concentrations in adults and children. Asymmetry arises because the consequences of under- and over-dosing may differ. We show how confidence intervals can be obtained on termination of these tests and illustrate the small sample operating characteristics of designs using simulation. Copyright © 2016 John Wiley & Sons, Ltd.
Subject(s)
Age Factors , Drug Dosage Calculations , Statistics as Topic/methods , Adult , Child , Dose-Response Relationship, Drug , Humans , Pharmaceutical Preparations/administration & dosage , Pharmacokinetics , Treatment Outcome
ABSTRACT
Background: Bayesian statistics are an appealing alternative to the traditional frequentist approach to designing, analysing, and reporting of clinical trials, especially in rare diseases. Time-to-event endpoints are widely used in many medical fields. There are additional complexities to designing Bayesian survival trials which arise from the need to specify a model for the survival distribution. The objective of this article was to critically review the use and reporting of Bayesian methods in survival trials. Methods: A systematic review of clinical trials using Bayesian survival analyses was performed through PubMed and Web of Science databases. This was complemented by a full text search of the online repositories of pre-selected journals. Cost-effectiveness, dose-finding studies, meta-analyses, and methodological papers using clinical trials were excluded. Results: In total, 28 articles met the inclusion criteria, 25 were original reports of clinical trials and 3 were re-analyses of a clinical trial. Most trials were in oncology (n = 25), were randomised controlled (n = 21) phase III trials (n = 13), and half considered a rare disease (n = 13). Bayesian approaches were used for monitoring in 14 trials and for the final analysis only in 14 trials. In the latter case, Bayesian survival analyses were used for the primary analysis in four cases, for the secondary analysis in seven cases, and for the trial re-analysis in three cases. Overall, 12 articles reported fitting Bayesian regression models (semi-parametric, n = 3; parametric, n = 9). Prior distributions were often incompletely reported: 20 articles did not define the prior distribution used for the parameter of interest. Over half of the trials used only non-informative priors for monitoring and the final analysis (n = 12) when it was specified. Indeed, no articles fitting Bayesian regression models placed informative priors on the parameter of interest. 
The prior for the treatment effect was based on historical data in only four trials. Decision rules were pre-defined in eight cases when trials used Bayesian monitoring, and in only one case when trials adopted a Bayesian approach to the final analysis. Conclusion: Few trials implemented a Bayesian survival analysis and few incorporated external data into priors. There is scope to improve the quality of reporting of Bayesian methods in survival trials. Extension of the Consolidated Standards of Reporting Trials statement for reporting Bayesian clinical trials is recommended.
Subject(s)
Bayes Theorem , Clinical Trials as Topic , Survival Analysis , Clinical Trials, Phase III as Topic , Humans , Neoplasms/therapy , Randomized Controlled Trials as Topic , Statistics as Topic
ABSTRACT
Multi-arm clinical trials that compare several active treatments to a common control have been proposed as an efficient means of making an informed decision about which of several treatments should be evaluated further in a confirmatory study. Additional efficiency is gained by incorporating interim analyses and, in particular, seamless Phase II/III designs have been the focus of recent research. Common to much of this work is the constraint that selection and formal testing should be based on a single efficacy endpoint, despite the fact that in practice, safety considerations will often play a central role in determining selection decisions. Here, we develop a multi-arm multi-stage design for a trial with an efficacy and safety endpoint. The safety endpoint is explicitly considered in the formulation of the problem, selection of experimental arm and hypothesis testing. The design extends group-sequential ideas and considers the scenario where a minimal safety requirement is to be fulfilled and the treatment yielding the best combined safety and efficacy trade-off satisfying this constraint is selected for further testing. The treatment with the best trade-off is selected at the first interim analysis, while the whole trial may comprise J analyses. We show that the design controls the familywise error rate in the strong sense and illustrate the method through an example and simulation. We find that the design is robust to misspecification of the correlation between the endpoints and requires similar numbers of subjects to a trial based on efficacy alone for moderately correlated endpoints.
Subject(s)
Clinical Trials, Phase III as Topic , Models, Statistical , Research Design , Angiotensin II Type 1 Receptor Blockers/therapeutic use , Benzimidazoles/therapeutic use , Benzoates/therapeutic use , Computer Simulation , Decision Making , Endpoint Determination , HIV Seropositivity , Humans , Insulin Resistance , Patient Safety , Sample Size , Telmisartan
ABSTRACT
We consider seamless phase II/III clinical trials that compare K treatments with a common control in phase II then test the most promising treatment against control in phase III. The final hypothesis test for the selected treatment can use data from both phases, subject to controlling the familywise type I error rate. We show that the choice of method for conducting the final hypothesis test has a substantial impact on the power to demonstrate that an effective treatment is superior to control. To understand these differences in power, we derive decision rules maximizing power for particular configurations of treatment effects. A rule with such an optimal frequentist property is found as the solution to a multivariate Bayes decision problem. The optimal rules that we derive depend on the assumed configuration of treatment means. However, we are able to identify two decision rules with robust efficiency: a rule using a weighted average of the phase II and phase III data on the selected treatment and control, and a closed testing procedure using an inverse normal combination rule and a Dunnett test for intersection hypotheses. For the first of these rules, we find the optimal division of a given total sample size between phases II and III. We also assess the value of using phase II data in the final analysis and find that for many plausible scenarios, between 50% and 70% of the phase II numbers on the selected treatment and control would need to be added to the phase III sample size in order to achieve the same increase in power. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
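The inverse normal combination rule mentioned above has a compact form: stage-wise one-sided p-values are mapped to z-scores and combined with pre-specified weights w1, w2 satisfying w1² + w2² = 1, which preserves the type I error rate regardless of the selection at the interim. The sketch below, with an invented function name, takes the weights proportional to the square roots of the planned stage sample-size fractions, one common choice.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_p(p1, p2, n1, n2):
    # pre-specified weights from the planned stage sample-size fractions;
    # they must be fixed in advance, not recomputed from observed data
    w1 = np.sqrt(n1 / (n1 + n2))
    w2 = np.sqrt(n2 / (n1 + n2))
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)  # w1**2 + w2**2 == 1
    return norm.sf(z)  # combined one-sided p-value
```

In the closed testing procedure of the paper this combined p-value would be computed for each intersection hypothesis, with the stage-one p-values supplied by a Dunnett test.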
Subject(s)
Clinical Trials, Phase II as Topic/statistics & numerical data , Clinical Trials, Phase III as Topic/statistics & numerical data , Data Interpretation, Statistical , Decision Support Techniques , Research Design , Bayes Theorem , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/standards , Clinical Trials, Phase III as Topic/methods , Clinical Trials, Phase III as Topic/standards , Humans , Sample Size
ABSTRACT
This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile.
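Because the sample sizes are so small, the design properties described above can be computed by enumerating every possible pair of success counts. The sketch below, under assumed names and placeholder Beta priors standing in for the elicited ones, computes the prior probability that the trial ends with a posterior recommendation of the experimental treatment, which is the quantity one would compare across allocation ratios.

```python
import numpy as np
from scipy.stats import betabinom

def prob_trial_recommends(n_e, n_c, prior_e, prior_c,
                          margin=0.0, threshold=0.9, m=1000, seed=3):
    # Enumerate every (successes_e, successes_c) outcome, decide whether the
    # posterior supports recommending the experimental treatment (within a
    # non-inferiority margin), and weight each outcome by its prior
    # predictive (beta-binomial) probability
    rng = np.random.default_rng(seed)
    (a_e, b_e), (a_c, b_c) = prior_e, prior_c
    total = 0.0
    for s_e in range(n_e + 1):
        for s_c in range(n_c + 1):
            # posterior P(p_e > p_c - margin) by Monte Carlo on the Betas
            pe = rng.beta(a_e + s_e, b_e + n_e - s_e, size=m)
            pc = rng.beta(a_c + s_c, b_c + n_c - s_c, size=m)
            if np.mean(pe > pc - margin) > threshold:
                total += (betabinom.pmf(s_e, n_e, a_e, b_e)
                          * betabinom.pmf(s_c, n_c, a_c, b_c))
    return total
```

Repeating the calculation for different splits of a fixed total sample size between the two arms identifies the allocation ratio maximising this probability, in the spirit of the abstract's design comparison.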
Subject(s)
Bayes Theorem , Clinical Trials as Topic/methods , Models, Statistical , Randomized Controlled Trials as Topic/methods , Rare Diseases/therapy , Child , Humans , Mycophenolic Acid/analogs & derivatives , Mycophenolic Acid/therapeutic use , Polyarteritis Nodosa/drug therapy , Remission Induction , Research Design , Sample Size , Treatment Outcome
ABSTRACT
AIMS: In the EU, development of new medicines for children should follow a prospectively agreed paediatric investigation plan (PIP). Finding the right dose for children is crucial but challenging due to the variability of pharmacokinetics across age groups and the limited sample sizes available. We examined strategies adopted in PIPs to support paediatric dosing recommendations to identify common assumptions underlying dose investigations and the attempts planned to verify them in children. METHODS: We extracted data from 73 PIP opinions recently adopted by the Paediatric Committee of the European Medicines Agency. These opinions represented 79 medicinal development programmes and comprised a total of 97 dose investigation studies. We identified the design of these dose investigation studies, recorded the analyses planned and determined the criteria used to define target doses. RESULTS: Most dose investigation studies are clinical trials (83 of 97) that evaluate a single dosing rule. Sample sizes used to investigate dose are highly variable across programmes, with smaller numbers used in younger children (< 2 years). Many studies (40 of 97) do not pre-specify a target dose criterion. Of those that do, most (33 of 57 studies) guide decisions using pharmacokinetic data alone. CONCLUSIONS: Common assumptions underlying dose investigation strategies include dose proportionality and similar exposure-response relationships in adults and children. Few development programmes pre-specify steps to verify assumptions in children. There is scope for the use of Bayesian methods as a framework for synthesizing existing information to quantify prior uncertainty about assumptions. This process can inform the design of optimal drug development strategies.
Subject(s)
Pediatrics , Prescription Drugs/administration & dosage , Child , Clinical Trials as Topic , Drug Discovery , Humans , Pharmacokinetics , Sample Size
ABSTRACT
Aim: To contextualize the effectiveness of tisagenlecleucel versus real-world standard of care (SoC) in relapsed/refractory follicular lymphoma. Materials & methods: A retrospective indirect matched comparison study using data from the phase II ELARA trial and the US Flatiron Health Research Database. Results: The complete response rate was 69.1% versus 17.7% and the overall response rate was 85.6% versus 58.1% for tisagenlecleucel versus SoC, after weighting by odds. For overall survival, an estimated reduction in the risk of death was observed in favor of tisagenlecleucel over SoC. The hazard ratio for progression-free survival was 0.45 (95% CI: 0.26, 0.88), and for time-to-next treatment was 0.34 (95% CI: 0.15, 0.78) with tisagenlecleucel versus SoC. Conclusion: A consistent trend toward improved efficacy end points was observed in favor of tisagenlecleucel versus SoC.
Subject(s)
Lymphoma, Follicular , Humans , Lymphoma, Follicular/therapy , Retrospective Studies , Standard of Care , Neoplasm Recurrence, Local
ABSTRACT
In randomised controlled trials, the effect of treatment on those who comply with allocation to active treatment can be estimated by comparing their outcome to those in the comparison group who would have complied with active treatment had they been allocated to it. We compare three estimators of the causal effect of treatment on compliers when this is a parameter in a proportional hazards model and quantify the bias due to omitting baseline prognostic factors. Causal estimates are found directly by maximising a novel partial likelihood; based on a structural proportional hazards model; and based on a 'corrected dataset' derived after fitting a rank-preserving structural failure time model. Where necessary, we extend these methods to incorporate baseline covariates. Comparisons use simulated data and a real data example. Analysing the simulated data, we found that all three methods are accurate when an important covariate was included in the proportional hazards model (maximum bias 5.4%). However, failure to adjust for this prognostic factor meant that causal treatment effects were underestimated (maximum bias 11.4%), because estimators were based on a misspecified marginal proportional hazards model. Analysing the real data example, we found that adjusting causal estimators is important to correct for residual imbalances in prognostic factors present between trial arms after randomisation. Our results show that methods of estimating causal treatment effects for time-to-event outcomes should be extended to incorporate covariates, thus providing an informative complement to the corresponding intention-to-treat analysis.
Subject(s)
Randomized Controlled Trials as Topic/statistics & numerical data , Bias , Biostatistics , Causality , Humans , Likelihood Functions , Male , Myocardial Infarction/diet therapy , Myocardial Infarction/mortality , Patient Compliance/statistics & numerical data , Prognosis , Proportional Hazards Models , Prostatic Neoplasms/mortality , Prostatic Neoplasms/surgery , Secondary Prevention , Time Factors , Treatment Outcome , Watchful Waiting
ABSTRACT
The point at which clinical development programs transition from early phase to pivotal trials is a critical milestone. Substantial uncertainty about the outcome of pivotal trials may remain even after seeing positive early phase data, and companies may need to make difficult prioritization decisions for their portfolio. The probability of success (PoS) of a program, a single number, expressed as a percentage, reflecting the multitude of risks that may influence the final program outcome, is a key decision-making tool. Despite its importance, companies often rely on crude industry benchmarks that may be "adjusted" by experts based on undocumented criteria and that are typically misaligned with the definition of success used to drive commercial forecasts, leading to overly optimistic expected net present value calculations. We developed a new framework to assess the PoS of a program before pivotal trials begin. Our definition of success encompasses the successful outcome of pivotal trials, regulatory approval and meeting the requirements for market access as outlined in the target product profile. The proposed approach is organized in four steps and uses an innovative Bayesian approach to synthesize all relevant evidence. The new PoS framework is systematic and transparent. It will help organizations to make more informed decisions. In this paper, we outline the rationale and elaborate on the structure of the proposed framework, provide examples, and discuss the benefits and challenges associated with its adoption.
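The core of a Bayesian PoS calculation — averaging the probability of a successful pivotal trial over the posterior for the treatment effect — has a closed form under a simple normal-normal working model. A minimal sketch of that one step (the paper's four-step framework is far richer; all inputs below are hypothetical):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pos_normal(post_mean, post_var, se_ph3, z_alpha=1.96):
    """Probability that a phase III z-test succeeds, averaged over the
    posterior for the true effect.

    Assumes: effect theta ~ N(post_mean, post_var) after phase II, and the
    phase III estimate ~ N(theta, se_ph3**2); success means estimate/se_ph3
    exceeds z_alpha. Marginally the estimate is N(post_mean,
    post_var + se_ph3**2), giving the closed form below.
    """
    return phi((post_mean - z_alpha * se_ph3)
               / math.sqrt(post_var + se_ph3 ** 2))

# hypothetical inputs: posterior mean 0.4, posterior variance 0.04,
# phase III standard error 0.1
pos = pos_normal(0.4, 0.04, 0.1)
```

Note that as the posterior variance shrinks to zero this reduces to a conventional power calculation at the posterior mean; with nonzero variance the PoS is pulled toward the prior probability of success, which is what distinguishes it from a naive power-based projection.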
Subject(s)
Bayes Theorem , Humans , Probability , Uncertainty
ABSTRACT
In this paper, we develop a general Bayesian hierarchical model for bridging across patient subgroups in phase I oncology trials, for which preliminary information about the dose-toxicity relationship can be drawn from animal studies. Parameters that re-scale the doses to adjust for intrinsic differences in toxicity, either between animals and humans or between human subgroups, are introduced to each dose-toxicity model. Appropriate priors are specified for these scaling parameters, which capture the magnitude of uncertainty surrounding the animal-to-human translation and bridging assumption. After mapping data onto a common, 'average' human dosing scale, human dose-toxicity parameters are assumed to be exchangeable either with the standardised, animal study-specific parameters, or between themselves across human subgroups. Random-effects distributions are distinguished by different covariance matrices that reflect the between-study heterogeneity in animals and humans. The possibility of non-exchangeability is allowed for, so that inferences for extreme subgroups are not overly influenced by their complementary data. We illustrate the proposed approach with hypothetical examples, and use simulation to compare the operating characteristics of trials analysed using our Bayesian model with several alternatives. Numerical results show that the proposed approach yields robust inferences, even when data from multiple sources are inconsistent and/or the bridging assumptions are incorrect.
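The exchangeable/non-exchangeable weighting can be sketched, under a normal approximation, as a two-component mixture whose weights are updated by the marginal likelihood of each component. A toy illustration, not the paper's full hierarchical dose-toxicity model — all component means, variances and data values are hypothetical:

```python
import math

def norm_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def posterior_ex_weight(y, se, prior_w_ex, ex_mean, ex_var, nex_mean, nex_var):
    """Posterior probability that a subgroup's parameter is exchangeable.

    y, se: subgroup estimate and its standard error (normal approximation).
    (ex_mean, ex_var): exchangeable component, e.g. informed by rescaled
    animal or complementary-subgroup data.
    (nex_mean, nex_var): weakly informative robust (non-exchangeable) component.
    """
    marg_ex = norm_pdf(y, ex_mean, ex_var + se ** 2)    # marginal under EX
    marg_nex = norm_pdf(y, nex_mean, nex_var + se ** 2)  # marginal under NEX
    num = prior_w_ex * marg_ex
    return num / (num + (1.0 - prior_w_ex) * marg_nex)

# a subgroup estimate consistent with the shared component keeps high
# exchangeability weight; an outlying estimate shifts mass to the robust
# component, shielding it from being pulled toward the other subgroups
w_consistent = posterior_ex_weight(0.0, 0.5, 0.8, 0.0, 0.25, 0.0, 4.0)
w_outlying   = posterior_ex_weight(3.0, 0.5, 0.8, 0.0, 0.25, 0.0, 4.0)
```

This is the mechanism by which inferences for extreme subgroups avoid being dominated by their complementary data: when a subgroup's own data conflict with the shared component, the mixture weight moves to the robust component and borrowing is switched off adaptively.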