ABSTRACT
Individual probabilistic assessments of the risk of cancer, primary or secondary, will not be understood by most patients. That is the essence of our argument in this paper. Greater understanding can be achieved by extensive, intensive, and detailed counseling. But since probability itself is a concept that easily escapes everyday intuition (consider the famous Monty Hall paradox), it would also be wise to advise patients, and potential patients, not to put undue weight on any probabilistic assessment. Such assessments can be of value to the epidemiologist investigating different potential etiologies of cancer evolution, or to the clinical trialist as a way to maximize design efficiency. But for an ordinary individual, we cannot anticipate that these assessments will be correctly interpreted.
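As an illustrative aside, not part of the paper itself, the Monty Hall result referenced above can be checked with a short simulation. The sketch below is a minimal Python illustration with invented names; it simply confirms that switching doors wins about two thirds of the time.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Simulate one Monty Hall game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that hides a goat and was not picked.
    host_opens = random.choice([d for d in doors if d != first_pick and d != car])
    if switch:
        final_pick = next(d for d in doors if d not in (first_pick, host_opens))
    else:
        final_pick = first_pick
    return final_pick == car

n = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(n))
    print(f"switch={switch}: win rate = {wins / n:.3f}")  # about 1/3 stay, about 2/3 switch
```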
Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/epidemiology, Breast Neoplasms/genetics, Probability, Risk Assessment
ABSTRACT
We investigate a statistical framework for Phase I clinical trials that test the safety of two or more agents in combination. For such studies, the traditional assumption of a simple monotonic relation between dose and the probability of an adverse event no longer holds. Nonetheless, the dose-toxicity (adverse event) relationship will obey an assumption of partial ordering, in that there will be pairs of combinations for which the ordering of the toxicity probabilities is known. Some authors have considered how best to estimate the maximum tolerated dose (a dose providing a rate of toxicity as close as possible to some target rate) in this setting. A related, and equally interesting, problem is to partition the two-dimensional dose space into two sub-regions: doses with probabilities of toxicity lower and greater than the target. We carry out a detailed investigation of this problem. The theoretical framework is the recently presented semiparametric dose-finding method. This results in a number of proposals, one of which can be viewed as an extension of the Product of Independent beta Probabilities Escalation (PIPE) method. We derive useful asymptotic properties which also apply to the PIPE method when seen as a special case of the more general method given here. Simulation studies provide added confidence concerning the good behaviour of the operating characteristics.
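To make the partitioning idea concrete, the sketch below is a simplified, hypothetical illustration: independent Beta posteriors are placed on each combination and the grid is split by comparing pointwise posterior medians to the target. The grid size, prior, and counts are invented, and the sketch deliberately omits the monotone-contour constraint that the actual PIPE method enforces.

```python
import numpy as np
from scipy import stats

# Hypothetical 3 x 4 grid of dose combinations (agent A rows, agent B columns).
target = 0.25
prior_a = np.full((3, 4), 0.5)        # Beta(0.5, 0.5) prior on each combination (assumed)
prior_b = np.full((3, 4), 0.5)
tox = np.array([[0, 1, 1, 2],         # invented DLT counts per combination
                [1, 1, 2, 3],
                [1, 2, 3, 4]])
n   = np.full((3, 4), 6)              # invented number of patients per combination

post_a = prior_a + tox
post_b = prior_b + n - tox
post_median = stats.beta.median(post_a, post_b)   # pointwise posterior medians
above_target = post_median > target               # crude partition of the grid
print(np.round(post_median, 2))
print(above_target)
```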
ABSTRACT
The aims of Phase 1 trials in oncology have broadened considerably, from simply demonstrating that the agent or regimen of interest is well tolerated in a relatively heterogeneous patient population, to addressing multiple objectives under the heading of early-phase trials. Where possible, these trials also aim to obtain reliable evidence of clinical activity that can lead to drug approval via the Accelerated Approval pathway or Breakthrough Therapy designation in cases where the tumours are rare, the prognosis is poor, or there is an unmet therapeutic need. Constructing a Phase 1 design that can address multiple objectives within the context of a single trial is not simple. Randomisation can play an important role, but carrying out such randomisation according to the principles of equipoise is a significant challenge in the Phase 1 setting. If the emerging data are not sufficient to address the aims definitively early on, then a proper design can reduce biases, enhance interpretability, and maximise information so that the Phase 1 data can be more compelling. This article outlines objectives and design considerations that need to be adhered to in order to respect the ethical and scientific principles required for research in human subjects in early-phase clinical trials.
Subjects
Clinical Trials, Phase I as Topic/methods, Neoplasms/drug therapy, Randomized Controlled Trials as Topic/methods, Bias, Drug Approval, Humans, Neoplasms/metabolism, Prognosis, Treatment Outcome
ABSTRACT
Little has been published in terms of dose-finding methodology in virology. Aside from a few papers focusing on HIV, the considerable progress in dose-finding methodology over the last 25 years has focused almost entirely on oncology. While adverse reactions to cytotoxic drugs may be life threatening, for anti-viral agents we anticipate something different: side effects that provoke the cessation of treatment. This would correspond to treatment failure. On the other hand, success would not be yes/no but would correspond to a range of responses, from a small reduction in viral load (say, no more than 20%) to the complete elimination of the virus. Less than total success still matters, since a partial reduction may allow the patient to achieve immune-mediated clearance. The motivation for this article is an upcoming dose-finding trial in chronic norovirus infection. We propose a novel methodology whose goal is twofold: first, to identify the dose that provides the most favorable distribution of treatment outcomes and, second, to do so in a way that maximizes the treatment benefit for the patients included in the study.
Subjects
Antiviral Agents/administration & dosage, Clinical Trials as Topic/statistics & numerical data, Virus Diseases/drug therapy, Dose-Response Relationship, Drug, Drug-Related Side Effects and Adverse Reactions, Humans, Maximum Tolerated Dose, Research Design
ABSTRACT
This paper studies the notion of coherence in interval-based dose-finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose-limiting toxicity or (b) a recommendation to de-escalate the dose following a non-dose-limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval (BOIN) method and the Keyboard method are not coherent. We generated dose-limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose-limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose-toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherence is an important principle in the conduct of dose-finding trials. Interval-based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose-assignment behavior of interval-based methods when using them to plan dose-finding studies.
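The definition of an incoherent decision lends itself to a simple audit of any escalation path. The sketch below, using an invented dose and DLT sequence, is one such illustrative check for cohorts of size 1; it is not the simulation code used in the paper.

```python
def count_incoherent(doses, dlts):
    """Count incoherent decisions in a cohort-of-one escalation path.

    doses : dose level assigned to patient i, in order of entry
    dlts  : 1 if patient i had a dose-limiting toxicity, else 0
    Incoherent: escalation straight after a DLT, or de-escalation straight after a non-DLT.
    """
    incoherent = 0
    for prev_dose, prev_dlt, next_dose in zip(doses, dlts, doses[1:]):
        if prev_dlt == 1 and next_dose > prev_dose:
            incoherent += 1      # escalation after a toxicity
        elif prev_dlt == 0 and next_dose < prev_dose:
            incoherent += 1      # de-escalation after a non-toxicity
    return incoherent

# Invented example sequence for illustration only.
doses = [1, 2, 3, 4, 3]
dlts  = [0, 0, 1, 0, 1]
print(count_incoherent(doses, dlts))   # 2: one escalation after a DLT, one de-escalation after a non-DLT
```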
Subjects
Clinical Trials as Topic/methods, Computer Simulation, Maximum Tolerated Dose, Bayes Theorem, Clinical Trials as Topic/statistics & numerical data, Computer Simulation/statistics & numerical data, Dose-Response Relationship, Drug, Humans
ABSTRACT
Survival model construction can be guided by goodness-of-fit techniques as well as by measures of predictive strength. Here, we aim to bring these distinct techniques together within a single framework. The goal is to best characterize and code the effects of the variables, in particular time dependencies, whether taken singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication of goodness-of-fit and, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques are intuitive, and this intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can help identify models closest to the unknown non-proportional hazards mechanism that we can suppose generates the observations. Illustrations from studies in breast cancer show how these tools can help guide the practical problem of efficient model construction for survival data.
Subjects
Models, Statistical, Breast Neoplasms, Humans, Proportional Hazards Models
ABSTRACT
One aspect of an analysis of survival data based on the proportional hazards model that has been receiving increasing attention is the predictive ability, or explained variation, of the model. A number of contending measures have been suggested, including one measure, R²(β), proposed for its several desirable properties, including its capacity to accommodate time-dependent covariates, a major feature of the model and one that gives rise to great generality. A thorough study of the properties of available measures, including the aforementioned measure, has been carried out recently. In that work, the authors used bootstrap techniques, particularly complex in the setting of censored data, in order to obtain estimates of precision. The motivation of this work is to provide analytical expressions of precision, in particular confidence interval estimates for R²(β). We use Taylor series approximations with and without local linearizing transforms. We also consider a very simple expression based on Fisher's transformation. This latter approach has two great advantages: it is very easy and quick to calculate, and it can be obtained for any of the methods given in the recent review. A large simulation study is carried out to investigate the properties of the different methods. Finally, three well-known datasets in breast cancer, lymphoma and lung cancer research are given as illustrations. Copyright © 2017 John Wiley & Sons, Ltd.
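As an illustration of the general idea rather than of the paper's exact variance expression, the sketch below forms a Fisher-transform interval by treating the square root of an explained-variation measure like a correlation coefficient and using the textbook standard error 1/sqrt(n-3). The numbers and the standard-error choice are assumptions made purely for this sketch.

```python
import math

def fisher_ci_r2(r2: float, n: int, z_crit: float = 1.96) -> tuple:
    """Illustrative Fisher-transform interval for an explained-variation measure.

    Treats r = sqrt(r2) like a correlation coefficient, applies z = atanh(r),
    uses the textbook standard error 1/sqrt(n - 3), then back-transforms and squares.
    """
    r = math.sqrt(r2)
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
    return lo ** 2, hi ** 2

print(fisher_ci_r2(0.30, n=200))   # roughly (0.20, 0.41) for these invented inputs
```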
Subjects
Proportional Hazards Models, Survival Analysis, Biostatistics, Computer Simulation, Confidence Intervals, Humans, Models, Statistical, Neoplasms/mortality, Prognosis
ABSTRACT
A relatively recent development in the design of Phase I dose-finding studies is the inclusion of expansion cohort(s), that is, the inclusion of several more patients at a level considered to be the maximum tolerated dose established at the conclusion of the 'pure' Phase I part. Little attention has been given to the additional statistical analysis, including design considerations, that we might wish to consider for this more involved design. For instance, how can we best make use of new information that may confirm, or may tend to contradict, the estimate of the maximum tolerated dose based on the dose escalation phase? Patients included during the dose expansion phase may be subject to different eligibility criteria. During the expansion phase, we will also wish to keep an eye on any evidence of efficacy, an aspect that clearly distinguishes such studies from the classical Phase I study. Here, we present a methodology that enables us to continue the monitoring of safety in the dose expansion cohort while simultaneously trying to assess efficacy and, in particular, which disease types may be the most promising to take forward for further study. The most elementary problem is where we only wish to take account of further toxicity information obtained during the dose expansion cohort, and where the initial design was model based or the standard 3+3. More complex set-ups also involve efficacy and the presence of subgroups. Copyright © 2016 John Wiley & Sons, Ltd.
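The sketch below is not the methodology of the paper; it is a generic, hypothetical illustration of how toxicity monitoring can continue at the declared MTD during an expansion cohort, using a simple beta-binomial posterior check. The prior, counts, and decision threshold are all invented.

```python
from scipy import stats

# Hypothetical monitoring of the expansion cohort at the declared MTD.
target = 0.25            # target DLT rate used during escalation (assumed)
a0, b0 = 1.0, 3.0        # prior on the DLT probability at the MTD (assumed)
dlts, n = 7, 20          # DLTs observed so far in the expansion cohort (invented)

post = stats.beta(a0 + dlts, b0 + n - dlts)
prob_excess = 1.0 - post.cdf(target)     # posterior P(toxicity rate > target)
print(round(prob_excess, 3))
if prob_excess > 0.80:                   # illustrative threshold, not from the paper
    print("Pause accrual and re-examine the MTD.")
```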
Subjects
Clinical Trials, Phase I as Topic/methods, Antineoplastic Agents/administration & dosage, Biostatistics, Clinical Protocols, Clinical Trials, Phase I as Topic/statistics & numerical data, Clinical Trials, Phase II as Topic/methods, Clinical Trials, Phase II as Topic/statistics & numerical data, Cohort Studies, Computer Simulation, Dose-Response Relationship, Drug, Humans, Maximum Tolerated Dose, Models, Statistical, Neoplasms/drug therapy, Sample Size, Treatment Outcome
ABSTRACT
Adaptive, model-based dose-finding methods, such as the continual reassessment method, have been shown to have good operating characteristics. One school of thought argues in favor of parsimonious models that do not attempt to model all aspects of the problem and use a strict minimum number of parameters. In particular, for the standard situation of a single homogeneous group, it is common to appeal to a one-parameter model. Other authors argue for a more classical approach that models all aspects of the problem. Here, we show that increasing the dimension of the parameter space, in the context of adaptive dose-finding studies, is usually counterproductive and, rather than leading to improvements in operating characteristics, the added dimensionality is likely to result in difficulties. Among these are inconsistency of parameter estimates, lack of coherence in escalation or de-escalation, erratic behavior, getting stuck at the wrong level, and, in almost all cases, poorer performance in terms of correct identification of the targeted dose. Our conclusions are based on both theoretical results and simulations. Copyright © 2016 John Wiley & Sons, Ltd.
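For readers unfamiliar with the one-parameter set-up being defended here, the sketch below shows a commonly used power-model version of the continual reassessment method with a normal prior on the single parameter. The skeleton, prior standard deviation, and data are invented for illustration and are not taken from the paper.

```python
import numpy as np

skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])   # prior toxicity guesses (invented)
target = 0.25
doses_given = [0, 1, 2, 2, 3]                          # dose indices already used (invented)
dlt = [0, 0, 0, 1, 1]                                  # corresponding DLT outcomes (invented)

a_grid = np.linspace(-4.0, 4.0, 2001)
prior = np.exp(-a_grid**2 / (2 * 1.34**2))             # N(0, 1.34^2) prior, unnormalised

p = skeleton[None, :] ** np.exp(a_grid)[:, None]       # power model: alpha_i ** exp(a)
lik = np.ones_like(a_grid)
for d, y in zip(doses_given, dlt):
    lik *= p[:, d] if y else 1.0 - p[:, d]

post = prior * lik
post /= post.sum()                                     # normalise on the grid
post_tox = post @ p                                    # posterior mean toxicity at each dose
next_dose = int(np.argmin(np.abs(post_tox - target)))
print(np.round(post_tox, 3), "-> next dose index:", next_dose)
```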
Subjects
Clinical Trials, Phase I as Topic, Models, Statistical, Dose-Response Relationship, Drug, Humans, Maximum Tolerated Dose
ABSTRACT
The majority of methods for the design of Phase I trials in oncology are based upon a single course of therapy, yet in actual practice there may be more than one treatment schedule for any given dose. Therefore, the probability of observing a dose-limiting toxicity may depend both on the total amount of dose given and on the frequency with which it is administered. The objective of the study then becomes to find an acceptable combination of both dose and schedule. Past literature on designing these trials has relied on the assumption that toxicity increases monotonically with both dose and schedule. In this article, we relax this assumption for schedules and present a dose-schedule finding design that can be generalized to situations in which we know the ordering between all schedules and those in which we do not. We present simulation results that compare our method with other suggested dose-schedule finding methodology.
Subjects
Clinical Trials, Phase I as Topic/methods, Dose-Response Relationship, Drug, Maximum Tolerated Dose, Models, Statistical, Research Design, Antineoplastic Agents/administration & dosage, Computer Simulation, Humans, Myelodysplastic Syndromes/drug therapy
ABSTRACT
A whole branch of theoretical statistics devotes itself to the analysis of clusters, the aim being to distinguish an apparent cluster arising randomly from one that is more likely to have been produced by some systematic influence. There are many examples in medicine and some that involve both medicine and the legal field; criminal law in particular. Observed clusters or a series of cases in a given setting can set off alarm bells, the recent conviction of Lucy Letby in England being an example. It was an observed cluster, a series of deaths among neonates, that prompted the investigation of Letby. There have been other similar cases in the past and there will be similar cases in the future. Our purpose is not to reconsider any particular trial but, rather, to work with similar, indeed more extreme, numbers of cases as a way to underline the statistical mistakes that can be made when attempting to make sense of the data. These notions are illustrated via a made-up case of 10 incidents where the anticipated count was only 2. The most common statistical analysis would associate a probability of less than 0.00005 with this outcome: a very rare event. However, a more careful analysis that avoids common pitfalls results in a probability close to 0.5, indicating that, given the circumstances, we were as likely to see 10 or more incidents as we were to see fewer than 10.
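The arithmetic behind the two probabilities can be sketched as follows. The naive tail probability comes directly from a Poisson model with mean 2; the second calculation illustrates one of the pitfalls alluded to, namely the chance of seeing such a cluster somewhere among many comparable units and periods. The number of unit-periods below is invented for illustration and is not taken from the paper, whose own analysis may account for further issues.

```python
from scipy import stats

# Naive calculation: 10 incidents observed where roughly 2 were expected.
lam, observed = 2.0, 10
p_tail = 1.0 - stats.poisson.cdf(observed - 1, lam)   # P(X >= 10 | mean 2)
print(f"P(X >= 10) = {p_tail:.6f}")                   # about 0.000046

# The same tail probability looks very different once we ask how likely it is
# that *some* unit, among many comparable unit-periods, shows such a cluster.
n_unit_periods = 15_000                               # invented for illustration only
p_somewhere = 1.0 - (1.0 - p_tail) ** n_unit_periods
print(f"P(at least one such cluster somewhere) = {p_somewhere:.2f}")   # roughly 0.5
```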
ABSTRACT
The time-to-event continual reassessment method (TITE-CRM) was proposed to handle the problem of long trial duration in Phase 1 trials as a result of late-onset toxicities. Here, we implement the TITE-CRM in dose-finding trials of combinations of agents. When studying multiple agents, monotonicity of the dose-toxicity curve is not clearly defined. Therefore, the toxicity probabilities follow a partial order, meaning that there are pairs of treatments for which the ordering of the toxicity probabilities is not known at the start of the trial. A CRM design for partially ordered trials (PO-CRM) was recently proposed. Simulation studies show that extending the TITE-CRM to the partial order setting produces results similar to those of the PO-CRM in terms of maximum tolerated dose recommendation yet reduces the duration of the trial.
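For orientation, the sketch below shows the standard linear weighting on which the TITE-CRM is based: a patient still under follow-up without toxicity contributes a down-weighted likelihood term proportional to the fraction of the observation window completed. The window length and numbers are invented for illustration.

```python
def tite_weight(follow_up_days: float, window_days: float = 90.0) -> float:
    """Linear TITE weight: fraction of the toxicity observation window completed."""
    return min(follow_up_days / window_days, 1.0)

def likelihood_contribution(p_tox: float, dlt: bool, follow_up_days: float) -> float:
    """Weighted likelihood term for one patient under the linear TITE scheme."""
    if dlt:
        return p_tox                                   # toxicity already observed
    return 1.0 - tite_weight(follow_up_days) * p_tox   # partial credit for clean follow-up

# A patient halfway through a 90-day window without toxicity:
print(likelihood_contribution(p_tox=0.30, dlt=False, follow_up_days=45))   # 0.85
```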
Subjects
Clinical Trials, Phase I as Topic/methods, Data Interpretation, Statistical, Computer Simulation, Dose-Response Relationship, Drug, Humans, Maximum Tolerated Dose
ABSTRACT
In this article, we implement a practical computational method for various semiparametric mixed effects models, estimating nonlinear functions by penalized splines. We approximate the integration of the penalized likelihood with respect to the random effects using adaptive Gaussian quadrature, which can be conveniently implemented in the SAS procedure NLMIXED. We carry out the selection of smoothing parameters through approximated generalized cross-validation scores. Our method has two advantages: (1) the estimation is more accurate than the currently available quasi-likelihood method for sparse data, for example, binary data; and (2) it can be used to fit more sophisticated models. We show the performance of our approach in simulation studies with longitudinal outcomes from three settings: binary data, normal data after Box-Cox transformation, and count data with log-Gamma random effects. We also develop an estimation method for a longitudinal two-part nonparametric random effects model and apply it to analyze repeated measures of semicontinuous daily drinking records in a randomized controlled trial of topiramate.
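To give a flavour of the quadrature step, the sketch below integrates a random intercept out of a logistic mixed-model likelihood using plain (non-adaptive) Gauss-Hermite quadrature in Python; the adaptive version described in the abstract additionally re-centres the nodes at the empirical Bayes mode. All data and parameter values are invented.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import expit

def cluster_marginal_likelihood(y, x, beta, sigma_b, n_nodes=20):
    """Marginal likelihood of one cluster in a logistic random-intercept model,
    integrating the random intercept out by Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n_nodes)
    total = 0.0
    for node, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma_b * node             # change of variables for N(0, sigma_b^2)
        p = expit(beta[0] + beta[1] * x + b)
        lik = np.prod(np.where(y == 1, p, 1.0 - p))   # conditional likelihood given b
        total += w * lik
    return total / np.sqrt(np.pi)

y = np.array([1, 0, 1, 1])                            # invented repeated binary outcomes
x = np.array([0.0, 0.5, 1.0, 1.5])
print(cluster_marginal_likelihood(y, x, beta=(-0.5, 0.8), sigma_b=1.2))
```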
Subjects
Alcohol Drinking/epidemiology, Alcoholism/drug therapy, Fructose/analogs & derivatives, Linear Models, Neuroprotective Agents/therapeutic use, Fructose/therapeutic use, Humans, Likelihood Functions, Poisson Distribution, Randomized Controlled Trials as Topic, Topiramate
ABSTRACT
BACKGROUND: The two-stage, likelihood-based continual reassessment method (CRM-L) entails the specification of a set of design parameters prior to the beginning of its use in a study. The impression of clinicians is that the success of model-based designs, such as CRM-L, depends upon some of the choices made with regard to these specifications, such as the choice of parametric dose-toxicity model and the initial guesses of the toxicity probabilities. PURPOSE: In studying the efficiency and comparative performance of competing dose-finding designs for finite (typically small) samples, the nonparametric optimal benchmark is a useful tool. When comparing a dose-finding design to the optimal design, we are able to assess how much room there is for potential improvement. METHODS: The optimal method, based only on an assumption of monotonicity of the dose-toxicity function, is a valuable theoretical construct serving as a benchmark in theoretical studies, similar to a Cramér-Rao bound. We consider the performance of CRM-L under various design specifications and how it compares to the optimal design across a range of practical situations. RESULTS: Using simple recommendations for design specifications, CRM-L will produce performance, in terms of identifying doses at and around the maximum tolerated dose (MTD), that is close to that of the optimal method on average over a broad group of dose-toxicity scenarios. LIMITATIONS: Although the simulation settings vary in the number of doses considered, the target toxicity rate, and the sample size, the results here are presented for a small, though widely used, set of two-stage CRM designs. CONCLUSIONS: Based on the simulations here, and many others not shown, CRM-L is almost as accurate, in many scenarios, as the nonparametric optimal design. On average, there appears to be very little margin for improvement. Even if a finely tuned skeleton offers some improvement over a simple skeleton, the improvement is necessarily very small.
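The benchmark itself is simple to simulate: each hypothetical patient is given a latent uniform tolerance, so a complete toxicity profile at every dose is available, and the dose whose empirical toxicity rate is closest to the target is selected. The sketch below is an illustrative implementation of this idea with an invented dose-toxicity curve; it is not the simulation code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2024)
true_tox = np.array([0.05, 0.11, 0.22, 0.35, 0.50])   # invented dose-toxicity curve
target, n_patients, n_sims = 0.20, 30, 5000

picks = np.zeros(len(true_tox), dtype=int)
for _ in range(n_sims):
    u = rng.uniform(size=n_patients)                   # latent tolerance per patient
    profiles = u[:, None] <= true_tox[None, :]         # complete toxicity profile at every dose
    rates = profiles.mean(axis=0)                      # empirical toxicity rate at each dose
    picks[np.argmin(np.abs(rates - target))] += 1

print(np.round(picks / n_sims, 3))                     # benchmark recommendation distribution
```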
Subjects
Clinical Trials, Phase I as Topic/methods, Statistics as Topic/methods, Computer Simulation, Dose-Response Relationship, Drug, Humans, Likelihood Functions, Maximum Tolerated Dose, Probability, Research Design, Sample Size
ABSTRACT
In studies of survival and its association with treatment and other prognostic variables, elapsed time alone will often show itself to be among the strongest, if not the strongest, of the predictor variables. Kaplan-Meier curves will show the overall survival of each group and the general differences between groups due to treatment. However, the time-dependent nature of treatment effects is not always immediately transparent from these curves. More sophisticated tools are needed to spotlight the treatment effects. An important tool in this context is the treatment effect process. This tool can be potent in revealing the myriad ways in which treatment can affect survival time. We look at a recently published study in which the outcome was relapse-free survival, and we illustrate how the treatment effect process can provide a much deeper understanding of the relationship between time and treatment in this trial.
Subjects
Kaplan-Meier Estimate, Humans, Prognosis
ABSTRACT
In oncology clinical trials, the guiding principle of model-based dose-finding designs for cytotoxic agents is to progress as fast as possible towards, and identify, the dose level most likely to be the maximum tolerated dose (MTD). Recent developments with non-cytotoxic agents have broadened the scope of early-phase trials to include multiple objectives. The ultimate goal of dose-finding designs in our modern era is to collect the information relevant to the final determination of the recommended Phase 2 dose (RP2D). While some information is collected on dose levels below and in the vicinity of the MTD during escalation (using conventional tools such as the Continual Reassessment Method, for example), designs that include expansion cohorts or backfill patients effectively amplify the information collected at the lower dose levels. This is achieved by allocating patients to dose levels slightly differently during the study in order to take into account the possibility that "less (dose) might be more". The objective of this paper is to study the concept of amplification. Under the heading of controlled amplification we can include dose expansion cohorts and backfill patients, among other strategies. We make some general observations by defining these concepts more precisely, and we study a specific design that exploits the concept of controlled amplification.
Subjects
Neoplasms, Research Design, Humans, Dose-Response Relationship, Drug, Maximum Tolerated Dose, Neoplasms/drug therapy, Medical Oncology
ABSTRACT
BACKGROUND: Various statistical methods have been used for data analysis in alcohol treatment studies. Trajectory analyses can better capture differences in treatment effects and may provide insight into the optimal duration of future clinical trials and grace periods. This improves on the limitation of commonly used parametric (e.g., linear) methods, which cannot capture nonlinear temporal trends in the data. METHODS: We propose an exploratory approach, using more flexible smoothing mixed effects models, to characterize more accurately the temporal patterns of the drinking data. We estimated the trajectories of the treatment arms for data sets from two sources: a multisite topiramate study, and the Combined Pharmacotherapies (acamprosate and naltrexone) and Behavioral Interventions study. RESULTS: Our methods illustrate that drinking outcomes in both the topiramate and placebo arms declined over the entire course of the trial, but with a greater rate of decline for the topiramate arm. By the point-wise confidence intervals, the heavy drinking probabilities for the topiramate arm might differ from those of the placebo arm as early as week 2. Furthermore, the heavy drinking probabilities of both arms seemed to stabilize at the end of the study. Overall, naltrexone was better than placebo in reducing drinking over time, yet was not different from placebo for subjects receiving the combination of brief medical management and an intensive combined behavioral intervention. CONCLUSIONS: The estimated trajectory plots clearly showed nonlinear temporal trends of treatment with different medications on drinking outcomes and offered more detailed interpretation of the results. This trajectory analysis approach is proposed as a valid exploratory method for evaluating efficacy in pharmacotherapy trials in alcoholism.