Results 1 - 7 of 7
1.
BMC Med Res Methodol ; 20(1): 274, 2020 11 05.
Article in English | MEDLINE | ID: mdl-33153438

ABSTRACT

BACKGROUND: In clinical trials with fixed study designs, statistical inference is only made when the trial is completed. In contrast, group sequential designs allow early stopping of the trial at an interim analysis, either for efficacy when the treatment effect is significant or for futility when the treatment effect seems too small to justify continuing the trial. Efficacy stopping boundaries based on alpha spending functions have been widely discussed in the statistical literature, and there is also solid work on the choice of adequate futility stopping boundaries. Still, futility boundaries are often chosen with little or no theoretical justification, particularly in investigator-initiated trials. Some authors have contributed to filling this gap. Here, we rely on an idea of Schüler et al. (2017), who discuss optimality criteria for futility boundaries for the special case of trials with (multiple) time-to-event endpoints. Their concept can be adapted to define "optimal" futility boundaries (with respect to given performance indicators) for continuous endpoints.

METHODS: We extend the definition of "optimal" futility boundaries by Schüler et al. to the most common study situation: a single continuous primary endpoint compared between two groups. First, we introduce the analytic algorithm for deriving these futility boundaries. Second, the new concept is applied to a real clinical trial example. Finally, the performance of a study design with an "optimal" futility boundary is compared to designs with arbitrarily chosen futility boundaries.

RESULTS: The presented concept for deriving futility boundaries allows one to control the probability of wrongly stopping for futility, that is, stopping for futility even though the treatment effect is promising. At the same time, the loss in power is also controlled by this approach. Moreover, "optimal" futility boundaries improve the probability of correctly stopping for futility under the null hypothesis of no difference between the two groups.

CONCLUSIONS: The choice of futility boundaries should be thoroughly investigated at the planning stage. The arbitrary choice of futility boundaries that is sometimes encountered can have a substantially negative impact on performance. Applying futility boundaries with predefined optimization criteria increases the efficiency of group sequential designs. Optimization criteria other than those proposed here might also be incorporated.
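The quantities being traded off can be illustrated for a two-stage design with one normal endpoint, using only the standard normal distribution (a simplified sketch, not the authors' analytic algorithm; the function names and the drift parameterization are illustrative):

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def conditional_power(z1, t, delta, alpha=0.025):
    """Power of the final analysis given interim z-statistic z1 at
    information fraction t, assuming drift delta (the expected final
    z-statistic under the alternative)."""
    z_crit = N.inv_cdf(1 - alpha)
    return 1 - N.cdf((z_crit - t**0.5 * z1 - delta * (1 - t)) / (1 - t)**0.5)

def prob_wrong_futility_stop(f, t, delta):
    """Probability that the interim z-statistic falls below the futility
    boundary f even though the true drift is delta: Z1 ~ N(delta*sqrt(t), 1)."""
    return N.cdf(f - delta * t**0.5)
```

An "optimal" boundary in the sense above would keep `prob_wrong_futility_stop` below a prespecified level for a promising drift while keeping the probability of correctly stopping under the null (delta = 0) high.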


Subjects
Medical Futility; Research Design; Algorithms; Humans; Probability; Research Personnel
2.
Biom J ; 62(7): 1717-1729, 2020 11.
Article in English | MEDLINE | ID: mdl-32529689

ABSTRACT

While there is recognition that more informative clinical endpoints can support better decision-making in clinical trials, it remains common practice to categorize endpoints originally measured on a continuous scale. The primary motivation for this categorization (most commonly dichotomization) is the simplicity of the analysis. There is, however, a long-standing argument that this simplicity can come at a high cost: larger sample sizes are needed to achieve the same level of accuracy when using a dichotomized outcome instead of the original continuous endpoint. The degree of "loss of information" has been studied in the contexts of parallel-group designs and two-stage Phase II trials. Limited attention, however, has been given to quantifying the associated losses in dose-ranging trials. In this work, we propose an approach to estimating these losses in Phase II dose-ranging trials that does not depend on the actual dose-ranging design used, only on the clinical setting. The approach uses the notion of a nonparametric optimal benchmark for dose-finding trials, an evaluation tool that facilitates the assessment of a dose-finding design by providing an upper bound on its performance under a given scenario in terms of the probability of selecting the target dose. After demonstrating how the benchmark can be applied to Phase II dose-ranging trials, we use it to quantify the dichotomization losses. Using parameters from real clinical trials in various therapeutic areas, we find that the ratio of sample sizes needed to obtain the same precision with continuous versus binary (dichotomized) endpoints varies between 70% and 75% under the majority of scenarios but can drop to 50% in some cases.
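The size of the loss can be reproduced with textbook sample-size formulas, comparing the per-group n for a standardized mean difference against the n after dichotomizing at a cutpoint (a generic calculation under assumed normality, separate from the paper's benchmark-based estimates; the effect size and cutpoint below are illustrative):

```python
from statistics import NormalDist

N = NormalDist()

def n_continuous(delta, alpha=0.05, power=0.8):
    """Per-group n for a two-sample z-test of a standardized mean
    difference delta (sd = 1)."""
    z = N.inv_cdf(1 - alpha / 2) + N.inv_cdf(power)
    return 2 * (z / delta) ** 2

def n_dichotomized(delta, cut, alpha=0.05, power=0.8):
    """Per-group n after dichotomizing at `cut` (responder = outcome
    above the cutpoint); control mean 0, treatment mean delta, sd = 1."""
    p0 = 1 - N.cdf(cut)
    p1 = 1 - N.cdf(cut - delta)
    z = N.inv_cdf(1 - alpha / 2) + N.inv_cdf(power)
    return z**2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2

# ratio of required sample sizes, continuous vs. dichotomized
# at the (optimal) midpoint cutpoint
ratio = n_continuous(0.5) / n_dichotomized(0.5, cut=0.25)
```

Even at the most favorable cutpoint this ratio sits near 0.65 (the classical 2/π efficiency of dichotomizing at the median); the 70-75% figures in the abstract arise under the paper's specific dose-ranging scenarios.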


Subjects
Benchmarking; Dose-Response Relationship, Drug; Research Design; Clinical Trials, Phase II as Topic; Computer Simulation; Humans; Probability; Sample Size
3.
Biometrics ; 76(1): 197-209, 2020 03.
Article in English | MEDLINE | ID: mdl-31322732

ABSTRACT

We propose a novel response-adaptive randomization procedure for multi-armed trials with continuous outcomes that are assumed to be normally distributed. Our proposed rule is non-myopic and oriented toward a patient-benefit objective, yet maintains computational feasibility. We derive our response-adaptive algorithm based on the Gittins index for the multi-armed bandit problem, as a modification of the method first introduced in Villar et al. (Biometrics, 71, pp. 969-978). The resulting procedure can be implemented under the assumption of either known or unknown variance. We illustrate the proposed procedure by simulations in the context of phase II cancer trials. Our results show that, in a multi-armed setting, there are efficiency and patient-benefit gains from using a response-adaptive allocation procedure with a continuous endpoint instead of a binary one. These gains persist even if an anticipated low rate of missing data due to deaths, dropouts, or complete responses is imputed online through a procedure first introduced in this paper. Additionally, we discuss response-adaptive designs that outperform the traditional equal-randomization design in terms of both efficiency and patient-benefit measures in the multi-armed trial context.
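The Gittins index for a normal bandit has no closed form and is usually read from published tables; the sketch below substitutes an upper-confidence-bound index to convey the flavor of index-based, patient-benefit-oriented allocation (the function name and the constant c are illustrative stand-ins, not the authors' rule):

```python
import math

def next_arm(history, n_arms, c=2.0):
    """Choose the next patient's arm by the largest index value.
    `history` maps arm -> list of observed continuous outcomes.
    The index here is posterior mean + exploration bonus (UCB-style),
    a simple proxy for the tabulated Gittins index."""
    total = sum(len(v) for v in history.values()) + 1
    best_arm, best_index = 0, -math.inf
    for arm in range(n_arms):
        outcomes = history.get(arm, [])
        if not outcomes:
            return arm                    # sample every arm once first
        mean = sum(outcomes) / len(outcomes)
        index = mean + c * math.sqrt(math.log(total) / len(outcomes))
        if index > best_index:
            best_arm, best_index = arm, index
    return best_arm
```

Like the Gittins rule, this index shrinks the exploration term as an arm accumulates data, so allocation drifts toward the apparently best arm without becoming fully greedy.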


Subjects
Adaptive Clinical Trials as Topic/statistics & numerical data; Algorithms; Biometry/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Clinical Trials, Phase II as Topic/statistics & numerical data; Computer Simulation; Endpoint Determination/statistics & numerical data; Humans; Models, Statistical; Neoplasms/pathology; Neoplasms/therapy; Patient Dropouts/statistics & numerical data; Treatment Outcome
4.
Biostatistics ; 21(2): 189-201, 2020 04 01.
Article in English | MEDLINE | ID: mdl-30165594

ABSTRACT

An important tool to evaluate the performance of any design is an optimal benchmark proposed by O'Quigley and others (2002. Non-parametric optimal design in dose finding studies. Biostatistics 3, 51-56) that provides an upper bound on the performance of a design under a given scenario. The original benchmark can only be applied to dose finding studies with a binary endpoint. However, there is a growing interest in dose finding studies involving continuous outcomes, but no benchmark for such studies has been developed. We show that the original benchmark and its extension by Cheung (2014. Simple benchmark for complex dose finding studies. Biometrics 70, 389-397), when looked at from a different perspective, can be generalized to various settings with several discrete and continuous outcomes. We illustrate and compare the benchmark's performance in the setting of a dose finding Phase I clinical trial with a continuous toxicity endpoint and a Phase I/II trial with binary toxicity and continuous efficacy endpoints. We show that the proposed benchmark provides an accurate upper bound in these contexts and serves as a powerful tool for evaluating designs.
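The device behind the benchmark, a complete latent tolerance for each simulated patient, can be sketched for the original binary-toxicity case (a simplified illustration; the paper's contribution is extending this latent-variable view to continuous and multiple outcomes):

```python
import random

def benchmark_select(true_tox, target, n_patients, seed=1):
    """Nonparametric benchmark, binary-toxicity version: each patient
    carries a latent tolerance u ~ U(0,1) and would have a DLT at any
    dose whose true toxicity probability exceeds u.  With this complete
    information, the benchmark selects the dose whose empirical
    toxicity rate is closest to the target."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n_patients)]
    rates = [sum(ui < p for ui in u) / n_patients for p in true_tox]
    return min(range(len(true_tox)), key=lambda d: abs(rates[d] - target))
```

Because every simulated patient's outcome is known at every dose, no realizable design can identify the target dose more often than this procedure, which is what makes it an upper bound.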


Subjects
Benchmarking/methods; Biostatistics/methods; Clinical Trials, Phase I as Topic/methods; Clinical Trials, Phase II as Topic/methods; Endpoint Determination/methods; Maximum Tolerated Dose; Research Design; Humans
5.
Biom J ; 61(6): 1477-1492, 2019 11.
Article in English | MEDLINE | ID: mdl-31298770

ABSTRACT

There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTAs). One of the main challenges of these trials is the nontrivial dose-efficacy relationship and the administration of MTAs in combination with other agents. While some designs have recently been proposed for such Phase I/II trials, the majority of them consider the case of binary toxicity and efficacy endpoints only. At the same time, a continuous efficacy endpoint can carry more information about the agent's mechanism of action, but corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information-theoretic design to the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information-theoretic argument to govern selection during the trial. The performance of the design is investigated in settings of single-agent and dual-agent trials. It is found that the novel design leads to substantial improvements in operating characteristics compared to a model-based alternative under scenarios with nonmonotonic dose/combination-efficacy relationships. The robustness of the design to missing/delayed efficacy responses and to correlation between the toxicity and efficacy endpoints is also investigated.
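A minimal sketch of the first ingredient, the logistic transformation of the continuous efficacy outcome, with a plain squared-distance stand-in for the paper's information-theoretic selection criterion (centre, scale, and the target gamma are illustrative parameters, not values from the paper):

```python
import math

def to_unit_interval(y, centre=0.0, scale=1.0):
    """Logistic transform of a continuous efficacy outcome onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(y - centre) / scale))

def criterion(mean_transformed, gamma):
    """Stand-in selection criterion: squared distance of the average
    transformed efficacy from the target gamma.  The actual design
    uses an information-theoretic divergence in place of this."""
    return (mean_transformed - gamma) ** 2
```

Mapping efficacy onto (0, 1) is what lets a divergence-based criterion compare doses on a common probability-like scale, including under nonmonotonic dose-efficacy shapes.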


Subjects
Biometry/methods; Clinical Trials, Phase I as Topic; Clinical Trials, Phase II as Topic; Endpoint Determination; Humans; Molecular Targeted Therapy; Neoplasms/drug therapy
6.
Stat Methods Med Res ; 28(6): 1852-1878, 2019 06.
Article in English | MEDLINE | ID: mdl-29869564

ABSTRACT

When designing studies involving a continuous endpoint, the hypothesized difference in means (θA) and the assumed variability of the endpoint (σ²) play an important role in sample size and power calculations. Traditional methods of sample size re-estimation often update one or both of these parameters using statistics observed from an internal pilot study. However, the uncertainty in these estimates is rarely addressed. We propose a hybrid classical and Bayesian method to formally integrate prior beliefs about the study parameters and the results observed from an internal pilot study into the sample size re-estimation of a two-stage study design. The proposed method is based on a measure of power called conditional expected power (CEP), which averages the traditional power curve using the prior distributions of θA and σ² as the averaging weight, conditional on the presence of a positive treatment effect. The proposed sample size re-estimation procedure finds the second-stage per-group sample size necessary to achieve the desired level of conditional expected interim power, an updated CEP calculation that conditions on the observed first-stage results. The CEP re-estimation method retains the assumption that the parameters are not known with certainty at an interim point in the trial. Notional scenarios are evaluated to compare the behavior of the proposed method of sample size re-estimation to three traditional methods.
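Conditional expected power can be approximated by Monte Carlo: draw (θA, σ²) from the priors, keep only draws with a positive effect, and average the classical power curve (a sketch under a two-sample z-test approximation, not the authors' exact procedure; the prior-drawing callables are assumptions supplied by the user):

```python
import random
from statistics import NormalDist

N = NormalDist()

def cep(n_per_group, draw_theta, draw_sigma2, alpha=0.05,
        n_draws=20_000, seed=0):
    """Monte Carlo conditional expected power: average the classical
    two-sample power curve over prior draws of (theta, sigma2),
    conditioning on a positive treatment effect (theta > 0).
    draw_theta / draw_sigma2 take a random.Random and return a draw."""
    rng = random.Random(seed)
    z_crit = N.inv_cdf(1 - alpha / 2)
    acc, kept = 0.0, 0
    while kept < n_draws:
        theta = draw_theta(rng)
        if theta <= 0:
            continue                      # condition on theta > 0
        sigma2 = draw_sigma2(rng)
        ncp = theta / (2 * sigma2 / n_per_group) ** 0.5
        acc += 1 - N.cdf(z_crit - ncp)
        kept += 1
    return acc / n_draws
```

With degenerate priors (point masses at θA and σ²) this reduces to the classical power calculation; widening the priors spreads the average over the power curve and typically pulls the CEP below the nominal power.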


Subjects
Bayes Theorem; Equivalence Trials as Topic; Sample Size; Endpoint Determination; Humans; Models, Statistical
7.
Stat Med ; 36(1): 67-80, 2017 01 15.
Article in English | MEDLINE | ID: mdl-27633877

ABSTRACT

Phase I oncology trials are designed to identify a safe dose with an acceptable toxicity profile. The dose is typically determined based on the probability of severe toxicity observed during the first treatment cycle, although patients continue to receive treatment for multiple cycles. In addition, toxicity data from multiple types and grades are typically summarized into a single binary outcome of dose-limiting toxicity. A novel endpoint, the total toxicity profile, was previously developed to account for the multiple toxicity types and grades. In this work, we propose to account for longitudinal repeated measures of the total toxicity profile over multiple treatment cycles, thereby capturing cumulative toxicity during dose-finding. A linear mixed model is utilized in the Bayesian framework, with the addition of Bayesian risk functions for decision-making in dose assignment. The performance of this design is evaluated using simulation studies and compared with the previously proposed quasi-likelihood continual reassessment method (QLCRM) design. Twelve clinical scenarios incorporating four different locations of the maximum tolerated dose and three different time trends (decreasing, increasing, and no effect) were investigated. The proposed repeated-measures design was comparable with the QLCRM when only cycle 1 data were utilized in dose-finding; however, it demonstrated an improvement over the QLCRM when data from multiple cycles were used, across all scenarios. Copyright © 2016 John Wiley & Sons, Ltd.
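A total-toxicity-profile score of the kind fed into such a model can be sketched as a weighted norm over toxicity types and grades (the weights below are invented for illustration, not an elicited set, and the exact published score may normalize differently):

```python
def total_toxicity_profile(worst_grades, weights):
    """One cycle's total toxicity profile: a weighted Euclidean norm of
    per-type severity weights.  `worst_grades` maps toxicity type ->
    worst observed grade (0-4); `weights` maps toxicity type -> list of
    per-grade weights indexed by grade."""
    return sum(weights[t][g] ** 2 for t, g in worst_grades.items()) ** 0.5
```

In a repeated-measures design, one such score per treatment cycle enters the linear mixed model, and the Bayesian risk functions then weigh under- versus over-dosing when assigning the next cohort.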


Subjects
Antineoplastic Agents/administration & dosage; Antineoplastic Agents/toxicity; Bayes Theorem; Clinical Trials, Phase I as Topic; Neoplasms/drug therapy; Research Design; Computer Simulation; Dose-Response Relationship, Drug; Humans; Maximum Tolerated Dose; Models, Statistical