Results 1 - 20 of 119
1.
Stat Med ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780538

ABSTRACT

When designing a randomized clinical trial to compare two treatments, the sample size required to have desired power with a specified type 1 error depends on the hypothesis testing procedure. With a binary endpoint (e.g., response), the trial results can be displayed in a 2 × 2 table. If one does the analysis conditional on the number of positive responses, then Fisher's exact test has an actual type 1 error less than or equal to the specified nominal type 1 error. Alternatively, one can use one of many unconditional "exact" tests that also preserve the type 1 error and are less conservative than Fisher's exact test. In particular, the unconditional test of Boschloo is always at least as powerful as Fisher's exact test, leading to smaller required sample sizes for clinical trials. However, many statisticians have argued over the years that the conditional analysis with Fisher's exact test is the only appropriate procedure. Since having smaller clinical trials is an extremely important consideration, we review the general arguments given for the conditional analysis of a 2 × 2 table in the context of a randomized clinical trial. We find the arguments either not relevant in this context or, if relevant, not completely convincing, suggesting that the sample-size advantage of the unconditional tests should lead to their recommended use. We also briefly suggest that, since designers of clinical trials practically always have target null and alternative response rates, there is the possibility of using this information to improve the power of the unconditional tests.
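As a concrete illustration of the conditional versus unconditional comparison discussed above, the sketch below runs both tests on a single hypothetical 2 × 2 trial result using SciPy's fisher_exact and boschloo_exact; the counts are invented for illustration and are not from the article.

```python
# Hypothetical 2 x 2 trial result (counts invented for illustration):
# rows are treatment arms, columns are responders / non-responders.
from scipy import stats

table = [[12, 8],    # experimental arm: 12/20 responses
         [5, 15]]    # control arm:       5/20 responses

_, p_fisher = stats.fisher_exact(table)          # conditional test
p_boschloo = stats.boschloo_exact(table).pvalue  # unconditional test

print(f"Fisher's exact test p-value: {p_fisher:.4f}")
print(f"Boschloo's test p-value:     {p_boschloo:.4f}")
# In one-sided testing Boschloo's p-value never exceeds Fisher's, which is the
# property behind the smaller required sample sizes discussed above.
```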

2.
J Clin Oncol ; : JCO2400025, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38759123

ABSTRACT

New oncology therapies that extend patients' lives beyond initial expectations, together with improvements in later-line treatments, can lead to complications in clinical trial design and conduct. In particular, for trials with event-based analyses, the time to observe all the protocol-specified events can exceed the finite follow-up of a clinical trial or can lead to a much-delayed release of outcome data. With the advent of multiple classes of oncology therapies leading to much longer survival than in the past, this issue in clinical trial design and conduct has become increasingly important in recent years. We propose a straightforward prespecified backstop rule for trials with a time-to-event analysis and evaluate the impact of the rule with both simulated and real-world trial data. We then provide recommendations for implementing the rule across a range of oncology clinical trial settings.

4.
Clin Cancer Res ; 30(4): 673-679, 2024 02 16.
Article in English | MEDLINE | ID: mdl-38048044

ABSTRACT

In recent years, there has been increased interest in incorporation of backfilling into dose-escalation clinical trials, which involves concurrently assigning patients to doses that have been previously cleared for safety by the dose-escalation design. Backfilling generates additional information on safety, tolerability, and preliminary activity on a range of doses below the maximum tolerated dose (MTD), which is relevant for selection of the recommended phase II dose and dose optimization. However, in practice, backfilling may not be rigorously defined in trial protocols and implemented consistently. Furthermore, backfilling designs require careful planning to minimize the probability of treating additional patients with potentially inactive agents (and/or subtherapeutic doses). In this paper, we propose a simple and principled approach to incorporate backfilling into the Bayesian optimal interval design (BOIN). The design integrates data from the dose-escalation and backfilling components of the design and ensures that the additional patients are treated at doses where some activity has been seen. Simulation studies demonstrated that the proposed backfilling BOIN design (BF-BOIN) generates additional data for future dose optimization, maintains the accuracy of the MTD identification, and improves patient safety without prolonging the trial duration.
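For orientation, the sketch below shows the two ingredients being combined: the standard BOIN interval decision rule (with its usual default boundaries) and a naive notion of backfilling onto previously cleared doses. It is only an illustration under assumed numbers, not the BF-BOIN design proposed in the article.

```python
# Minimal sketch: BOIN escalation rule plus a naive backfilling step.
# All cohort sizes, DLT counts, and the backfill policy are assumptions.
import math

def boin_boundaries(target, phi1=None, phi2=None):
    """Standard BOIN escalation/de-escalation boundaries for a target DLT rate."""
    phi1 = phi1 if phi1 is not None else 0.6 * target
    phi2 = phi2 if phi2 is not None else 1.4 * target
    lam_e = math.log((1 - phi1) / (1 - target)) / math.log(
        target * (1 - phi1) / (phi1 * (1 - target)))
    lam_d = math.log((1 - target) / (1 - phi2)) / math.log(
        phi2 * (1 - target) / (target * (1 - phi2)))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)   # approximately 0.236 and 0.358

def decision(n_dlt, n_treated):
    rate = n_dlt / n_treated
    if rate <= lam_e:
        return "escalate"
    if rate >= lam_d:
        return "de-escalate"
    return "stay"

# Escalation cohort at dose level 3: 1 DLT in 6 patients -> escalate.
# Doses 1-3 are then "cleared" and newly accrued patients could be backfilled
# onto them (e.g., the lowest cleared dose showing preliminary activity).
print(decision(1, 6), "-> doses 1, 2, 3 are candidates for backfill")
```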


Subject(s)
Neoplasms , Research Design , Humans , Bayes Theorem , Computer Simulation , Maximum Tolerated Dose , Dose-Response Relationship, Drug , Neoplasms/drug therapy
6.
J Clin Oncol ; 41(29): 4616-4620, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37471685

ABSTRACT

Recent therapeutic advances have led to improved patient survival in many cancer settings. Although prolongation of survival remains the ultimate goal of cancer treatment, the availability of effective salvage therapies could make definitive phase III trials with primary overall survival (OS) end points difficult to complete in a timely manner. Therefore, to accelerate development of new therapies, many phase III trials of new cancer therapies are now designed with intermediate primary end points (eg, progression-free survival in the metastatic setting) with OS designated as a secondary end point. We review recently published phase III trials and assess contemporary practices for designing and reporting OS as a secondary end point. We then provide design and reporting recommendations for trials with OS as a secondary end point to safeguard OS data integrity and optimize access to the OS data for patient, clinician, and public-health stakeholders.

7.
J Natl Cancer Inst ; 115(1): 14-20, 2023 01 10.
Article in English | MEDLINE | ID: mdl-36161487

ABSTRACT

As precision medicine becomes more precise, the sizes of the molecularly targeted subpopulations become increasingly smaller. This can make it challenging to conduct randomized clinical trials of the targeted therapies in a timely manner. To address the problem of a small patient subpopulation, a frequently proposed study design is to conduct a small randomized clinical trial (RCT) with the intent of augmenting the RCT control arm data with historical data from a set of patients who have received the control treatment outside the RCT (historical control data). In particular, strategies have been developed that compare the treatment outcomes across the cohorts of patients treated with the standard (control) treatment to guide the use of the historical data in the analysis; this can lessen the well-known potential biases of using historical controls without any randomization. Using some simple examples and completed studies, we demonstrate in this commentary that these strategies are unlikely to be useful in precision medicine applications.
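One commonly discussed strategy of the kind described above is a "test-then-pool" rule: compare the randomized control arm with the historical controls and borrow only if the two look comparable. The sketch below illustrates that idea with invented binary-outcome counts; it is not necessarily the specific strategy evaluated in the commentary.

```python
# "Test-then-pool" sketch: borrow the historical controls only if a
# comparability test between control cohorts is not rejected.
# All counts and the 0.10 comparability threshold are assumptions.
from scipy import stats

rct_control = (8, 25)     # responders, n in the randomized control arm
historical = (30, 100)    # responders, n in the historical control cohort

table = [[rct_control[0], rct_control[1] - rct_control[0]],
         [historical[0], historical[1] - historical[0]]]
_, p_similarity = stats.fisher_exact(table)

if p_similarity > 0.10:
    pooled_n = rct_control[1] + historical[1]
    print(f"pool: control response rate estimated from {pooled_n} patients")
else:
    print(f"do not pool: use only the {rct_control[1]} randomized controls")
```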


Subject(s)
Precision Medicine , Research Design , Humans , Treatment Outcome
8.
J Natl Cancer Inst ; 115(5): 492-497, 2023 05 08.
Article in English | MEDLINE | ID: mdl-36534891

ABSTRACT

The goal of dose optimization during drug development is to identify a dose that preserves clinical benefit with optimal tolerability. Traditionally, the maximum tolerated dose in a small phase I dose escalation study is used in the phase II trial assessing clinical activity of the agent. Although it is possible that this dose level could be altered in the phase II trial if an unexpected level of toxicity is seen, no formal dose optimization has routinely been incorporated into later stages of drug development. Recently it has been suggested that formal dose optimization (involving randomly assigning patients between 2 or more dose levels) be routinely performed early in drug development, even before it is known that the experimental therapy has any clinical activity at any dose level. We consider the relative merits of performing dose optimization earlier vs later in the drug development process and demonstrate that a considerable number of patients may be exposed to ineffective therapies unless dose optimization is delayed until after clinical activity or benefit of the new agent has been established. We conclude that patient and public health interests may be better served by conducting dose optimization after (or during) phase III evaluation, with some exceptions when dose optimization should be performed after activity shown in phase II evaluation.
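A back-of-envelope calculation conveys the scale of the concern about early dose optimization; every number below is an assumption chosen only for illustration.

```python
# If formal dose optimization (randomizing extra patients across dose levels)
# is performed before any activity is shown, agents that ultimately prove
# inactive still consume those extra patients. All inputs are assumed.
agents_entering_phase1 = 50
prob_agent_ultimately_active = 0.15
extra_patients_for_early_dose_opt = 60   # e.g., 2 extra dose arms x 30 patients

wasted = (agents_entering_phase1 * (1 - prob_agent_ultimately_active)
          * extra_patients_for_early_dose_opt)
print(f"~{wasted:.0f} additional patients treated with inactive agents")
# ~2550 under these assumptions; delaying dose optimization until activity is
# established avoids most of this exposure.
```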


Subject(s)
Drug Development , Research Design , Humans , Maximum Tolerated Dose , Dose-Response Relationship, Drug
9.
J Natl Cancer Inst ; 114(9): 1222-1227, 2022 09 09.
Article in English | MEDLINE | ID: mdl-35583264

ABSTRACT

Recently developed clinical-benefit outcome scales by the European Society for Medical Oncology and the American Society of Clinical Oncology allow standardized objective evaluation of outcomes of randomized clinical trials. However, incorporation of clinical-benefit outcome scales into trial designs highlights a number of statistical issues: the relationship between minimal clinical benefit and the target treatment-effect alternative used in the trial design, designing trials to assess long-term benefit, potential problems with using a trial endpoint that is not overall survival, and how to incorporate subgroup analyses into the trial design. Using the European Society for Medical Oncology Magnitude of Clinical Benefit Scale as a basis for discussion, we review what these issues are and how they can guide the choice of trial-design target effects, appropriate endpoints, and prespecified subgroup analyses to increase the chances that the resulting trial outcomes can be appropriately evaluated for clinical benefit.


Subject(s)
Neoplasms , Humans , Medical Oncology/methods , Neoplasms/drug therapy
10.
Clin Trials ; 19(2): 158-161, 2022 04.
Article in English | MEDLINE | ID: mdl-34991348

ABSTRACT

Response-adaptive randomization, which changes the randomization ratio as a randomized clinical trial progresses, is inefficient as compared to a fixed 1:1 randomization ratio in terms of increased required sample size. It is also known that response-adaptive randomization leads to biased treatment effects if there are time trends in the accruing outcome data, for example, due to changes in the patient population being accrued, evaluation methods, or concomitant treatments. Response-adaptive-randomization analysis methods that account for potential time trends, such as time-block stratification or re-randomization, can eliminate this bias. However, as shown in this Commentary, these analysis methods cause a large additional inefficiency of response-adaptive randomization, regardless of whether a time trend actually exists.
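The basic inefficiency of unequal allocation can be quantified with a standard result: relative to 1:1 randomization, the required sample size inflates by roughly 1/(4p(1−p)) when a fraction p of patients is allocated to one arm. The snippet below tabulates this factor; the additional penalty from the time-trend-robust analyses discussed in the Commentary comes on top of it.

```python
# Relative sample-size inflation of a p:(1-p) allocation versus 1:1.
# The variance of the between-arm comparison scales like 1/(p*(1-p)),
# so the inflation factor relative to p = 0.5 is 1/(4*p*(1-p)).
def inflation(p):
    return 1.0 / (4.0 * p * (1.0 - p))

for p in (0.5, 0.6, 0.7, 0.8):
    print(f"allocation {p:.0%} vs {1 - p:.0%}: sample size x {inflation(p):.2f}")
# 50/50: x1.00, 60/40: x1.04, 70/30: x1.19, 80/20: x1.56
```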


Subject(s)
Research Design , Bias , Humans , Random Allocation , Sample Size
11.
J Natl Cancer Inst ; 114(2): 187-190, 2022 02 07.
Article in English | MEDLINE | ID: mdl-34289052

ABSTRACT

Efficient biomarker-driven randomized clinical trials are a key tool for implementing precision oncology. A commonly used biomarker phase III design is focused on testing the treatment effect in biomarker-positive and overall study populations. This approach may result in recommending new treatments to biomarker-negative patients when these treatments have no benefit for these patients.
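A small worked example (with assumed prevalence and hazard ratios) shows how an overall-population test can be driven entirely by the biomarker-positive subgroup, which is the mechanism behind the concern raised above.

```python
# Assumed scenario: benefit only in biomarker-positive patients.
# The overall log hazard ratio is approximated as a prevalence-weighted
# average of the subgroup log hazard ratios (a rough stand-in for event
# weighting).
import math

prevalence_pos = 0.40
hr_pos, hr_neg = 0.60, 1.00

log_hr_overall = (prevalence_pos * math.log(hr_pos)
                  + (1 - prevalence_pos) * math.log(hr_neg))
print(f"approximate overall HR: {math.exp(log_hr_overall):.2f}")   # ~0.82
# A sufficiently large trial can declare this diluted overall effect
# significant, which is how a design testing only the biomarker-positive and
# overall populations can end up recommending the treatment to
# biomarker-negative patients who derive no benefit.
```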


Subject(s)
Neoplasms , Research Design , Biomarkers , Humans , Medical Oncology , Neoplasms/drug therapy , Neoplasms/therapy , Precision Medicine , Randomized Controlled Trials as Topic
12.
Clin Trials ; 18(6): 746, 2021 12.
Article in English | MEDLINE | ID: mdl-34524050
14.
Clin Trials ; 18(2): 188-196, 2021 04.
Article in English | MEDLINE | ID: mdl-33626896

ABSTRACT

BACKGROUND: Restricted mean survival time methods compare the areas under the Kaplan-Meier curves up to a time τ for the control and experimental treatments. Extraordinary claims have been made about the benefits (in terms of dramatically smaller required sample sizes) when using restricted mean survival time methods as compared to proportional hazards methods for analyzing noninferiority trials, even when the true survival distributions satisfy proportional hazards. METHODS: Through some limited simulations and asymptotic power calculations, the authors compare the operating characteristics of restricted mean survival time and proportional hazards methods for analyzing both noninferiority and superiority trials under proportional hazards to understand what relative power benefits there are when using restricted mean survival time methods for noninferiority testing. RESULTS: In the setting of low event rates, very large targeted noninferiority margins, and limited follow-up past τ, restricted mean survival time methods have more power than proportional hazards methods. For superiority testing, proportional hazards methods have more power. This is not a small-sample phenomenon but requires a low event rate and a large noninferiority margin. CONCLUSION: Although there are special settings where restricted mean survival time methods have a power advantage over proportional hazards methods for testing noninferiority, the larger issue in these settings is defining appropriate noninferiority margins. We find the restricted mean survival time methods lacking in this regard.
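For intuition on the margin-definition issue, the calculation below uses the closed-form RMST for exponential survival to translate a hypothetical hazard-ratio noninferiority margin into the corresponding RMST-difference margin; the event rates, restriction time, and margin are assumptions, not values from the article.

```python
# For exponential survival, RMST(tau) = (1 - exp(-lambda * tau)) / lambda.
# Translate an assumed hazard-ratio noninferiority margin into the implied
# difference in RMST up to tau.
import math

tau = 24.0                          # months of restriction
lam_control = math.log(2) / 12.0    # control median survival of 12 months
hr_margin = 1.30                    # hypothetical noninferiority margin on the HR
lam_experimental = hr_margin * lam_control

def rmst_exponential(lam, tau):
    return (1.0 - math.exp(-lam * tau)) / lam

diff = rmst_exponential(lam_control, tau) - rmst_exponential(lam_experimental, tau)
print(f"RMST difference implied by HR margin {hr_margin}: {diff:.2f} months")  # ~1.9
```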


Subject(s)
Equivalence Trials as Topic , Research Design , Survival Rate , Humans , Proportional Hazards Models , Sample Size , Survival Analysis
16.
J Clin Oncol ; 38(17): 2003-2004, 2020 06 10.
Article in English | MEDLINE | ID: mdl-32315276
17.
J Natl Cancer Inst ; 112(2): 128-135, 2020 02 01.
Article in English | MEDLINE | ID: mdl-31545373

ABSTRACT

Designing and interpreting single-arm phase II trials of combinations of agents is challenging because it can be difficult, based on historical data, to identify levels of activity for which the combination would be worth pursuing. We identified Cancer Therapy Evaluation Program single-arm combination trials that were activated in 2008-2017 and tabulated their design characteristics and results. Positive trials were evaluated as to whether they provided credible evidence that the combination was better than its constituents. A total of 125 trials were identified, and 120 trials had results available. Twelve had designs where eligible patients were required to be resistant or refractory to all but one element of the combination. Only 17.8% of the 45 positive trials were deemed to provide credible evidence that the combination was better than its constituents. Of the 10 positive trials with observed rates 10 percentage points higher than their upper (alternative hypothesis) targets, only five were deemed to provide such credible evidence. Many trials were definitively negative, with observed clinical activity at or below their lower (null hypothesis) targets. Ideally, use of single-arm combination trials should be restricted to settings where each agent is known to have minimal monotherapy activity (and a randomized trial is infeasible). In these settings, an observed signal is attributable to synergy and thus could be used to decide whether the combination is worth pursuing. In other settings, credible evidence can still be obtained if the observed activity is much higher than expected, but experience suggests that this is a rare occurrence.
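The evidence question can be made concrete with a toy single-arm calculation: an exact binomial test of the observed response rate against the design's null target, using hypothetical numbers rather than any of the trials reviewed above.

```python
# Hypothetical single-arm combination trial: the null target is meant to
# reflect the constituents' activity, the alternative is the hoped-for
# combination activity. All numbers are invented for illustration.
from scipy.stats import binomtest

n, responders = 35, 19             # observed 54% response rate
p_null, p_alt = 0.20, 0.40         # design's null and alternative targets

result = binomtest(responders, n, p=p_null, alternative="greater")
print(f"observed rate {responders / n:.0%}, "
      f"p-value vs {p_null:.0%} null: {result.pvalue:.4f}")
# The trial is "positive", but whether this is credible evidence that the
# combination beats its constituents depends on how well supported the null
# target is and on how far the observed rate exceeds the alternative target.
```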


Subject(s)
Neoplasms/therapy , Clinical Trials, Phase II as Topic , Combined Modality Therapy , Humans , Neoplasms/diagnosis , Neoplasms/mortality , Retreatment , Treatment Outcome
18.
J Natl Cancer Inst ; 112(8): 773-778, 2020 08 01.
Article in English | MEDLINE | ID: mdl-31868907

ABSTRACT

Molecular profiling of a patient's tumor to guide targeted treatment selection offers the potential to advance patient care by improving outcomes and minimizing toxicity (by avoiding ineffective treatments). However, current development of molecular profile (MP) panels is often based on applying institution-specific or subjective algorithms to nonrandomized patient cohorts. Consequently, obtaining reliable evidence that molecular profiling is offering clinical benefit and is ready for routine clinical practice is challenging. In particular, we discuss here the problems with interpreting for clinical utility nonrandomized studies that compare outcomes in patients treated based on their MP vs those treated with standard of care, studies that compare the progression-free survival (PFS) seen on an MP-directed treatment to the PFS seen for the same patient on a previous standard treatment (PFS ratio), and multibasket trials that evaluate the response rates of targeted therapies in specific molecularly defined subpopulations (regardless of histology). We also consider some limitations of randomized trial designs. A two-step strategy is proposed in which multiple mutation-agent pairs are tested for activity in one or more multibasket trials in the first step. The results of the first step are then used to identify promising mutation-agent pairs that are combined in a molecular panel, which is tested in the second step in a strategy-design randomized clinical trial (molecular panel-guided treatment for the selected mutations vs standard of care). This two-step strategy should allow rigorous evidence-driven identification of mutation-agent pairs that can be moved into routine clinical practice.


Subject(s)
Biomarkers, Tumor/genetics , Diagnostic Tests, Routine/trends , Gene Expression Profiling , Medical Oncology/trends , Clinical Trials as Topic/methods , Clinical Trials as Topic/standards , Clinical Trials as Topic/statistics & numerical data , Diagnostic Tests, Routine/methods , Gene Expression Profiling/methods , Gene Expression Profiling/trends , Gene Expression Regulation, Neoplastic , History, 21st Century , Humans , Medical Oncology/methods , Molecular Targeted Therapy/methods , Molecular Targeted Therapy/trends , Neoplasms/diagnosis , Neoplasms/epidemiology , Neoplasms/genetics , Neoplasms/therapy , Precision Medicine/methods , Precision Medicine/trends , Transcriptome , Treatment Outcome
20.
Clin Trials ; 16(6): 673-681, 2019 12.
Article in English | MEDLINE | ID: mdl-31409130

ABSTRACT

BACKGROUND: Nonadherence to treatment assignment in a noninferiority randomized trial is especially problematic because it attenuates observed differences between the treatment arms, possibly leading one to conclude erroneously that a truly inferior experimental therapy is noninferior to a standard therapy (inflated type 1 error probability). The Lachin-Foulkes adjustment is an increase in the sample size to account for random nonadherence for the design of a superiority trial with a time-to-event outcome; it has not been explored in the noninferiority trial setting nor with nonrandom nonadherence. Noninferiority trials where patients have knowledge of a personal prognostic risk score may lead to nonrandom nonadherence, as patients with a relatively high risk score may be more likely to not adhere to the random assignment to the (reduced) experimental therapy, and patients with a relatively low risk score may be more likely to not adhere to the random assignment to the (more aggressive) standard therapy. METHODS: We investigated via simulations the properties of the Lachin-Foulkes adjustment in the noninferiority setting. We considered nonrandom in addition to random nonadherence to the treatment assignment. For nonrandom nonadherence, we used the scenario where a risk score, potentially associated with the between-arm treatment difference, influences patients' nonadherence. A sensitivity analysis is proposed for addressing the nonrandom nonadherence in this scenario. The noninferiority TAILORx adjuvant breast cancer trial, where eligibility was based on a genomic risk score, is used as an example throughout. RESULTS: The Lachin-Foulkes adjustment to the sample size improves the operating characteristics of noninferiority trials with random nonadherence. However, to maintain the type 1 error probability, it is critical to adjust the noninferiority margin as well as the sample size. With nonrandom nonadherence that is associated with a prognostic risk score, the type 1 error probability of the Lachin-Foulkes adjustment can be inflated (e.g., doubled) when the nonadherence is larger in the experimental arm than in the standard arm. The proposed sensitivity analysis lessens the inflation in this situation. CONCLUSION: The Lachin-Foulkes adjustment to the sample size and noninferiority margin is a useful simple technique for attenuating the effects of random nonadherence in the noninferiority setting. With nonrandom nonadherence associated with a risk score known to the patients, the type 1 error probability can be inflated in certain situations. A proposed sensitivity analysis for these situations can attenuate the inflation.
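For a rough sense of why nonadherence drives up the required sample size, the snippet below uses the familiar effect-dilution approximation; it is not the Lachin-Foulkes event-rate adjustment studied in the article, and all nonadherence fractions are assumptions.

```python
# Effect-dilution approximation: with nonadherence (crossover) fractions d_E
# and d_C, the observed between-arm effect shrinks by roughly (1 - d_E - d_C),
# so the sample size must inflate by the inverse square of that factor.
# This conveys the general idea only; the Lachin-Foulkes adjustment itself
# works on the arm-specific event rates, and in the noninferiority setting the
# margin also needs adjusting (as the abstract notes).
def inflation_factor(d_experimental, d_control):
    dilution = 1.0 - d_experimental - d_control
    return 1.0 / dilution ** 2

for d_e, d_c in [(0.05, 0.05), (0.10, 0.10), (0.15, 0.05)]:
    print(f"nonadherence E={d_e:.0%}, C={d_c:.0%}: "
          f"sample size x {inflation_factor(d_e, d_c):.2f}")
# 5%/5%: x1.23, 10%/10%: x1.56, 15%/5%: x1.56
```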


Subject(s)
Equivalence Trials as Topic , Models, Statistical , Patient Compliance , Randomized Controlled Trials as Topic/methods , Humans , Proportional Hazards Models , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design , Risk Factors , Sample Size