Results 1 - 20 of 119
1.
Stat Med ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780538

ABSTRACT

When designing a randomized clinical trial to compare two treatments, the sample size required to have desired power with a specified type 1 error depends on the hypothesis testing procedure. With a binary endpoint (e.g., response), the trial results can be displayed in a 2 × 2 table. If one does the analysis conditional on the number of positive responses, then using Fisher's exact test has an actual type 1 error less than or equal to the specified nominal type 1 error. Alternatively, one can use one of many unconditional "exact" tests that also preserve the type 1 error and are less conservative than Fisher's exact test. In particular, the unconditional test of Boschloo is always at least as powerful as Fisher's exact test, leading to smaller required sample sizes for clinical trials. However, many statisticians have argued over the years that the conditional analysis with Fisher's exact test is the only appropriate procedure. Since having smaller clinical trials is an extremely important consideration, we review the general arguments given for the conditional analysis of a 2 × 2 table in the context of a randomized clinical trial. We find the arguments not relevant in this context, or, if relevant, not completely convincing, suggesting the sample-size advantage of the unconditional tests should lead to their recommended use. We also briefly suggest that since designers of clinical trials practically always have target null and alternative response rates, there is the possibility of using this information to improve the power of the unconditional tests.
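
As a concrete illustration of the comparison above (a minimal sketch with a hypothetical 2 × 2 table, not data from the paper), SciPy provides both tests; Boschloo's one-sided unconditional p-value is never larger than Fisher's conditional one, which is what drives the sample-size savings discussed in the abstract. Requires scipy >= 1.7 for boschloo_exact.

```python
from scipy import stats

# Hypothetical trial: 12/20 responses on the experimental arm vs 5/20 on control.
table = [[12, 8],    # experimental arm: responders, non-responders
         [5, 15]]    # control arm:      responders, non-responders

_, p_fisher = stats.fisher_exact(table, alternative="greater")
boschloo = stats.boschloo_exact(table, alternative="greater")

print(f"Fisher's exact (conditional)     p = {p_fisher:.4f}")
print(f"Boschloo's exact (unconditional) p = {boschloo.pvalue:.4f}")
```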

2.
Clin Trials ; 19(2): 158-161, 2022 04.
Article in English | MEDLINE | ID: mdl-34991348

ABSTRACT

Response-adaptive randomization, which changes the randomization ratio as a randomized clinical trial progresses, is inefficient compared with a fixed 1:1 randomization ratio, in that it increases the required sample size. It is also known that response-adaptive randomization leads to biased treatment-effect estimates if there are time trends in the accruing outcome data, for example, due to changes in the patient population being accrued, in evaluation methods, or in concomitant treatments. Response-adaptive-randomization analysis methods that account for potential time trends, such as time-block stratification or re-randomization, can eliminate this bias. However, as shown in this Commentary, these analysis methods add a large further inefficiency to response-adaptive randomization, regardless of whether a time trend actually exists.
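
A small simulation sketch of the bias point above, under hypothetical response rates and an ad hoc adaptive-allocation rule (none of this reproduces the Commentary's setup): with a time trend and a truly better arm A, the naive pooled estimate is biased away from the true effect, while a time-block-stratified estimate recovers it. The efficiency cost of the stratified analysis discussed in the Commentary is not quantified here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_blocks, block_size, n_sims = 5, 40, 4000
base = np.linspace(0.15, 0.55, n_blocks)   # control response rate drifts upward over time
true_effect = 0.15                         # arm A is truly better by 15 percentage points

naive_est, strat_est = [], []
for _ in range(n_sims):
    p_alloc = 0.5                                        # start with 1:1 allocation to arm A
    diffs, weights = [], []
    tot = np.zeros(4)                                    # cumulative [respA, nA, respB, nB]
    for b in range(n_blocks):
        arm_a = rng.random(block_size) < p_alloc
        resp = rng.random(block_size) < base[b] + true_effect * arm_a
        nA, nB = arm_a.sum(), (~arm_a).sum()
        rA, rB = resp[arm_a].sum(), resp[~arm_a].sum()
        tot += [rA, nA, rB, nB]
        if nA > 0 and nB > 0:
            diffs.append(rA / nA - rB / nB)              # within-block treatment difference
            weights.append(1.0 / (1.0 / nA + 1.0 / nB))  # precision-style block weight
        # Adaptive update: skew the next block's allocation toward the better-looking arm.
        pA, pB = (tot[0] + 1) / (tot[1] + 2), (tot[2] + 1) / (tot[3] + 2)
        p_alloc = float(np.clip(pA / (pA + pB), 0.2, 0.8))
    naive_est.append(tot[0] / tot[1] - tot[2] / tot[3])
    strat_est.append(float(np.average(diffs, weights=weights)))

print(f"true effect                    : {true_effect:+.3f}")
print(f"mean naive (pooled) estimate   : {np.mean(naive_est):+.3f}")   # biased upward
print(f"mean block-stratified estimate : {np.mean(strat_est):+.3f}")   # approximately unbiased
```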


Subjects
Research Design, Bias, Humans, Random Allocation, Sample Size
3.
Clin Trials ; 18(2): 188-196, 2021 04.
Article in English | MEDLINE | ID: mdl-33626896

ABSTRACT

BACKGROUND: Restricted mean survival time methods compare the areas under the Kaplan-Meier curves up to a time τ for the control and experimental treatments. Extraordinary claims have been made about the benefits (in terms of dramatically smaller required sample sizes) when using restricted mean survival time methods as compared to proportional hazards methods for analyzing noninferiority trials, even when the true survival distributions satisfy proportional hazards. METHODS: Through some limited simulations and asymptotic power calculations, the authors compare the operating characteristics of restricted mean survival time and proportional hazards methods for analyzing both noninferiority and superiority trials under proportional hazards to understand what relative power benefits there are when using restricted mean survival time methods for noninferiority testing. RESULTS: In the setting of low event rates, very large targeted noninferiority margins, and limited follow-up past τ, restricted mean survival time methods have more power than proportional hazards methods. For superiority testing, proportional hazards methods have more power. This is not a small-sample phenomenon but requires a low event rate and a large noninferiority margin. CONCLUSION: Although there are special settings where restricted mean survival time methods have a power advantage over proportional hazards methods for testing noninferiority, the larger issue in these settings is defining appropriate noninferiority margins. We find the restricted mean survival time methods lacking in this regard.
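
For reference, the restricted mean survival time of an arm is simply the area under its Kaplan-Meier curve up to τ. A self-contained sketch on simulated data follows (hypothetical exponential survival and censoring distributions; the proportional hazards comparison via a Cox model is not shown).

```python
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time up to tau: area under the Kaplan-Meier curve.
    `event` is 1 for a death and 0 for a censored observation."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.lexsort((1 - event, time))      # sort by time; deaths before censorings at ties
    time, event = time[order], event[order]
    n = len(time)
    surv, points = 1.0, [(0.0, 1.0)]
    for i, (t, d) in enumerate(zip(time, event)):
        if t > tau:
            break
        if d:
            surv *= 1.0 - 1.0 / (n - i)        # Kaplan-Meier factor at this death time
            points.append((t, surv))
    points.append((tau, surv))
    ts, ss = zip(*points)                      # integrate the step function from 0 to tau
    return sum(ss[i] * (ts[i + 1] - ts[i]) for i in range(len(ts) - 1))

rng = np.random.default_rng(0)
tau = 24.0                                     # months
for arm, mean_surv in [("control", 30.0), ("experimental", 40.0)]:
    t_event = rng.exponential(mean_surv, 300)  # hypothetical exponential survival times
    t_cens = rng.uniform(12.0, 36.0, 300)      # administrative censoring
    time = np.minimum(t_event, t_cens)
    event = (t_event <= t_cens).astype(int)
    print(f"{arm:12s} RMST(0, {tau:.0f}) = {km_rmst(time, event, tau):5.1f} months")
```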


Subjects
Equivalence Trials as Topic, Research Design, Survival Rate, Humans, Proportional Hazards Models, Sample Size, Survival Analysis
5.
Clin Trials ; 16(6): 673-681, 2019 12.
Article in English | MEDLINE | ID: mdl-31409130

ABSTRACT

BACKGROUND: Nonadherence to treatment assignment in a noninferiority randomized trial is especially problematic because it attenuates observed differences between the treatment arms, possibly leading one to conclude erroneously that a truly inferior experimental therapy is noninferior to a standard therapy (inflated type 1 error probability). The Lachin-Foulkes adjustment increases the sample size of a superiority trial with a time-to-event outcome to account for random nonadherence; it has not been explored in the noninferiority trial setting nor with nonrandom nonadherence. Noninferiority trials in which patients have knowledge of a personal prognostic risk score may lead to nonrandom nonadherence, as patients with a relatively high risk score may be more likely to not adhere to the random assignment to the (reduced) experimental therapy, and patients with a relatively low risk score may be more likely to not adhere to the random assignment to the (more aggressive) standard therapy. METHODS: We investigated via simulations the properties of the Lachin-Foulkes adjustment in the noninferiority setting. We considered nonrandom in addition to random nonadherence to the treatment assignment. For nonrandom nonadherence, we used the scenario where a risk score, potentially associated with the between-arm treatment difference, influences patients' nonadherence. A sensitivity analysis is proposed for addressing nonrandom nonadherence in this scenario. The noninferiority TAILORx adjuvant breast cancer trial, where eligibility was based on a genomic risk score, is used as an example throughout. RESULTS: The Lachin-Foulkes adjustment to the sample size improves the operating characteristics of noninferiority trials with random nonadherence. However, to maintain the type 1 error probability, it is critical to adjust the noninferiority margin as well as the sample size. With nonrandom nonadherence that is associated with a prognostic risk score, the type 1 error probability of the Lachin-Foulkes adjustment can be inflated (e.g., doubled) when the nonadherence is larger in the experimental arm than in the standard arm. The proposed sensitivity analysis lessens the inflation in this situation. CONCLUSION: The Lachin-Foulkes adjustment to the sample size and noninferiority margin is a useful, simple technique for attenuating the effects of random nonadherence in the noninferiority setting. With nonrandom nonadherence associated with a risk score known to the patients, the type 1 error probability can be inflated in certain situations. The proposed sensitivity analysis for these situations can attenuate the inflation.
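
A rough sketch of the idea behind a Lachin-Foulkes-type adjustment in the superiority, exponential-hazards case (hypothetical rates; this is not the published formula and it does not implement the noninferiority-margin adjustment studied in the paper): mix the arm hazards to reflect random crossover, then recompute the required number of events from the Schoenfeld approximation.

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.025, power=0.9, alloc=0.5):
    """Schoenfeld approximation: events needed to detect hazard ratio `hr` at one-sided alpha."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return z**2 / (alloc * (1 - alloc) * np.log(hr) ** 2)

lam_c, lam_e = 0.10, 0.07     # hypothetical monthly hazards on control / experimental
q_e, q_c = 0.15, 0.05         # fraction of each arm not adhering (crossing over), at random

hr_true = lam_e / lam_c
# Under random crossover, the intention-to-treat hazards are (roughly) mixtures of the
# per-protocol hazards, which dilutes the observed hazard ratio toward 1.
lam_e_itt = (1 - q_e) * lam_e + q_e * lam_c
lam_c_itt = (1 - q_c) * lam_c + q_c * lam_e
hr_itt = lam_e_itt / lam_c_itt

print(f"per-protocol HR  = {hr_true:.3f} -> required events = {required_events(hr_true):.0f}")
print(f"ITT (diluted) HR = {hr_itt:.3f} -> required events = {required_events(hr_itt):.0f}")
# The ratio of the two event counts is the inflation needed to keep power when the
# analysis follows intention-to-treat under random nonadherence.
```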


Subjects
Equivalence Trials as Topic, Statistical Models, Patient Compliance, Randomized Controlled Trials as Topic/methods, Humans, Proportional Hazards Models, Randomized Controlled Trials as Topic/statistics & numerical data, Research Design, Risk Factors, Sample Size
6.
J Biopharm Stat ; 28(2): 264-281, 2018.
Article in English | MEDLINE | ID: mdl-29083961

ABSTRACT

Methods for assessing whether a single biomarker is prognostic or predictive in the context of a control and experimental treatment are well known. With a panel of biomarkers, each component biomarker potentially measuring sensitivity to a different drug, it is not obvious how to extend these methods. We consider two situations, which lead to different ways of defining whether a biomarker panel is prognostic or predictive. In one, there are multiple experimental targeted treatments, each with an associated biomarker assay of the relevant target in the panel, along with a control treatment; the extension of the single-biomarker scenario to this situation is straightforward. In the other situation, there are many (nontargeted) treatments and a single assay that can be used to assess the sensitivity of the patient's tumor to the different treatments. In addition to evaluating previous approaches to this situation, we propose using regression models with varying assumptions to assess such panel biomarkers. Missing biomarker data can be problematic with the regression models, and, after demonstrating that a multiple imputation procedure does not work, we suggest a modified regression model that can accommodate some forms of missing data. We also address the notions of qualitative interactions in the biomarker panel setting.
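
For context, the well-known single-biomarker assessment that the abstract starts from can be phrased as an outcome model with a treatment-by-biomarker interaction: a biomarker main effect suggests prognostic value, and a nonzero interaction suggests predictive value. A minimal sketch on simulated data (hypothetical coefficients, binary response for simplicity):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)
n = 600
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # 0 = control, 1 = experimental
    "biomarker": rng.normal(size=n),      # continuous assay value
})
# Hypothetical truth: the biomarker is both prognostic (+0.5) and predictive (+0.8).
lin = -0.5 + 0.5 * df["biomarker"] + 0.2 * df["treatment"] + 0.8 * df["treatment"] * df["biomarker"]
df["response"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

fit = smf.logit("response ~ treatment * biomarker", data=df).fit(disp=0)
print(fit.summary())
# "biomarker"           -> prognostic effect (association with outcome among controls)
# "treatment:biomarker" -> predictive effect (treatment benefit varies with the marker)
```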


Subjects
Tumor Biomarkers/analysis, Neoplasms/drug therapy, Precision Medicine/methods, Precision Medicine/statistics & numerical data, Humans, Statistical Models, Molecular Targeted Therapy, Neoplasms/metabolism, Prognosis, Regression Analysis, Treatment Outcome
7.
Biometrics ; 73(2): 706-708, 2017 06.
Article in English | MEDLINE | ID: mdl-27775815

ABSTRACT

For a fallback randomized clinical trial design with a marker, Choai and Matsui (2015, Biometrics 71, 25-32) estimate the bias of the estimator of the treatment effect in the marker-positive subgroup conditional on the treatment effect not being statistically significant in the overall population. This is used to construct and examine conditionally bias-corrected estimators of the treatment effect for the marker-positive subgroup. We argue that it may not be appropriate to correct for conditional bias in this setting. Instead, we consider the unconditional bias of estimators of the treatment effect for marker-positive patients.
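
A small simulation sketch of the distinction being drawn (hypothetical normal outcomes and effect sizes, not the paper's setting): the marker-positive treatment-effect estimator is unbiased unconditionally, but conditioning on a non-significant overall test selects trials with smaller estimates and so induces a negative conditional bias.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_per_arm, frac_pos, n_sims = 200, 0.4, 20000
delta_pos, sigma = 0.3, 1.0          # true effect in marker-positives; none in negatives

est_pos, not_sig = [], []
for _ in range(n_sims):
    pos_t = rng.random(n_per_arm) < frac_pos                  # marker status, treated arm
    pos_c = rng.random(n_per_arm) < frac_pos                  # marker status, control arm
    y_t = rng.normal(np.where(pos_t, delta_pos, 0.0), sigma)  # treated-arm outcomes
    y_c = rng.normal(0.0, sigma, n_per_arm)                   # control-arm outcomes
    z_overall = (y_t.mean() - y_c.mean()) / (sigma * np.sqrt(2.0 / n_per_arm))
    est_pos.append(y_t[pos_t].mean() - y_c[pos_c].mean())     # marker-positive estimate
    not_sig.append(z_overall < norm.ppf(0.975))               # overall test not significant

est_pos, not_sig = np.array(est_pos), np.array(not_sig)
print(f"unconditional bias in marker-positives        : {est_pos.mean() - delta_pos:+.3f}")
print(f"bias conditional on non-significant overall Z : {est_pos[not_sig].mean() - delta_pos:+.3f}")
```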


Subjects
Biomarkers/analysis, Bias, Clinical Trials as Topic, Humans
8.
Clin Trials ; 14(6): 597-604, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28795844

ABSTRACT

BACKGROUND: Sample size adjustment designs, which allow increasing the study sample size based on interim analysis of outcome data from a randomized clinical trial, have been increasingly promoted in the biostatistical literature. Although it is recognized that group sequential designs can be at least as efficient as sample size adjustment designs, many authors argue that a key advantage of sample size adjustment designs is their flexibility: interim sample size adjustment decisions can incorporate information and business interests external to the trial. Recently, Chen et al. (Clinical Trials 2015) considered sample size adjustment applications in the time-to-event setting using a design (CDL) that limits adjustments to situations where the interim results are promising. The authors demonstrated that while CDL provides little gain in unconditional power (versus fixed-sample-size designs), there is a considerable increase in conditional power for trials in which the sample size is adjusted. METHODS: In time-to-event settings, sample size adjustment allows an increase in the number of events required for the final analysis. This can be achieved by either (a) following the original study population until the additional events are observed, thus focusing on the tail of the survival curves, or (b) enrolling a potentially large number of additional patients, thus focusing on the early differences in survival curves. We use the CDL approach to investigate the performance of sample size adjustment designs in time-to-event trials. RESULTS: Through simulations, we demonstrate that when the magnitude of the true treatment effect changes over time, interim information on the shape of the survival curves can be used to enrich the final analysis with events from the time period with the strongest treatment effect. In particular, interested parties have the ability to make the end-of-trial treatment effect larger (on average) based on decisions using interim outcome data. Furthermore, in "clinical null" cases where there is no benefit due to crossing survival curves, the sample size adjustment design is shown to increase the probability of recommending an ineffective therapy. CONCLUSION: Access to interim information on the shape of the survival curves may jeopardize the perceived integrity of trials using sample size adjustment designs. Therefore, given the lack of an efficiency advantage over group sequential designs, sample size adjustment designs in time-to-event settings remain unjustified.


Subjects
Randomized Controlled Trials as Topic/statistics & numerical data, Research Design/statistics & numerical data, Sample Size, Humans, Proportional Hazards Models, Risk, Time Factors
9.
Clin Trials ; 14(1): 48-58, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27590208

ABSTRACT

BACKGROUND: Futility (inefficacy) interim monitoring is an important component in the conduct of phase III clinical trials, especially in life-threatening diseases. Desirable futility monitoring guidelines allow timely stopping if the new therapy is harmful or if it is unlikely to be shown sufficiently effective were the trial to continue to its final analysis. There are a number of analytical approaches that are used to construct futility monitoring boundaries. The most common approaches are based on conditional power, sequential testing of the alternative hypothesis, or sequential confidence intervals. The resulting futility boundaries vary considerably with respect to the level of evidence required for recommending stopping the study. PURPOSE: We evaluate the performance of commonly used methods using event histories from completed phase III clinical trials of the Radiation Therapy Oncology Group, Cancer and Leukemia Group B, and North Central Cancer Treatment Group. METHODS: We considered published superiority phase III trials with survival endpoints initiated after 1990. There are 52 studies available for this analysis from different disease sites. Total sample size and maximum number of events (statistical information) for each study were calculated using the protocol-specified effect size and type I and type II error rates. In addition to the common futility approaches, we considered a recently proposed linear inefficacy boundary approach with an early harm look followed by several lack-of-efficacy analyses. For each futility approach, interim test statistics were generated for three schedules with different analysis frequencies, and early stopping was recommended if the interim result crossed a futility stopping boundary. For trials not demonstrating superiority, the impact of each rule is summarized as savings on the sample size, study duration, and information time scales. RESULTS: For negative studies, our results show that the futility approaches based on testing the alternative hypothesis and on repeated confidence intervals yielded smaller savings than the other two rules. These boundaries are too conservative, especially during the first half of the study (<50% of information). The conditional power rules are too aggressive during the second half of the study (>50% of information) and may stop a trial even when there is a clinically meaningful treatment effect. The linear inefficacy boundary with three or more interim analyses provided the best results. For positive studies, we demonstrated that none of the futility rules would have stopped the trials. CONCLUSION: The linear inefficacy boundary futility approach is attractive from statistical, clinical, and logistical standpoints in clinical trials evaluating new anti-cancer agents.
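
The conditional-power rule mentioned above has a standard closed form under the usual Brownian-motion approximation; a minimal sketch follows (the specific boundaries and analysis schedules evaluated in the paper are not reproduced).

```python
from scipy.stats import norm

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """P(final Z > z_{1-alpha} | interim Z), assuming E[Z at full information] = drift."""
    z_alpha = norm.ppf(1 - alpha)
    num = z_interim * info_frac**0.5 + drift * (1 - info_frac) - z_alpha
    return norm.cdf(num / (1 - info_frac) ** 0.5)

# Example: halfway through the information (t = 0.5) with a weak interim z = 0.3,
# in a trial designed for 90% power (design drift = z_0.975 + z_0.90).
design_drift = norm.ppf(0.975) + norm.ppf(0.90)
t, z = 0.5, 0.3
print(f"CP under the design alternative: {conditional_power(z, t, design_drift):.2f}")
print(f"CP under the current trend     : {conditional_power(z, t, z / t**0.5):.2f}")
# A typical conditional-power rule recommends stopping for futility when CP falls
# below a threshold such as 10%-20%.
```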


Subjects
Phase III Clinical Trials as Topic, Medical Futility, Neoplasms/therapy, Clinical Trials Data Monitoring Committees, Humans, Research Design
10.
Clin Trials ; 13(6): 651-659, 2016 12.
Article in English | MEDLINE | ID: mdl-27439306

ABSTRACT

BACKGROUND/AIMS: Factorial analyses of 2 × 2 trial designs are known to be problematic unless one can be sure that there is no interaction between the treatments (A and B). Instead, we consider non-factorial analyses of a factorial trial design that address clinically relevant questions of interest without any assumptions on the interaction. Primary questions of interest are as follows: (1) is A better than the control treatment C, (2) is B better than C, (3) is the combination of A and B (AB) better than C, and (4) is AB better than A, B, and C. METHODS: A simple three-step procedure that tests the first three primary questions of interest using a Bonferroni adjustment at the first step is proposed. A Hochberg procedure on the four primary questions is also considered. The two procedures are evaluated and compared in limited simulations. Published results from three completed trials with factorial designs are re-evaluated using the two procedures. RESULTS: Both suggested procedures (that answer multiple questions) require a 50%-60% increase in per-arm sample size over a two-arm design asking a single question. The simulations suggest a slight advantage to the three-step procedure in terms of power (for the primary and secondary questions). The proposed procedures would have formally addressed the questions arising in the highlighted published trials arguably more simply than the pre-specified factorial analyses used. CONCLUSION: Factorial trial designs are an efficient way to evaluate two treatments, alone and in combination. In situations where a statistical interaction between the treatment effects cannot be assumed to be zero, simple non-factorial analyses are possible that directly assess the questions of interest without the zero-interaction assumption.
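
The paper's three-step Bonferroni-based procedure is not fully specified in the abstract, so it is not reproduced here; the standard Hochberg step-up procedure it is compared against can be sketched directly (the p-values for the four primary questions below are hypothetical).

```python
def hochberg(pvals, alpha=0.05):
    """Return which hypotheses are rejected by Hochberg's step-up rule."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])    # indices by ascending p-value
    reject_upto = -1
    for rank in range(m - 1, -1, -1):                   # step up from the largest p-value
        if pvals[order[rank]] <= alpha / (m - rank):    # Hochberg threshold alpha/(m-k+1)
            reject_upto = rank
            break
    rejected = [False] * m
    for rank in range(reject_upto + 1):
        rejected[order[rank]] = True
    return rejected

# Hypothetical p-values for: A vs C, B vs C, AB vs C, AB vs the best of A/B/C.
p = {"A>C": 0.030, "B>C": 0.012, "AB>C": 0.004, "AB>A,B,C": 0.090}
for name, rej in zip(p, hochberg(list(p.values()), alpha=0.05)):
    print(f"{name:10s} p = {p[name]:.3f}  rejected = {rej}")
```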


Subjects
Biomedical Research, Research Design, Statistics as Topic, Humans, Randomized Controlled Trials as Topic
11.
Clin Trials ; 13(4): 391-9, 2016 08.
Article in English | MEDLINE | ID: mdl-27136947

ABSTRACT

BACKGROUND: Interim monitoring is a key component of randomized clinical trial design from both ethical and efficiency perspectives. In studies with time-to-event endpoints, timing of interim analyses is typically based on observing a pre-specified proportion of the total number of events required for the final analysis. While most randomized clinical trial designs pool events over the experimental and control arms in determining the analysis times, some designs use only the control-arm events for scheduling interim looks. PURPOSE: To evaluate the performance of the pooled and control-arm-based interim monitoring approaches and to propose a new procedure, the earliest information time procedure, that combines the benefits of the two approaches. METHODS: The analytical and logistical considerations for the procedures are presented. The methodology is illustrated on data from three published randomized clinical trials. The procedures are compared in a simulation study. RESULTS: The control-arm approach results in a slight inflation of the study type I error in one-sided randomized clinical trial designs. When the new treatment is no better than the control treatment, the pooled-arm approach results in, on average, earlier stopping times than the control-arm approach. When the new treatment works exceptionally well, the average stopping times under the control-arm approach are earlier than those under the pooled approach. The proposed earliest information time procedure is shown to result in stopping times corresponding to the best (earliest) of the two approaches over the entire range of alternatives. LIMITATIONS: The earliest information time procedure may result in a slight inflation of the type I error (especially in small trials); when exact control of the type I error is required, it is necessary to use a simulation-based method to correct the inflation. CONCLUSION: In time-to-event settings, the earliest information time procedure is an attractive alternative to the pooled and control-arm approaches. Improving the timing of interim analyses helps to minimize patient exposure to inferior treatments and to accelerate dissemination of the study results.


Subjects
Early Termination of Clinical Trials, Endpoint Determination/standards, Randomized Controlled Trials as Topic, Research Design, Case-Control Studies, Statistical Data Interpretation, Decision Making, Humans, Medical Futility, Randomized Controlled Trials as Topic/statistics & numerical data, Time Factors, Treatment Outcome
12.
Clin Trials ; 18(6): 746, 2021 12.
Article in English | MEDLINE | ID: mdl-34524050
13.
Stat Med ; 34(2): 265-80, 2015 Jan 30.
Article in English | MEDLINE | ID: mdl-25363739

ABSTRACT

The comparison of overall survival curves between treatment arms will always be of interest in a randomized clinical trial involving a life-shortening disease. In some settings, the experimental treatment is only expected to affect the deaths caused by the disease, and the proportion of deaths caused by the disease is relatively low. In these settings, the ability to assess treatment-effect differences between Kaplan-Meier survival curves can be hampered by the large proportion of deaths in both arms that are unrelated to the disease. To address this problem, cause-specific survival curves or cumulative incidence curves are frequently displayed, which respectively censor and immortalize events (deaths) not caused by the disease. However, the differences between the experimental and control treatment arms for these curves overestimate the difference between the overall survival curves for the treatment arms and thus could result in overestimation of the benefit of the experimental treatment for the patients. To address this issue, we propose new estimators of overall survival for the treatment arms that are appropriate when the treatment does not affect the non-disease-related deaths. These new estimators give a more precise estimate of the treatment benefit, potentially enabling future patients to make a more informed decision concerning treatment choice. We also consider the case where an exponential assumption allows the simple presentation of mortality rates as the outcome measures. Applications are given for estimating overall survival in a prostate-cancer treatment randomized clinical trial and for estimating the overall mortality rates in a prostate-cancer screening trial.
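
A tiny worked sketch of the dilution problem described above, under the exponential assumption mentioned in the abstract (all rates hypothetical): the overall hazard is the sum of the disease-specific and other-cause hazards, and the treatment changes only the former.

```python
import numpy as np

lam_disease_ctl = 0.010   # disease-specific deaths per person-year, control arm
lam_other = 0.040         # other-cause deaths per person-year (both arms)
hr_disease = 0.60         # treatment effect on disease-specific deaths only

t = np.array([5.0, 10.0])  # years
surv_ctl = np.exp(-(lam_disease_ctl + lam_other) * t)
surv_exp = np.exp(-(hr_disease * lam_disease_ctl + lam_other) * t)

for yr, sc, se in zip(t, surv_ctl, surv_exp):
    print(f"{yr:4.0f}-year overall survival: control {sc:.3f} vs experimental {se:.3f} "
          f"(difference {se - sc:+.3f})")
# A 40% reduction in disease-specific mortality moves 5-year overall survival by under
# 2 percentage points here, because 80% of the deaths are unrelated to the disease --
# the premise motivating the estimators proposed in the abstract.
```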


Subjects
Early Detection of Cancer/statistics & numerical data, Epidemiologic Research Design, Kaplan-Meier Estimate, Outcome and Process Assessment (Health Care)/methods, Prostatic Neoplasms/therapy, Cause of Death, Humans, Male, Outcome and Process Assessment (Health Care)/statistics & numerical data, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/mortality, Quality-Adjusted Life Years, Randomized Controlled Trials as Topic, Risk Assessment/methods, Risk Assessment/statistics & numerical data, Time Factors
14.
Clin Trials ; 11(1): 19-27, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24085774

ABSTRACT

BACKGROUND: New targeted anticancer therapies often benefit only a subset of patients with a given cancer. Definitive evaluation of these agents may require phase III randomized clinical trial designs that integrate evaluation of the new treatment and the predictive ability of the biomarker that putatively determines the sensitive subset. PURPOSE: We propose a new integrated biomarker design, the Marker Sequential Test (MaST) design, that allows sequential testing of the treatment effect in the biomarker subgroups and overall population while controlling the relevant type I error rates. METHODS: After defining the testing and error framework for integrated biomarker designs, we review the commonly used approaches to integrated biomarker testing. We then present a general form of the MaST design and describe how it can be used to provide proper control of false-positive error rates for biomarker-positive and biomarker-negative subgroups. The operating characteristics of the MaST design are compared by analytical methods and simulations to the sequential subgroup-specific design that sequentially assesses the treatment effect in the biomarker subgroups. Practical aspects of MaST design implementation are discussed. RESULTS: The MaST design is shown to have higher power relative to the sequential subgroup-specific design in situations where the treatment effect is homogeneous across biomarker subgroups, while preserving the power for settings where treatment benefit is limited to the biomarker-positive subgroup. For example, in the time-to-event setting considered with 30% biomarker-positive prevalence, the MaST design provides up to a 30% increase in power in the biomarker-positive and biomarker-negative subgroups when the treatment benefits all patients equally, while sustaining less than a 2% loss of power against alternatives where the benefit is limited to the biomarker-positive subgroup. LIMITATIONS: The proposed design is appropriate for settings where it is reasonable to assume that the treatment will not be effective in the biomarker-negative patients unless it is effective in the biomarker-positive patients. CONCLUSION: The MaST trial design is a useful alternative to the sequential subgroup-specific design when it is important to consider the treatment effect in the biomarker-positive and biomarker-negative subgroups.


Subjects
Biomarkers, Clinical Trials as Topic/methods, Research Design, Antineoplastic Agents/therapeutic use, False Positive Reactions, Humans, Statistical Models, Neoplasms/drug therapy, Predictive Value of Tests
16.
Clin Cancer Res ; 30(4): 673-679, 2024 02 16.
Article in English | MEDLINE | ID: mdl-38048044

ABSTRACT

In recent years, there has been increased interest in incorporation of backfilling into dose-escalation clinical trials, which involves concurrently assigning patients to doses that have been previously cleared for safety by the dose-escalation design. Backfilling generates additional information on safety, tolerability, and preliminary activity on a range of doses below the maximum tolerated dose (MTD), which is relevant for selection of the recommended phase II dose and dose optimization. However, in practice, backfilling may not be rigorously defined in trial protocols and implemented consistently. Furthermore, backfilling designs require careful planning to minimize the probability of treating additional patients with potentially inactive agents (and/or subtherapeutic doses). In this paper, we propose a simple and principled approach to incorporate backfilling into the Bayesian optimal interval design (BOIN). The design integrates data from the dose-escalation and backfilling components of the design and ensures that the additional patients are treated at doses where some activity has been seen. Simulation studies demonstrated that the proposed backfilling BOIN design (BF-BOIN) generates additional data for future dose optimization, maintains the accuracy of the MTD identification, and improves patient safety without prolonging the trial duration.
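
The backfilling rules of BF-BOIN are not reproduced here, but the standard BOIN boundaries they build on have a simple closed form (using the usual defaults φ1 = 0.6φ and φ2 = 1.4φ); a minimal sketch:

```python
import numpy as np

def boin_boundaries(target, phi1=None, phi2=None):
    """Escalation/de-escalation cut-offs for the observed toxicity rate at the current dose."""
    phi = target
    phi1 = 0.6 * phi if phi1 is None else phi1   # highest rate considered clearly too low
    phi2 = 1.4 * phi if phi2 is None else phi2   # lowest rate considered clearly too high
    lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)
print(f"target DLT rate 0.30: escalate if p_hat <= {lam_e:.3f}, de-escalate if p_hat >= {lam_d:.3f}")
# e.g. 1 DLT in 6 patients: 0.167 <= 0.236 -> escalate; 3 DLTs in 6: 0.500 >= 0.358 -> de-escalate.
# Doses cleared for safety in this way are the candidates a backfilling rule can open
# to additional patients.
```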


Subjects
Neoplasms, Research Design, Humans, Bayes Theorem, Computer Simulation, Maximum Tolerated Dose, Drug Dose-Response Relationship, Neoplasms/drug therapy
17.
J Clin Oncol ; : JCO2400025, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38759123

ABSTRACT

New oncology therapies that extend patients' lives beyond initial expectations, together with improving later-line treatments, can lead to complications in clinical trial design and conduct. In particular, for trials with event-based analyses, the time to observe all the protocol-specified events can exceed the finite follow-up of a clinical trial or can lead to a much delayed release of outcome data. With the advent of multiple classes of oncology therapies leading to much longer survival than in the past, this issue in clinical trial design and conduct has become increasingly important in recent years. We propose a straightforward prespecified backstop rule for trials with a time-to-event analysis and evaluate the impact of the rule with both simulated and real-world trial data. We then provide recommendations for implementing the rule across a range of oncology clinical trial settings.

18.
Clin Trials ; 10(5): 754-60, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23935162

ABSTRACT

BACKGROUND: Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. METHODS: We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. RESULTS: When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. LIMITATIONS: The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. CONCLUSION: The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.


Subjects
Statistical Data Interpretation, Double-Blind Method, Medical Audit/methods, Randomized Controlled Trials as Topic/methods, Antineoplastic Agents/therapeutic use, Breast Neoplasms/drug therapy, Humans, Time Factors
19.
J Natl Cancer Inst ; 115(1): 14-20, 2023 01 10.
Article in English | MEDLINE | ID: mdl-36161487

ABSTRACT

As precision medicine becomes more precise, the sizes of the molecularly targeted subpopulations become increasingly smaller. This can make it challenging to conduct randomized clinical trials of the targeted therapies in a timely manner. To help with this problem of a small patient subpopulation, a study design that is frequently proposed is to conduct a small randomized clinical trial (RCT) with the intent of augmenting the RCT control arm data with historical data from a set of patients who have received the control treatment outside the RCT (historical control data). In particular, strategies have been developed that compare the treatment outcomes across the cohorts of patients treated with the standard (control) treatment to guide the use of the historical data in the analysis; this can lessen the potential well-known biases of using historical controls without any randomization. Using some simple examples and completed studies, we demonstrate in this commentary that these strategies are unlikely to be useful in precision medicine applications.


Subjects
Precision Medicine, Research Design, Humans, Treatment Outcome
20.
J Clin Oncol ; 41(29): 4616-4620, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37471685

ABSTRACT

Recent therapeutic advances have led to improved patient survival in many cancer settings. Although prolongation of survival remains the ultimate goal of cancer treatment, the availability of effective salvage therapies could make definitive phase III trials with primary overall survival (OS) end points difficult to complete in a timely manner. Therefore, to accelerate development of new therapies, many phase III trials of new cancer therapies are now designed with intermediate primary end points (eg, progression-free survival in the metastatic setting) with OS designated as a secondary end point. We review recently published phase III trials and assess contemporary practices for designing and reporting OS as a secondary end point. We then provide design and reporting recommendations for trials with OS as a secondary end point to safeguard OS data integrity and optimize access to the OS data for patient, clinician, and public-health stakeholders.
