ABSTRACT
In oncology, Phase II studies are crucial to clinical development plans, as they identify agents with sufficient activity to continue development in subsequent Phase III trials. Traditionally, Phase II studies are single-arm studies with short-term treatment efficacy as the primary endpoint. However, drug safety is also an important consideration. In the context of such multiple-outcome designs, predictive probability-based Bayesian monitoring strategies have been developed to assess whether a clinical trial will provide enough evidence to continue to a Phase III study at the scheduled end of the trial. We therefore propose a new, simple index vector to summarize information that cannot be captured by existing strategies. Specifically, we define the worst and the most promising situations for the potential effect of a treatment, and then use the proposed index vector to measure the deviation between the two situations. Finally, simulation studies are performed to evaluate the operating characteristics of the design. The results demonstrate that the proposed method makes appropriate interim go/no-go decisions.
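As a minimal illustration of the predictive-probability machinery such monitoring strategies build on (a Beta-binomial sketch for a single binary endpoint only, not the proposed multiple-outcome index vector; the prior and thresholds are assumed for illustration):

```python
import numpy as np
from scipy import stats

def predictive_probability_success(x, n, n_max, p0=0.2, theta=0.95, a=1, b=1):
    """Beta-binomial predictive probability that, at the planned end of the
    trial (n_max patients), the posterior probability P(p > p0 | data)
    exceeds theta, given x responders among the first n patients."""
    m = n_max - n                      # patients still to be enrolled
    pp = 0.0
    for y in range(m + 1):             # possible future responder counts
        # beta-binomial predictive mass for y future responders
        w = np.exp(stats.betabinom.logpmf(y, m, a + x, b + n - x))
        # posterior prob. of efficacy if the trial ends with x + y responders
        post = 1 - stats.beta.cdf(p0, a + x + y, b + n_max - x - y)
        pp += w * (post > theta)
    return pp

# e.g. 6 responders in the first 20 of 40 planned patients
print(predictive_probability_success(x=6, n=20, n_max=40))
```

Low values of this quantity at an interim look would motivate a no-go decision, which is the building block the multiple-outcome strategies extend.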
ABSTRACT
In clinical trials with time-to-event data, the evaluation of treatment efficacy can be a long and complex process, especially when long-term primary endpoints are considered. Using surrogate endpoints that correlate with the primary endpoint has become common practice to accelerate decision-making. Moreover, the ethical need to minimize sample size and the practical need to optimize available resources have encouraged the scientific community to develop methodologies that leverage historical data. Relying on the general theory of group sequential design and using a Bayesian framework, the methodology described in this paper exploits a documented historical relationship between a clinical "final" endpoint and a surrogate endpoint to build an informative prior for the primary endpoint, using surrogate data from an early interim analysis of the clinical trial. The predictive probability of success of the trial is then used to define a futility-stopping rule. The methodology demonstrates substantial improvements in trial operating characteristics when there is good agreement between current and historical data. Furthermore, incorporating a robust approach that combines the surrogate-based prior with a vague component mitigates the impact of minor prior-data conflicts while maintaining acceptable performance even in the presence of significant prior-data conflicts. The proposed methodology was applied to design a Phase III clinical trial in metastatic colorectal cancer, with overall survival as the primary endpoint and progression-free survival as the surrogate endpoint.
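A toy sketch of the robust-mixture idea, an informative prior component (as might be derived from a historical surrogate relationship) combined with a vague component, shown here for a normal parameter such as a log hazard ratio; all numbers are assumed, and the paper's time-to-event formulation is not reproduced.

```python
import numpy as np
from scipy import stats

def robust_mixture_posterior(y, se, mu_inf, sd_inf, w_inf=0.8,
                             mu_vag=0.0, sd_vag=10.0):
    """Posterior for a normal mean (e.g. a log hazard ratio) under a
    two-component mixture prior: an informative component (historical /
    surrogate-based) and a vague component. Returns the updated mixture
    weights and the mean/sd of each posterior component."""
    comps = [(w_inf, mu_inf, sd_inf), (1 - w_inf, mu_vag, sd_vag)]
    post, marg = [], []
    for w, mu, sd in comps:
        var_post = 1.0 / (1.0 / sd**2 + 1.0 / se**2)
        mu_post = var_post * (mu / sd**2 + y / se**2)
        post.append((mu_post, np.sqrt(var_post)))
        # marginal likelihood of the interim estimate under this component
        marg.append(w * stats.norm.pdf(y, mu, np.sqrt(sd**2 + se**2)))
    weights = np.array(marg) / np.sum(marg)
    return weights, post

# interim log-HR estimate that disagrees somewhat with the historical prior
w, comps = robust_mixture_posterior(y=-0.05, se=0.15, mu_inf=-0.35, sd_inf=0.10)
print(w, comps)   # the weight shifts toward the vague component under conflict
```

The automatic down-weighting of the informative component under prior-data conflict is what gives the robust approach its acceptable performance when current and historical data disagree.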
ABSTRACT
Precision medicine is an approach to disease treatment that defines treatment strategies based on the individual characteristics of the patients. Motivated by an open problem in cancer genomics, we develop a novel model that flexibly clusters patients with similar predictive characteristics and similar treatment responses; this approach identifies, via predictive inference, which one among a set of treatments is better suited for a new patient. The proposed method is fully model based, avoiding the underestimation of uncertainty that occurs when treatment assignment relies on heuristic clustering procedures, and belongs to the class of product partition models with covariates, here extended to include the cohesion induced by the normalized generalized gamma process. In simulation studies, the method performs particularly well in scenarios characterized by considerable heterogeneity of the predictive covariates. A cancer genomics case study illustrates the potential benefits, in terms of treatment response, yielded by the proposed approach. Finally, being model based, the approach allows estimating cluster-specific response probabilities and thus identifying patients more likely to benefit from personalized treatment.
Subjects
Models, Statistical; Neoplasms; Humans; Precision Medicine/methods; Probability; Computer Simulation; Neoplasms/genetics; Neoplasms/therapy; Bayes Theorem
ABSTRACT
Heavy selection for growth in turkeys has led to a decay in leg soundness and walking ability. In this study, different models and traits were used to investigate the genetic relationships between body weight (BW) and walking ability (WA) in a turkey population. The data consisted of BW and WA traits collected on 276,059 male birds. Body weight was measured at 12 and 20 wk and WA at 20 wk of age. For WA, birds were scored based on a 1 (bad) to 6 (good) grading system. Due to the small number of records with scores 5 and 6, birds with WA scores of 4, 5, and 6 were grouped together, resulting in only 4 classes. Additionally, a binary classification of WA (scores 1 and 2 = class 1; scores 3, 4, 5, and 6 = class 2) was evaluated. The heritability estimates of WA ranged between 0.25 and 0.27 depending on the number of classes. The heritability of BW at 12 and 20 wk was 0.44 and 0.51, respectively. The genetic correlation between WA and BW at 12 wk was around -0.35, indicating that heavy birds tend to have poor WA. Similarly, the estimate of the genetic correlation between WA and BW at 20 wk was -0.45, indicating a more pronounced antagonistic relationship between BW and WA. The genetic correlation between BW at 12 and 20 wk was positive and high (0.80). The residual correlation between WA and BW at 12 and 20 wk of age was -0.07 and -0.02, respectively. The residual correlation between body weight traits was 0.57. Similar results were observed when a binary classification was adopted for WA. The probability of an individual with a given genetic merit expressing a certain class of WA was determined for different fixed effect designations. Predictive probabilities clearly showed that birds hatched in the winter would have a small chance of exhibiting good WA phenotypes.
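A small numeric illustration of how such class probabilities follow from a threshold (ordered probit) model; the thresholds, residual standard deviation, and liability mean below are assumptions for illustration, not estimates from this study.

```python
import numpy as np
from scipy.stats import norm

def walking_ability_class_probs(liability_mean, thresholds, resid_sd=1.0):
    """Probability of each ordered walking-ability class for a bird whose
    underlying liability (fixed effects + genetic merit) has the given mean,
    under a threshold (ordered probit) model."""
    t = np.concatenate(([-np.inf], thresholds, [np.inf]))
    cdf = norm.cdf((t - liability_mean) / resid_sd)
    return np.diff(cdf)               # one probability per class

# assumed thresholds separating the four WA classes, and an assumed liability
print(walking_ability_class_probs(liability_mean=0.3, thresholds=[-1.0, 0.0, 1.0]))
```

Changing the liability mean (for instance, to reflect a winter hatch effect or a different breeding value) shifts the whole vector of class probabilities, which is how season- and merit-specific predictive probabilities are obtained.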
Subjects
Chickens; Turkeys; Male; Animals; Turkeys/physiology; Body Weight/genetics; Linear Models; Walking
ABSTRACT
In oncology, phase II clinical trials are often planned as single-arm two-stage designs with a binary endpoint, for example, progression-free survival after 12 months, and the option to stop for futility after the first stage. Simon's two-stage design is a very popular approach, but depending on the follow-up time required to measure the patients' outcomes, the trial may have to be paused for an undesirably long time. To shorten this forced interruption, it has been proposed to use a short-term endpoint for the interim decision, such as progression-free survival after 3 months. We show that if the assumptions for the short-term endpoint are misspecified, the interim decision-making can be misleading, resulting in a great loss of statistical power. For the setting of a binary endpoint with nested measurements, such as progression-free survival, we propose two approaches that utilize all available short-term and long-term assessments of the endpoint to guide the interim decision. One approach is based on conditional power and the other on the Bayesian posterior predictive probability of success. In extensive simulations, we show that both methods perform similarly when appropriately calibrated and can greatly improve power compared with the existing approach in settings with slow patient recruitment. Software code to implement the methods is made publicly available.
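A sketch of the conditional-power ingredient for a plain binary endpoint, without the nested short-/long-term structure the proposed approaches exploit; the design constants are assumed for illustration.

```python
from scipy import stats

def conditional_power(x, n, n_max, r_final, p1):
    """Probability of ending with at least r_final responders out of n_max,
    given x responders among the first n patients, if the true long-term
    response rate is p1 (binomial model for the remaining patients)."""
    m = n_max - n
    needed = r_final - x
    if needed <= 0:
        return 1.0
    return float(stats.binom.sf(needed - 1, m, p1))

# assumed design: 40 patients, success requires >= 14 responders, p1 = 0.4
print(conditional_power(x=6, n=20, n_max=40, r_final=14, p1=0.4))
```

The Bayesian counterpart replaces the fixed p1 by the posterior distribution of the response rate; the paper's methods additionally fold in the short-term assessments of patients whose long-term outcome is still pending.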
Subjects
Decision Making; Research Design; Humans; Bayes Theorem; Endpoint Determination/methods; Probability
ABSTRACT
In this short note, we express our viewpoint regarding declaring study success based on Bayesian predictive probability of study success.
Subjects
Research Design; Bayes Theorem; Probability
ABSTRACT
Introduction: Efforts to develop biomarker-targeted anti-cancer therapies have progressed rapidly in recent years. With efforts to expedite regulatory reviews of promising therapies, several targeted cancer therapies have been granted accelerated approval on the basis of evidence acquired in single-arm phase II clinical trials. Yet, in the absence of randomization, patient prognosis for progression-free survival and overall survival may not have been studied under standard-of-care chemotherapies for emerging biomarker subpopulations prior to the submission of an accelerated approval application. Historical control rates used to design and evaluate emerging targeted therapies often arise as population averages, lacking specificity to the targeted genetic or immunophenotypic profile. Thus, historical trial results are inherently limited for inferring the potential "comparative efficacy" of novel targeted therapies. Consequently, randomization may be unavoidable in this setting. Innovations in design methodology are needed, however, to enable efficient implementation of randomized trials for agents that target biomarker subpopulations. Methods: This article proposes three randomized designs for early-phase biomarker-guided oncology clinical trials. Each design utilizes the optimal efficiency predictive probability method to monitor multiple biomarker subpopulations for futility. Only designs with type I error between 0.05 and 0.1 and power of at least 0.8 were considered when selecting an optimal efficiency design from among the candidate designs formed by different combinations of posterior and predictive thresholds. A simulation study motivated by the results reported in a recent clinical trial of atezolizumab in patients with locally advanced or metastatic urothelial carcinoma is used to evaluate the operating characteristics of the various designs. Results: Out of a maximum of 300 total patients, we find that the enrichment design has an average total sample size of 101.0 under the null and 218.0 under the alternative, compared with 144.8 and 213.8 under the null and alternative, respectively, for the stratified control arm design. The pooled control arm design enrolled a total of 113.2 patients under the null and 159.6 under the alternative, out of a maximum of 200. These average sample sizes are 23-48% smaller under the alternative and 47-64% smaller under the null than the realized sample size of 310 patients in the phase II study of atezolizumab. Discussion: Our findings suggest that phase II trials potentially smaller than those used in practice can be designed using randomization and futility stopping to efficiently obtain more information about both the treatment and control groups prior to a phase III study.
ABSTRACT
BACKGROUND: To perform virtual re-executions of a breast cancer clinical trial with a time-to-event outcome to demonstrate what would have happened if the trial had used various Bayesian adaptive designs instead. METHODS: We aimed to retrospectively "re-execute" a randomised controlled trial that compared two chemotherapy regimens for women with metastatic breast cancer (ANZ 9311) using Bayesian adaptive designs. We used computer simulations to estimate the power and sample sizes of a large number of candidate designs and shortlisted the designs with either the highest power or the lowest average sample size. Using the real-world data, we explored what would have happened had ANZ 9311 been conducted using these shortlisted designs. RESULTS: We shortlisted ten adaptive designs that had higher power, a lower average sample size, and a lower false-positive rate compared with the original trial design. Adaptive designs that prioritised a small sample size reduced the average sample size by up to 37% when there was no clinical effect and by up to 17% at the target clinical effect. Adaptive designs that prioritised high power increased power by up to 5.9 percentage points without a corresponding increase in type I error. The performance of the adaptive designs when applied to the real-world ANZ 9311 data was consistent with the simulations. CONCLUSION: The shortlisted Bayesian adaptive designs substantially improved power or lowered the average sample size. When designing new oncology trials, researchers should consider whether a Bayesian adaptive design may be beneficial.
Subjects
Breast Neoplasms; Bayes Theorem; Breast Neoplasms/drug therapy; Female; Humans; Research Design; Retrospective Studies; Sample Size
ABSTRACT
BACKGROUND: Low birth weight is a public health problem in Africa, with the cause attributable to malaria in pregnancy. The World Health Organization recommends the use of intermittent preventive treatment in pregnancy (IPTp) with sulfadoxine-pyrimethamine (SP) to prevent malaria during pregnancy. The objective of this study was to evaluate the prevalence and trajectories of birth weight and the direct impact of, and relationship between, sulfadoxine-pyrimethamine and birth weight in Ghana since 2003. METHOD: This study used secondary data obtained from the Demographic and Health Surveys conducted in Ghana since 2003. Low birth weight was defined as weight < 2500 g irrespective of the gestational age of the foetus, normal birth weight as 2500 g to < 4000 g, and macrosomia as ≥ 4000 g. In all the analyses, we adjusted for clustering, stratification and weighting to reduce bias and improve the precision of the estimates. Analysis was performed on each survey year as well as on the pooled dataset. The generalized ordered partial proportional odds model was used due to violations of the parallel regression assumption. Efforts were made to identify all confounding variables, and these were adjusted for. Predictive analysis was also performed. RESULTS: The overall prevalence of low birth weight was 9%, while that of macrosomia was 13%. The prevalence of low birth weight was 12% in 2003, 21% in 2008, and 68% in 2014. The mean birth weight of the children was 3.16 kg (3.14, 3.19) in 2014, 3.37 kg (3.28, 3.45) in 2008, and 3.59 kg (3.49, 3.69) in 2003, while that of the pooled data was 3.28 kg (3.25, 3.30). The adjusted model (taking into consideration all confounding variables) showed that non-uptake of SP could result in 51% higher odds of giving birth to a low-birth-weight child compared with a normal-birth-weight child. An insignificant result was observed between macrosomia and low birth weight. CONCLUSION: There is a high probability that low birth weight could increase over the next couple of years if measures are not taken to reverse the current trajectories. The uptake of sulfadoxine-pyrimethamine should continue to be encouraged and recommended because it has a direct beneficial effect on the weight of the child.
Subjects
Antimalarials/administration & dosage; Birth Weight; Models, Statistical; Pyrimethamine/administration & dosage; Sulfadoxine/administration & dosage; Adult; Demography; Drug Combinations; Female; Fetal Macrosomia; Ghana/epidemiology; Humans; Infant, Low Birth Weight; Malaria/prevention & control; Pregnancy; Pregnancy Complications, Parasitic/prevention & control
ABSTRACT
In phase II oncology trials, a two-stage design allowing early stopping for futility and/or efficacy is frequently used. However, such designs, based on frequentist statistical approaches, cannot guarantee a high posterior probability of attaining the pre-specified clinically interesting response rate from a Bayesian perspective. Here, we proposed a new Bayesian design enabling early termination for efficacy as well as for futility. In addition to the clinically uninteresting and clinically interesting response rates, a prior distribution for the response rate, the minimum posterior threshold probabilities, and the lengths of the highest posterior density intervals were specified in the design. Finally, we defined the feasible design as the one with the highest total effective predictive probability. We studied the properties of the proposed design and applied it to an oncology trial as an example. The proposed design ensured that the observed response rate fell within pre-specified levels of posterior probability. The proposed design provides an alternative to single-arm two-stage trials.
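A sketch of the two posterior summaries such a design combines, the posterior probability of exceeding a target rate and a highest-posterior-density interval, for a Beta-binomial model with an assumed prior and assumed interim data.

```python
import numpy as np
from scipy import stats

def posterior_summary(x, n, p0, a=0.5, b=0.5, hpd_mass=0.9):
    """Posterior probability that the response rate exceeds p0 and a
    grid-search approximation to the highest-posterior-density interval,
    under a Beta(a, b) prior with x responders in n patients."""
    post = stats.beta(a + x, b + n - x)
    prob_above = 1 - post.cdf(p0)
    # search over lower tail probabilities for the shortest interval
    grid = np.linspace(0, 1 - hpd_mass, 2001)
    lo = grid[np.argmin(post.ppf(grid + hpd_mass) - post.ppf(grid))]
    hpd = (post.ppf(lo), post.ppf(lo + hpd_mass))
    return prob_above, hpd

print(posterior_summary(x=9, n=25, p0=0.2))
```

In the proposed design, thresholds on quantities of this kind at each stage drive the early-termination decisions for efficacy and futility.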
Subjects
Clinical Trials, Phase II as Topic; Neoplasms; Research Design; Bayes Theorem; Humans; Medical Oncology; Neoplasms/drug therapy; Probability
ABSTRACT
We propose a new adaptive threshold detection and enrichment design in which the biomarker threshold is adaptively estimated and updated by optimizing a trade-off between the size of the biomarker-positive population and the magnitude of the treatment effect in that population. Enrichment is based on an enrollment criterion that accounts for the uncertainty in the estimation of the threshold. Early termination for futility is allowed based on the predictive probability of success. Valid testing and estimation techniques for the treatment effect, both overall and in patient subgroups, are studied. Simulations and an example demonstrate the advantages of the proposed design over existing designs.
Subjects
Clinical Trials as Topic; Research Design; Biomarkers; Humans; Probability
ABSTRACT
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
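A Monte Carlo sketch of the Dirichlet-multinomial predictive probability for an ordinal endpoint; the four categories (e.g. CR/PR/SD/PD), prior, success rule, and thresholds below are assumptions for illustration, and the OPP design's exact enumeration of go/no-go boundaries is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def pp_ordinal(counts, n_max, prior, p0=0.3, theta=0.9, n_sim=5000):
    """Monte Carlo predictive probability of success for an ordinal endpoint
    (e.g. CR/PR/SD/PD) under a Dirichlet-multinomial model. Success at the
    end of the trial is declared if the posterior probability that the
    CR+PR rate exceeds p0 is larger than theta."""
    counts = np.asarray(counts)
    m = n_max - counts.sum()                    # patients still to enrol
    alpha = np.asarray(prior) + counts          # Dirichlet posterior at interim
    successes = 0
    for _ in range(n_sim):
        p = rng.dirichlet(alpha)                # draw category probabilities
        a_end = alpha + rng.multinomial(m, p)   # posterior at the final look
        # the CR+PR rate has a Beta marginal under the Dirichlet posterior
        post = 1 - stats.beta.cdf(p0, a_end[0] + a_end[1], a_end[2:].sum())
        successes += post > theta
    return successes / n_sim

print(pp_ordinal(counts=[3, 4, 6, 7], n_max=40, prior=[1, 1, 1, 1]))
```

Because the success rule is a function of the final category counts only, the corresponding go/no-go boundaries can in principle be tabulated before the trial starts, which is the property the OPP design exploits.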
Subjects
Biometry/methods; Clinical Trials, Phase I as Topic; Endpoint Determination; Bayes Theorem; Humans; Probability
ABSTRACT
Cystic fibrosis (CF) is a progressive, genetic disease characterized by frequent, prolonged drops in lung function. Accurately predicting rapid underlying lung-function decline is essential for clinical decision support and timely intervention. Determining whether an individual is experiencing a period of rapid decline is complicated by its heterogeneous timing and extent and by the error component of measured lung function. We construct individualized predictive probabilities for "nowcasting" rapid decline. We assume each patient's true longitudinal lung function, S(t), follows a nonlinear, nonstationary stochastic process, and we accommodate between-patient heterogeneity through random effects. The corresponding lung-function decline at time t is defined as the rate of change, S'(t). We predict S'(t) conditional on observed covariate and measurement history by modeling measured lung function as a noisy version of S(t). The method is applied to data on 30,879 US CF Registry patients. Results are contrasted with a currently employed decision rule using single-center data on 212 individuals. Rapid decline is identified earlier using predictive probabilities than with the center's currently employed decision rule (mean difference: 0.65 years; 95% confidence interval (CI): 0.41, 0.89). We constructed a bootstrapping algorithm to obtain CIs for the predictive probabilities. We illustrate real-time implementation with R Shiny. Predictive accuracy is investigated using empirical simulations, which suggest this approach detects peak decline more accurately than a uniform threshold of rapid decline. Median area under the ROC curve estimates (Q1-Q3) were 0.817 (0.814-0.822) and 0.745 (0.741-0.747), respectively, implying reasonable accuracy for both. This article demonstrates how individualized rate-of-change estimates can be coupled with probabilistic predictive inference and implementation to yield a useful medical-monitoring approach.
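A toy sketch of turning posterior draws of the rate of change S'(t) into a nowcast probability of rapid decline; the threshold and the stand-in draws are assumptions for illustration, and the paper's nonstationary stochastic-process model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_rapid_decline(slope_draws, threshold=-1.5):
    """Probability that the current rate of lung-function change S'(t) is
    below a clinically meaningful threshold (e.g. -1.5 % predicted FEV1 per
    year), estimated from posterior draws of S'(t) for one patient."""
    slope_draws = np.asarray(slope_draws)
    return float(np.mean(slope_draws < threshold))

# stand-in for posterior draws of S'(t); in practice these would come from
# the fitted longitudinal model conditioned on the patient's history
draws = rng.normal(loc=-1.2, scale=0.6, size=4000)
print(prob_rapid_decline(draws))
```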
Subjects
Cystic Fibrosis; Cystic Fibrosis/diagnosis; Cystic Fibrosis/genetics; Disease Progression; Forced Expiratory Volume; Humans; Lung/diagnostic imaging; Probability
ABSTRACT
BACKGROUND: The Bayesian predictive probability design with a binary endpoint is gaining attention for phase II trials due to its innovative strategy. To make the Bayesian design more accessible, we elucidate this Bayesian approach with an R package to streamline the statistical plan, so that biostatisticians and clinicians can easily integrate the design into clinical trials. METHODS: We utilize a Bayesian framework using the Bayesian posterior probability and predictive probability to build an R package and develop a statistical plan for the trial design. With pre-defined sample sizes, the approach employs the posterior probability with a threshold to calculate the minimum number of responders needed at the end of the study to claim efficacy. The predictive probability is then applied to evaluate future success at interim stages and to form the stopping rule at each stage. RESULTS: An R package, 'BayesianPredictiveFutility', with an associated graphical interface is developed for easy utilization of the trial design. The statistical tool generates a professional statistical plan with comprehensive results, including a summary, details of the study design, and a series of tables and figures covering the stopping boundary for futility, Bayesian predictive probability, performance [probability of early termination (PET), type I error, and power], PET at each interim analysis, sensitivity analyses for the predictive probability, posterior probability, sample size, and beta prior distribution. The statistical plan presents the methodology in readable language while preserving rigorous statistical arguments. Output formats (Word or PDF) are available for communicating with physicians or for incorporation into the trial protocol. Two clinical trials in lung cancer are used to demonstrate its usefulness. CONCLUSIONS: The Bayesian predictive probability method provides a flexible design for clinical trials. The statistical tool adds value by broadening its application.
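A minimal Python sketch of the two quantities this workflow relies on: the minimum number of final responders implied by a posterior-probability threshold, and the interim futility boundary implied by a predictive-probability threshold. The design parameters are assumed, and the 'BayesianPredictiveFutility' R interface itself is not reproduced.

```python
from scipy import stats

def min_responders(n_max, p0, theta_t, a=1, b=1):
    """Smallest number of final responders r such that
    P(p > p0 | r responders in n_max) > theta_t under a Beta(a, b) prior."""
    for r in range(n_max + 1):
        if 1 - stats.beta.cdf(p0, a + r, b + n_max - r) > theta_t:
            return r
    return None

def futility_boundary(n, n_max, p0, theta_t, theta_l=0.05, a=1, b=1):
    """Largest interim responder count x (out of n) at which the predictive
    probability of eventual success falls below theta_l; stop for futility
    if the observed count is <= this boundary (None if no stop is possible)."""
    r_min = min_responders(n_max, p0, theta_t, a, b)
    m = n_max - n
    boundary = None
    for x in range(n + 1):
        pp = sum(
            stats.betabinom.pmf(y, m, a + x, b + n - x)
            for y in range(max(0, r_min - x), m + 1)
        )
        if pp < theta_l:
            boundary = x
    return boundary

print(min_responders(40, p0=0.2, theta_t=0.95))        # responders needed at the end
print(futility_boundary(20, 40, p0=0.2, theta_t=0.95))  # stop if <= this at interim
```

Tabulating the futility boundary at each planned interim look, for each candidate sample size and prior, is essentially the content of the stopping-rule tables such a statistical plan reports.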
ABSTRACT
The evolution of "informatics" technologies has the potential to generate massive databases, but the extent to which personalized medicine may be effectuated depends on the extent to which these rich databases may be utilized to advance understanding of disease molecular profiles and ultimately integrated for treatment selection, necessitating robust methodology for dimension reduction. Yet, statistical methods proposed to address the challenges arising from the high dimensionality of omics-type data predominantly rely on linear models and emphasize associations deriving from prognostic biomarkers. Existing methods are often limited for discovering predictive biomarkers that interact with treatment and fail to elucidate the predictive power of their resultant selection rules. In this article, we present a Bayesian predictive method for personalized treatment selection that is devised to integrate both the treatment-predictive and disease-prognostic characteristics of a particular patient's disease. The method appropriately characterizes the structural constraints inherent to prognostic and predictive biomarkers, and hence properly utilizes these complementary sources of information for treatment selection. The methodology is illustrated through a case study of lower grade glioma. Theoretical considerations are explored to demonstrate the manner in which treatment selection is impacted by prognostic features. Additionally, simulations based on an actual leukemia study are provided to ascertain the method's performance with respect to selection rules derived from competing methods.
Subjects
Biometry/methods; Precision Medicine; Bayes Theorem; Glioma/diagnosis; Glioma/drug therapy; Glioma/pathology; Glioma/radiotherapy; Humans; Neoplasm Grading; Probability; Prognosis
ABSTRACT
Molecular targeted therapies often come with lower toxicity profiles than traditional cytotoxic treatments, thus shifting the drug-development paradigm toward establishing evidence of biological activity, target modulation, and pharmacodynamic effects of these therapies in early-phase trials. Therefore, these trials need to address the simultaneous evaluation of safety and of proof-of-concept biological marker activity, or of changes in continuous tumor size instead of a binary response rate. Interim analyses are typically incorporated into such trials due to concerns regarding excessive toxicity and ineffective new treatments. There is a lack of interim strategies developed to monitor futility and/or efficacy for these types of continuous outcomes, especially in single-arm phase II trials. We propose a 2-stage design based on predictive probability to accommodate continuous endpoints, assuming a normal distribution with known variance. Simulation results and a case study demonstrated that the proposed design can incorporate an interim stop for futility as well as for efficacy while maintaining desirable design properties. As expected, using continuous tumor size resulted in reduced sample sizes for both the optimal and minimax designs. A limited exploration of various priors was performed and the design was shown to be robust. As research rapidly moves to incorporate more molecularly targeted therapies, designs will need to accommodate new types of outcomes while allowing flexible stopping rules, in order to continue optimizing trial resources and prioritizing agents with compelling early-phase data.
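A sketch of the predictive-probability calculation for a single-arm continuous endpoint with known variance, assuming a flat prior on the mean and an illustrative success criterion; the paper's optimal/minimax calibration of the thresholds is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def pp_continuous(ybar1, n1, n_max, sigma, delta=0.0, theta=0.95):
    """Predictive probability of final success for a single-arm trial with a
    continuous endpoint (e.g. change in tumour size), known sigma and a flat
    prior on the mean. Success at the end: posterior P(mu < delta) > theta."""
    m = n_max - n1
    # final-analysis cutoff on the observed overall mean
    cutoff = delta - norm.ppf(theta) * sigma / np.sqrt(n_max)
    # predictive distribution of the final overall mean given interim data
    var_final = (m * sigma**2 + m**2 * sigma**2 / n1) / n_max**2
    return norm.cdf((cutoff - ybar1) / np.sqrt(var_final))

# assumed interim mean shrinkage of -5% after 15 of 40 patients, sigma = 20%
print(pp_continuous(ybar1=-5.0, n1=15, n_max=40, sigma=20.0))
```

Comparing this quantity with low and high cutoffs at the end of stage 1 gives the interim stop-for-futility and stop-for-efficacy rules of a 2-stage design.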
Subjects
Clinical Trials, Phase II as Topic/methods; Statistics as Topic; Humans; Models, Statistical; Probability; Time Factors; Treatment Outcome
ABSTRACT
Relating disease status to imaging data stands to increase the clinical significance of neuroimaging studies. Many neurological and psychiatric disorders involve complex, systems-level alterations that manifest in functional and structural properties of the brain and possibly other clinical and biologic measures. We propose a Bayesian hierarchical model to predict disease status, which is able to incorporate information from both functional and structural brain imaging scans. We consider a two-stage whole brain parcellation, partitioning the brain into 282 subregions, and our model accounts for correlations between voxels from different brain regions defined by the parcellations. Our approach models the imaging data and uses posterior predictive probabilities to perform prediction. The estimates of our model parameters are based on samples drawn from the joint posterior distribution using Markov Chain Monte Carlo (MCMC) methods. We evaluate our method by examining the prediction accuracy rates based on leave-one-out cross validation, and we employ an importance sampling strategy to reduce the computation time. We conduct both whole-brain and voxel-level prediction and identify the brain regions that are highly associated with the disease based on the voxel-level prediction results. We apply our model to multimodal brain imaging data from a study of Parkinson's disease. We achieve extremely high accuracy, in general, and our model identifies key regions contributing to accurate prediction including caudate, putamen, and fusiform gyrus as well as several sensory system regions.
ABSTRACT
In clinical research and development, interim monitoring is critical for better decision-making and for minimizing the risk of exposing patients to possibly ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from completers only may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power, and also give closed-form solutions for the predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than those using information from completers only. To illustrate their practical application to longitudinal data, we analyze two real data examples from clinical trials.
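For orientation, the standard closed-form conditional power and (flat-prior) Bayesian predictive probability for a normal mean on the information scale, which the longitudinal formulas generalize; the interim estimate and information levels below are assumed.

```python
import numpy as np
from scipy.stats import norm

def interim_probabilities(theta_hat, i1, i_max, alpha=0.025, theta_assumed=None):
    """Conditional power (at an assumed effect) and flat-prior Bayesian
    predictive probability of rejecting H0: theta <= 0 at the final analysis,
    given interim estimate theta_hat at information i1 out of i_max."""
    i2 = i_max - i1
    z_crit = norm.ppf(1 - alpha)
    if theta_assumed is None:
        theta_assumed = theta_hat            # "current trend" conditional power
    num_cp = i1 * theta_hat + i2 * theta_assumed - z_crit * np.sqrt(i_max)
    cp = norm.cdf(num_cp / np.sqrt(i2))
    num_pp = i_max * theta_hat - z_crit * np.sqrt(i_max)
    pp = norm.cdf(num_pp / np.sqrt(i2 * i_max / i1))
    return cp, pp

# assumed interim estimate 0.3 at half of the maximum information of 100
print(interim_probabilities(theta_hat=0.3, i1=50, i_max=100))
```

The longitudinal versions in the paper replace the interim estimate and information by quantities that also incorporate the partial measurements of on-going subjects.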
Subjects
Bayes Theorem; Clinical Trials as Topic/methods; Longitudinal Studies; Probability; Computer Simulation; Humans; Models, Statistical; Treatment Outcome
ABSTRACT
The process of screening agents one-at-a-time under the current clinical trials system suffers from several deficiencies that could be addressed in order to extend financial and patient resources. In this article, we introduce a statistical framework for designing and conducting randomized multi-arm screening platforms with binary endpoints using Bayesian modeling. In essence, the proposed platform design consolidates inter-study control arms, enables investigators to assign more new patients to novel therapies, and accommodates mid-trial modifications to the study arms that allow both dropping poorly performing agents as well as incorporating new candidate agents. When compared to sequentially conducted randomized two-arm trials, screening platform designs have the potential to yield considerable reductions in cost, alleviate the bottleneck between phase I and II, eliminate bias stemming from inter-trial heterogeneity, and control for multiplicity over a sequence of a priori planned studies. When screening five experimental agents, our results suggest that platform designs have the potential to reduce the mean total sample size by as much as 40% and boost the mean overall response rate by as much as 15%. We explain how to design and conduct platform designs to achieve the aforementioned aims and preserve desirable frequentist properties for the treatment comparisons. In addition, we demonstrate how to conduct a platform design using look-up tables that can be generated in advance of the study. The gains in efficiency facilitated by platform design could prove to be consequential in oncologic settings, wherein trials often lack a proper control, and drug development suffers from low enrollment, long inter-trial latency periods, and an unacceptably high rate of failure in phase III.
Subjects
Probability; Research Design; Bayes Theorem; Clinical Trials, Phase II as Topic/statistics & numerical data; Clinical Trials, Phase III as Topic/statistics & numerical data; Controlled Clinical Trials as Topic/statistics & numerical data
ABSTRACT
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some loss of power, and that skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
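A toy sketch of posterior-probability-based adaptation for binary outcomes; the priors, tuning power, and interim counts are assumed for illustration, and the skeptical-prior predictive variant compared in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

def allocation_probs(successes, trials, a=1, b=1, gamma=0.5, n_draws=10000):
    """Response-adaptive allocation probabilities for binary outcomes: each
    arm's allocation probability is proportional to the posterior probability
    that it has the highest response rate, raised to a tuning power gamma
    (gamma = 0 recovers equal randomization)."""
    draws = np.column_stack([
        rng.beta(a + s, b + n - s, size=n_draws)
        for s, n in zip(successes, trials)
    ])
    p_best = np.bincount(np.argmax(draws, axis=1),
                         minlength=len(successes)) / n_draws
    w = p_best ** gamma
    return w / w.sum()

# e.g. interim data: 8/20 vs 13/20 vs 10/20 responders in a three-group trial
print(allocation_probs(successes=[8, 13, 10], trials=[20, 20, 20]))
```

Sharper adaptation (larger gamma) sends more patients to apparently better arms but increases allocation variability and can erode power, which is the trade-off the simulation comparisons quantify.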