ABSTRACT
In recent decades, several randomization designs have been proposed in the literature as better alternatives to the traditional permuted block design (PBD), providing greater allocation randomness under the same maximum tolerated imbalance (MTI) restriction. However, the PBD remains the most frequently used method for randomizing subjects in clinical trials. This status quo may reflect inadequate awareness and appreciation of the statistical properties of these randomization designs and a lack of simple methods for their implementation. This manuscript presents analytic results on the statistical properties of five randomization designs with the MTI restriction, derived from the steady-state probabilities of the treatment imbalance Markov chain, and compares them with those of the PBD. A unified framework for randomization sequence generation and real-time, on-demand treatment assignment is proposed for the straightforward implementation of randomization algorithms with explicit formulas for the conditional allocation probabilities. Topics associated with the evaluation, selection, and implementation of randomization designs are discussed. It is concluded that, for two-arm equal allocation trials, several randomization designs offer stronger protection against selection bias than the PBD, and their implementation is not necessarily more difficult than that of the PBD.
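As an illustration of how an MTI design can be written as an explicit conditional allocation rule and studied through its imbalance Markov chain, the sketch below implements the big stick design (a forced assignment whenever the imbalance reaches the MTI, a fair coin otherwise) and numerically computes the steady-state distribution of the imbalance D = N_A - N_B for two-arm 1:1 allocation. This is a minimal sketch, not the manuscript's own code, and the MTI value of 3 is an arbitrary illustration.

```python
import numpy as np

def big_stick_prob_A(d, mti):
    """Conditional probability of assigning treatment A given the
    current imbalance d = N_A - N_B under the big stick design."""
    if d >= mti:       # imbalance cap reached: force B
        return 0.0
    if d <= -mti:      # imbalance cap reached: force A
        return 1.0
    return 0.5         # otherwise a fair coin

def imbalance_steady_state(mti):
    """Steady-state probabilities of the imbalance Markov chain
    with states -mti, ..., mti under the big stick design."""
    states = np.arange(-mti, mti + 1)
    P = np.zeros((len(states), len(states)))
    for i, d in enumerate(states):
        pA = big_stick_prob_A(d, mti)
        P[i, i + 1 if d < mti else i - 1] += pA        # A assigned: d -> d + 1
        P[i, i - 1 if d > -mti else i + 1] += 1 - pA   # B assigned: d -> d - 1
    # solve pi P = pi with sum(pi) = 1
    A = np.vstack([P.T - np.eye(len(states)), np.ones(len(states))])
    b = np.append(np.zeros(len(states)), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(states, pi))

print(imbalance_steady_state(mti=3))
```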
Subjects
Models, Statistical; Research Design; Humans; Random Allocation; Selection Bias; Probability
ABSTRACT
BACKGROUND: The design of a multi-center randomized controlled trial (RCT) involves multiple considerations, such as the choice of the sample size, the number of centers and their geographic location, and the strategy for recruiting study participants, among others. Many methods are available to sequentially randomize patients in a multi-center RCT, with or without considering stratification factors. The goal of this paper is to perform a systematic assessment of such randomization methods for a multi-center 1:1 RCT, assuming a competitive policy for the patient recruitment process. METHODS: We considered a Poisson-gamma model for the patient recruitment process with a uniform distribution of center activation times. We investigated 16 randomization methods (4 unstratified, 4 region-stratified, 4 center-stratified, 3 dynamic balancing randomization (DBR) designs, and a complete randomization design) to sequentially randomize n = 500 patients. Statistical properties of the recruitment process and the randomization procedures were assessed using Monte Carlo simulations. The operating characteristics included the time to complete recruitment, the number of centers that recruited a given number of patients, several measures of treatment imbalance and estimation efficiency under a linear model for the response, the expected proportions of correct guesses under two different guessing strategies, and the expected proportion of deterministic assignments in the allocation sequence. RESULTS: Maximum tolerated imbalance (MTI) randomization methods such as the big stick design, Ehrenfest urn design, and block urn design result in a better balance-randomness tradeoff than the conventional permuted block design (PBD), with or without stratification. Unstratified, region-stratified, and center-stratified randomization provide control of imbalance at the chosen level (trial, region, or center) but may fail to achieve balance at the other two levels. By contrast, DBR controls imbalance well at all three levels while maintaining the randomized nature of treatment allocation. Adding more centers to the study helps accelerate recruitment, but at the expense of increasing the number of centers that recruit very few (or no) patients, which may increase center-level imbalances for center-stratified and DBR procedures. Increasing the block size or the MTI threshold(s) may help obtain designs with an improved randomness-balance tradeoff. CONCLUSIONS: The choice of a randomization method is an important component of planning a multi-center RCT. Dynamic balancing randomization with carefully chosen MTI thresholds can be a very good strategy for trials with a competitive patient recruitment policy.
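The Poisson-gamma recruitment model described above can be simulated in a few lines: each center receives a recruitment rate drawn from a gamma distribution, is activated at a uniformly distributed time, and then recruits patients according to a Poisson process. The sketch below, with arbitrary parameter values, returns the calendar time needed to reach the target sample size; it illustrates the model class only and is not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(2024)

def recruitment_time(n_target=500, n_centers=50, alpha=2.0, beta=20.0,
                     activation_window=180.0, horizon=1500.0):
    """Simulate a Poisson-gamma recruitment process and return the
    calendar time (days) at which the n_target-th patient is enrolled."""
    rates = rng.gamma(alpha, 1.0 / beta, size=n_centers)        # patients/day per center
    activation = rng.uniform(0.0, activation_window, size=n_centers)
    arrivals = []
    for rate, start in zip(rates, activation):
        # Poisson process on [start, horizon]: draw the count, then uniform arrival times
        n_i = rng.poisson(rate * (horizon - start))
        arrivals.extend(start + rng.uniform(0.0, horizon - start, size=n_i))
    arrivals = np.sort(arrivals)
    return arrivals[n_target - 1] if len(arrivals) >= n_target else np.inf

times = [recruitment_time() for _ in range(200)]
print("median time to recruit 500 patients:", np.median(times))
```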
Subjects
Research Design; Humans; Random Allocation; Sample Size; Patient Selection
ABSTRACT
From 2016 to 2021, the National Institutes of Health Stroke Trials Network, funded by the National Institutes of Health/National Institute of Neurological Disorders and Stroke, initiated ten multicenter randomized controlled clinical trials. An optimal subject randomization design is expected to have four critical properties: (1) protection of the randomness of treatment assignments, (2) achievement of the desired treatment allocation ratio, (3) balance of baseline covariates, and (4) ease of implementation. For acute stroke trials, it is also necessary to minimize the time between eligibility assessment and treatment initiation. This article reviews the randomization designs for three trials currently enrolling in the network: SATURN (Statins in Intracerebral Hemorrhage Trial), MOST (Multiarm Optimization of Stroke Thrombolysis Trial), and FASTEST (Recombinant Factor VIIa for Hemorrhagic Stroke Trial). Randomization methods used in these trials include minimal sufficient balance, the block urn design, the big stick design, and step-forward randomization. Their advantages and limitations are reviewed and compared with those of the traditional stratified permuted block design and minimization.
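Of the methods named above, the block urn design lends itself to a compact illustration. In our reading of the design (for two-arm 1:1 allocation with MTI lambda), the urn starts with lambda balls per arm; each assignment removes the drawn ball, and whenever one set-aside ball of each arm forms a balanced pair, that pair is returned to the urn, which caps the imbalance at lambda while allowing more random assignments than a permuted block of the same size. The sketch below is a hedged illustration of that mechanism, not trial software.

```python
import random

def block_urn_sequence(n, mti=2, seed=1):
    """Generate a 1:1 allocation sequence ('A'/'B') from a block urn design
    with imbalance capped at mti (one possible reading of the design)."""
    rng = random.Random(seed)
    active = {"A": mti, "B": mti}     # balls currently in the urn
    inactive = {"A": 0, "B": 0}       # balls set aside after being drawn
    sequence = []
    for _ in range(n):
        total = active["A"] + active["B"]
        arm = "A" if rng.random() < active["A"] / total else "B"
        active[arm] -= 1
        inactive[arm] += 1
        # return complete balanced pairs to the urn
        pairs = min(inactive["A"], inactive["B"])
        for t in ("A", "B"):
            inactive[t] -= pairs
            active[t] += pairs
        sequence.append(arm)
    return sequence

print("".join(block_urn_sequence(20, mti=2)))
```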
Subjects
National Institute of Neurological Disorders and Stroke (USA); Stroke; Humans; Cerebral Hemorrhage/therapy; Multicenter Studies as Topic; National Institutes of Health (U.S.); Random Allocation; Stroke/drug therapy; United States; Randomized Controlled Trials as Topic
ABSTRACT
When the number of baseline covariates whose imbalance needs to be controlled in a sequential randomized controlled trial is large, minimization is the most commonly used method for randomizing treatment assignments. The lack of allocation randomness associated with minimization has been a source of controversy, and the need to reduce even the minor imbalances inherent in the minimization method has been challenged. The minimal sufficient balance (MSB) method is an alternative to minimization: it prevents serious imbalance across a large number of covariates while maintaining a high level of allocation randomness. In this study, the two treatment allocation methods are compared with regard to the effectiveness of balancing covariates across treatment arms and allocation randomness in equal allocation clinical trials. The MSB method proves to be equal or superior in both respects. In addition, the type I error rate is preserved in analyses under both balancing methods when a binary endpoint is used.
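The contrast between minimization and minimal sufficient balance can be made concrete with a small sketch. The version below reflects one common description of MSB for two arms: each covariate is tested for imbalance between the arms of the already-randomized subjects, covariates whose imbalance exceeds a significance threshold "vote" for the arm that would reduce that imbalance given the new subject's value, and only when one arm wins a strict majority of votes is a biased coin (here 0.65) used; otherwise allocation is a fair coin. The threshold, the biased-coin probability, and the use of Welch t-tests for continuous covariates are illustrative assumptions, not the published specification.

```python
import numpy as np
from scipy import stats

def msb_prob_A(new_x, past_x, past_arm, alpha=0.30, biased_p=0.65):
    """Probability of assigning the new subject to arm A under a
    minimal-sufficient-balance-style rule (continuous covariates only).

    new_x    : 1-D array of the new subject's covariate values
    past_x   : (n_subjects, n_covariates) covariates of randomized subjects
    past_arm : array of 0/1 (1 = arm A) for randomized subjects
    """
    votes_A, votes_B = 0, 0
    for j, x_new in enumerate(new_x):
        xa, xb = past_x[past_arm == 1, j], past_x[past_arm == 0, j]
        if len(xa) < 2 or len(xb) < 2:
            continue
        _, p = stats.ttest_ind(xa, xb, equal_var=False)
        if p < alpha:                      # covariate considered imbalanced
            # vote for the arm whose mean the new value would pull toward balance
            if (xa.mean() > xb.mean()) == (x_new < (xa.mean() + xb.mean()) / 2):
                votes_A += 1
            else:
                votes_B += 1
    if votes_A > votes_B:
        return biased_p
    if votes_B > votes_A:
        return 1.0 - biased_p
    return 0.5

rng = np.random.default_rng(0)
X, arm = rng.normal(size=(40, 3)), rng.integers(0, 2, size=40)
print(msb_prob_A(rng.normal(size=3), X, arm))
```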
Subjects
Research Design; Computer Simulation; Random Allocation
ABSTRACT
Clinical trial design and analysis often assume study population homogeneity, although patient baseline profiles and the standard of care may evolve over time, especially in trials with long recruitment periods. This time-trend phenomenon can affect treatment effect estimation and the operating characteristics of trials with Bayesian response adaptive randomization (BRAR). The mechanism by which time trends affect BRAR is increasingly being studied, but some aspects remain unclear. The goal of this research is to quantify the bias in treatment effect estimation due to the use of BRAR in the presence of a time trend. In addition, simulations are conducted to compare the performance of three commonly used BRAR algorithms under different time-trend patterns, with and without early stopping rules. The results demonstrate that using these BRAR methods in a two-arm trial with a time trend may cause type I error inflation and bias in treatment effect estimation. The magnitude and direction of the bias are affected by the parameters of the BRAR algorithm and the time-trend pattern.
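To make the mechanism concrete, one widely used BRAR rule allocates the next patient to an arm with probability proportional to a power of the posterior probability that the arm is best, with Beta-Bernoulli posteriors updated as outcomes accrue. The sketch below implements that generic rule (uniform priors, tuning exponent 0.5); it is meant only to illustrate the class of algorithms studied above, not any specific algorithm from the article, and a drifting success probability can be added to mimic a time trend.

```python
import numpy as np

rng = np.random.default_rng(7)

def brar_prob_A(successes, failures, c=0.5, n_draws=10_000):
    """Allocation probability for arm A under a Thompson-sampling-type
    BRAR rule with Beta(1, 1) priors and tuning exponent c."""
    pA = rng.beta(1 + successes[0], 1 + failures[0], n_draws)
    pB = rng.beta(1 + successes[1], 1 + failures[1], n_draws)
    prob_A_best = np.mean(pA > pB)
    wA, wB = prob_A_best ** c, (1 - prob_A_best) ** c
    return wA / (wA + wB)

# example: arm A has 12/30 responders, arm B has 8/30 responders so far
print(brar_prob_A(successes=(12, 8), failures=(18, 22)))
```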
Subjects
Adaptive Clinical Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/statistics & numerical data; Research Design/statistics & numerical data; Algorithms; Bayes Theorem; Data Interpretation, Statistical; Humans; Time Factors; Treatment Outcome
ABSTRACT
BACKGROUND: Centralized outcome adjudication has been used widely in multicenter clinical trials to prevent potential biases and to reduce variation in important safety and efficacy outcome assessments. Adjudication procedures can vary significantly among studies. In practice, the coordination of outcome adjudication in many multicenter clinical trials remains a manual process with low efficiency and a high risk of delay. Motivated by the demands of two large clinical trial networks, a generic outcome adjudication module was developed by the networks' data management center within a homegrown clinical trial management system. In this article, the system design strategy and database structure are presented. METHODS: A generic database model was created to translate different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step was defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free-text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within the homegrown clinical trial management system to provide automated coordination of outcome adjudication. RESULTS: By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined, with the number of adjudication steps varying from 1 to 7. The implementation of a new adjudication procedure in this generic module took an experienced programmer one or two days. A total of 7336 outcome events had been adjudicated and 16,235 adjudication step activities had been recorded. In one multicenter trial, 1144 safety outcome event submissions went through a three-step adjudication procedure, with a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated through a six-step procedure, with a median of 23.84 days from outcome event case report form submission to adjudication completion. CONCLUSION: A generic outcome adjudication module integrated into the clinical trial management system made the automated coordination of efficacy and safety outcome adjudication a reality.
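As a rough illustration of the generic database model described above, each adjudication procedure can be stored as an ordered list of step definitions, where each step carries one activate condition, one lock condition, up to five categorical result items, and a free-text comment field. The Python dataclasses below are a hypothetical rendering of that structure; the field names are ours, not the system's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AdjudicationStep:
    """One step in a generic adjudication procedure."""
    step_number: int
    activate_condition: str            # e.g. rule on prior steps / CRF submission status
    lock_condition: str                # when the step becomes read-only
    result_items: List[str]            # 1-5 categorical data items
    comment: Optional[str] = None      # free-text general comments

    def __post_init__(self):
        assert 1 <= len(self.result_items) <= 5, "each step captures 1-5 categorical items"

@dataclass
class AdjudicationProcedure:
    """An adjudication procedure is an ordered set of sequential steps."""
    outcome_type: str                  # e.g. "safety event", "clinical outcome"
    steps: List[AdjudicationStep] = field(default_factory=list)

    def next_step(self, completed: int) -> Optional[AdjudicationStep]:
        """Return the next step to activate, or None when adjudication is complete."""
        return self.steps[completed] if completed < len(self.steps) else None

# hypothetical three-step safety adjudication procedure
proc = AdjudicationProcedure("safety event", [
    AdjudicationStep(1, "safety CRF submitted", "step 2 started", ["event confirmed"]),
    AdjudicationStep(2, "step 1 completed", "step 3 started", ["severity", "relatedness"]),
    AdjudicationStep(3, "step 2 completed", "procedure closed", ["final classification"]),
])
print(proc.next_step(completed=1))
```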
Subjects
Internet; Systems Integration; Treatment Outcome; Databases, Factual; Humans; Multicenter Studies as Topic; Patient Safety; Randomized Controlled Trials as Topic
ABSTRACT
In Phase III clinical trials for life-threatening conditions, some serious but expected adverse events, such as early deaths or congestive heart failure, are often treated as a secondary or co-primary endpoint and are closely monitored by the Data and Safety Monitoring Committee (DSMC). A naïve group sequential design (GSD) for such a study is to specify univariate statistical boundaries for the efficacy and safety endpoints separately and then implement the two boundaries during the study, even though the two endpoints are typically correlated. One problem with this naïve design, which has been noted in the statistical literature, is the potential loss of power. In this article, we develop an analytical tool to evaluate this negative impact for trials with non-trivial safety event rates, particularly when the safety monitoring is informal. Using a bivariate binary power function for the GSD with a random-effect component to account for subjective decision-making in safety monitoring, we demonstrate how, under common conditions, the power loss in the naïve design can be substantial. This tool may be helpful to entities such as DSMCs when they wish to deviate from the prespecified stopping boundaries based on safety measures.
Subjects
Clinical Trials, Phase III as Topic; Data Interpretation, Statistical; Safety; Biometry; Decision Making; Endpoint Determination; Humans
ABSTRACT
OBJECTIVES: Health information exchanges (HIEs) make it possible to construct databases that characterize patients as multisystem users (MSUs), those visiting emergency departments (EDs) of more than one hospital system within a region during a 1-year period. HIE data can inform an algorithm highlighting patients for whom information is more likely to be present in the HIE, leading to a higher-yield HIE experience for ED clinicians and incentivizing their adoption of HIE. Our objective was to describe patient characteristics that determine which ED patients are likely to be MSUs and therefore have information in an HIE, thereby improving the efficacy of HIE use and increasing ED clinician perception of HIE benefit. METHODS: Data were extracted from a regional HIE involving four hospital systems (11 EDs) in the Charleston, South Carolina area. We used univariate and multivariable regression analyses to develop a predictive model for MSU status. RESULTS: Factors associated with MSUs included younger age groups; dual-payer insurance status; residence in more rural counties; and one of at least six specific diagnosis groups: mental disorders; symptoms, signs, and ill-defined conditions; complications of pregnancy, childbirth, and the puerperium; diseases of the musculoskeletal system; injury and poisoning; and diseases of the blood and blood-forming organs. Among patients with multiple ED visits during the year, 43.8% of MSUs had ≥4 visits, compared with 18.0% of non-MSUs (P < 0.0001). CONCLUSIONS: This predictive model accurately identified patients cared for at multiple hospital systems and can be used to increase the likelihood that time spent logging on to the HIE will be a value-added effort for emergency physicians.
Assuntos
Serviço Hospitalar de Emergência , Troca de Informação em Saúde , Uso Excessivo dos Serviços de Saúde/prevenção & controle , Registro Médico Coordenado/métodos , Adulto , Redução de Custos , Registros Eletrônicos de Saúde/normas , Serviço Hospitalar de Emergência/economia , Serviço Hospitalar de Emergência/estatística & dados numéricos , Feminino , Troca de Informação em Saúde/normas , Troca de Informação em Saúde/estatística & dados numéricos , Humanos , Masculino , Pessoa de Meia-Idade , Melhoria de Qualidade , South CarolinaRESUMO
OBJECTIVES: A small but significant number of patients make frequent emergency department (ED) visits to multiple EDs within a region. We have a unique health information exchange (HIE) that includes every ED encounter in all hospital systems in our region. Using this HIE, we were able to characterize all frequent ED users in our region, regardless of the hospital visited or payer class. The objective of our study was to use data from an HIE to characterize patients in a region who are frequent ED users (FEDUs). METHODS: We constructed a database from a cohort of adult patients (18 years old or older) with information in a regional HIE for a 1-year period beginning in April 2012. Patients were defined as FEDUs (those who made four or more ED visits during the study period) or non-FEDUs (those who made fewer than four ED visits during the study period). Predictor variables included age, race, sex, payer class, county of residence, and International Classification of Diseases, Ninth Revision (ICD-9) codes. Bivariate (χ²) and multivariable (logistic regression) analyses were performed to determine associations between the predictor variables and the outcome of being a FEDU. RESULTS: The database contained 127,672 patients, 12,293 (9.6%) of whom were FEDUs. Logistic regression showed the following patient characteristics to be significantly associated with being a FEDU: age 35 to 44 years; African American race; Medicaid, Medicare, and dual-payer class; and ICD-9 codes 630 to 679 (complications of pregnancy, childbirth, and the puerperium), 780 to 799 (ill-defined conditions), 280 to 289 (diseases of the blood), 290 to 319 (mental disorders), 680 to 709 (diseases of the skin and subcutaneous tissue), 710 to 739 (musculoskeletal and connective tissue disease), 460 to 519 (respiratory disease), and 520 to 579 (digestive disease). No significant differences were noted between men and women. CONCLUSIONS: Data from an HIE can be used to describe all patients within a region who are FEDUs, regardless of the hospital system they visited. This information can be used to focus care coordination efforts and link appropriate patients to a medical home. Future studies can be designed to learn why patients become FEDUs, and interventions can be developed to address the deficiencies in health care that result in frequent ED visits.
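A minimal sketch of the analysis workflow described above (bivariate chi-square screening followed by multivariable logistic regression for FEDU status) is shown below using statsmodels. The file name and column names are hypothetical placeholders, not the study database.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# hypothetical extract from an HIE database, one row per patient
df = pd.read_csv("hie_patients.csv")  # assumed columns: fedu (0/1), age_group, race, sex, payer, dx_group

# bivariate screening: chi-square test of each candidate predictor vs FEDU status
for col in ["age_group", "race", "sex", "payer", "dx_group"]:
    chi2, p, _, _ = chi2_contingency(pd.crosstab(df[col], df["fedu"]))
    print(f"{col}: chi2 = {chi2:.1f}, p = {p:.4f}")

# multivariable logistic regression for the odds of being a frequent ED user
model = smf.logit("fedu ~ C(age_group) + C(race) + C(sex) + C(payer) + C(dx_group)", data=df).fit()
print(np.exp(model.params))  # odds ratios
```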
Subjects
Emergency Service, Hospital/statistics & numerical data; Health Information Exchange; Medical Overuse/prevention & control; Medical Record Linkage/methods; Adult; Age Factors; Ethnicity; Female; Health Information Exchange/standards; Health Information Exchange/statistics & numerical data; Humans; International Classification of Diseases; Male; Mental Disorders/epidemiology; Patient Identification Systems/methods; Pregnancy; Pregnancy Complications/epidemiology; South Carolina/epidemiology
Subjects
Coronavirus Infections; National Institute of Neurological Disorders and Stroke (USA); Pandemics; Pneumonia, Viral; Stroke/therapy; COVID-19; Clinical Trials as Topic; Education, Medical; Humans; Randomized Controlled Trials as Topic; Stroke/diagnostic imaging; Stroke/drug therapy; Stroke Rehabilitation/statistics & numerical data; Telemedicine; United States
ABSTRACT
The question of when to adjust for important prognostic covariates often arises in the design of clinical trials, and opinions vary on whether to adjust during both randomization and analysis, at randomization alone, or at analysis alone. Furthermore, little is known about the impact of covariate adjustment in the context of noninferiority (NI) designs. The current simulation-based research explores this issue in the NI setting, as compared with the typical superiority setting, by assessing the differential impact on power, type I error, and bias in the treatment estimate and its standard error, in the context of logistic regression under both simple and covariate-adjusted permuted block randomization algorithms. In both the superiority and NI settings, failure to adjust in the analysis phase for covariates that influence the outcome, regardless of prior adjustment at randomization, results in treatment estimates that are biased toward zero, with deflated standard errors. Because the no-difference scenario corresponds to the null hypothesis in superiority trials but to the alternative hypothesis in NI trials, this bias leads to decreased power and nominal or conservative (deflated) type I error in the superiority setting, but to inflated power and type I error in the NI setting. Results from the simulation study suggest that, regardless of the use of the covariate in randomization, it is appropriate to adjust for important prognostic covariates in the analysis, as this yields nearly unbiased treatment estimates as well as nominal type I error.
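The central simulation finding, that omitting an influential covariate from the logistic analysis biases the treatment estimate toward zero and deflates its standard error, can be reproduced in miniature with the hedged sketch below (arbitrary effect sizes, a superiority-style setup); it is not the paper's simulation code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def one_trial(n=400, beta_trt=0.6, beta_x=1.0):
    """Simulate one trial and fit adjusted and unadjusted logistic models."""
    x = rng.normal(size=n)                             # influential covariate
    trt = rng.permutation(np.repeat([0, 1], n // 2))   # balanced 1:1 randomization
    p = 1.0 / (1.0 + np.exp(-(-0.5 + beta_trt * trt + beta_x * x)))
    y = rng.binomial(1, p)
    adj = sm.Logit(y, sm.add_constant(np.column_stack([trt, x]))).fit(disp=0)
    unadj = sm.Logit(y, sm.add_constant(trt)).fit(disp=0)
    return adj.params[1], adj.bse[1], unadj.params[1], unadj.bse[1]

res = np.array([one_trial() for _ in range(500)])
print("adjusted   estimate / SE:", res[:, 0].mean().round(3), res[:, 1].mean().round(3))
print("unadjusted estimate / SE:", res[:, 2].mean().round(3), res[:, 3].mean().round(3))
```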
Subjects
Logistic Models; Randomized Controlled Trials as Topic; Research Design; Algorithms; Computer Simulation; Humans; Prognosis
ABSTRACT
Stratified permuted block randomization has been the dominant covariate-adaptive randomization procedure in clinical trials for several decades. Its high probability of deterministic assignment and low covariate-balancing capacity have been well recognized. The popularity of this sub-optimal method is largely due to its simplicity of implementation and the lack of better alternatives. Proposed in this paper is a two-stage covariate-adaptive randomization procedure that uses the block urn design or the big stick design in stage one to restrict the treatment imbalance within each covariate stratum, and uses the biased-coin minimization method in stage two to control imbalances in the distribution of additional covariates that are not included in the stratification algorithm. Analytical and simulation results show that the new randomization procedure significantly reduces the probability of deterministic assignments and improves the covariate-balancing capacity compared with traditional stratified permuted block randomization.
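One way to read the two-stage procedure above: stage one applies an MTI rule (here, a big stick design) to the imbalance within the new subject's stratum, and only when that rule does not force an assignment does stage two apply a biased-coin minimization over the marginal imbalances of the additional, non-stratified covariates. The sketch below is our hedged paraphrase of that logic for two arms; the MTI of 2 and biased-coin probability of 0.70 are arbitrary choices, not the paper's recommended values.

```python
import numpy as np

def two_stage_prob_A(stratum_imbalance, extra_imbalances, mti=2, biased_p=0.70):
    """Probability of assigning arm A to the next subject.

    stratum_imbalance : current (N_A - N_B) within the subject's stratum
    extra_imbalances  : list of (N_A - N_B) counts for the subject's levels of the
                        additional covariates handled by minimization
    """
    # stage 1: big stick rule within the stratum
    if stratum_imbalance >= mti:
        return 0.0
    if stratum_imbalance <= -mti:
        return 1.0
    # stage 2: biased-coin minimization on the remaining covariates
    score = sum(np.sign(d) for d in extra_imbalances)   # > 0 means A is over-represented
    if score > 0:
        return 1.0 - biased_p
    if score < 0:
        return biased_p
    return 0.5

print(two_stage_prob_A(stratum_imbalance=1, extra_imbalances=[2, 1, 0]))
```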
Assuntos
Distribuição Aleatória , Ensaios Clínicos Controlados Aleatórios como Assunto/métodos , Projetos de Pesquisa , Algoritmos , Viés , Ensaios Clínicos como Assunto , Simulação por Computador , Humanos , Probabilidade , Teoria da ProbabilidadeRESUMO
It is well known that competing demands exist between the control of important covariate imbalances and the protection of treatment allocation randomness in confirmatory clinical trials. When a response-adaptive randomization algorithm is implemented in a confirmatory trial designed under a frequentist framework, additional competing demands emerge between the shift of the treatment allocation ratio and the preservation of power. Based on a large multicenter phase III stroke trial, we present a patient randomization scheme that manages these competing demands by applying a newly developed minimal sufficient balance design for baseline covariates and a cap on the treatment allocation ratio shift in order to protect allocation randomness and power. Statistical properties of this randomization plan are studied by computer simulation. Trial operating characteristics, such as the patient enrollment rate and the delay in observing the primary outcome response, are also incorporated into the randomization plan.
Assuntos
Ensaios Clínicos Fase III como Assunto/métodos , Estudos Multicêntricos como Assunto/métodos , Seleção de Pacientes/ética , Ensaios Clínicos Controlados Aleatórios como Assunto/métodos , Acidente Vascular Cerebral/terapia , Teorema de Bayes , Distribuição de Qui-Quadrado , Ensaios Clínicos Fase III como Assunto/economia , Ensaios Clínicos Fase III como Assunto/ética , Simulação por Computador , Humanos , Estudos Multicêntricos como Assunto/ética , Ensaios Clínicos Controlados Aleatórios como Assunto/economia , Ensaios Clínicos Controlados Aleatórios como Assunto/éticaRESUMO
In logistic regression analysis of binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This article uses simulation to quantify the benefit of covariate adjustment in logistic regression. International Conference on Harmonisation guidelines, however, suggest that covariate adjustment be prespecified, and unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or not planned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log-binomial regression is also explored.
Assuntos
Ensaios Clínicos como Assunto/estatística & dados numéricos , Interpretação Estatística de Dados , Modelos Logísticos , Isquemia Encefálica/diagnóstico , Isquemia Encefálica/tratamento farmacológico , Simulação por Computador , Fibrinolíticos/administração & dosagem , Humanos , Análise Numérica Assistida por Computador , Razão de Chances , Projetos de Pesquisa/estatística & dados numéricos , Índice de Gravidade de Doença , Acidente Vascular Cerebral/diagnóstico , Acidente Vascular Cerebral/tratamento farmacológico , Terapia Trombolítica , Ativador de Plasminogênio Tecidual/administração & dosagem , Resultado do TratamentoRESUMO
Randomized clinical trials, which aim to determine the efficacy and safety of drugs and medical devices, are a complex enterprise with myriad challenges, stakeholders, and traditions. Although the primary goal is scientific discovery, clinical trials must also fulfill regulatory, clinical, and ethical requirements. Innovations in clinical trials methodology have the potential to improve the quality of knowledge gained from trials, the protection of human subjects, and the efficiency of clinical research. Adaptive clinical trial methods represent a broad category of innovations intended to address a variety of long-standing challenges faced by investigators, such as sensitivity to previous assumptions and delayed identification of ineffective treatments. The implementation of adaptive clinical trial methods, however, requires greater planning and simulation compared with a more traditional design, along with more advanced administrative infrastructure for trial execution. The value of adaptive clinical trial methods in exploratory phase (phase 2) clinical research is generally well accepted, but the potential value and challenges of applying adaptive clinical trial methods in large confirmatory phase clinical trials are relatively unexplored, particularly in the academic setting. In the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) project, a multidisciplinary team is studying how adaptive clinical trial methods could be implemented in planning actual confirmatory phase trials in an established, National Institutes of Health-funded clinical trials network. The overarching objectives of ADAPT-IT are to identify and quantitatively characterize the adaptive clinical trial methods of greatest potential value in confirmatory phase clinical trials and to elicit and understand the enthusiasms and concerns of key stakeholders that influence their willingness to try these innovative strategies.
Subjects
Randomized Controlled Trials as Topic/methods; Clinical Trials, Phase II as Topic/methods; Data Interpretation, Statistical; Early Termination of Clinical Trials/methods; Humans; Interdisciplinary Communication; Research Design; Sample Size
ABSTRACT
It is not uncommon to have experimental drugs at different stages of development for a given disease area. Methods are proposed for use when another treatment arm is to be added mid-study to an ongoing clinical trial. Monte Carlo simulation was used to compare potential analytical approaches for pairwise comparisons of a difference in means in independent normal populations, including (1) a linear model adjusting for the design change (stage effect), (2) pooling data across the stages, and (3) the use of an adaptive combination test. In the presence of intra-stage correlation (or a non-ignorable fixed stage effect), simply pooling the data results in a loss of power and inflates the type I error rate. The linear model approach is more powerful, but the adaptive methods allow for flexibility (e.g., sample size re-estimation). The flexibility to add a treatment arm to an ongoing trial may result in cost savings, as treatments that become ready for testing can be added to ongoing studies.
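For the adaptive combination approach mentioned above, a standard choice is the weighted inverse-normal combination of stage-wise p-values, with weights fixed before the design change. The sketch below shows that calculation for two stages; taking the weights proportional to the square roots of the planned stage sample sizes is a common convention, not a prescription from the article.

```python
from math import sqrt
from scipy.stats import norm

def inverse_normal_combination(p1, p2, n1, n2):
    """Combine one-sided stage-wise p-values p1 (before the design change)
    and p2 (after) with prespecified weights w_k proportional to sqrt(n_k)."""
    w1, w2 = sqrt(n1), sqrt(n2)
    z = (w1 * norm.isf(p1) + w2 * norm.isf(p2)) / sqrt(w1**2 + w2**2)
    return z, norm.sf(z)          # combined z-statistic and combined p-value

z, p = inverse_normal_combination(p1=0.04, p2=0.10, n1=120, n2=180)
print(f"combined z = {z:.3f}, combined p = {p:.4f}")
```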
Subjects
Clinical Trials, Phase III as Topic/statistics & numerical data; Research Design; Algorithms; Computer Simulation; Data Interpretation, Statistical; Humans; Linear Models; Monte Carlo Method; Sample Size; Treatment Outcome
ABSTRACT
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, the performances of the 14 randomization designs lie in a closed region whose upper boundary (worst case) is given by Efron's biased coin design (BCD) and whose lower boundary (best case) is given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide smaller imbalance and higher randomness than designs close to the upper boundary. Our research suggests that optimization of the randomization design is possible based on a quantified evaluation of imbalance and randomness. Based on the maximum imbalance and the CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the widely used permuted block design, Efron's BCD, and Wei's urn design.
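The two evaluation measures highlighted above are easy to estimate by simulation for any sequential design whose conditional allocation probability is available. The sketch below computes, for a design defined by such a probability function, the expected maximum absolute imbalance and the correct-guess probability under the convergence guessing strategy (guess the arm with fewer assignments so far, guess at random when tied); the big stick and Efron's biased coin rules are included as examples with arbitrary parameter values, and this is an illustration rather than the study's evaluation code.

```python
import numpy as np

rng = np.random.default_rng(3)

def big_stick(d, mti=3):
    """Big stick design: force the lagging arm when |d| reaches the MTI."""
    return 0.0 if d >= mti else 1.0 if d <= -mti else 0.5

def efron_bcd(d, p=2/3):
    """Efron's biased coin design: favor the lagging arm with probability p."""
    return 0.5 if d == 0 else (1 - p if d > 0 else p)

def evaluate(prob_A, n=50, n_sim=5000):
    """Expected maximum |N_A - N_B| and correct-guess probability
    under the convergence guessing strategy."""
    max_imb, correct = 0.0, 0.0
    for _ in range(n_sim):
        d, worst = 0, 0
        for _ in range(n):
            guess_A = 0.5 if d == 0 else float(d < 0)   # guess the lagging arm
            assign_A = rng.random() < prob_A(d)
            correct += guess_A if assign_A else 1 - guess_A
            d += 1 if assign_A else -1
            worst = max(worst, abs(d))
        max_imb += worst
    return max_imb / n_sim, correct / (n_sim * n)

for name, rule in [("big stick (MTI=3)", big_stick), ("Efron BCD (p=2/3)", efron_bcd)]:
    print(name, evaluate(rule))
```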
Subjects
Algorithms; Randomized Controlled Trials as Topic/methods; Research Design; Computer Simulation; Humans; Probability; Selection Bias
ABSTRACT
In phase III clinical trials, some adverse events may be neither rare nor unexpected and can be considered a primary safety measure, particularly in trials of life-threatening conditions such as stroke or traumatic brain injury. In some clinical areas, efficacy endpoints may be highly correlated with safety endpoints, yet interim efficacy analyses under group sequential designs usually do not consider safety measures formally. Furthermore, safety is often monitored statistically more frequently than efficacy measures. Because early termination of a trial in this situation can be triggered by either efficacy or safety, the impact of safety monitoring on the error probabilities of the efficacy analyses may be nontrivial if the original design does not take the multiplicity effect into account. We estimate the actual error probabilities for a bivariate binary efficacy-safety response in large confirmatory group sequential trials. The estimated probabilities are verified by Monte Carlo simulation. Our findings suggest that the type I error for efficacy analyses decreases as the efficacy-safety correlation or the between-group difference in the safety event rate increases. In addition, although power for efficacy is robust to misspecification of the efficacy-safety correlation, it decreases dramatically as the between-group difference in the safety event rate increases.
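The Monte Carlo verification described above requires correlated efficacy-safety pairs. One direct way to generate them, shown below, builds the 2x2 joint cell probabilities from the two marginal event rates and a specified correlation (the correlation must lie within the bounds implied by the marginals); the parameter values are arbitrary illustrations, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(5)

def bivariate_binary(n, p_eff, p_saf, rho):
    """Draw n (efficacy, safety) pairs of correlated Bernoulli outcomes."""
    cov = rho * np.sqrt(p_eff * (1 - p_eff) * p_saf * (1 - p_saf))
    p11 = p_eff * p_saf + cov            # both events occur
    p10 = p_eff - p11                    # efficacy only
    p01 = p_saf - p11                    # safety only
    p00 = 1.0 - p11 - p10 - p01          # neither
    probs = np.array([p11, p10, p01, p00])
    if np.any(probs < 0):
        raise ValueError("rho is incompatible with the marginal rates")
    cells = rng.choice(4, size=n, p=probs)
    eff = np.isin(cells, [0, 1]).astype(int)
    saf = np.isin(cells, [0, 2]).astype(int)
    return eff, saf

eff, saf = bivariate_binary(100_000, p_eff=0.40, p_saf=0.15, rho=0.30)
print(eff.mean(), saf.mean(), np.corrcoef(eff, saf)[0, 1])
```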
Subjects
Clinical Trials, Phase III as Topic/methods; Data Interpretation, Statistical; Drug-Related Side Effects and Adverse Reactions; Computer Simulation; Endpoint Determination; Humans; Monte Carlo Method; Probability; Research Design; Treatment Outcome
ABSTRACT
Minimization is among the most common methods for controlling baseline covariate imbalance at the randomization phase of clinical trials. Previous studies have found that minimization does not preserve allocation randomness as well as other methods, such as minimal sufficient balance, making it more vulnerable to allocation predictability and selection bias. Additionally, minimization has been shown in simulation studies to inadequately control serious covariate imbalances when modest biased-coin probabilities (≤0.65) are used. The current study extends the investigation of randomization methods to the analysis phase, comparing the impact of treatment allocation methods on power and bias in estimating treatment effects on a binary outcome using logistic regression. Power and bias in the estimation of the treatment effect were found to be comparable across complete randomization, minimization, and minimal sufficient balance in unadjusted analyses. Furthermore, minimal sufficient balance was found to have the smallest impact on power and the least bias in covariate-adjusted analyses. The minimal sufficient balance method is recommended for use in clinical trials as an alternative to minimization when covariate-adaptive subject randomization takes place.