Results 1 - 20 of 36
1.
BMC Med Res Methodol ; 23(1): 301, 2023 12 19.
Article in English | MEDLINE | ID: mdl-38114931

ABSTRACT

BACKGROUND: To demonstrate bioequivalence between two drug formulations, a pilot trial is often conducted prior to a pivotal trial to assess feasibility and gain preliminary information about the treatment effect. Due to the limited sample size, it is not recommended to perform significance tests at the conventional 5% level using pilot data to determine if a pivotal trial should take place. Whilst some authors suggest relaxing the significance level, a Bayesian framework provides an alternative for informing the decision-making. Moreover, a Bayesian approach readily permits possible incorporation of pilot data in priors for the parameters that underpin the pivotal trial. METHODS: We consider two-sequence, two-period crossover designs that compare test (T) and reference (R) treatments. We propose a robust Bayesian hierarchical model, embedded with a scaling factor, to elicit a Go/No-Go decision using predictive probabilities. Following a Go decision, the final analysis to formally establish bioequivalence can leverage both the pilot and pivotal trial data jointly. A simulation study is performed to evaluate trial operating characteristics. RESULTS: Compared with conventional procedures, our proposed method improves the decision-making to correctly allocate a Go decision in scenarios of bioequivalence. By choosing an appropriate threshold, the probability of correctly (incorrectly) making a No-Go (Go) decision can be ensured at a desired target level. Using both pilot and pivotal trial data in the final analysis can result in a higher chance of declaring bioequivalence. The false positive rate can be maintained in situations when T and R are not bioequivalent. CONCLUSIONS: The proposed methodology is novel and effective in different stages of bioequivalence assessment. It can greatly enhance the decision-making process in bioequivalence trials, particularly in situations with a small sample size.
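The predictive-probability logic behind such a Go/No-Go rule can be illustrated with a deliberately simplified sketch: the paper's robust hierarchical model and scaling factor are replaced here by a flat-prior conjugate normal approximation, and the margin (log 1.25, the usual bioequivalence limit on the log scale), standard errors, and Go threshold are illustrative assumptions.

```python
import random
import math

def go_probability(d_pilot, se_pilot, se_pivotal, margin=0.223,
                   z=1.644854, n_sim=50_000, seed=3):
    """Posterior predictive probability that a pivotal trial passes TOST
    (90% CI for the log-scale difference within +/- margin), given a pilot
    estimate d_pilot with standard error se_pilot and a flat prior."""
    rng = random.Random(seed)
    success = 0
    for _ in range(n_sim):
        delta = rng.gauss(d_pilot, se_pilot)   # draw from the posterior
        d_hat = rng.gauss(delta, se_pivotal)   # predicted pivotal estimate
        # both one-sided tests must pass at the 5% level
        if (d_hat + margin) / se_pivotal > z and (margin - d_hat) / se_pivotal > z:
            success += 1
    return success / n_sim

pp_centre = go_probability(0.00, se_pilot=0.10, se_pivotal=0.05)
pp_edge = go_probability(0.18, se_pilot=0.10, se_pivotal=0.05)
go = pp_centre >= 0.6   # illustrative Go threshold
```

A pilot estimate near zero yields a much higher predictive probability of pivotal success than one near the margin, which is the information the Go/No-Go rule exploits.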


Subjects
Research Design, Humans, Bayes Theorem, Computer Simulation, Sample Size, Therapeutic Equivalency, Clinical Trials as Topic
2.
Stat Methods Med Res ; 32(2): 287-304, 2023 02.
Article in English | MEDLINE | ID: mdl-36384365

ABSTRACT

Repeated testing in a group sequential trial can result in bias in the maximum likelihood estimate of the unknown parameter of interest. Many authors have therefore proposed adjusted point estimation procedures, which attempt to reduce such bias. Here, we describe nine possible point estimators within a common general framework for a two-stage group sequential trial. We then contrast their performance in five example trial settings, examining their conditional and marginal biases and residual mean square error. By focusing on the case of a trial with a single interim analysis, additional new results aiding the determination of the estimators are given. Our findings demonstrate that the uniform minimum variance unbiased estimator, whilst being marginally unbiased, often has large conditional bias and residual mean square error. If one is concerned solely about inference on progression to the second trial stage, the conditional uniform minimum variance unbiased estimator may be preferred. Two estimators, termed mean adjusted estimators, which attempt to reduce the marginal bias, arguably perform best in terms of the marginal residual mean square error. In all, one should choose an estimator accounting for its conditional and marginal biases and residual mean square error; the most suitable estimator will depend on relative desires to minimise each of these factors. If one cares solely about the conditional and marginal biases, the conditional maximum likelihood estimate may be preferred provided lower and upper stopping boundaries are included. If the conditional and marginal residual mean square error are also of concern, two mean adjusted estimators perform well.
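The upward marginal bias of the maximum likelihood estimate that motivates such adjusted estimators is easy to reproduce. The following Monte Carlo sketch (with an illustrative efficacy boundary and sample sizes, not the paper's settings) uses a two-stage design with a single interim look:

```python
import random
import math

def simulate_mle_bias(theta=0.0, n1=50, n2=50, z_crit=1.96,
                      n_sim=100_000, seed=1):
    """Monte Carlo estimate of the marginal bias of the MLE (sample mean)
    in a two-stage design that stops early when the stage-1 z-statistic
    exceeds z_crit.  Individual outcomes are N(theta, 1)."""
    rng = random.Random(seed)
    se1 = 1 / math.sqrt(n1)
    total = 0.0
    for _ in range(n_sim):
        xbar1 = rng.gauss(theta, se1)
        if xbar1 / se1 > z_crit:        # stop at the interim for efficacy
            est = xbar1
        else:                           # continue to stage 2
            xbar2 = rng.gauss(theta, 1 / math.sqrt(n2))
            est = (n1 * xbar1 + n2 * xbar2) / (n1 + n2)
        total += est
    return total / n_sim - theta        # marginal bias

bias = simulate_mle_bias()
```

Even with a true effect of zero, early stopping selects inflated stage-1 estimates, so the MLE is biased upwards marginally; this is the bias the adjusted point estimators in the abstract attempt to remove.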


Subjects
Likelihood Functions, Bias
3.
Clin Trials ; 20(1): 59-70, 2023 02.
Article in English | MEDLINE | ID: mdl-36086822

ABSTRACT

BACKGROUND/AIMS: To evaluate how uncertainty in the intra-cluster correlation impacts whether a parallel-group or stepped-wedge cluster-randomized trial design is more efficient in terms of the required sample size, in the case of cross-sectional stepped-wedge cluster-randomized trials and continuous outcome data. METHODS: We motivate our work by reviewing how the intra-cluster correlation and standard deviation were justified in 54 health technology assessment reports on cluster-randomized trials. To enable uncertainty at the design stage to be incorporated into the design specification, we then describe how sample size calculation can be performed for cluster-randomized trials in the 'hybrid' framework, which places priors on design parameters and controls the expected power in place of the conventional frequentist power. Comparison of the parallel-group and stepped-wedge cluster-randomized trial designs is conducted by placing Beta and truncated Normal priors on the intra-cluster correlation, and a Gamma prior on the standard deviation. RESULTS: Many health technology assessment reports did not adhere to the Consolidated Standards of Reporting Trials guideline of indicating the uncertainty around the assumed intra-cluster correlation, while others did not justify the assumed intra-cluster correlation or standard deviation. Even for a prior intra-cluster correlation distribution with a small mode, moderate prior densities on high intra-cluster correlation values can lead to a stepped-wedge cluster-randomized trial being more efficient because of the degree to which a stepped-wedge cluster-randomized trial is more efficient for high intra-cluster correlations. With careful specification of the priors, the designs in the hybrid framework can become more robust to, for example, an unexpectedly large value of the outcome variance.
CONCLUSION: When there is difficulty obtaining a reliable value for the intra-cluster correlation to assume at the design stage, the proposed methodology offers an appealing approach to sample size calculation. Often, uncertainty in the intra-cluster correlation will mean a stepped-wedge cluster-randomized trial is more efficient than a parallel-group cluster-randomized trial design.
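The 'hybrid' idea of averaging power over a prior on the intra-cluster correlation can be sketched for a parallel-group design. The design-effect power approximation and the Beta priors below are illustrative assumptions, not the paper's exact models:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def crt_power(delta, sigma, k, m, rho):
    """Approximate power of a parallel-group cluster RCT with k clusters per
    arm of size m, via the design effect 1 + (m - 1) * rho (two-sided 5%)."""
    de = 1 + (m - 1) * rho
    z = delta / sigma * math.sqrt(k * m / (2 * de))
    return norm_cdf(z - 1.959964)

def expected_power(a, b, delta, sigma, k, m, grid=2000):
    """Expected power, averaging over a Beta(a, b) prior on the ICC by
    midpoint integration of the prior density."""
    beta_const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    total = 0.0
    for i in range(grid):
        rho = (i + 0.5) / grid
        dens = beta_const * rho ** (a - 1) * (1 - rho) ** (b - 1)
        total += crt_power(delta, sigma, k, m, rho) * dens / grid
    return total

# Prior centred on a small ICC (mean 0.05) vs a larger one (mean 0.20):
ep_low_icc = expected_power(1, 19, delta=0.3, sigma=1, k=15, m=20)
ep_high_icc = expected_power(4, 16, delta=0.3, sigma=1, k=15, m=20)
```

Shifting prior mass toward larger ICC values lowers the expected power, which is exactly the sensitivity the hybrid framework makes explicit at the design stage.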


Subjects
Research Design, Humans, Cross-Sectional Studies, Uncertainty, Randomized Controlled Trials as Topic, Sample Size, Cluster Analysis
4.
Biostatistics ; 24(4): 1000-1016, 2023 10 18.
Article in English | MEDLINE | ID: mdl-35993875

ABSTRACT

Basket trials are increasingly used for the simultaneous evaluation of a new treatment in various patient subgroups under one overarching protocol. We propose a Bayesian approach to sample size determination in basket trials that permit borrowing of information between commensurate subsets. Specifically, we consider a randomized basket trial design where patients are randomly assigned to the new treatment or control within each trial subset ("subtrial" for short). Closed-form sample size formulae are derived to ensure that each subtrial has a specified chance of correctly deciding whether the new treatment is superior to or not better than the control by some clinically relevant difference. Given prespecified levels of pairwise (in)commensurability, the subtrial sample sizes are solved simultaneously. The proposed Bayesian approach resembles the frequentist formulation of the problem in yielding comparable sample sizes for circumstances of no borrowing. When borrowing is enabled between commensurate subtrials, a considerably smaller trial sample size is required compared to the widely implemented approach of no borrowing. We illustrate the use of our sample size formulae with two examples based on real basket trials. A comprehensive simulation study further shows that the proposed methodology can maintain the true positive and false positive rates at desired levels.


Subjects
Research Design, Humans, Sample Size, Bayes Theorem, Computer Simulation
5.
J Clin Epidemiol ; 150: 72-79, 2022 10.
Article in English | MEDLINE | ID: mdl-35788399

ABSTRACT

BACKGROUND AND OBJECTIVES: To investigate how subgroup analyses of published Randomized Controlled Trials (RCTs) are performed when subgroups are created from continuous variables. METHODS: We carried out a review of RCTs published in 2016-2021 that included subgroup analyses. Information was extracted on whether any of the subgroups were based on continuous variables and, if so, how they were analyzed. RESULTS: Out of 428 reviewed papers, 258 (60.4%) reported RCTs with a subgroup analysis. Of these, 178/258 (69%) had at least one subgroup formed from a continuous variable and 14/258 (5.4%) were unclear. The vast majority (169/178, 94.9%) dichotomized the continuous variable and treated the subgroup as categorical. The most common way of dichotomizing was using a pre-specified cutpoint (129/169, 76.3%), followed by a data-driven cutpoint (26/169, 15.4%), such as the median. CONCLUSION: It is common for subgroup analyses to use continuous variables to define subgroups. The vast majority dichotomize the continuous variable and, consequently, may lose substantial amounts of statistical information (equivalent to reducing the sample size by at least a third). More advanced methods that can improve efficiency, through optimally choosing cutpoints or directly using the continuous information, are rarely used.
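The efficiency loss from dichotomizing is a classical result: a median split of a normally distributed variable retains only 2/π ≈ 64% of the information, which matches the 'at least a third' figure above. A quick simulation check (illustrative, not taken from the paper):

```python
import random

# Median-split a standard normal covariate and measure how much of its
# variation the binary version retains (squared Pearson correlation).
rng = random.Random(7)
x = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
z = [1.0 if xi > 0 else 0.0 for xi in x]   # dichotomise at the true median

n = len(x)
mx, mz = sum(x) / n, sum(z) / n
cov = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / n
vx = sum((xi - mx) ** 2 for xi in x) / n
vz = sum((zi - mz) ** 2 for zi in z) / n
r_squared = cov * cov / (vx * vz)          # theoretical value: 2/pi ~= 0.637
```

Around 36% of the covariate's variation is discarded by the median split, equivalent to throwing away roughly a third of the sample.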


Subjects
Randomized Controlled Trials as Topic, Humans, Sample Size
6.
World Neurosurg ; 161: 316-322, 2022 05.
Article in English | MEDLINE | ID: mdl-35505550

ABSTRACT

BACKGROUND: It is well accepted that randomized controlled trials provide the greatest quality of evidence about effectiveness and safety of new interventions. In neurosurgery, randomized controlled trials face challenges, with their use remaining relatively low compared with other clinical areas. Adaptive designs have emerged as a method for improving the efficiency and patient benefit of trials. They allow modifications to the trial design to be made as patient outcome data are collected. The benefit they provide is highly variable, predominantly governed by the time taken to observe the primary endpoint compared with the planned recruitment rate. They also face challenges in design, conduct, and reporting. METHODS: We provide an overview of the benefits and challenges of adaptive designs, with a focus on neurosurgery applications. To investigate how often an adaptive design may be advantageous in neurosurgery, we extracted data on recruitment rates and endpoint lengths for ongoing neurosurgery trials registered in ClinicalTrials.gov. RESULTS: We found that a majority of neurosurgery trials had a relatively short endpoint length compared with the planned recruitment period and therefore may benefit from an adaptive trial. However, we did not identify any ongoing ClinicalTrials.gov registered neurosurgery trials that mentioned using an adaptive design. CONCLUSIONS: Adaptive designs may provide benefits to neurosurgery trials and should be considered for use more widely. Use of some types of adaptive design, such as multiarm multistage, may further increase the number of interventions that can be tested with limited patient and financial resources.


Subjects
Neurosurgery, Humans, Neurosurgical Procedures, Randomized Controlled Trials as Topic, Research Design
7.
Eur J Cancer ; 166: 270-278, 2022 05.
Article in English | MEDLINE | ID: mdl-35344852

ABSTRACT

BACKGROUND: Simon's two-stage design is a widely used adaptive design, particularly in phase II oncology trials due to its simplicity and efficiency. However, its efficiency can be adversely affected when the primary end-point takes time to observe, as is common in practice. METHODS: We propose an optimal design, taking the delay in observing treatment outcome into consideration and compare the efficiency gained from using Simon's design over a single-stage design for real-life oncology trials. Based on the results, we provide a general rule-of-thumb for determining whether a two-stage single-arm design can provide any added advantage over a single-stage design, given the recruitment rate and primary end-point length. RESULTS: We observed an average 15-30% loss in the estimated efficiency gain in real oncology trials that used Simon's design due to the delay in observing the treatment outcome. The delay-optimal design provides some advantage over Simon's design in terms of reduced sample size when the delay is large compared to the recruitment length. DISCUSSION: Simon's two-stage design provides large benefit over a single-stage design, in terms of reduced sample size, when the primary end-point length is no more than 10% of the total recruitment time. It provides no efficiency advantage when this ratio is above 50%.
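The operating characteristics underlying such comparisons can be computed exactly. The sketch below evaluates Simon's optimal design for p0 = 0.1, p1 = 0.3 (α = 0.05, β = 0.2), taken from Simon's published tables; it does not implement the delay-optimal design proposed in the paper:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def simon_oc(r1, n1, r, n, p):
    """Early-termination probability, expected sample size and rejection
    probability for Simon's two-stage design (r1/n1, r/n) at response rate p.
    Stop after stage 1 if responses <= r1; declare promising if total > r."""
    n2 = n - n1
    pet = sum(binom_pmf(x, n1, p) for x in range(r1 + 1))
    en = n1 + (1 - pet) * n2
    reject = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        tail = sum(binom_pmf(x2, n2, p)
                   for x2 in range(max(0, r - x1 + 1), n2 + 1))
        reject += binom_pmf(x1, n1, p) * tail
    return pet, en, reject

# Simon's optimal design for p0 = 0.1, p1 = 0.3: r1/n1 = 1/10, r/n = 5/29
pet0, en0, alpha = simon_oc(1, 10, 5, 29, 0.1)
_, _, power = simon_oc(1, 10, 5, 29, 0.3)
```

Under the null, roughly three-quarters of trials stop after 10 patients, giving an expected sample size of about 15 rather than 29; the delay issue discussed in the abstract arises because this saving assumes the stage-1 outcomes are observed before stage 2 begins.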


Subjects
Neoplasms, Research Design, Humans, Medical Oncology, Neoplasms/therapy, Sample Size, Treatment Outcome
8.
J Biopharm Stat ; 32(6): 817-831, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35196204

ABSTRACT

The uniform minimum variance unbiased estimator (UMVUE) is, by definition, a solution to removing bias in estimation following a multi-stage single-arm trial with a primary dichotomous outcome. However, the UMVUE is known to have large residual mean squared error (RMSE). Therefore, we develop an optimisation approach to finding estimators with reduced RMSE for many response rates, which attain low bias. We demonstrate that careful choice of the optimisation parameters can lead to an estimator with often substantially reduced RMSE, without the introduction of appreciable bias.


Subjects
Neoplasms, Humans, Medical Oncology, Bias
9.
Stat Med ; 41(5): 877-890, 2022 02 28.
Article in English | MEDLINE | ID: mdl-35023184

ABSTRACT

Adapting the final sample size of a trial to the evidence accruing during the trial is a natural way to address planning uncertainty. Since the sample size is usually determined by an argument based on the power of the trial, an interim analysis raises the question of how the final sample size should be determined conditional on the accrued information. To this end, we first review and compare common approaches to estimating conditional power, which is often used in heuristic sample size recalculation rules. We then discuss the connection of heuristic sample size recalculation and optimal two-stage designs, demonstrating that the latter is the superior approach in a fully preplanned setting. Hence, unplanned design adaptations should only be conducted as reaction to trial-external new evidence, operational needs to violate the originally chosen design, or post hoc changes in the optimality criterion but not as a reaction to trial-internal data. We are able to show that commonly discussed sample size recalculation rules lead to paradoxical adaptations where an initially planned optimal design is not invariant under the adaptation rule even if the planning assumptions do not change. Finally, we propose two alternative ways of reacting to newly emerging trial-external evidence in ways that are consistent with the originally planned design to avoid such inconsistencies.
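The standard conditional-power calculation that such recalculation rules build on can be written down directly; the interim z-value, information fraction, and drift assumptions below are illustrative:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def conditional_power(z1, t, drift, z_alpha=1.959964):
    """Conditional power given an interim z-statistic z1 observed at
    information fraction t, for an assumed drift = E[Z] at full information
    (Brownian-motion approximation to the group sequential statistic)."""
    num = z_alpha - math.sqrt(t) * z1 - drift * (1 - t)
    return 1 - norm_cdf(num / math.sqrt(1 - t))

z1, t = 1.5, 0.5
cp_null = conditional_power(z1, t, drift=0.0)                 # assume no effect
cp_trend = conditional_power(z1, t, drift=z1 / math.sqrt(t))  # interim estimate
cp_design = conditional_power(z1, t, drift=2.8)               # planning value
```

The three common drift choices compared in the abstract (null, observed trend, design value) give very different answers for the same interim data, which is one reason heuristic recalculation rules behave inconsistently.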


Subjects
Friends, Research Design, Humans, Sample Size, Uncertainty
10.
BMC Cancer ; 22(1): 111, 2022 Jan 26.
Article in English | MEDLINE | ID: mdl-35081926

ABSTRACT

BACKGROUND: To determine how much an augmented analysis approach could improve the efficiency of prostate-specific antigen (PSA) response analyses in clinical practice. PSA response rates are commonly used outcome measures in metastatic castration-resistant prostate cancer (mCRPC) trial reports. PSA response is evaluated by comparing continuous PSA data (e.g., change from baseline) to a threshold (e.g., 50% reduction). Consequently, information in the continuous data is discarded. Recent papers have proposed an augmented approach that retains the conventional response rate, but employs the continuous data to improve precision of estimation. METHODS: A literature review identified published prostate cancer trials that included a waterfall plot of continuous PSA data. This continuous data was extracted to enable the conventional and augmented approaches to be compared. RESULTS: Sixty-four articles, reporting results for 78 mCRPC treatment arms, were re-analysed. The median efficiency gain from using the augmented analysis, in terms of the implied increase to the sample size of the original study, was 103.2% (IQR [89.8,190.9%]). CONCLUSIONS: Augmented PSA response analysis requires no additional data to be collected and can be performed easily using available software. It improves precision of estimation to a degree that is equivalent to a substantial sample size increase. The implication of this work is that prostate cancer trials using PSA response as a primary endpoint could be delivered with fewer participants and, therefore, more rapidly with reduced cost.


Subjects
Drug Monitoring/methods, Castration-Resistant Prostatic Neoplasms/drug therapy, Clinical Trials as Topic, Humans, Male, Prostate-Specific Antigen/drug effects, Castration-Resistant Prostatic Neoplasms/immunology, Treatment Outcome
11.
Stat Med ; 41(6): 1081-1099, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35064595

ABSTRACT

BACKGROUND: Stepped-wedge cluster randomized trial (SW-CRT) designs are often used when there is a desire to provide an intervention to all enrolled clusters, because of a belief that it will be effective. However, given there should be equipoise at trial commencement, there has been discussion around whether a pre-trial decision to provide the intervention to all clusters is appropriate. In pharmaceutical drug development, a solution to a similar desire to provide more patients with an effective treatment is to use a response adaptive (RA) design. METHODS: We introduce a way in which RA design could be incorporated in an SW-CRT, permitting modification of the intervention allocation during the trial. The proposed framework explicitly permits a balance to be sought between power and patient benefit considerations. A simulation study evaluates the methodology. RESULTS: In one scenario, for one particular RA design, the proportion of cluster-periods spent in the intervention condition was observed to increase from 32.2% to 67.9% as the intervention effect was increased. A cost of this was a 6.2% power drop compared to a design that maximized power by fixing the proportion of time in the intervention condition at 45.0%, regardless of the intervention effect. CONCLUSIONS: An RA approach may be most applicable to settings for which the intervention has substantial individual or societal benefit considerations, potentially in combination with notable safety concerns. In such a setting, the proposed methodology may routinely provide the desired adaptability of the roll-out speed, with only a small cost to the study's power.


Subjects
Research Design, Cluster Analysis, Computer Simulation, Humans, Randomized Controlled Trials as Topic, Treatment Outcome
12.
J Biopharm Stat ; 32(5): 671-691, 2022 09 03.
Article in English | MEDLINE | ID: mdl-35077268

ABSTRACT

Phase II clinical trials are a critical aspect of the drug development process. With drug development costs ever increasing, novel designs that can improve the efficiency of phase II trials are extremely valuable. Phase II clinical trials for cancer treatments often measure a binary outcome. The final trial decision is generally to continue or cease development. When this decision is based solely on the result of a hypothesis test, the result may be known with certainty before the planned end of the trial. Unfortunately, there is often no opportunity for early stopping when this occurs. Some existing designs do permit early stopping in this case, accordingly reducing the required sample size and potentially speeding up drug development. However, more improvements can be achieved by stopping early when the final trial decision is very likely, rather than certain, known as stochastic curtailment. While some authors have proposed approaches of this form, these approaches have various limitations. In this work we address these limitations by proposing new design approaches for single-arm phase II binary outcome trials that use stochastic curtailment. We use exact distributions, avoid simulation, consider a wider range of possible designs and permit early stopping for promising treatments. As a result, we are able to obtain trial designs that have considerably reduced sample sizes on average.
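Stochastic curtailment for a single-arm binary-outcome trial can be sketched with exact binomial tail probabilities. The design parameters (r = 6 out of n = 29) and threshold γ below are illustrative; the paper's optimised designs are not reproduced here:

```python
from math import comb
import random

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p); 0.0 when k > n."""
    if k <= 0:
        return 1.0
    return sum(comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(k, n + 1))

def curtailed_n(responses, r, n, p1, gamma):
    """Sample size of one simulated trial that stops once >= r responses are
    reached (success certain), or once the probability under the alternative
    p1 of eventually reaching r responses drops to 1 - gamma or below."""
    x = 0
    for i in range(1, n + 1):
        x += responses[i - 1]
        if x >= r:
            return i                                  # promising: success certain
        if binom_tail(r - x, n - i, p1) <= 1 - gamma:
            return i                                  # futility curtailment
    return n

def mean_n(gamma, p_true, r=6, n=29, p1=0.3, sims=2000, seed=11):
    rng = random.Random(seed)
    total = 0
    for _ in range(sims):
        resp = [1 if rng.random() < p_true else 0 for _ in range(n)]
        total += curtailed_n(resp, r, n, p1, gamma)
    return total / sims

# gamma = 1 stops only when the final decision is already certain
# (non-stochastic curtailment); gamma = 0.95 also stops when the decision
# is merely very likely.
en_certain = mean_n(1.0, p_true=0.1)
en_sc = mean_n(0.95, p_true=0.1)
```

Relaxing certainty to "very likely" shortens trials further on average, which is the efficiency gain the abstract describes.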


Subjects
Research Design, Computer Simulation, Humans, Sample Size
13.
Br J Cancer ; 126(2): 204-210, 2022 02.
Article in English | MEDLINE | ID: mdl-34750494

ABSTRACT

BACKGROUND: Efficient trial designs are required to prioritise promising drugs within Phase II trials. Adaptive designs are examples of such designs, but their efficiency is reduced if there is a delay in assessing patient responses to treatment. METHODS: Motivated by the WIRE trial in renal cell carcinoma (NCT03741426), we compare three trial approaches to testing multiple treatment arms: (1) single-arm trials in sequence with interim analyses; (2) a parallel multi-arm multi-stage trial and (3) the design used in WIRE, which we call the Multi-Arm Sequential Trial with Efficient Recruitment (MASTER) design. The MASTER design recruits patients to one arm at a time, pausing recruitment to an arm when it has recruited the required number for an interim analysis. We conduct a simulation study to compare how long the three different trial designs take to evaluate a number of new treatment arms. RESULTS: The parallel multi-arm multi-stage and the MASTER design are much more efficient than separate trials. The MASTER design provides extra efficiency when there is endpoint delay, or recruitment is very quick. CONCLUSIONS: We recommend the MASTER design as an efficient way of testing multiple promising cancer treatments in non-comparative Phase II trials.


Subjects
Adaptive Clinical Trials as Topic/methods, Clinical Trials, Phase II as Topic/methods, Computer Simulation/standards, Medical Oncology/methods, Neoplasms/drug therapy, Non-Randomized Controlled Trials as Topic/methods, Research Design/standards, Cohort Studies, Humans, Neoplasms/pathology, Sample Size, Treatment Outcome
14.
J R Stat Soc Ser C Appl Stat ; 71(5): 2014-2037, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36636028

ABSTRACT

Basket trials are an innovative precision medicine clinical trial design evaluating a single targeted therapy across multiple diseases that share a common characteristic. To date, most basket trials have been conducted in early-phase oncology settings, for which several Bayesian methods permitting information sharing across subtrials have been proposed. With the increasing interest in implementing randomised basket trials, information borrowing could be exploited in two ways: considering the commensurability of either the treatment effects or the outcomes specific to each of the treatment groups between the subtrials. In this article, we extend a previous analysis model based on distributional discrepancy for borrowing over the subtrial treatment effects ('treatment effect borrowing', TEB) to borrowing over the subtrial groupwise responses ('treatment response borrowing', TRB). Simulation results demonstrate that both modelling strategies provide substantial gains over an approach with no borrowing. TRB outperforms TEB on all operational characteristics, especially when subtrial sample sizes are small, while the latter has considerable gains in performance over TRB when subtrial sample sizes are large, or the treatment effects and groupwise mean responses are noticeably heterogeneous across subtrials. Further, we note that TRB and TEB can potentially lead to different conclusions in the analysis of real data.

15.
BMC Rheumatol ; 5(1): 54, 2021 Dec 07.
Article in English | MEDLINE | ID: mdl-34872620

ABSTRACT

BACKGROUND: Composite responder endpoints feature frequently in rheumatology due to the multifaceted nature of many of these conditions. Current analysis methods used to analyse these endpoints discard much of the data used to classify patients as responders and are therefore highly inefficient, resulting in low power. We highlight a novel augmented methodology that uses more of the information available to improve the precision of reported treatment effects. Since these methods are more challenging to implement, we developed free, user-friendly software available in a web-based interface and as R packages. The software consists of two programs: one that supports the analysis of responder endpoints; the second that facilitates sample size estimation. We demonstrate the use of the software to conduct the analysis with both the augmented and standard analysis method using the MUSE study, a phase IIb trial in patients with systemic lupus erythematosus. RESULTS: The software outputs similar point estimates with smaller confidence intervals for the odds ratio, risk ratio and risk difference estimators using the augmented approach. The sample size required in each arm for a future trial using the novel approach based on the MUSE data is 50 versus 135 for the standard method, translating to a reduction in required sample size of approximately 63%. CONCLUSIONS: We encourage trialists to use the software demonstrated to implement the augmented methodology in future studies to improve efficiency.

16.
Article in English | MEDLINE | ID: mdl-34950839

ABSTRACT

PURPOSE: Two-stage single-arm designs have historically been the most common design used in phase II oncology. They remain a mainstay today, particularly for trials in rare subgroups. Consequently, it is imperative such studies be designed, analyzed, and reported effectively. We comprehensively review such trials to examine whether this is the case. METHODS: Oncology trials that used Simon's two-stage design over a 5-year period were identified and reviewed. They were evaluated for whether they reported sufficient design (eg, required sample size) and analysis (eg, CI) details. Articles that did not adjust their inference for the incorporation of an interim analysis were also reanalyzed. RESULTS: Four hundred twenty-five articles were included. Of these, just 47.5% provided the five components that ensure design reproducibility. Only 1.2% and 2.1% reported an adjusted point estimate or CI, respectively. Just 55.3% provided the final stage rejection bound, indicating many trials did not test a hypothesis for their primary outcome. Trial reanalyses suggested reported point estimates underestimated treatment effects and reported CIs were too narrow. CONCLUSION: Key design details of two-stage single-arm trials are often unreported. Their inference is rarely performed so as to remove the bias introduced by the interim analysis. These findings are particularly alarming when considered against the growing trend in which nonrandomized trials make up a large proportion of all evidence on a treatment's effectiveness in a rare biomarker-defined patient subgroup. Future studies must improve the way they are analyzed and reported.


Subjects
Research Design/standards, Clinical Trials as Topic/methods, Clinical Trials as Topic/standards, Humans, Reproducibility of Results, Research Design/trends
17.
BMC Rheumatol ; 5(1): 21, 2021 Jul 02.
Article in English | MEDLINE | ID: mdl-34210348

ABSTRACT

BACKGROUND: Despite progress that has been made in the treatment of many immune-mediated inflammatory diseases (IMIDs), there remains a need for improved treatments. Randomised controlled trials (RCTs) provide the highest form of evidence on the effectiveness of a potential new treatment regimen, but they are extremely expensive and time consuming to conduct. Consequently, much focus has been given in recent years to innovative design and analysis methods that could improve the efficiency of RCTs. In this article, we review the current use and future potential of these methods within the context of IMID trials. METHODS: We provide a review of several innovative methods that would provide utility in IMID research. These include novel study designs (adaptive trials, Sequential Multi-Assignment Randomised Trials, basket, and umbrella trials) and data analysis methodologies (augmented analyses of composite responder endpoints, using high-dimensional biomarker information to stratify patients, and emulation of RCTs from routinely collected data). IMID trials are now well-placed to embrace innovative methods. For example, well-developed statistical frameworks for adaptive trial design are ready for implementation, whilst the growing availability of historical datasets makes the use of Bayesian methods particularly applicable. To assess whether and how these innovative methods have been used in practice, we conducted a review via PubMed of clinical trials pertaining to any of 51 IMIDs that were published between 2018 and 20 in five high impact factor clinical journals. RESULTS: Amongst 97 articles included in the review, 19 (19.6%) used an innovative design method, but most of these were relatively straightforward examples of innovative approaches. Only two (2.1%) reported the use of evidence from routinely collected data, cohorts, or biobanks. Eight (9.2%) collected high-dimensional data. 
CONCLUSIONS: Application of innovative statistical methodology to IMID trials has the potential to greatly improve efficiency, to generalise and extrapolate trial results, and to further personalise treatment strategies. Currently, such methods are infrequently utilised in practice. New research is required to ensure that IMID trials can benefit from the most suitable methods.

18.
Contemp Clin Trials ; 107: 106459, 2021 08.
Article in English | MEDLINE | ID: mdl-34082076

ABSTRACT

INTRODUCTION: Most literature on optimal group-sequential designs focuses on minimising the expected sample size. We highlight other factors for consideration. METHODS: We discuss several quantities less-often considered in adaptive design: the median and standard deviation of the random required sample size, and the probability of committing an interim error. We consider how the optimal timing of interim analyses changes when these quantities are accounted for. RESULTS: Incorporating the standard deviation of the required sample size into an optimality framework, we demonstrate how and when this quantity means using a group-sequential approach is not optimal. The optimal timing of an interim analysis is shown to be highly dependent on the pre-specified preference for minimising the expected sample size relative to its standard deviation. CONCLUSIONS: Examining multiple factors, which measure the advantages and disadvantages of group-sequential designs, helps determine the best design for a specific trial.
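For a two-stage design, the mean, standard deviation, and median of the random sample size follow directly from the probability of stopping at the interim. A minimal sketch with illustrative boundaries:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def n_distribution(n1, n2, drift, lower, upper):
    """Mean, standard deviation and median of the random sample size N of a
    two-stage design that stops at the interim when the z-statistic falls
    outside (lower, upper); drift is E[Z] at the interim analysis."""
    p_continue = norm_cdf(upper - drift) - norm_cdf(lower - drift)
    p_stop = 1 - p_continue
    mean = n1 + p_continue * n2
    sd = math.sqrt(p_stop * p_continue) * n2
    median = n1 if p_stop > 0.5 else n1 + n2
    return mean, sd, median

# Under H0 (drift 0), futility bound 0 and efficacy bound 2.5 (illustrative):
mean0, sd0, med0 = n_distribution(n1=50, n2=50, drift=0.0, lower=0.0, upper=2.5)
```

Here the standard deviation of N is roughly as large as the entire stage-2 sample, illustrating the abstract's point that minimising the expected sample size alone can hide substantial variability in the realised sample size.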


Subjects
Clinical Trials as Topic, Research Design, Clinical Trials as Topic/methods, Humans, Sample Size
19.
Pharm Stat ; 20(6): 990-1001, 2021 11.
Article in English | MEDLINE | ID: mdl-33759353

ABSTRACT

Umbrella trials are an innovative trial design where different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers - equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations on efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.


Subjects
Research Design, Bayes Theorem, Biomarkers, Computer Simulation, Humans, Random Allocation
20.
Commun Stat Theory Methods ; 50(1): 18-34, 2021.
Article in English | MEDLINE | ID: mdl-33408437

ABSTRACT

We describe and compare two methods for the group sequential design of two-arm experiments with Poisson distributed data, which are based on a normal approximation and exact calculations respectively. A framework to determine near-optimal stopping boundaries is also presented. Using this framework, for a considered example, we demonstrate that a group sequential design could reduce the expected sample size under the null hypothesis by as much as 44% compared to a fixed sample approach. We conclude with a discussion of the advantages and disadvantages of the two presented procedures.
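A minimal Monte Carlo sketch of the normal-approximation approach for two-arm Poisson data, with illustrative one-sided stopping boundaries (not the near-optimal boundaries of the paper):

```python
import math
import random

def rpois(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def z_stat(xa, xb):
    """Normal-approximation z-statistic comparing two Poisson means."""
    na, nb = len(xa), len(xb)
    la, lb = sum(xa) / na, sum(xb) / nb
    return (la - lb) / math.sqrt(la / na + lb / nb)

rng = random.Random(5)
lam, n_stage = 2.0, 50          # null event rate; subjects per arm per stage
upper, lower = 2.5, 0.0         # illustrative one-sided efficacy/futility bounds
sims, total_n = 4000, 0
for _ in range(sims):
    xa = [rpois(lam, rng) for _ in range(n_stage)]
    xb = [rpois(lam, rng) for _ in range(n_stage)]
    if lower < z_stat(xa, xb) < upper:
        total_n += 4 * n_stage  # continued: both stages, both arms
    else:
        total_n += 2 * n_stage  # stopped at the interim analysis
expected_n = total_n / sims     # compare with 4 * n_stage for a fixed design
```

Even these crude boundaries cut the expected total sample size under the null well below the fixed-design requirement; exact calculations and optimised boundaries, as in the paper, can push the reduction considerably further.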
