Results 1 - 15 of 15
1.
Stat Med ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980954

ABSTRACT

In clinical settings with no commonly accepted standard of care, multiple treatment regimens are potentially useful, but some treatments may not be appropriate for some patients. A personalized randomized controlled trial (PRACTical) design has been proposed for this setting. For a network of treatments, each patient is randomized only among treatments which are appropriate for them. The aim is to produce treatment rankings that can inform clinical decisions about treatment choices for individual patients. Here we propose methods for determining sample size in a PRACTical design, since standard power-based methods are not applicable. We derive a sample size by evaluating information gained from trials of varying sizes. For a binary outcome, we quantify how many adverse outcomes would be prevented by choosing the top-ranked treatment for each patient based on trial results rather than choosing a random treatment from the appropriate personalized randomization list. In simulations, we evaluate three performance measures: mean reduction in adverse outcomes using sample information, proportion of simulated patients for whom the top-ranked treatment performed as well or almost as well as the best appropriate treatment, and proportion of simulated trials in which the top-ranked treatment performed better than a randomly chosen treatment. We apply the methods to a trial evaluating eight different combination antibiotic regimens for neonatal sepsis (NeoSep1), in which a PRACTical design addresses varying patterns of antibiotic choice based on disease characteristics and resistance. Our proposed approach produces results that are more relevant to complex decision making by clinicians and policy makers.
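The sample size reasoning in this abstract can be sketched in a small simulation. Everything below is illustrative, not the authors' code: the number of treatments, the adverse-outcome probabilities `p_true`, and the way personalized lists are generated are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative true adverse-outcome probabilities for 4 treatments (assumed values).
p_true = np.array([0.20, 0.25, 0.30, 0.35])
K = len(p_true)

def random_list(rng):
    """A personalized randomization list: a random subset of >= 2 treatments."""
    size = rng.integers(2, K + 1)
    return rng.choice(K, size=size, replace=False)

def simulate_trial(n, rng):
    """Run one PRACTical-style trial; return observed event rate per treatment."""
    events = np.zeros(K)
    counts = np.zeros(K)
    for _ in range(n):
        lst = random_list(rng)
        t = rng.choice(lst)                       # randomize within the patient's list
        counts[t] += 1
        events[t] += rng.random() < p_true[t]     # adverse outcome?
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(counts > 0, events / counts, 0.5)

def expected_reduction(n, n_sims=100, n_future=300):
    """Mean adverse outcomes prevented per future patient by using the trial's
    rankings instead of a random pick from the personalized list."""
    gains = []
    for _ in range(n_sims):
        rates = simulate_trial(n, rng)
        g = 0.0
        for _ in range(n_future):
            lst = random_list(rng)
            best = lst[np.argmin(rates[lst])]     # top-ranked appropriate treatment
            g += p_true[lst].mean() - p_true[best]
        gains.append(g / n_future)
    return float(np.mean(gains))

# Information gained grows with sample size, approaching the oracle benefit.
for n in (50, 400):
    print(n, round(expected_reduction(n), 4))
```

A sample size can then be chosen as the smallest `n` beyond which the expected reduction stops improving meaningfully.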

2.
BMC Med Res Methodol ; 24(1): 163, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39080538

ABSTRACT

BACKGROUND: A platform trial approach allows adding arms to on-going trials to speed up intervention discovery programs. A control arm remains open for recruitment in a platform trial while intervention arms may be added after the onset of the study and could be terminated early for efficacy and/or futility when early stopping is allowed. The topic of utilising non-concurrent control data in the analysis of platform trials has been explored and discussed extensively. A less familiar issue is the presence of heterogeneity, which may exist, for example, due to modification of enrolment criteria and recruitment strategy. METHOD: We conduct a simulation study to explore the impact of heterogeneity on the analysis of a two-stage platform trial design. We consider heterogeneity in treatment effects and heteroscedasticity in outcome data across stages for a normally distributed endpoint. We examine the performance of several hypothesis testing procedures and modelling strategies. The use of non-concurrent control data is also considered accordingly. Alongside standard regression analysis, we examine the performance of a novel method known as the pairwise trials analysis. It is similar to a network meta-analysis approach but adjusts for treatment comparisons instead of individual studies using fixed effects. RESULTS: Several testing strategies with concurrent control data seem to control the type I error rate at the required level when there is heteroscedasticity in outcome data across stages and/or a random cohort effect. The main parameter of treatment effects in some analysis models corresponds to the overall treatment effect weighted by stage-wise sample sizes, while in others it corresponds to the effect observed within a single stage. The characteristics of the estimates are not affected significantly by the presence of a random cohort effect and/or heteroscedasticity.
CONCLUSION: In view of heterogeneity in treatment effects across stages, the specification of null hypotheses in platform trials may need to be more subtle. We suggest employing the testing procedures of adaptive designs as opposed to testing the statistics from regression models; comparing the estimates from the pairwise trials analysis method and the regression model with interaction terms may indicate whether heterogeneity is negligible.


Subject(s)
Research Design; Humans; Research Design/statistics & numerical data; Clinical Trials as Topic/methods; Clinical Trials as Topic/statistics & numerical data; Computer Simulation; Models, Statistical; Data Interpretation, Statistical; Regression Analysis; Treatment Outcome
3.
Stat Sci ; 38(2): 185-208, 2023 May.
Article in English | MEDLINE | ID: mdl-37324576

ABSTRACT

Response-Adaptive Randomization (RAR) is part of a wider class of data-dependent sampling algorithms, for which clinical trials are typically used as a motivating application. In that context, patient allocation to treatments is determined by randomization probabilities that change based on the accrued response data in order to achieve experimental goals. RAR has received abundant theoretical attention from the biostatistical literature since the 1930s and has been the subject of numerous debates. In the last decade, it has received renewed consideration from the applied and methodological communities, driven by well-known practical examples and its widespread use in machine learning. Papers on the subject present different views on its usefulness that are not easy to reconcile. This work aims to address this gap by providing a unified, broad and fresh review of methodological and practical issues to consider when debating the use of RAR in clinical trials.
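RAR is a family of allocation rules rather than one algorithm; one concrete, frequently discussed member of the family is Thompson sampling with Beta-Bernoulli updating. The sketch below is a generic illustration with assumed success probabilities `p_true`, not a method taken from this review.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed true success probabilities for control and experimental arm.
p_true = [0.3, 0.5]

def thompson_trial(n_patients):
    """Allocate patients by Thompson sampling: randomization probabilities
    adapt to accrued responses via Beta(1 + successes, 1 + failures) posteriors."""
    succ = [0, 0]
    fail = [0, 0]
    alloc = [0, 0]
    for _ in range(n_patients):
        # Draw a plausible success probability for each arm from its posterior.
        draws = [rng.beta(1 + succ[a], 1 + fail[a]) for a in (0, 1)]
        a = int(np.argmax(draws))            # assign to the arm that "won" the draw
        alloc[a] += 1
        if rng.random() < p_true[a]:
            succ[a] += 1
        else:
            fail[a] += 1
    return alloc, succ

alloc, succ = thompson_trial(200)
print("allocations:", alloc, "successes:", succ)
```

Over repeated runs, allocation drifts toward the better-performing arm, which is exactly the property whose costs and benefits the review debates.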

4.
Stat Med ; 42(8): 1156-1170, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36732886

ABSTRACT

In some clinical scenarios, for example, severe sepsis caused by extensively drug resistant bacteria, there is uncertainty between many common treatments, but a conventional multiarm randomized trial is not possible because individual participants may not be eligible to receive certain treatments. The Personalised Randomized Controlled Trial design allows each participant to be randomized between a "personalised randomization list" of treatments that are suitable for them. The primary aim is to produce treatment rankings that can guide choice of treatment, rather than focusing on the estimates of relative treatment effects. Here we use simulation to assess several novel analysis approaches for this innovative trial design. One of the approaches is analogous to a network meta-analysis, in which participants with the same personalised randomization list are treated as a trial, and both direct and indirect evidence are used. We evaluate this proposed analysis and compare it with analyses making less use of indirect evidence. We also propose new performance measures, including the expected improvement in outcome if the trial's rankings are used to inform future treatment rather than random choice. We conclude that analysis of a personalised randomized controlled trial can be performed by pooling data from different types of participants and is robust to moderate subgroup-by-intervention interactions based on the parameters of our simulation. The proposed approach performs well with respect to estimation bias and coverage. It provides an overall treatment ranking list with reasonable precision, and is likely to improve outcome on average if used to determine intervention policies and guide individual clinical decisions.


Asunto(s)
Ensayos Clínicos Controlados Aleatorios como Asunto , Proyectos de Investigación , Humanos , Medicina de Precisión , Participación del Paciente
5.
PLoS One ; 17(9): e0274272, 2022.
Article in English | MEDLINE | ID: mdl-36094920

ABSTRACT

When comparing the performance of multi-armed bandit algorithms, the potential impact of missing data is often overlooked. In practice, it also affects their implementation, where the simplest approach is to continue to sample according to the original bandit algorithm, ignoring missing outcomes. We investigate the impact of this approach to dealing with missing data on the performance of several bandit algorithms through an extensive simulation study, assuming the rewards are missing at random. We focus on two-armed bandit algorithms with binary outcomes in the context of patient allocation for clinical trials with relatively small sample sizes. However, our results apply to other applications of bandit algorithms where missing data are expected to occur. We assess the resulting operating characteristics, including the expected reward. Different probabilities of missingness in both arms are considered. The key finding of our work is that, when using the simplest strategy of ignoring missing data, the impact on the expected performance of multi-armed bandit strategies varies according to the way these strategies balance the exploration-exploitation trade-off. Algorithms that are geared towards exploration continue to assign samples to the arm with more missing responses (which, being perceived as the arm with less observed information, is deemed more appealing by the algorithm than it would otherwise be). In contrast, algorithms that are geared towards exploitation rapidly assign a high value to samples from the arms with a currently high mean, irrespective of the level of observations per arm. Furthermore, for algorithms focusing more on exploration, we illustrate that the problem of missing responses can be alleviated using a simple mean imputation approach.
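A rough illustration of the two strategies this abstract compares, ignoring missing outcomes versus mean imputation, using Thompson sampling as the example bandit algorithm. The success and missingness probabilities (`p_true`, `p_miss`) are assumed for the sketch and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

p_true = [0.4, 0.6]   # assumed true success probabilities
p_miss = [0.5, 0.0]   # arm 0's outcomes are often missing (missing at random)

def bandit(n, impute):
    """Two-armed Thompson sampling where some outcomes never arrive.
    impute=False: ignore missing outcomes (the 'simplest strategy').
    impute=True : replace each missing outcome with the arm's current mean."""
    succ = [0.0, 0.0]
    tot = [0.0, 0.0]
    alloc = [0, 0]
    for _ in range(n):
        draws = [rng.beta(1 + succ[a], 1 + tot[a] - succ[a]) for a in (0, 1)]
        a = int(np.argmax(draws))
        alloc[a] += 1
        reward = rng.random() < p_true[a]      # outcome occurs but may be unobserved
        if rng.random() < p_miss[a]:           # outcome missing
            if impute and tot[a] > 0:
                succ[a] += succ[a] / tot[a]    # mean imputation of the lost response
                tot[a] += 1
        else:
            succ[a] += reward
            tot[a] += 1
    return alloc

print("ignore :", bandit(300, impute=False))
print("impute :", bandit(300, impute=True))
```

With the ignore strategy, arm 0 retains less observed information and so stays attractive to an exploration-oriented rule; imputation dampens that pull.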


Asunto(s)
Algoritmos , Simulación por Computador , Humanos , Investigación , Recompensa
6.
Stat Methods Med Res ; 31(11): 2104-2121, 2022 11.
Article in English | MEDLINE | ID: mdl-35876412

ABSTRACT

Covariate adjustment via a regression approach is known to increase the precision of statistical inference when fixed trial designs are employed in randomized controlled studies. When an adaptive multi-arm design is employed with the ability to select treatments, it is unclear how covariate adjustment affects various aspects of the study. Consider the design framework that relies on pre-specified treatment selection rule(s) and a combination test approach for hypothesis testing. It is our primary goal to evaluate the impact of covariate adjustment on adaptive multi-arm designs with treatment selection. Our secondary goal is to show how the Uniformly Minimum Variance Conditionally Unbiased Estimator can be extended to account for covariate adjustment analytically. We find that adjustment with different sets of covariates can lead to different treatment selection outcomes and hence probabilities of rejecting hypotheses. Nevertheless, we do not see any negative impact on the control of the familywise error rate when covariates are included in the analysis model. When adjusting for covariates that are moderately or highly correlated with the outcome, we see various benefits to the analysis of the design. Conversely, there is negligible impact when including covariates that are uncorrelated with the outcome. Overall, pre-specification of covariate adjustment is recommended for the analysis of adaptive multi-arm design with treatment selection. Having the statistical analysis plan in place prior to the interim and final analyses is crucial, especially when a non-collapsible measure of treatment effect is considered in the trial.
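The baseline precision argument behind this paper can be seen in a fixed-design toy example: adjusting for a covariate with correlation rho with the outcome shrinks the variance of the treatment-effect estimate by roughly a factor of 1 - rho^2. The paper's contribution is to extend this to adaptive multi-arm designs with treatment selection; the sketch below only shows the fixed-design case, with the model, effect size, and rho all assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, rho = 200, 500, 0.7   # assumed correlation between covariate and outcome

def trt_effect_sd(adjust):
    """Empirical SD of the treatment-effect estimate with/without adjusting
    for a baseline covariate correlated with the outcome."""
    est = []
    for _ in range(reps):
        z = rng.integers(0, 2, n)                  # 1:1 randomization indicator
        x = rng.normal(size=n)                     # baseline covariate
        y = 0.5 * z + rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
        cols = [np.ones(n), z] + ([x] if adjust else [])
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        est.append(beta[1])                        # coefficient on treatment
    return float(np.std(est))

sd_unadj, sd_adj = trt_effect_sd(False), trt_effect_sd(True)
print(sd_unadj, sd_adj, (sd_adj / sd_unadj) ** 2)  # variance ratio ~ 1 - rho^2
```

The variance ratio printed at the end should sit near 1 - 0.7^2 = 0.51, matching the paper's observation that moderately or highly correlated covariates bring real benefit while uncorrelated ones have negligible impact.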


Asunto(s)
Proyectos de Investigación , Probabilidad , Resultado del Tratamiento , Selección de Paciente , Simulación por Computador
7.
Stat Med ; 41(5): 877-890, 2022 02 28.
Article in English | MEDLINE | ID: mdl-35023184

ABSTRACT

Adapting the final sample size of a trial to the evidence accruing during the trial is a natural way to address planning uncertainty. Since the sample size is usually determined by an argument based on the power of the trial, an interim analysis raises the question of how the final sample size should be determined conditional on the accrued information. To this end, we first review and compare common approaches to estimating conditional power, which is often used in heuristic sample size recalculation rules. We then discuss the connection of heuristic sample size recalculation and optimal two-stage designs, demonstrating that the latter is the superior approach in a fully preplanned setting. Hence, unplanned design adaptations should only be conducted as reaction to trial-external new evidence, operational needs to violate the originally chosen design, or post hoc changes in the optimality criterion but not as a reaction to trial-internal data. We are able to show that commonly discussed sample size recalculation rules lead to paradoxical adaptations where an initially planned optimal design is not invariant under the adaptation rule even if the planning assumptions do not change. Finally, we propose two alternative ways of reacting to newly emerging trial-external evidence in ways that are consistent with the originally planned design to avoid such inconsistencies.
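For a one-sided z-test on normal data with unit variance, conditional power has a closed form, and the common approaches to estimating it that the abstract reviews differ mainly in which effect delta is plugged in for the patients yet to be recruited. A sketch with illustrative numbers (the interim statistic, sample sizes, and effects are all assumptions for the example):

```python
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def conditional_power(z1, n1, n, delta, alpha=0.025):
    """Conditional power of a one-sided z-test after observing interim
    statistic z1 at n1 of n planned observations, assuming a standardized
    effect delta for the remaining patients.

    Given Z1, the final statistic satisfies
      Z | Z1 ~ N((sqrt(n1)*z1 + (n - n1)*delta) / sqrt(n), (n - n1) / n).
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    num = z_alpha * sqrt(n) - sqrt(n1) * z1 - (n - n1) * delta
    return 1 - Phi(num / sqrt(n - n1))

# Common variants differ only in the delta plugged in:
z1, n1, n = 1.0, 50, 100
cp_observed = conditional_power(z1, n1, n, delta=z1 / sqrt(n1))  # observed trend
cp_planned  = conditional_power(z1, n1, n, delta=0.3)            # design effect
cp_null     = conditional_power(z1, n1, n, delta=0.0)            # futility check
print(cp_observed, cp_planned, cp_null)
```

Heuristic recalculation rules then choose the second-stage size so that this quantity reaches a target; the paper's point is that doing this as a reaction to trial-internal data can contradict an initially optimal two-stage design.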


Asunto(s)
Amigos , Proyectos de Investigación , Humanos , Tamaño de la Muestra , Incertidumbre
8.
Anesthesiology ; 136(1): 148-161, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34724559

ABSTRACT

BACKGROUND: The relationship between late clinical outcomes after injury and early dynamic changes between fibrinolytic states is not fully understood. The authors hypothesized that temporal transitions in fibrinolysis states using rotational thromboelastometry (ROTEM) would aid stratification of adverse late clinical outcomes and improve understanding of how tranexamic acid modulates the fibrinolytic response and impacts mortality. METHODS: The authors conducted a secondary analysis of previously collected data from trauma patients enrolled into an ongoing prospective cohort study (International Standard Randomised Controlled Trial Number [ISRCTN] 12962642) at a major trauma center in the United Kingdom. ROTEM was performed on admission and at 24 h, with patients retrospectively grouped into three fibrinolysis categories: tissue factor-activated ROTEM maximum lysis of less than 5% (low); tissue factor-activated ROTEM maximum lysis of 5 to 15% (normal); or tissue factor-activated ROTEM maximum lysis of more than 15% (high). Primary outcomes were multiorgan dysfunction syndrome and 28-day mortality. RESULTS: Seven hundred thirty-one patients were included: 299 (41%) were treated with tranexamic acid and 432 (59%) were untreated. Two different cohorts with low-maximum lysis at 24 h were identified: (1) severe brain injury and (2) admission shock and hemorrhage. Multiple organ dysfunction syndrome was greatest in those with low-maximum lysis on admission and at 24 h, and late mortality was four times higher than in patients who remained normal during the first 24 h (7 of 42 [17%] vs. 9 of 223 [4%]; P = 0.029). Patients who transitioned to or remained in low-maximum lysis had increased odds of organ dysfunction (5.43 [95% CI, 1.43 to 20.61] and 4.85 [95% CI, 1.83 to 12.83], respectively). Tranexamic acid abolished ROTEM hyperfibrinolysis (high) on admission, increased the frequency of persistent low-maximum lysis (67 of 195 [34%] vs. 8 of 79 [10%]; P = 0.002), and was associated with reduced early mortality (28 of 195 [14%] vs. 23 of 79 [29%]; P = 0.015). No increase in late deaths, regardless of fibrinolysis transition patterns, was observed. CONCLUSIONS: Adverse late outcomes are more closely related to 24-h maximum lysis, irrespective of admission levels. Tranexamic acid alters early fibrinolysis transition patterns, but late mortality in patients with low-maximum lysis at 24 h is not increased.


Asunto(s)
Fibrinólisis/fisiología , Hemorragia/sangre , Hemorragia/mortalidad , Heridas y Lesiones/sangre , Heridas y Lesiones/mortalidad , Adulto , Antifibrinolíticos/administración & dosificación , Pruebas de Coagulación Sanguínea/tendencias , Estudios de Cohortes , Femenino , Fibrinólisis/efectos de los fármacos , Hemorragia/prevención & control , Humanos , Masculino , Persona de Mediana Edad , Estudios Prospectivos , Estudios Retrospectivos , Tromboelastografía/efectos de los fármacos , Tromboelastografía/tendencias , Factores de Tiempo , Ácido Tranexámico/administración & dosificación , Reino Unido/epidemiología , Heridas y Lesiones/tratamiento farmacológico
9.
Trials ; 22(1): 203, 2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33691748

ABSTRACT

BACKGROUND: Platform trials improve the efficiency of the drug development process through flexible features such as adding and dropping arms as evidence emerges. The benefits and practical challenges of implementing novel trial designs have been discussed widely in the literature, yet less consideration has been given to the statistical implications of adding arms. MAIN: We explain different statistical considerations that arise from allowing new research interventions to be added to ongoing studies. We present recent methodology development on addressing these issues and illustrate design and analysis approaches that might be enhanced to provide robust inference from platform trials. We also discuss the implications of changing the control arm, how patient eligibility for different arms may complicate the trial design and analysis, and how operational bias may arise when revealing some results of the trials. Lastly, we comment on the appropriateness and application of platform trials in phase II and phase III settings, as well as in publicly versus industry-funded trials. CONCLUSION: Platform trials provide great opportunities for improving the efficiency of evaluating interventions. Although several statistical issues are present, there are a range of methods available that allow robust and efficient design and analysis of these trials.


Asunto(s)
Interpretación Estadística de Datos , Proyectos de Investigación , Ensayos Clínicos como Asunto , Humanos
10.
Am Stat ; 75(4): 424-432, 2021.
Article in English | MEDLINE | ID: mdl-34992303

ABSTRACT

Sample size derivation is a crucial element of planning any confirmatory trial. The required sample size is typically derived based on constraints on the maximal acceptable Type I error rate and minimal desired power. Power depends on the unknown true effect and tends to be calculated either for the smallest relevant effect or a likely point alternative. The former might be problematic if the minimal relevant effect is close to the null, thus requiring an excessively large sample size, while the latter is dubious since it does not account for the a priori uncertainty about the likely alternative effect. A Bayesian perspective on sample size derivation for a frequentist trial can reconcile arguments about the relative a priori plausibility of alternative effects with ideas based on the relevance of effect sizes. Many suggestions as to how such "hybrid" approaches could be implemented in practice have been put forward. However, key quantities are often defined in subtly different ways in the literature. Starting from the traditional entirely frequentist approach to sample size derivation, we derive consistent definitions for the most commonly used hybrid quantities and highlight connections, before discussing and demonstrating their use in sample size derivation for clinical trials.
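One of the hybrid quantities the abstract refers to, variously called expected power, assurance, or probability of success, averages frequentist power over a prior on the effect. A minimal sketch assuming a one-sided z-test and a normal prior (one common definition among the subtly different ones the paper catalogues; all numbers are illustrative):

```python
from math import sqrt
from statistics import NormalDist
import numpy as np

N = NormalDist()

def power(theta, n, alpha=0.025):
    """Frequentist power of a one-sided z-test at standardized effect theta."""
    return 1 - N.cdf(N.inv_cdf(1 - alpha) - theta * sqrt(n))

def probability_of_success(n, prior_mean, prior_sd, grid=4001):
    """'Hybrid' expected power: frequentist power averaged over a normal
    prior on the effect, approximated on a grid."""
    theta = np.linspace(prior_mean - 6 * prior_sd, prior_mean + 6 * prior_sd, grid)
    w = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    w /= w.sum()                                    # normalized prior weights
    return float(sum(wi * power(t, n) for wi, t in zip(w, theta)))

# Point-alternative power vs prior-averaged power at the same sample size:
n = 100
print(power(0.3, n), probability_of_success(n, prior_mean=0.3, prior_sd=0.1))
```

When power at the prior mean exceeds 50%, averaging over the prior pulls the figure down, which is why sample sizes driven by expected power are typically larger than those driven by a point alternative.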

11.
Reprod Fertil ; 2(1): 69-80, 2021 01.
Article in English | MEDLINE | ID: mdl-35128434

ABSTRACT

BACKGROUND: Up to 28% of endometriosis patients do not get pain relief from therapeutic laparoscopy, but this subgroup is not defined. OBJECTIVES: To identify any prognostic patient-specific factors (such as, but not limited to, patients' type or location of endometriosis, sociodemographics and lifestyle) associated with a clinically meaningful reduction in post-surgical pain response to operative laparoscopic surgery for endometriosis. SEARCH STRATEGY: PubMed, Cochrane and Embase databases were searched from inception to 19 May 2020 without language restrictions. Backward and forward citation tracking was used. SELECTION CRITERIA, DATA COLLECTION AND ANALYSIS: Cohort studies reporting prognostic factors, along with scores for domains of pain associated with endometriosis before and after surgery, were included. Studies that compared surgeries, or laboratory tests, or outcomes without stratification were excluded. Results were synthesised, but variation in study designs and inconsistency of outcome reporting precluded a meta-analysis. MAIN RESULTS: Five studies were included. Quality assessment using the Newcastle-Ottawa scale graded three studies as high, one as moderate and one as having a low risk of bias. Four of the five included studies separately reported that a relationship exists between more severe endometriosis and stronger pain relief from laparoscopic surgery. CONCLUSION: Currently, there are few studies of appropriate quality to answer the research question. We recommend future studies report core outcome sets to enable meta-analysis. LAY SUMMARY: Endometriosis is a painful condition caused by displaced cells from the lining of the womb, causing inflammation and scarring inside the body. It affects 6-10% of women and there is no permanent cure. Medical and laparoscopic surgical treatments are available, but about 28% of patients do not get the hoped-for pain relief after surgery. Currently, there is no way of predicting who gets better and who does not. We systematically searched the world literature to establish who may get better, in order to improve counselling when women choose treatment options. We identified five studies of variable quality showing that more complex disease (in specialist hands) responds better to surgery than less complex disease, but more studies are needed.


Asunto(s)
Endometriosis , Laparoscopía , Femenino , Humanos , Dolor Pélvico , Útero
12.
BMC Med Res Methodol ; 20(1): 165, 2020 06 24.
Article in English | MEDLINE | ID: mdl-32580702

ABSTRACT

BACKGROUND: Platform trials allow adding new experimental treatments to an on-going trial. This feature is attractive to practitioners due to improved efficiency. Nevertheless, the operating characteristics of a trial that adds arms have not been well-studied. One controversy is whether just the concurrent control data (i.e. of patients who are recruited after a new arm is added) should be used in the analysis of the newly added treatment(s), or all control data (i.e. non-concurrent and concurrent). METHODS: We investigate the benefits and drawbacks of using non-concurrent control data within a two-stage setting. We perform simulation studies to explore the impact of a linear and a step trend on the inference of the trial. We compare several analysis approaches when one includes all the control data or only concurrent control data in the analysis of the newly added treatment. RESULTS: When there is a positive trend and all the control data are used, the marginal power of rejecting the corresponding hypothesis and the type I error rate can be higher than the nominal value. A model-based approach adjusting for a stage effect is equivalent to using concurrent control data; an adjustment with a linear term may not guarantee valid inference when there is a non-linear trend. CONCLUSIONS: If strict error rate control is required then non-concurrent control data should not be used; otherwise it may be beneficial if the trend is sufficiently small. On the other hand, the root mean squared error of the estimated treatment effect can be improved through using non-concurrent control data.
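The effect of a step trend on a pooled-control analysis can be reproduced in a few lines. In this sketch (stage sizes, trend size, and the absence of a true treatment effect are all assumed), arm B joins at stage 2 and the naive estimator pooling all controls inherits a bias of roughly tau/2 with equal stage sizes, while the concurrent-only estimator stays unbiased:

```python
import numpy as np

rng = np.random.default_rng(11)

def platform_estimates(tau, n=100, reps=2000):
    """Two-stage platform trial: arm B joins at stage 2; a step trend of size
    tau shifts all stage-2 outcomes. Estimate B's effect (truly 0) two ways."""
    pooled, concurrent = [], []
    for _ in range(reps):
        c1 = rng.normal(0.0, 1.0, n)          # stage-1 controls (non-concurrent)
        c2 = rng.normal(tau, 1.0, n)          # stage-2 controls (concurrent)
        b = rng.normal(tau, 1.0, n)           # arm B: no real effect, only the trend
        pooled.append(b.mean() - np.concatenate([c1, c2]).mean())
        concurrent.append(b.mean() - c2.mean())
    return float(np.mean(pooled)), float(np.mean(concurrent))

bias_all, bias_conc = platform_estimates(tau=0.5)
print("pooled-control bias   :", bias_all)    # near tau/2 with equal stage sizes
print("concurrent-only bias  :", bias_conc)   # near 0
```

This is the mechanism behind the abstract's result that a positive trend inflates the type I error rate when all control data are used, and why a stage-effect adjustment recovers the concurrent-only analysis.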


Asunto(s)
Simulación por Computador , Humanos
13.
Stat Med ; 38(18): 3305-3321, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31115078

ABSTRACT

Multiarm clinical trials, which compare several experimental treatments against control, are frequently recommended due to their efficiency gain. In practice, not all potential treatments may be ready to be tested in a phase II/III trial at the same time. It has become appealing to allow new treatment arms to be added into on-going clinical trials using a "platform" trial approach. To the best of our knowledge, many aspects of the question of when to add arms to an existing trial have not been explored in the literature. Most works on adding arm(s) assume that a new arm is opened whenever a new treatment becomes available. This strategy may prolong the overall duration of a study or cause reduction in marginal power for each hypothesis if the adaptation is not well accommodated. Within a two-stage trial setting, we propose a decision-theoretic framework to investigate whether or not to add a new treatment arm based on the observed stage one treatment responses. To account for the different prospects of multiarm studies, we define utility in two different ways: one for a trial that aims to maximise the number of rejected hypotheses, the other for a trial that would declare a success when at least one hypothesis is rejected from the study. Our framework shows that it is not always optimal to add a new treatment arm to an existing trial. We illustrate our framework with a case study of a completed trial on knee osteoarthritis.


Asunto(s)
Ensayos Clínicos Adaptativos como Asunto/métodos , Ensayos Clínicos Controlados como Asunto/métodos , Teoría de las Decisiones , Ensayos Clínicos Adaptativos como Asunto/estadística & datos numéricos , Bioestadística , Protocolos Clínicos , Ensayos Clínicos Controlados como Asunto/estadística & datos numéricos , Crioterapia , Humanos , Análisis Multivariante , Bloqueo Nervioso , Osteoartritis de la Rodilla/fisiopatología , Osteoartritis de la Rodilla/terapia
14.
J Stat Plan Inference ; 199: 179-187, 2019 Mar.
Article in English | MEDLINE | ID: mdl-31007363

ABSTRACT

Precision medicine, also known as stratified or personalized medicine, is becoming more pronounced in the medical field due to advancements in the computational ability to learn about patient genomic backgrounds. A biomarker, i.e. an indicator of a biological process, is often used in precision medicine to classify the patient population into several subgroups. The aim of precision medicine is to tailor treatment regimes for different patient subgroups who suffer from the same disease. A multi-arm design could be conducted to explore the effect of treatment regimes on different biomarker subgroups. However, if treatments work only on certain subgroups, which is often the case, enrolling all patient subgroups in a confirmatory trial would increase the burden of a study. Having observed a phase II trial, we propose a design framework for finding an optimal design that could be implemented in a phase III study or a confirmatory trial. We consider two elements in our approach: Bayesian data analysis of observed data, and design of experiments. The first tool selects subgroups and treatments to be enrolled in the future trial, whereas the second provides an optimal treatment randomization scheme for each selected/enrolled subgroup. Considering two independent treatments and two independent biomarkers, we illustrate our approach using simulation studies. We demonstrate the efficiency gain, i.e. a high probability of recommending truly effective treatments in the right subgroup, of the optimal design found by our framework over a randomized controlled trial and a biomarker-treatment linked trial.

15.
Stat Med ; 38(15): 2749-2766, 2019 07 10.
Article in English | MEDLINE | ID: mdl-30912173

ABSTRACT

Multiarm trials with follow-up on participants are commonly implemented to assess treatment effects on a population over the course of the studies. Dropout is an unavoidable issue especially when the duration of the multiarm study is long. Its impact is often ignored at the design stage, which may lead to less accurate statistical conclusions. We develop an optimal design framework for trials with repeated measurements, which takes potential dropouts into account, and we provide designs for linear mixed models where the presence of dropouts is noninformative and dependent on design variables. Our framework is illustrated through redesigning a clinical trial on Alzheimer's disease, whereby the benefits of our designs compared with standard designs are demonstrated through simulations.


Asunto(s)
Ensayos Clínicos como Asunto/métodos , Modelos Lineales , Pacientes Desistentes del Tratamiento , Proyectos de Investigación , Simulación por Computador , Humanos