Results 1 - 16 of 16
1.
Pharm Stat ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39115134

ABSTRACT

Most published applications of the estimand framework have focused on superiority trials. However, non-inferiority trials present specific challenges compared to superiority trials. The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use notes in its addendum on estimands and sensitivity analysis in clinical trials that there may be special considerations for the implementation of estimands in clinical trials with a non-inferiority objective, yet provides little guidance. This paper discusses considerations that trial teams should make when defining estimands for a clinical trial with a non-inferiority objective. We discuss how the pre-addendum way of establishing non-inferiority can be embraced by the estimand framework, including a discussion of the role of the per-protocol analysis set. We examine what clinical questions of interest can be formulated in the context of non-inferiority trials and outline why we do not think it is sensible to describe an estimand as 'conservative'. The impact of the estimand framework on key considerations in non-inferiority trials, such as whether trials should have more than one primary estimand, the choice of non-inferiority margin, assay sensitivity, switching from non-inferiority to superiority, and estimation, is discussed. We conclude by providing a list of recommendations and important considerations for defining estimands for trials with a non-inferiority objective.

2.
Pharm Stat ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138846

ABSTRACT

The ICH E9(R1) guideline outlines the estimand framework, which aligns planning, design, conduct, analysis, and interpretation of a clinical trial. The benefits and value of using this framework in clinical trials have been outlined in the literature, and guidance has been provided on how to choose the estimand and define the estimand attributes. Although progress has been made in the implementation of estimands in clinical trials, to the best of our knowledge, there is no published discussion on the basic principles that estimands in clinical trials should fulfill to be well defined and consistent with the ideas presented in the ICH E9(R1) guideline. Therefore, in this Viewpoint article, we propose four key principles for defining an estimand. These principles form a basis for well-defined treatment effects that reflect the estimand thinking process. We hope that this Viewpoint will complement ICH E9(R1) and stimulate a discussion on which fundamental properties an estimand in a clinical trial should have and that such discussions will eventually lead to improved clarity and precision in defining estimands in clinical trials.

3.
Pharm Stat ; 22(1): 20-33, 2023 01.
Article in English | MEDLINE | ID: mdl-35757986

ABSTRACT

Conventional analyses of a composite of multiple time-to-event outcomes use the time to the first event. However, the first event may not be the most important outcome. To address this limitation, generalized pairwise comparisons and win statistics (win ratio, win odds, and net benefit) have become popular and have been applied in clinical trial practice. However, the win ratio, win odds, and net benefit have typically been used separately. In this article, we examine the use of these three win statistics jointly for time-to-event outcomes. First, we explain the relations among the point estimates and variances of the three win statistics, and the relation between the net benefit and the Mann-Whitney U statistic. Then we explain that the three win statistics are based on the same win proportions and that they test the same null hypothesis of equal win probabilities in the two groups. We show theoretically that the Z-values of the corresponding statistical tests are approximately equal; therefore, the three win statistics provide very similar p-values and statistical powers. Finally, using simulation studies and data from a clinical trial, we demonstrate that, when there is no (or little) censoring, the three win statistics can complement one another to show the strength of the treatment effect. However, when the amount of censoring is not small, and without adjustment for censoring, the win odds and the net benefit may have an advantage for interpreting the treatment effect; with adjustment (e.g., IPCW adjustment) for censoring, the three win statistics can complement one another to show the strength of the treatment effect. For calculations we use the R package WINS, available on CRAN (the Comprehensive R Archive Network).
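For intuition, the shared win proportions underlying the three statistics can be sketched in a few lines of Python. The function below is illustrative only (a single uncensored outcome, larger values better); it is not the WINS package, which additionally handles prioritized multiple time-to-event outcomes and censoring adjustments:

```python
from itertools import product

def win_statistics(treat, control):
    """Win ratio, win odds, and net benefit from all pairwise
    comparisons of one uncensored outcome (larger = better)."""
    n_pairs = len(treat) * len(control)
    wins = sum(t > c for t, c in product(treat, control))
    losses = sum(t < c for t, c in product(treat, control))
    ties = n_pairs - wins - losses
    p_w, p_l, p_t = wins / n_pairs, losses / n_pairs, ties / n_pairs
    win_ratio = p_w / p_l                              # ignores ties
    win_odds = (p_w + 0.5 * p_t) / (p_l + 0.5 * p_t)   # splits ties
    net_benefit = p_w - p_l
    return win_ratio, win_odds, net_benefit
```

With treat = [2, 4, 6] and control = [1, 2, 3], the nine pairs give p_w = 7/9, p_l = 1/9, p_t = 1/9, hence a win ratio of 7, win odds of 5, and a net benefit of 2/3 — three different summaries of the same win proportions, consistent with the point made in the abstract.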


Subject(s)
Computer Simulation , Humans , Probability
4.
Stat Med ; 40(14): 3367-3384, 2021 06 30.
Article in English | MEDLINE | ID: mdl-33860957

ABSTRACT

The win ratio, a recently proposed measure for comparing the benefit of two treatment groups, allows ties in the data but ignores ties in the inference. In this article, we highlight some difficulties that this can lead to, and we propose to focus on the win odds instead, a modification of the win ratio which takes ties into account. We construct hypothesis tests and confidence intervals for the win odds, and we investigate their properties through simulations and in a case study. We conclude that the win odds should be preferred over the win ratio.

5.
Stat Med ; 40(26): 5702-5724, 2021 11 20.
Article in English | MEDLINE | ID: mdl-34327735

ABSTRACT

In heart failure (HF) trials, efficacy is usually assessed by a composite endpoint including cardiovascular death (CVD) and heart failure hospitalizations (HFHs), which has traditionally been evaluated with a time-to-first-event analysis based on a Cox model. Because a considerable fraction of events is ignored in this way, methods for recurrent events have been suggested, among them the semiparametric proportional rates models of Lin, Wei, Yang, and Ying (LWYY model) and of Mao and Lin (Mao-Lin model). In our work we apply least false parameter theory to explain the behavior of the composite treatment effect estimates resulting from the Cox model, the LWYY model, and the Mao-Lin model in clinically relevant scenarios parameterized through joint frailty models. These account both for different treatment effects on the two outcomes (CVD, HFHs) and for the positive correlation between their risk rates. For the important setting of beneficial outcome-specific treatment effects, we show that the correlation results in composite treatment effect estimates that decrease with trial duration. The estimate from the Cox model is affected more by this attenuation than the estimates from the recurrent event models, which both demonstrate very similar behavior. Since the Mao-Lin model turns out to be less sensitive to harmful effects on mortality, we conclude that, among the three investigated approaches, the LWYY model is the most appropriate one for the composite endpoint in HF trials. Our investigations are motivated by, and compared with, empirical results from the PARADIGM-HF trial (ClinicalTrials.gov identifier: NCT01035255), a large multicenter trial including 8399 chronic HF patients.


Subject(s)
Heart Failure , Heart Failure/therapy , Humans , Proportional Hazards Models , Treatment Outcome
6.
Stat Med ; 39(14): 1980-1998, 2020 06 30.
Article in English | MEDLINE | ID: mdl-32207171

ABSTRACT

In randomized clinical trials, it is standard to include baseline variables in the primary analysis as covariates, as recommended by international guidelines. For the study design to be consistent with the analysis, these variables should also be taken into account when calculating the sample size to appropriately power the trial. Because assumptions made in the sample size calculation are always subject to some degree of uncertainty, a blinded sample size re-estimation (BSSR) is recommended to adjust the sample size when necessary. In this article, we introduce a BSSR approach for count data outcomes with baseline covariates. Count outcomes are common in clinical trials; examples include the number of exacerbations in asthma and in chronic obstructive pulmonary disease, the number of relapses and scan lesions in multiple sclerosis, and the number of seizures in epilepsy. The introduced methods are based on Wald and likelihood ratio test statistics. The approaches are illustrated by a clinical trial in epilepsy. The proposed BSSR procedures are compared in a Monte Carlo simulation study and shown to yield power values close to the target while not inflating the type I error rate.
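As a sketch of the blinded re-estimation idea (not the paper's covariate-adjusted procedure), the Python fragment below re-estimates the per-arm sample size for a simple Poisson rate comparison under 1:1 allocation: the pooled event rate is estimated from interim counts without unblinding, and an assumed rate ratio fixes the split into per-arm rates. The function names and the Wald-test sample size formula for a rate difference are illustrative assumptions:

```python
from math import ceil

# Standard normal quantiles for one-sided alpha = 0.025 and 90% power
Z_ALPHA, Z_BETA = 1.959964, 1.281552

def n_per_arm(rate_control, rate_treat, follow_up=1.0):
    """Per-arm sample size for a Wald test of the Poisson rate
    difference: n = (z_a + z_b)^2 (r_c + r_t) / (t (r_c - r_t)^2)."""
    diff = rate_control - rate_treat
    return ceil((Z_ALPHA + Z_BETA) ** 2 * (rate_control + rate_treat)
                / (follow_up * diff ** 2))

def blinded_reestimate(interim_counts, follow_up, assumed_ratio):
    """Re-estimate n without unblinding: the pooled event rate comes
    from all interim counts combined; the assumed rate ratio
    (treatment/control) fixes the split into per-arm rates (1:1)."""
    pooled = sum(interim_counts) / (len(interim_counts) * follow_up)
    rate_control = 2 * pooled / (1 + assumed_ratio)
    rate_treat = assumed_ratio * rate_control
    return n_per_arm(rate_control, rate_treat, follow_up)
```

The blinded step uses only the overall event rate, which is the same principle the paper exploits, although its procedures additionally accommodate baseline covariates and are evaluated with both Wald and likelihood ratio statistics.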


Subject(s)
Models, Statistical , Research Design , Humans , Likelihood Functions , Recurrence , Sample Size
7.
Stat Med ; 39(27): 3968-3985, 2020 11 30.
Article in English | MEDLINE | ID: mdl-32815175

ABSTRACT

Blinded sample size re-estimation and information monitoring based on blinded data have been suggested to mitigate risks due to planning uncertainties regarding nuisance parameters. Motivated by a randomized controlled trial in pediatric multiple sclerosis (MS), a continuous monitoring procedure for overdispersed count data was proposed recently. However, this procedure assumed constant event rates, an assumption often not met in practice. Here we extend the procedure to accommodate time trends in the event rates, considering two blinded approaches: (a) the mixture approach, modeling the number of events by a mixture of two negative binomial distributions, and (b) the lumping approach, approximating the marginal distribution of the event counts by a negative binomial distribution. Through simulations, the operating characteristics of the proposed procedures are investigated under decreasing event rates. We find that the type I error rate is not inflated relevantly by either of the monitoring procedures, with the exception of strong time dependencies, where the procedure assuming constant rates exhibits some inflation. Furthermore, the procedure accommodating time trends has generally favorable power properties compared with the procedure based on constant rates, which often stops too late. The proposed method is illustrated by the clinical trial in pediatric MS.


Subject(s)
Multiple Sclerosis , Research Design , Binomial Distribution , Child , Humans , Models, Statistical , Multiple Sclerosis/drug therapy , Sample Size , Time
8.
Pharm Stat ; 17(2): 126-143, 2018 03.
Article in English | MEDLINE | ID: mdl-29181869

ABSTRACT

Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Models, Statistical , Clinical Trials as Topic/methods , Depression/drug therapy , Humans , Hypericum , Pilot Projects , Sample Size
9.
Stat Med ; 36(23): 3636-3653, 2017 Oct 15.
Article in English | MEDLINE | ID: mdl-28608469

ABSTRACT

In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active control and a placebo in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate.


Subject(s)
Clinical Trials as Topic/methods , Models, Statistical , Sample Size , Computer Simulation , Humans , Monte Carlo Method , Pilot Projects , Placebos , Reproducibility of Results , Research Design
10.
Stat Med ; 36(6): 883-898, 2017 03 15.
Article in English | MEDLINE | ID: mdl-27859506

ABSTRACT

The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when it is ethically justifiable, as it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that, for count data, the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN).


Subject(s)
Clinical Trials as Topic/methods , Clinical Trials as Topic/standards , Equivalence Trials as Topic , Humans , Monte Carlo Method , Poisson Distribution , Sample Size , Statistical Distributions , Statistics as Topic
11.
Stat Med ; 35(4): 505-21, 2016 Feb 20.
Article in English | MEDLINE | ID: mdl-26388314

ABSTRACT

A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The proposed methods are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN.


Subject(s)
Clinical Trials as Topic , Endpoint Determination , Models, Statistical , Research Design , Computer Simulation , Dimethyl Fumarate/therapeutic use , Humans , Immunosuppressive Agents/therapeutic use , Magnetic Resonance Imaging , Monte Carlo Method , Multiple Sclerosis/drug therapy , Multiple Sclerosis/pathology , Placebos , Sample Size
12.
Stat Methods Med Res ; 31(10): 2004-2020, 2022 10.
Article in English | MEDLINE | ID: mdl-35698787

ABSTRACT

Late-phase clinical trials are occasionally planned with one or more interim analyses to allow for early termination or adaptation of the study. While extensive theory has been developed for the analysis of ordered categorical data in terms of the Wilcoxon-Mann-Whitney test, there has been comparatively little discussion in the group sequential literature on how to provide repeated confidence intervals and simple power formulas to ease sample size determination. Dealing more broadly with the nonparametric Behrens-Fisher problem, we focus on the comparison of two parallel treatment arms and show that the Wilcoxon-Mann-Whitney test, the Brunner-Munzel test, and a test procedure based on the log win odds, a modification of the win ratio, asymptotically follow the canonical joint distribution. In addition to developing power formulas based on these results, simulations confirm the adequacy of the proposed methods for a range of scenarios. Lastly, we apply our methodology to the FREEDOMS clinical trial (ClinicalTrials.gov Identifier: NCT00289978) in patients with relapsing-remitting multiple sclerosis.
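For rough planning in this setting, Noether's classical approximation for the Wilcoxon-Mann-Whitney sample size can serve as a starting point; it is a simpler device than the repeated confidence intervals and power formulas developed in the paper. The sketch below (hypothetical function name; ties and the group sequential machinery are ignored) maps an anticipated win probability P(X > Y) to a total sample size:

```python
from math import ceil
from statistics import NormalDist

def wmw_sample_size(p_win, alpha=0.05, power=0.9, alloc=0.5):
    """Noether's approximate total sample size for a two-sided
    Wilcoxon-Mann-Whitney test.  p_win is the anticipated probability
    P(X > Y) (assumed != 0.5, ties ignored); alloc is the fraction of
    patients allocated to the first group."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2
                / (12 * alloc * (1 - alloc) * (p_win - 0.5) ** 2))
```

For an anticipated P(X > Y) of 0.65 with equal allocation, two-sided alpha of 0.05, and 90% power, this approximation gives 156 patients in total.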


Subject(s)
Statistics, Nonparametric , Clinical Trials as Topic , Humans , Sample Size
13.
Contemp Clin Trials ; 98: 106154, 2020 11.
Article in English | MEDLINE | ID: mdl-32961361

ABSTRACT

The first cases of coronavirus disease 2019 (COVID-19) were reported in December 2019, and the outbreak of SARS-CoV-2 was declared a pandemic in March 2020 by the World Health Organization. This sparked a plethora of investigations into diagnostics and vaccination for SARS-CoV-2, as well as treatments for COVID-19. Since COVID-19 is a severe disease associated with high mortality, clinical trials in this disease should be monitored by a data monitoring committee (DMC), also known as a data safety monitoring board (DSMB). DMCs in this indication face a number of challenges, including fast recruitment requiring an unusually high frequency of safety reviews, more frequent use of complex designs, and virtually no prior experience with the disease. In this paper, we provide a perspective on the work of DMCs for clinical trials of treatments for COVID-19. More specifically, we discuss organizational aspects of setting up and running DMCs for COVID-19 trials, in particular for trials with more complex designs such as platform trials or adaptive designs. Furthermore, statistical aspects of monitoring clinical trials of treatments for COVID-19 are considered. Recommendations are made regarding the presentation of the data, stopping rules for safety monitoring, and the use of external data. The proposed stopping boundaries are assessed in a simulation study motivated by clinical trials in COVID-19.


Subject(s)
COVID-19 Drug Treatment , COVID-19 Testing , Clinical Trials Data Monitoring Committees , Research Design/trends , Vaccination , COVID-19/diagnosis , COVID-19/epidemiology , COVID-19/prevention & control , Clinical Trials Data Monitoring Committees/organization & administration , Clinical Trials Data Monitoring Committees/standards , Clinical Trials Data Monitoring Committees/trends , Computer Simulation , Ethics Committees, Research , Humans , Randomized Controlled Trials as Topic/ethics , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , SARS-CoV-2
14.
Stat Methods Med Res ; 28(8): 2326-2347, 2019 08.
Article in English | MEDLINE | ID: mdl-29770729

ABSTRACT

Count data and recurrent events in clinical trials, such as the number of lesions on magnetic resonance imaging in multiple sclerosis, the number of relapses in multiple sclerosis, the number of hospitalizations in heart failure, and the number of exacerbations in asthma or in chronic obstructive pulmonary disease (COPD), are often modeled by negative binomial distributions. In this manuscript, we study the planning and analysis of clinical trials with group sequential designs for negative binomial outcomes. We propose a group sequential testing procedure for negative binomial outcomes based on Wald statistics using maximum likelihood estimators. The asymptotic distribution of the proposed group sequential test statistics is derived. The finite sample properties of the proposed group sequential test for negative binomial outcomes and the methods for planning the respective clinical trials are assessed in a simulation study. The simulation scenarios are motivated by clinical trials in chronic heart failure and relapsing multiple sclerosis, which cover a wide range of practically relevant settings. Our research confirms that the asymptotic normal theory of group sequential designs can be applied to negative binomial outcomes when the hypotheses are tested using Wald statistics and maximum likelihood estimators. We also propose two methods, one based on Student's t-distribution and one based on resampling, to improve type I error rate control in small samples. The statistical methods studied in this manuscript are implemented in the R package gscounts, which is available for download on the Comprehensive R Archive Network (CRAN).
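The fixed-sample building block of such a procedure can be illustrated with a short sketch. The Python function below (illustrative name, not part of gscounts) computes a Wald statistic for the log rate ratio of two negative binomial samples, treating the shape parameter as known; for a known shape, the maximum likelihood estimate of the mean is the sample mean, and the delta method gives Var(log mean) ≈ (1/mu + 1/k)/n. The paper's procedure additionally estimates all parameters by maximum likelihood and embeds the statistic in group sequential boundaries:

```python
from math import log, sqrt

def nb_wald_z(counts_treat, counts_control, shape):
    """Wald statistic for the log rate ratio of two negative binomial
    samples with common known shape k (Var(X) = mu + mu^2 / k).
    With known k, the sample mean is the ML estimate of mu."""
    n1, n2 = len(counts_treat), len(counts_control)
    m1 = sum(counts_treat) / n1
    m2 = sum(counts_control) / n2
    est = log(m1 / m2)                      # log rate ratio estimate
    se = sqrt((1 / m1 + 1 / shape) / n1     # delta-method variance
              + (1 / m2 + 1 / shape) / n2)
    return est / se
```

In a group sequential design, a statistic of this form would be recomputed at each interim look and compared with boundary values; the paper shows that, with Wald statistics and maximum likelihood estimators, the usual asymptotic normal theory for such sequences applies.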


Subject(s)
Binomial Distribution , Clinical Trials as Topic , Research Design , Asthma/physiopathology , Chronic Disease , Heart Failure/therapy , Hospitalization/statistics & numerical data , Humans , Likelihood Functions , Magnetic Resonance Imaging , Multiple Sclerosis, Relapsing-Remitting/diagnostic imaging , Pulmonary Disease, Chronic Obstructive/physiopathology , Sample Size
15.
Stat Methods Med Res ; 28(8): 2385-2403, 2019 08.
Article in English | MEDLINE | ID: mdl-29890892

ABSTRACT

Robust semiparametric models for recurrent events have received increasing attention in the analysis of clinical trials in a variety of diseases including chronic heart failure. In comparison to parametric recurrent event models, robust semiparametric models are more flexible in that neither the baseline event rate nor the process inducing between-patient heterogeneity needs to be specified in terms of a specific parametric statistical model. However, implementing group sequential designs in the robust semiparametric model is complicated by the fact that the sequence of Wald statistics does not asymptotically follow the canonical joint distribution. In this manuscript, we propose two types of group sequential procedures for a robust semiparametric analysis of recurrent events. The first group sequential procedure is based on the asymptotic covariance of the sequence of Wald statistics and guarantees asymptotic control of the type I error rate. The second procedure is based on the canonical joint distribution and does not guarantee asymptotic type I error rate control, but is easy to implement and corresponds to the well-known standard approach for group sequential designs. Moreover, we describe how to determine the maximum information when planning a clinical trial with a group sequential design and a robust semiparametric analysis of recurrent events. We contrast the operating characteristics of the proposed group sequential procedures in a simulation study motivated by the ongoing phase 3 PARAGON-HF trial (ClinicalTrials.gov identifier: NCT01920711) in more than 4600 patients with chronic heart failure and a preserved ejection fraction. We found that both group sequential procedures have similar operating characteristics and that, for some practically relevant scenarios, the group sequential procedure based on the canonical joint distribution has advantages with respect to the control of the type I error rate. The proposed method for calculating the maximum information results in appropriately powered trials for both procedures.


Subject(s)
Heart Failure/therapy , Models, Statistical , Randomized Controlled Trials as Topic , Research Design , Computer Simulation , Hospitalization/statistics & numerical data , Humans , Monte Carlo Method
16.
PLoS One ; 13(10): e0204503, 2018.
Article in English | MEDLINE | ID: mdl-30332419

ABSTRACT

BACKGROUND: Permanent pacemaker implantation (PPI) following transcatheter aortic valve replacement (TAVR) is a frequent post-interventional complication, and its management remains controversial. OBJECTIVE: We sought to elucidate the electrophysiological, procedural, and clinical baseline parameters that are associated with, and perhaps predict, the need for PPI after TAVR in a heterogeneous-valve-type real-world cohort. METHODS: Overall, 494 patients receiving TAVR at our center from April 2009 to August 2015 were screened. ECG analyses and clinical parameters were collected prospectively. RESULTS: Overall, 401 patients in this all-comers real-world TAVR cohort, with a PPI rate of 16%, were included. The mean age was 82 years, and the mean time to PPI was 5.5 days. Most implanted valves were Edwards SAPIEN (81%); DirectFlow, CoreValve, and Portico valves accounted for the remainder. The main indications for PPI were third-degree atrioventricular (AV) block, AV block Mobitz type II, bradycardic atrial fibrillation, and persistent sinus bradycardia. Between the groups with and without PPI, significant differences were noted in univariate analyses for post-TAVR balloon dilatation, resting heart rate, QRS interval, PR interval with a cut-off of >178 ms, left anterior fascicular block, and RBBB. In the subsequent multiple regression analysis, post-TAVR balloon dilatation and a PR interval with a cut-off of >178 ms were significant predictors of PPI. CONCLUSION: This real-world cohort differs from others in its size and heterogeneous valve selection, and indicates for the first time that patients with post-TAVR balloon dilatation or a prolonged PR interval are at higher risk of pacemaker dependency after TAVR.


Subject(s)
Pacemaker, Artificial , Postoperative Complications/diagnosis , Postoperative Complications/therapy , Transcatheter Aortic Valve Replacement , Aged, 80 and over , Aortic Valve Stenosis/diagnosis , Aortic Valve Stenosis/surgery , Biomarkers/metabolism , Electrocardiography , Female , Humans , Length of Stay , Male , Prospective Studies