Results 1 - 20 of 52
1.
Biometrics ; 79(4): 3792-3802, 2023 12.
Article in English | MEDLINE | ID: mdl-36647690

ABSTRACT

Recurrent events are often important endpoints in randomized clinical trials. For example, the number of recurrent disease-related hospitalizations may be considered as a clinically meaningful endpoint in cardiovascular studies. In some settings, the recurrent event process may be terminated by an event such as death, which makes it more challenging to define and estimate a causal treatment effect on recurrent event endpoints. In this paper, we focus on the principal stratum estimand, where the treatment effect of interest on recurrent events is defined among subjects who would be alive regardless of the assigned treatment. For the estimation of the principal stratum effect in randomized clinical trials, we propose a Bayesian approach based on a joint model of the recurrent event and death processes with a frailty term accounting for within-subject correlation. We also present Bayesian posterior predictive check procedures for assessing the model fit. The proposed approaches are demonstrated in the randomized Phase III chronic heart failure trial PARAGON-HF (NCT01920711).
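
The following is a minimal Python sketch of the kind of joint model described above: a shared gamma frailty multiplies both the recurrent-event rate and the death hazard, so death terminates follow-up and the two processes are correlated within subjects. All rates, treatment effects, and the frailty variance are illustrative assumptions, not values from PARAGON-HF or from the paper's Bayesian fit.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(n=2000, phi=0.5, base_rec=0.8, base_death=0.1,
                   hr_rec=0.75, hr_death=0.85, followup=3.0):
    """Simulate recurrent hospitalizations and death from a shared-frailty model.

    A common gamma frailty (mean 1, variance phi) multiplies both the
    recurrent-event rate and the death hazard, inducing within-subject
    correlation between the two processes.
    """
    trt = rng.integers(0, 2, size=n)                      # 1:1 randomization
    frailty = rng.gamma(shape=1/phi, scale=phi, size=n)   # E=1, Var=phi
    death_rate = frailty * base_death * np.where(trt == 1, hr_death, 1.0)
    death_time = rng.exponential(1.0 / death_rate)
    exposure = np.minimum(death_time, followup)           # death terminates follow-up
    rec_rate = frailty * base_rec * np.where(trt == 1, hr_rec, 1.0)
    n_events = rng.poisson(rec_rate * exposure)           # recurrent event counts
    return trt, n_events, exposure, death_time < followup

trt, events, exposure, died = simulate_trial()
for arm in (0, 1):
    m = trt == arm
    print(f"arm {arm}: events/100 pt-yrs = {100*events[m].sum()/exposure[m].sum():.1f}, "
          f"death proportion = {died[m].mean():.3f}")
```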


Subject(s)
Heart Failure , Humans , Bayes Theorem , Heart Failure/drug therapy , Chronic Disease
2.
Ther Adv Neurol Disord ; 15: 17562864211070449, 2022.
Article in English | MEDLINE | ID: mdl-35514529

ABSTRACT

Background: To support innovative trial designs in a regulatory setting for pediatric-onset multiple sclerosis (MS), this study performed a systematic literature review and meta-analysis of relapse rates with interferon β (IFN β), fingolimod, and natalizumab, and thereby demonstrates the potential benefits of Bayesian and non-inferiority designs in this population. Methods: We conducted a literature search in MEDLINE and EMBASE from inception until 17 June 2020 for all studies reporting annualized relapse rates (ARR) in IFN β-, fingolimod-, or natalizumab-treated patients with pediatric-onset relapsing-remitting MS. These interventions were chosen because the published literature is largely limited to these treatments and because they are currently used for the treatment of pediatric MS. Two researchers independently extracted data and assessed study quality using the Cochrane Effective Practice and Organization of Care Quality Assessment Tool. The meta-analysis estimates were obtained with a Bayesian random effects model. Data were summarized as ARR point estimates and 95% credible intervals. Results: We found 19 articles, including 2 randomized controlled trials. The reported baseline ARR was between 1.4 and 3.7. The meta-analytic ARR was significantly higher in IFN β-treated patients (0.69, 95% credible interval: 0.51-0.91) than with fingolimod (0.11, 0.04-0.27) or natalizumab (0.17, 0.09-0.31). Based on the meta-analysis results, an appropriate non-inferiority margin versus fingolimod could lie in the range of 2.29-2.67, and versus natalizumab in the range of 1.72-2.29, on the ARR ratio scale. A Bayesian design that uses historical information for a fingolimod or natalizumab control arm could reduce the sample size of a new trial by 18 or 14 patients, respectively. Conclusion: This meta-analysis provides evidence that relapse rates are considerably higher with IFN β than with fingolimod or natalizumab. The results support the use of innovative Bayesian or non-inferiority designs to avoid exposing patients to less effective comparators in trials and to bring new medications to patients more efficiently.

3.
Pharm Stat ; 20(6): 1265-1277, 2021 11.
Article in English | MEDLINE | ID: mdl-34169641

ABSTRACT

Patients often discontinue from a clinical trial because their health condition is not improving or they cannot tolerate the assigned treatment. Consequently, the observed clinical outcomes in the trial are likely better on average than if every patient had completed the trial. If these differences between trial completers and non-completers cannot be explained by the observed data, then the study outcomes are missing not at random (MNAR). One way to overcome this problem is the trimmed means approach for missing data due to study discontinuation, which sets missing values to the worst observed outcome and then trims away a fraction of the distribution from each treatment arm before calculating differences in treatment efficacy (Permutt T, Li F. Trimmed means for symptom trials with dropouts. Pharm Stat. 2017;16(1):20-28). In this paper, we derive necessary and sufficient conditions under which this approach identifies the average population treatment effect. Simulation studies show that the trimmed means approach can effectively estimate treatment efficacy when data are MNAR and missingness due to study discontinuation is strongly associated with an unfavorable outcome, but trimmed means fail when data are missing at random. If the reasons for study discontinuation in a clinical trial are known, analysts can improve estimates by combining multiple imputation with the trimmed means approach when the assumptions of each hold. We compare the methodology with existing approaches using data from a clinical trial for chronic pain. The R package trim implements the method. When the assumptions are justifiable, using trimmed means can help identify treatment effects despite MNAR data.
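
A minimal sketch of the trimmed-means computation as summarized above (not the R package trim itself): dropouts are scored with the worst observed outcome, the same fraction of the lowest values is trimmed from each arm, and the trimmed means are compared. The data, dropout mechanism, and trimming fraction below are illustrative assumptions.

```python
import numpy as np

def trimmed_mean_effect(y_trt, y_ctl, dropped_trt, dropped_ctl, trim_frac=0.2):
    """Trimmed-means contrast for outcomes where higher is better.

    Dropouts are scored with the worst observed outcome, then the same
    fraction of the lowest values is trimmed from each arm before the
    means are compared.
    """
    worst = min(y_trt[~dropped_trt].min(), y_ctl[~dropped_ctl].min())
    yt = np.where(dropped_trt, worst, y_trt)
    yc = np.where(dropped_ctl, worst, y_ctl)

    def trim_low(y, frac):
        y = np.sort(y)
        k = int(np.ceil(frac * len(y)))   # number of lowest values removed
        return y[k:]

    return trim_low(yt, trim_frac).mean() - trim_low(yc, trim_frac).mean()

rng = np.random.default_rng(0)
n = 200
y_c = rng.normal(0.0, 1.0, n)
y_t = rng.normal(0.5, 1.0, n)
# Dropout is more likely for poor outcomes (an MNAR-type mechanism)
drop_c = rng.random(n) < 1 / (1 + np.exp(2 + 1.5 * y_c))
drop_t = rng.random(n) < 1 / (1 + np.exp(2 + 1.5 * y_t))
print("trimmed-mean difference:", round(trimmed_mean_effect(y_t, y_c, drop_t, drop_c), 3))
```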


Subject(s)
Research Design , Humans , Treatment Outcome
5.
Pharm Stat ; 20(4): 737-751, 2021 07.
Article in English | MEDLINE | ID: mdl-33624407

ABSTRACT

A randomized trial allows estimation of the causal effect of an intervention compared with a control in the overall population and in subpopulations defined by baseline characteristics. Often, however, clinical questions also arise regarding the treatment effect in subpopulations of patients who would experience clinical or disease-related events post-randomization. Events that occur after treatment initiation and potentially affect the interpretation or the existence of the measurements are called intercurrent events in the ICH E9(R1) guideline. If the intercurrent event is a consequence of treatment, randomization alone is no longer sufficient to meaningfully estimate the treatment effect. Analyses that compare the subgroups of patients without the intercurrent event between intervention and control do not estimate a causal effect. This is well known, but post hoc analyses of this kind are still commonly performed in drug development. An alternative approach is the principal stratum strategy, which classifies subjects according to the potential occurrence of the intercurrent event under both study arms. We illustrate with examples that questions formulated through principal strata occur naturally in drug development and argue that approaching these questions within the ICH E9(R1) estimand framework has the potential to lead to more transparent assumptions as well as more adequate analyses and conclusions. In addition, we provide an overview of the assumptions required for estimation of effects in principal strata. Most of these assumptions are unverifiable and should hence be based on solid scientific understanding. Sensitivity analyses are needed to assess the robustness of conclusions.
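
The following toy simulation, under purely illustrative assumptions, shows why comparing the observed "no intercurrent event" subgroups across arms is not a causal contrast, whereas the principal stratum of patients who would remain event-free under either treatment recovers the true effect (known here by construction).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Latent susceptibility drives both the intercurrent event and the outcome
u = rng.normal(size=n)

# Potential intercurrent-event indicators under control and treatment;
# treatment lowers the event probability, so the event-free subgroups differ in u
evt_c = rng.random(n) < 1 / (1 + np.exp(-u))
evt_t = rng.random(n) < 1 / (1 + np.exp(-(u - 1.0)))

# Potential outcomes: the true treatment effect is +1 for everyone
y_c = u + rng.normal(size=n)
y_t = y_c + 1.0

z = rng.integers(0, 2, n)              # randomized assignment
y_obs = np.where(z == 1, y_t, y_c)
evt_obs = np.where(z == 1, evt_t, evt_c)

naive = y_obs[(z == 1) & ~evt_obs].mean() - y_obs[(z == 0) & ~evt_obs].mean()
stratum = (~evt_c) & (~evt_t)          # event-free under both arms (not observable in practice)
causal = y_t[stratum].mean() - y_c[stratum].mean()
print(f"naive 'event-free' comparison: {naive:.2f}  (biased)")
print(f"principal-stratum effect:      {causal:.2f}  (true effect = 1.0)")
```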


Subject(s)
Drug Development , Research Design , Causality , Data Interpretation, Statistical , Humans
6.
Res Synth Methods ; 12(4): 448-474, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33486828

ABSTRACT

The normal-normal hierarchical model (NNHM) constitutes a simple and widely used framework for meta-analysis. In the common case of only a few studies contributing to the meta-analysis, standard approaches to inference tend to perform poorly, and Bayesian meta-analysis has been suggested as a potential solution. The Bayesian approach, however, requires the sensible specification of prior distributions. While noninformative priors are commonly used for the overall mean effect, the use of weakly informative priors has been suggested for the heterogeneity parameter, in particular in the setting of (very) few studies. To date, however, a consensus on how to generally specify a weakly informative heterogeneity prior is lacking. Here we investigate the problem more closely and provide some guidance on prior specification.
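
As a small illustration of the NNHM with a weakly informative heterogeneity prior, the sketch below computes the posterior on a grid, integrating the overall mean out analytically under a flat prior. The study data and the half-normal scale of 0.5 are illustrative assumptions, not the paper's recommendation.

```python
import numpy as np

# Toy meta-analysis data: estimated effects and standard errors from three studies
y = np.array([0.32, 0.11, 0.45])
se = np.array([0.15, 0.20, 0.25])

# Weakly informative half-normal prior on the heterogeneity tau (scale is an assumption)
tau = np.linspace(1e-4, 2.0, 4000)
log_prior = -0.5 * (tau / 0.5) ** 2

def log_marg_lik(t):
    """log p(y | tau) with the overall mean mu integrated out under a flat prior."""
    w = 1.0 / (se**2 + t**2)
    mu_hat = np.sum(w * y) / np.sum(w)
    return 0.5 * np.sum(np.log(w)) - 0.5 * np.log(np.sum(w)) - 0.5 * np.sum(w * (y - mu_hat) ** 2)

log_post = log_prior + np.array([log_marg_lik(t) for t in tau])
post = np.exp(log_post - log_post.max())
post /= post.sum()                         # discrete posterior weights on the grid

# The conditional posterior of mu given tau is normal; average its moments over tau
w = 1.0 / (se[None, :] ** 2 + tau[:, None] ** 2)
mu_hat = np.sum(w * y, axis=1) / np.sum(w, axis=1)
mu_var = 1.0 / np.sum(w, axis=1)
mu_mean = np.sum(post * mu_hat)
mu_sd = np.sqrt(np.sum(post * (mu_var + mu_hat**2)) - mu_mean**2)
cdf = np.cumsum(post)
print(f"tau posterior median: {tau[np.searchsorted(cdf, 0.5)]:.2f}")
print(f"mu posterior mean {mu_mean:.3f}, sd {mu_sd:.3f}")
```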


Subject(s)
Bayes Theorem
7.
Stat Med ; 39(27): 3968-3985, 2020 11 30.
Article in English | MEDLINE | ID: mdl-32815175

ABSTRACT

Blinded sample size re-estimation and information monitoring based on blinded data have been suggested to mitigate risks due to planning uncertainties regarding nuisance parameters. Motivated by a randomized controlled trial in pediatric multiple sclerosis (MS), a continuous monitoring procedure for overdispersed count data was proposed recently. However, this procedure assumed constant event rates, an assumption often not met in practice. Here we extend the procedure to accommodate time trends in the event rates, considering two blinded approaches: (a) the mixture approach, which models the number of events by a mixture of two negative binomial distributions, and (b) the lumping approach, which approximates the marginal distribution of the event counts by a negative binomial distribution. Through simulations, the operating characteristics of the proposed procedures are investigated under decreasing event rates. We find that the type I error rate is not relevantly inflated by either monitoring procedure, with the exception of strong time dependencies, where the procedure assuming constant rates exhibits some inflation. Furthermore, the procedure accommodating time trends has generally favorable power properties compared with the procedure based on constant rates, which often stops too late. The proposed method is illustrated with the clinical trial in pediatric MS.
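
The sketch below illustrates only the "lumping" idea mentioned above: pool the blinded counts from both arms and fit a single marginal negative binomial by maximum likelihood. The simulated rates and dispersion are illustrative, and the full monitoring procedure with time trends from the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)

def rnb(n, mean, shape):
    # Negative binomial as a gamma-Poisson mixture with the given mean and shape
    return rng.poisson(rng.gamma(shape, mean / shape, size=n))

# Blinded pooled counts: a 1:1 mix of two negative binomial arms with different rates
counts = np.concatenate([rnb(150, 1.2, 0.8), rnb(150, 0.8, 0.8)])

def neg_loglik(par):
    """Negative log-likelihood of a single ('lumped') NB(mean=mu, shape=k)."""
    log_mu, log_k = par
    mu, k = np.exp(log_mu), np.exp(log_k)
    ll = (gammaln(counts + k) - gammaln(k) - gammaln(counts + 1)
          + k * np.log(k / (k + mu)) + counts * np.log(mu / (k + mu)))
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, k_hat = np.exp(fit.x)
print(f"blinded overall rate ~ {mu_hat:.2f}, dispersion shape ~ {k_hat:.2f}")
```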


Subject(s)
Multiple Sclerosis , Research Design , Binomial Distribution , Child , Humans , Models, Statistical , Multiple Sclerosis/drug therapy , Sample Size , Time
9.
Biometrics ; 76(2): 578-587, 2020 06.
Article in English | MEDLINE | ID: mdl-32142163

ABSTRACT

Determining the sample size of an experiment can be challenging, even more so when incorporating external information via a prior distribution. Such information is increasingly used to reduce the size of the control group in randomized clinical trials. Knowing the amount of prior information, expressed as an equivalent prior effective sample size (ESS), clearly facilitates trial designs. Various methods to obtain a prior's ESS have been proposed recently. They have been justified by the fact that they give the standard ESS for one-parameter exponential families. However, despite being based on similar information-based metrics, they may lead to surprisingly different ESS for nonconjugate settings, which complicates many designs with prior information. We show that current methods fail a basic predictive consistency criterion, which requires the expected posterior-predictive ESS for a sample of size N to be the sum of the prior ESS and N. The expected local-information-ratio ESS is introduced and shown to be predictively consistent. It corrects the ESS of current methods, as shown for normally distributed data with a heavy-tailed Student-t prior and exponential data with a generalized Gamma prior. Finally, two applications are discussed: the prior ESS for the control group derived from historical data and the posterior ESS for hierarchical subgroup analyses.
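
Below is a numerical check of the expected local-information-ratio idea in the conjugate Beta-binomial case: the prior information, the negative second derivative of the log prior density, is divided by the Fisher information of a single observation and averaged over the prior, which for a Beta(a, b) prior should recover the standard ESS of a + b. This is a sketch of the conjugate special case only, not of the nonconjugate examples discussed in the paper.

```python
import numpy as np
from scipy import integrate
from scipy.stats import beta

def elir_ess_beta(a, b):
    """Expected local-information-ratio ESS for a Beta(a, b) prior with binomial data.

    Prior information:  i_p(t) = (a-1)/t^2 + (b-1)/(1-t)^2
    Fisher information of one Bernoulli observation:  i_F(t) = 1/(t(1-t))
    ESS = E_prior[ i_p(t) / i_F(t) ]
    """
    def integrand(t):
        i_p = (a - 1) / t**2 + (b - 1) / (1 - t) ** 2
        i_f = 1.0 / (t * (1 - t))
        return (i_p / i_f) * beta.pdf(t, a, b)
    val, _ = integrate.quad(integrand, 0.0, 1.0)
    return val

for a, b in [(2, 8), (5, 5), (3, 12)]:
    print(f"Beta({a},{b}): ELIR ESS = {elir_ess_beta(a, b):.2f}  (a + b = {a + b})")
```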


Subject(s)
Models, Statistical , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Analysis of Variance , Biometry , Data Interpretation, Statistical , Humans , Proof of Concept Study
10.
Clin Pharmacol Ther ; 107(4): 806-816, 2020 04.
Article in English | MEDLINE | ID: mdl-31725899

ABSTRACT

Randomized controlled trials are the gold standard for investigating the efficacy and safety of new treatments. In certain settings, however, randomizing patients to control may be difficult for ethical or feasibility reasons. Borrowing strength from relevant individual patient control data from external trials or real-world data (RWD) sources may then allow us to reduce, or even eliminate, the concurrent control group. Naive direct use of external control data is not valid due to differences in patient characteristics and other confounding factors. Instead, we suggest the rigorous application of meta-analytic and propensity score methods to use external controls in a principled way. We illustrate these methods with two case studies: (i) a single-arm trial in a rare cancer, using propensity score matching to construct an external control from RWD; and (ii) a randomized trial in children with multiple sclerosis, borrowing strength from past trials using a Bayesian meta-analytic approach.
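
Below is a minimal sketch of 1:1 nearest-neighbour propensity score matching on simulated data, in the spirit of case study (i). The covariates, caliper, and matching-without-replacement rule are illustrative choices, not the paper's actual implementation or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)

# Simulated single-arm trial (treated) and a larger real-world control pool
n_trt, n_rwd = 80, 1000
X_trt = rng.normal([0.5, 0.3], 1.0, size=(n_trt, 2))    # trial patients differ in covariates
X_rwd = rng.normal([0.0, 0.0], 1.0, size=(n_rwd, 2))
X = np.vstack([X_trt, X_rwd])
z = np.r_[np.ones(n_trt), np.zeros(n_rwd)]

# Propensity score: probability of being a trial patient given covariates
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
ps_trt, ps_rwd = ps[:n_trt], ps[n_trt:]

# 1:1 nearest-neighbour matching on the score, without replacement,
# with a caliper of 0.2 standard deviations of the score (illustrative choices)
caliper = 0.2 * ps.std()
available = np.ones(n_rwd, dtype=bool)
matches = []
for i in np.argsort(-ps_trt):                 # match treated with the highest scores first
    d = np.abs(ps_rwd - ps_trt[i])
    d[~available] = np.inf
    j = int(np.argmin(d))
    if d[j] <= caliper:
        matches.append((i, j))
        available[j] = False

matched_ctrl = [j for _, j in matches]
print(f"matched {len(matches)} of {n_trt} treated patients")
print("covariate means (treated):        ", X_trt.mean(axis=0).round(2))
print("covariate means (matched control):", X_rwd[matched_ctrl].mean(axis=0).round(2))
print("covariate means (all RWD):        ", X_rwd.mean(axis=0).round(2))
```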


Subject(s)
Multiple Sclerosis/therapy , Neoplasms/therapy , Propensity Score , Randomized Controlled Trials as Topic/methods , Endpoint Determination/methods , Endpoint Determination/trends , Humans , Meta-Analysis as Topic , Multiple Sclerosis/epidemiology , Neoplasms/epidemiology
11.
Stat Med ; 38(23): 4761-4771, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31386219

ABSTRACT

The treatment effect in subgroups of patients is often of interest in randomized controlled clinical trials, as this may provide useful information on which patients benefit most from which treatment. When a specific subgroup is characterized by the absence of certain events that happen postrandomization, a naive analysis on the subset of patients without these events may be misleading. The principal stratification framework allows one to define an appropriate causal estimand in such settings. Statistical inference for the principal stratum estimand hinges on scientifically justified assumptions, which can be included with Bayesian methods through prior distributions. Our motivating example is a large randomized placebo-controlled trial of siponimod in patients with secondary progressive multiple sclerosis. The primary objective of this trial was to demonstrate the efficacy of siponimod relative to placebo in delaying disability progression for the whole study population. However, the treatment effect in the subgroup of patients who would not relapse during the trial is relevant from both a scientific and patient perspective. Assessing this subgroup treatment effect is challenging as there is strong evidence that siponimod reduces relapses. We describe in detail the scientific question of interest, the principal stratum estimand, the corresponding analysis method for binary endpoints, and sensitivity analyses. Although our work is motivated by a randomized clinical trial, the approach has broader appeal and could be adapted for observational studies.


Subject(s)
Bayes Theorem , Randomized Controlled Trials as Topic/statistics & numerical data , Azetidines/therapeutic use , Benzyl Compounds/therapeutic use , Humans , Multiple Sclerosis, Chronic Progressive/drug therapy , Research Design , Sphingosine 1 Phosphate Receptor Modulators/therapeutic use
13.
Stat Methods Med Res ; 28(1): 117-133, 2019 01.
Article in English | MEDLINE | ID: mdl-28633609

ABSTRACT

We consider modelling and inference as well as sample size estimation and reestimation for clinical trials with longitudinal count data as outcomes. Our approach is general but is rooted in design and analysis of multiple sclerosis trials where lesion counts obtained by magnetic resonance imaging are important endpoints. We adopt a binomial thinning model that allows for correlated counts with marginal Poisson or negative binomial distributions. Methods for sample size planning and blinded sample size reestimation for randomised controlled clinical trials with such outcomes are developed. The models and approaches are applicable to data with incomplete observations. A simulation study is conducted to assess the effectiveness of sample size estimation and blinded sample size reestimation methods. Sample sizes attained through these procedures are shown to maintain the desired study power without inflating the type I error. Data from a recent trial in patients with secondary progressive multiple sclerosis illustrate the modelling approach.
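
The sketch below simulates longitudinal counts by binomial thinning in the classical INAR(1)-type construction with Poisson marginals, to illustrate how the thinning parameter induces within-subject correlation. The negative binomial-marginal version used in the paper requires a different innovation distribution and is not shown; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def inar1_poisson(n_subjects, n_visits, lam, alpha):
    """Binomial-thinning (INAR(1)-type) longitudinal counts with Poisson(lam) marginals.

    Y_1 ~ Poisson(lam); Y_t = Binomial(Y_{t-1}, alpha) + Poisson(lam * (1 - alpha)),
    which keeps the Poisson(lam) marginal at every visit while inducing a
    within-subject lag-1 correlation of alpha.
    """
    y = np.empty((n_subjects, n_visits), dtype=int)
    y[:, 0] = rng.poisson(lam, n_subjects)
    for t in range(1, n_visits):
        carried = rng.binomial(y[:, t - 1], alpha)
        innovation = rng.poisson(lam * (1 - alpha), n_subjects)
        y[:, t] = carried + innovation
    return y

y = inar1_poisson(n_subjects=5000, n_visits=4, lam=2.0, alpha=0.5)
print("visit means:", y.mean(axis=0).round(2))                             # ~2.0 each
print("lag-1 correlation:", np.corrcoef(y[:, 0], y[:, 1])[0, 1].round(2))  # ~0.5
```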


Subject(s)
Longitudinal Studies , Models, Statistical , Sample Size , Data Interpretation, Statistical , Humans , Magnetic Resonance Imaging , Multiple Sclerosis/diagnostic imaging , Poisson Distribution , Randomized Controlled Trials as Topic/methods , Statistics as Topic , Time Factors
14.
Stat Methods Med Res ; 28(8): 2326-2347, 2019 08.
Article in English | MEDLINE | ID: mdl-29770729

ABSTRACT

Count data and recurrent events in clinical trials, such as the number of lesions on magnetic resonance imaging in multiple sclerosis, the number of relapses in multiple sclerosis, the number of hospitalizations in heart failure, and the number of exacerbations in asthma or in chronic obstructive pulmonary disease (COPD), are often modeled by negative binomial distributions. In this manuscript, we study the planning and analysis of clinical trials with group sequential designs for negative binomial outcomes. We propose a group sequential testing procedure for negative binomial outcomes based on Wald statistics using maximum likelihood estimators. The asymptotic distribution of the proposed group sequential test statistics is derived. The finite sample properties of the proposed group sequential test for negative binomial outcomes and the methods for planning the respective clinical trials are assessed in a simulation study. The simulation scenarios are motivated by clinical trials in chronic heart failure and relapsing multiple sclerosis, which cover a wide range of practically relevant settings. Our results confirm that the asymptotic normal theory of group sequential designs can be applied to negative binomial outcomes when the hypotheses are tested using Wald statistics and maximum likelihood estimators. We also propose two methods, one based on Student's t-distribution and one based on resampling, to improve type I error rate control in small samples. The statistical methods studied in this manuscript are implemented in the R package gscounts, which is available for download from the Comprehensive R Archive Network (CRAN).
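
For illustration, the sketch below computes a single-stage Wald statistic for the log rate ratio under a negative binomial model fitted by maximum likelihood; in a group sequential design this statistic would be recomputed at each interim analysis and compared against the design's stopping boundaries. This is not the gscounts implementation, and the simulation settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated two-arm trial: negative binomial counts with follow-up offsets, rate ratio 0.8
n = 300
trt = np.repeat([0, 1], n)
t_follow = rng.uniform(0.5, 2.0, 2 * n)                      # exposure times
rate = np.where(trt == 1, 0.8, 1.0)
shape = 1.0
y = rng.poisson(rng.gamma(shape, rate * t_follow / shape))   # NB via gamma-Poisson mixture

def neg_loglik(par):
    b0, b1, log_k = par
    mu = np.exp(b0 + b1 * trt) * t_follow                    # log link with follow-up offset
    k = np.exp(log_k)
    ll = (gammaln(y + k) - gammaln(k) - gammaln(y + 1)
          + k * np.log(k / (k + mu)) + y * np.log(mu / (k + mu)))
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
cov = fit.hess_inv                                           # approximate inverse information
b1_hat, se_b1 = fit.x[1], np.sqrt(cov[1, 1])
wald = b1_hat / se_b1
print(f"log rate ratio {b1_hat:.3f} (SE {se_b1:.3f}), Wald z = {wald:.2f}, "
      f"two-sided p = {2 * norm.sf(abs(wald)):.4f}")
```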


Subject(s)
Binomial Distribution , Clinical Trials as Topic , Research Design , Asthma/physiopathology , Chronic Disease , Heart Failure/therapy , Hospitalization/statistics & numerical data , Humans , Likelihood Functions , Magnetic Resonance Imaging , Multiple Sclerosis, Relapsing-Remitting/diagnostic imaging , Pulmonary Disease, Chronic Obstructive/physiopathology , Sample Size
15.
Stat Methods Med Res ; 28(8): 2385-2403, 2019 08.
Article in English | MEDLINE | ID: mdl-29890892

ABSTRACT

Robust semiparametric models for recurrent events have received increasing attention in the analysis of clinical trials in a variety of diseases, including chronic heart failure. In comparison to parametric recurrent event models, robust semiparametric models are more flexible in that neither the baseline event rate nor the process inducing between-patient heterogeneity needs to be specified in terms of a specific parametric statistical model. However, implementing group sequential designs in the robust semiparametric model is complicated by the fact that the sequence of Wald statistics does not asymptotically follow the canonical joint distribution. In this manuscript, we propose two types of group sequential procedures for a robust semiparametric analysis of recurrent events. The first group sequential procedure is based on the asymptotic covariance of the sequence of Wald statistics, and it guarantees asymptotic control of the type I error rate. The second procedure is based on the canonical joint distribution; it does not guarantee asymptotic type I error rate control but is easy to implement and corresponds to the well-known standard approach for group sequential designs. Moreover, we describe how to determine the maximum information when planning a clinical trial with a group sequential design and a robust semiparametric analysis of recurrent events. We compare the operating characteristics of the proposed group sequential procedures in a simulation study motivated by the ongoing phase 3 PARAGON-HF trial (ClinicalTrials.gov identifier: NCT01920711) in more than 4600 patients with chronic heart failure and a preserved ejection fraction. We found that both group sequential procedures have similar operating characteristics and that, for some practically relevant scenarios, the group sequential procedure based on the canonical joint distribution has advantages with respect to control of the type I error rate. The proposed method for calculating the maximum information results in appropriately powered trials for both procedures.


Subject(s)
Heart Failure/therapy , Models, Statistical , Randomized Controlled Trials as Topic , Research Design , Computer Simulation , Hospitalization/statistics & numerical data , Humans , Monte Carlo Method
16.
Pharm Stat ; 18(1): 54-64, 2019 01.
Article in English | MEDLINE | ID: mdl-30345693

ABSTRACT

In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or to overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
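
A simplified sketch of the information-monitoring idea: compare the information accrued for the log rate ratio, approximated from blinded data, against the target information of the design. The variance approximation and the use of a single pooled rate for both arms are simplifications of the blinded procedures derived in the paper, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def target_information(rate_ratio, alpha=0.05, power=0.9):
    """Information required for the log rate ratio in a fixed two-sided design."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / np.log(rate_ratio) ** 2

def current_information(n_per_arm, follow_up, pooled_rate, shape):
    """Approximate accrued information for the log rate ratio under a negative
    binomial model, using a single blinded (pooled) rate estimate for both arms.

    Var(log rate estimate) per arm ~ 1/(n * rate * t) + 1/(n * shape).
    """
    var_arm = 1.0 / (n_per_arm * pooled_rate * follow_up) + 1.0 / (n_per_arm * shape)
    return 1.0 / (2 * var_arm)

info_max = target_information(rate_ratio=0.7)
# Illustrative interim look: 120 patients per arm, 1 year average follow-up,
# blinded pooled relapse rate 0.45/year, dispersion shape 0.8
info_now = current_information(n_per_arm=120, follow_up=1.0, pooled_rate=0.45, shape=0.8)
decision = "continue" if info_now < info_max else "stop recruitment / follow-up"
print(f"target information: {info_max:.1f}")
print(f"accrued information: {info_now:.1f} -> {decision}")
```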


Subject(s)
Biostatistics/methods , Fingolimod Hydrochloride/therapeutic use , Immunosuppressive Agents/therapeutic use , Multiple Sclerosis, Relapsing-Remitting/drug therapy , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Age Factors , Computer Simulation , Data Interpretation, Statistical , Endpoint Determination/statistics & numerical data , Humans , Models, Statistical , Monte Carlo Method , Multiple Sclerosis, Relapsing-Remitting/diagnosis , Randomized Controlled Trials as Topic/methods , Recurrence , Sample Size , Time Factors , Treatment Outcome
17.
Stat Med ; 38(9): 1503-1528, 2019 04 30.
Article in English | MEDLINE | ID: mdl-30575061

ABSTRACT

In some diseases, such as multiple sclerosis, lesion counts obtained from magnetic resonance imaging (MRI) are used as markers of disease progression. This leads to longitudinal, and typically overdispersed, count data outcomes in clinical trials. Models for such data invariably include a number of nuisance parameters, which can be difficult to specify at the planning stage, leading to considerable uncertainty in sample size specification. Consequently, blinded sample size re-estimation procedures are used, allowing for an adjustment of the sample size within an ongoing trial by estimating relevant nuisance parameters at an interim point, without compromising trial integrity. To date, the methods available for re-estimation have required an assumption that the mean count is time-constant within patients. We propose a new modeling approach that maintains the advantages of established procedures but allows for general underlying and treatment-specific time trends in the mean response. A simulation study is conducted to assess the effectiveness of blinded sample size re-estimation methods over fixed designs. Sample sizes attained through blinded sample size re-estimation procedures are shown to maintain the desired study power without inflating the Type I error rate and the procedure is demonstrated on MRI data from a recent study in multiple sclerosis.


Subject(s)
Binomial Distribution , Clinical Trials as Topic/methods , Sample Size , Computer Simulation , Data Interpretation, Statistical , Humans , Magnetic Resonance Imaging , Multiple Sclerosis/diagnostic imaging , Time
18.
Biom J ; 60(3): 564-582, 2018 05.
Article in English | MEDLINE | ID: mdl-29532950

ABSTRACT

For the approval of biosimilars, it is in most cases necessary to conduct large Phase III clinical trials in patients to convince the regulatory authorities that the product is comparable in terms of efficacy and safety to the originator product. As the originator product has already been studied in several trials beforehand, it seems natural to incorporate this historical information into the demonstration of equivalent efficacy. Since all studies for the regulatory approval of biosimilars are confirmatory studies, the statistical approach is required to have reasonable frequentist properties; most importantly, the Type I error rate must be controlled, at least in all scenarios that are realistic in practice. However, it is well known that the incorporation of historical information can lead to an inflation of the Type I error rate when there is a conflict between the distribution of the historical data and the distribution of the trial data. We illustrate this issue and confirm, using the Bayesian robustified meta-analytic-predictive (MAP) approach as an example, that it is not possible to simultaneously control the Type I error rate over the complete parameter space and gain power compared with a standard frequentist approach that considers only the data of the new study. We propose a hybrid Bayesian-frequentist approach for binary endpoints that controls the Type I error rate in the neighborhood of the center of the prior distribution while improving the power. We study the properties of this approach in an extensive simulation study and provide a real-world example.
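
As a small illustration of the robustified MAP idea referenced above, the sketch below updates a two-component Beta mixture prior (informative plus vague) with binomial trial data and shows how the weight of the informative component drops under a prior-data conflict. The prior parameters and data are illustrative; the hybrid Bayesian-frequentist procedure proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.special import betaln

def mixture_posterior(w, a_inf, b_inf, a_vag, b_vag, successes, n):
    """Posterior of a two-component Beta mixture prior after Binomial(n, p) data."""
    failures = n - successes
    # Log marginal likelihood of the data under each prior component
    lm_inf = betaln(a_inf + successes, b_inf + failures) - betaln(a_inf, b_inf)
    lm_vag = betaln(a_vag + successes, b_vag + failures) - betaln(a_vag, b_vag)
    # Updated mixture weight of the informative component
    log_w, log_1w = np.log(w) + lm_inf, np.log(1 - w) + lm_vag
    w_post = 1.0 / (1.0 + np.exp(log_1w - log_w))
    comps = [(a_inf + successes, b_inf + failures), (a_vag + successes, b_vag + failures)]
    post_mean = w_post * comps[0][0] / sum(comps[0]) + (1 - w_post) * comps[1][0] / sum(comps[1])
    return w_post, post_mean

# Robustified historical prior for a control response rate:
# 80% weight on an informative Beta(20, 80) (historical rate ~0.20), 20% on a vague Beta(1, 1)
for successes in (8, 20):   # trial control data with 40 patients: consistent vs conflicting
    w_post, mean = mixture_posterior(0.8, 20, 80, 1, 1, successes, 40)
    print(f"observed {successes}/40: informative weight {w_post:.2f}, posterior mean {mean:.3f}")
```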


Subject(s)
Biometry/methods , Biosimilar Pharmaceuticals/pharmacology , Clinical Trials as Topic , Bayes Theorem , Models, Statistical
19.
Pharm Stat ; 17(2): 126-143, 2018 03.
Article in English | MEDLINE | ID: mdl-29181869

ABSTRACT

Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. The external information on the variance is summarized using the Bayesian meta-analytic-predictive (MAP) approach. To incorporate the external information into the sample size re-estimation, we propose to update the MAP prior with the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach, which uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared with the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial differs from the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should therefore be balanced against the risks.
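
The sketch below illustrates the general idea with a single conjugate inverse-gamma prior on the variance standing in for a MAP prior: the prior is updated with internal pilot data and a posterior point estimate is plugged into the standard two-sample sample size formula. All parameters are illustrative, and the blinded estimation and robustification aspects discussed in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Prior on the outcome variance from historical trials, expressed as an
# inverse-gamma(a, b) distribution (a simplification of a MAP prior)
a_prior, b_prior = 6.0, 20.0               # prior mean = b/(a-1) = 4.0

# Internal pilot data (a simple one-sample sketch; a blinded re-estimation
# would instead work with a blinded variance estimate)
pilot = rng.normal(0.0, np.sqrt(5.0), size=40)   # true variance 5 differs from prior mean 4
n_pilot = len(pilot)
ss = np.sum((pilot - pilot.mean()) ** 2)

# Conjugate update of the inverse-gamma prior with the pilot sum of squares
a_post = a_prior + (n_pilot - 1) / 2
b_post = b_prior + ss / 2
var_post_mean = b_post / (a_post - 1)

def n_per_arm(sigma2, delta, alpha=0.05, power=0.9):
    """Standard two-sample normal-approximation sample size per arm."""
    return int(np.ceil(2 * sigma2 * (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / delta ** 2))

print(f"posterior mean variance: {var_post_mean:.2f}")
print(f"re-estimated n per arm (delta = 1): {n_per_arm(var_post_mean, delta=1.0)}")
print(f"n per arm using the pilot variance only: {n_per_arm(ss / (n_pilot - 1), delta=1.0)}")
```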


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Models, Statistical , Clinical Trials as Topic/methods , Depression/drug therapy , Humans , Hypericum , Pilot Projects , Sample Size
20.
Stat Med ; 37(6): 867-882, 2018 03 15.
Article in English | MEDLINE | ID: mdl-29152777

ABSTRACT

Information from historical trials is important for the design, interim monitoring, analysis, and interpretation of clinical trials. Meta-analytic models can be used to synthesize the evidence from historical data, which are often only available in aggregate form. We consider evidence synthesis methods for trials with recurrent event endpoints, which are common in many therapeutic areas. Such endpoints are typically analyzed by negative binomial regression. However, the individual patient data necessary to fit such a model are usually unavailable for historical trials reported in the medical literature. We describe approaches for back-calculating model parameter estimates and their standard errors from available summary statistics with various techniques, including approximate Bayesian computation. We propose to use a quadratic approximation to the log-likelihood for each historical trial, based on two independent terms for the log mean rate and the log of the dispersion parameter. A Bayesian hierarchical meta-analysis model then provides the posterior predictive distribution for these parameters. Simulations show that this approach with back-calculated parameter estimates yields very similar inference to using parameter estimates from individual patient data as input. We illustrate how to design and analyze a new randomized placebo-controlled exacerbation trial in severe eosinophilic asthma using data from 11 historical trials.
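
One elementary back-calculation step of the kind described above: recover the standard error of a log-scale parameter from a reported point estimate and 95% confidence interval, assuming the interval was constructed symmetrically on the log scale. The numbers are illustrative, and recovering the dispersion parameter, for example via approximate Bayesian computation, is not shown.

```python
import numpy as np
from scipy.stats import norm

def log_scale_se(estimate, lower, upper, level=0.95):
    """Back-calculate the standard error of log(estimate) from a reported CI,
    assuming the CI was constructed as estimate * exp(+/- z * SE)."""
    z = norm.ppf(0.5 + level / 2)
    se = (np.log(upper) - np.log(lower)) / (2 * z)
    # Sanity check: the point estimate should sit near the geometric CI midpoint
    assert abs(np.log(estimate) - 0.5 * (np.log(lower) + np.log(upper))) < 3 * se
    return se

# Example: a historical trial reports an annualized exacerbation rate of
# 1.80 with 95% CI (1.45, 2.23) (illustrative numbers, not from the paper)
se_log_rate = log_scale_se(1.80, 1.45, 2.23)
print(f"log rate {np.log(1.80):.3f} with back-calculated SE {se_log_rate:.3f}")
```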


Subject(s)
Bayes Theorem , Clinical Trials as Topic/methods , Meta-Analysis as Topic , Regression Analysis , Asthma , Computer Simulation , Data Analysis , Data Interpretation, Statistical , Humans , Likelihood Functions , Placebos , Randomized Controlled Trials as Topic , Research Design