Results 1 - 20 of 51
1.
Circulation ; 137(9): 961-972, 2018 02 27.
Article in English | MEDLINE | ID: mdl-29483172

ABSTRACT

This publication describes uniform definitions for cardiovascular and stroke outcomes developed by the Standardized Data Collection for Cardiovascular Trials Initiative and the US Food and Drug Administration (FDA). The FDA established the Standardized Data Collection for Cardiovascular Trials Initiative in 2009 to simplify the design and conduct of clinical trials intended to support marketing applications. The writing committee recognizes that these definitions may be used in other types of clinical trials and clinical care processes where appropriate. Use of these definitions at the FDA has enhanced the ability to aggregate data within and across medical product development programs, conduct meta-analyses to evaluate cardiovascular safety, integrate data from multiple trials, and compare effectiveness of drugs and devices. Further study is needed to determine whether prospective data collection using these common definitions improves the design, conduct, and interpretability of the results of clinical trials.


Subject(s)
Cardiovascular Diseases/diagnosis , Data Collection/standards , Endpoint Determination/standards , Stroke/diagnosis , Clinical Trials as Topic , Humans , United States , United States Food and Drug Administration
2.
J Biopharm Stat ; 29(6): 1116-1129, 2019.
Article in English | MEDLINE | ID: mdl-31035859

ABSTRACT

The sequential parallel comparison design (SPCD) has recently been considered to address the problems of high placebo response and large required sample sizes in psychiatric clinical trials. One feature of this design is that a difference between the placebo group and the drug group may also arise in the variance-covariance structure of the clinical outcome. Given such heterogeneity in the second moment, the treatment effect estimate at the second stage can be biased for the entire randomized patient population, which includes responders. The work presented here examines how the coverage probability of the interval estimate of the treatment effect performs under an unstructured variance-covariance matrix. The interaction between the truncation after the first stage and the heterogeneity of the second moment causes a substantial coverage probability problem. The type I error probability may not be controlled under the weak null hypothesis because of this bias, and the bias can also produce spurious power evaluations under an alternative hypothesis. The coverage probability of the ordinary least squares statistic is shown for several scenarios.
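To make the coverage issue concrete, the following is a minimal simulation sketch, not the article's model: normal outcomes, illustrative covariance matrices and response threshold, and a simple two-sample (ordinary least squares) interval computed among re-randomized placebo non-responders. Because the two arms are given different variance-covariance structures and the stage-2 sample is truncated by the stage-1 response, the nominal 95% interval for the whole-population stage-2 effect under-covers.

```python
import numpy as np

rng = np.random.default_rng(2024)

def one_trial(n=400, threshold=0.0, true_effect=0.0):
    # Stage-1 placebo patients: (stage-1 change, stage-2 change under continued
    # placebo) drawn with one covariance; the drug arm uses a different one.
    cov_placebo = np.array([[1.0, 0.6], [0.6, 1.0]])
    cov_drug = np.array([[1.0, 0.1], [0.1, 2.0]])     # heterogeneous second moment
    y = rng.multivariate_normal([0.0, 0.0], cov_placebo, size=n)
    nonresp = y[y[:, 0] < threshold]                  # placebo non-responders
    m = len(nonresp)
    assign = rng.integers(0, 2, size=m)               # stage-2 re-randomization, 1 = drug
    slope = cov_drug[0, 1] / cov_drug[0, 0]
    resid_sd = np.sqrt(cov_drug[1, 1] - cov_drug[0, 1] ** 2 / cov_drug[0, 0])
    drug_y2 = true_effect + slope * nonresp[:, 0] + rng.normal(0.0, resid_sd, m)
    y2 = np.where(assign == 1, drug_y2, nonresp[:, 1])
    # Two-sample (OLS) estimate of the stage-2 effect and its nominal 95% CI.
    diff = y2[assign == 1].mean() - y2[assign == 0].mean()
    se = np.sqrt(y2[assign == 1].var(ddof=1) / (assign == 1).sum()
                 + y2[assign == 0].var(ddof=1) / (assign == 0).sum())
    return diff - 1.96 * se <= true_effect <= diff + 1.96 * se

coverage = np.mean([one_trial() for _ in range(2000)])
print(f"empirical coverage of the nominal 95% CI: {coverage:.3f}")  # well below 0.95 here
```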


Subject(s)
Computer Simulation , Mental Disorders/drug therapy , Models, Statistical , Randomized Controlled Trials as Topic/statistics & numerical data , Humans , Placebo Effect , Probability , Random Allocation , Research Design , Sample Size , Treatment Outcome
3.
J Biopharm Stat ; 29(6): 1134-1136, 2019.
Article in English | MEDLINE | ID: mdl-31032707

ABSTRACT

In this rejoinder the authors elaborate on two challenging issues. First, if placebo non-responders are selected simply by their response meeting a threshold, this selection may involve misclassification error, and consequently the treatment effect estimate may be biased, regardless of whether the estimand at the second stage is the treatment effect in the entire population or in placebo non-responders. Second, the weak null hypothesis considered in our article, Statistical Inference Problems in Sequential Parallel Comparison Design (2019), is that the expected treatment effects in placebo non-responders and in the entire set of patients entering the trial are both zero, in contrast to the strong null hypothesis that the statistical distribution of the response variable is equal across the compared treatments. The impact of violating the assumption of equal moments other than the mean on the statistical operating characteristics of treatment effect estimation and testing can be substantial. As an example, the ordinary least squares based test can detect a treatment difference even when the expected treatment effects in placebo non-responders and in the entire population are both zero.


Subject(s)
Research Design , Humans , Least-Squares Analysis
4.
J Biopharm Stat ; 29(4): 722-727, 2019.
Article in English | MEDLINE | ID: mdl-31258011

ABSTRACT

While 2-in-1 designs provide the flexibility to make a clinical trial either an information-generating Phase 2 trial or a full-scale confirmatory Phase 3 trial, flexible sample size designs fit naturally into the 2-in-1 framework. This study shows that the CHW design can be blended into a 2-in-1 design to improve its adaptive performance. Commenting on the usual 2-in-1 design, we demonstrate that the CHW design can achieve the goal of a 2-in-1 design with satisfactory statistical power and an efficient average sample size over a targeted range of the treatment effect.
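As context, the CHW (Cui-Hung-Wang, 1999) statistic combines stage-wise test statistics with weights fixed at the originally planned information fraction, which is what preserves the type I error rate when the stage-2 sample size is modified at the interim. A minimal sketch with illustrative numbers, not the article's specific blended design:

```python
import numpy as np
from scipy import stats

def chw_test(z1, z2, planned_info_frac, alpha=0.025):
    """Combine independent stage-wise Z statistics with prespecified CHW weights.

    z1, z2            : standardized treatment-effect statistics from the data
                        before and after the interim adaptation.
    planned_info_frac : information fraction t originally planned for the interim;
                        the weights sqrt(t) and sqrt(1 - t) stay fixed even if the
                        stage-2 sample size is re-estimated.
    """
    t = planned_info_frac
    z_chw = np.sqrt(t) * z1 + np.sqrt(1.0 - t) * z2
    p_one_sided = stats.norm.sf(z_chw)
    return z_chw, p_one_sided, p_one_sided <= alpha

# Illustrative use: interim at the planned halfway point, stage 2 enlarged.
z1 = 1.30   # observed stage-1 Z
z2 = 1.75   # stage-2 Z computed from the (possibly enlarged) second cohort
print(chw_test(z1, z2, planned_info_frac=0.5))
```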


Subject(s)
Research Design , Sample Size
5.
J Biopharm Stat ; 26(1): 37-43, 2016.
Article in English | MEDLINE | ID: mdl-26366624

ABSTRACT

Several challenging statistical problems have been identified in the regulatory review of large cardiovascular (CV) clinical outcome trials and central nervous system (CNS) trials. The problems can be common or distinct owing to disease characteristics and differences in trial design elements such as endpoints, trial duration, and trial size. In schizophrenia trials, extensive missing data is a major problem. In Alzheimer's disease trials, the endpoints for assessing symptoms and the endpoints for assessing disease progression are essentially the same, which makes it difficult to construct a good trial design for evaluating a test drug's ability to slow disease progression. In CV trials, reliance on a composite endpoint with a low event rate makes the trial size so large that it is infeasible to study the multiple doses necessary to find the right dose for study patients. These are just a few typical problems. In the past decade, adaptive designs have increasingly been used in these disease areas, and some challenges have arisen with that use. Based on our review experience, group sequential designs (GSDs) have yielded many success stories in CV trials and are also increasingly used in developing treatments for CNS diseases. There is also a growing trend toward more advanced unblinded adaptive designs for producing efficacy evidence. Many statistical challenges with these kinds of adaptive designs have been identified through our experience reviewing regulatory applications and are shared in this article.


Subject(s)
Cardiovascular Agents/therapeutic use , Cardiovascular Diseases/drug therapy , Central Nervous System Agents/therapeutic use , Central Nervous System Diseases/drug therapy , Cardiovascular Agents/adverse effects , Cardiovascular Agents/pharmacology , Central Nervous System Agents/adverse effects , Central Nervous System Agents/pharmacology , Clinical Trials as Topic , Humans , Research Design , Treatment Outcome
6.
Stat Med ; 34(26): 3461-80, 2015 Nov 20.
Article in English | MEDLINE | ID: mdl-26112381

ABSTRACT

An invited panel session was conducted at the 2012 Joint Statistical Meetings in San Diego, California, USA, to stimulate discussion of multiplicity issues in confirmatory clinical trials for drug development. A total of 11 expert panel members were invited and 9 participated. Prior to the session, a case study was provided to the panel members to facilitate the discussion, focusing on the key components of the study design and multiplicity. The Phase 3 development program for this new experimental treatment was based on a single randomized controlled trial. Each panelist was asked to clarify whether he or she was responding as a pharmaceutical drug sponsor, an academic, or a health regulatory scientist.


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data , Data Interpretation, Statistical , Drug Discovery/statistics & numerical data , Endpoint Determination/methods , Research Design/statistics & numerical data , Respiratory Distress Syndrome, Newborn/drug therapy , Congresses as Topic , Humans , Infant, Newborn , Treatment Outcome
7.
J Biopharm Stat ; 24(5): 1059-72, 2014.
Article in English | MEDLINE | ID: mdl-24915027

ABSTRACT

Adaptive designs have generated a great deal of attention in clinical trial communities. The literature contains many statistical methods for dealing with the added statistical uncertainties that accompany the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs, which allow modification of the sample size or related statistical information, and adaptive selection designs, which allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have noted, but there are arguably large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing correlated clinical endpoints may increase statistical uncertainty in terms of the type I error probability, and, most importantly, the increased uncertainty may be impossible to assess.


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Models, Statistical , Research Design , Clinical Trials as Topic/methods , Data Interpretation, Statistical , Humans , Observer Variation , Sample Size , Treatment Outcome
8.
J Biopharm Stat ; 24(1): 19-41, 2014.
Article in English | MEDLINE | ID: mdl-24392976

ABSTRACT

This regulatory research provides possible approaches for improving conventional subgroup analysis in a fixed design setting. The interaction-to-overall effects ratio is recommended at the planning stage for potential predictors whose prevalence is at most 50%, and its observed counterpart is recommended at the analysis stage for proper subgroup interpretation when the sample size is planned only to target the overall effect size. We illustrate with regulatory examples and underscore the importance of striving for a balance between safety and efficacy when considering a regulatory recommendation of a label restricted to a subgroup. A set of decision rules gives guidance for rigorous subgroup-specific conclusions.
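As a purely illustrative aid, and assuming one plausible formulation of the ratio (the subgroup-minus-complement interaction contrast divided by the prevalence-weighted overall effect; the article's exact definition may differ), a small calculation looks like this:

```python
def interaction_to_overall_ratio(effect_pos, effect_neg, prevalence_pos):
    """effect_pos/effect_neg: treatment effects in the marker-positive subgroup
    and its complement; prevalence_pos: prevalence of the marker-positive subgroup."""
    overall = prevalence_pos * effect_pos + (1.0 - prevalence_pos) * effect_neg
    interaction = effect_pos - effect_neg
    return interaction / overall

# Example: subgroup effect 6.0, complement effect 3.0, 30% prevalence.
print(interaction_to_overall_ratio(6.0, 3.0, 0.30))   # -> 0.769...
```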


Subject(s)
Research Design/legislation & jurisprudence , Biomarkers , Data Interpretation, Statistical , Forecasting , Humans , Patient Safety , Prevalence , Sample Size
9.
Biom J ; 55(3): 420-9, 2013 May.
Article in English | MEDLINE | ID: mdl-23620458

ABSTRACT

Multiple comparisons have drawn a great deal of attention in the evaluation of statistical evidence in clinical trials for regulatory applications. As clinical trial methodology becomes increasingly complex in order to account properly for many practical factors, the multiple testing paradigm widely employed for regulatory applications may not suffice to interpret the results of an individual trial or of multiple trials. In a large outcome trial, the increasing need to study more than one dose complicates proper application of multiple comparison procedures. Additional challenges surface when a special endpoint, such as mortality, may need to be tested with multiple clinical trials combined, especially under group sequential designs. Another interesting question is how to study mortality or morbidity endpoints together with symptomatic endpoints in an efficient way, given that the former are often studied in only a single trial whereas the latter are usually studied in at least two independent trials. This article discusses the insufficiency of a widely used paradigm that applies only per-trial multiple comparison procedures and expands the utility of those procedures to such complex trial designs. A number of viable expanded strategies are stipulated.


Subject(s)
Clinical Trials as Topic/methods , Data Interpretation, Statistical , Clinical Trials as Topic/legislation & jurisprudence , Dose-Response Relationship, Drug , Endpoint Determination/methods , Humans , Research Design
10.
Biom J ; 55(3): 275-93, 2013 May.
Article in English | MEDLINE | ID: mdl-23553537

ABSTRACT

Motivated by a complex study design aiming at a definitive evidential setting, a panel forum among academic, industry, and US regulatory statistical scientists was held at the 7th International Conference on Multiple Comparison Procedures (MCP) to comment on the multiplicity problem. It is well accepted that studywise or familywise type I error rate control is the norm for confirmatory trials, but the criteria beyond a single confirmatory trial remain uncharted territory. The case example describes a Phase III program consisting of two placebo-controlled multiregional clinical trials, identical in design, intended to support registration for the treatment of a chronic lung condition. The case presents a sophisticated multiplicity problem on several levels: four primary endpoints, two doses, two studies, two regions with different regulatory requirements, and one major protocol amendment to the original statistical analysis plan, which the panelists had a chance to study before the forum took place. There were differences in professional perspective among the panelists, laid out by section. Nonetheless, irrespective of the amendment, it may be arguable whether the two studies are poolable for the analysis of the two prespecified primary endpoints. How should the study findings be reported in a scientific journal if one health authority approves the product while the other does not? It is tempting to address Phase III program-level multiplicity, motivated by the increasing complexity of the partial hypotheses posed across studies. New thinking about MCP procedures beyond the individual-study level (studywise or familywise, as predefined) and across the multiple-study level (experimentwise and sometimes programwise) will become an important research problem, one expected to face scientific and regulatory challenges.


Subject(s)
Clinical Trials, Phase III as Topic/methods , Data Interpretation, Statistical , Multicenter Studies as Topic/methods , Randomized Controlled Trials as Topic/methods , Humans , Research Design
11.
Stat Med ; 31(25): 3011-23, 2012 Nov 10.
Article in English | MEDLINE | ID: mdl-22927234

ABSTRACT

In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived as more efficient and more cost-effective than the fixed design paradigm for drug development. Much of the interest in adaptive designs is in two-stage studies, where stage 1 is exploratory and stage 2 depends on the stage 1 results, but where the data from both stages are combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including which goals are exploratory and which are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize confusion between the preliminary learning phases of drug development, which are inherently and substantially uncertain, and the definitive inference-based phases. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory setting, we underscore the value of adaptive designs used in exploratory trials to improve the planning of subsequent A&WC trials. One type of adaptation that is receiving attention is re-estimation of the sample size during the course of the trial; we refer to this type of adaptation as an adaptive statistical information design. A case example illustrates how challenging it is to plan a confirmatory adaptive statistical information design. We highlight the substantial risk of planning the sample size for confirmatory trials when the available information is very uninformative, and we stipulate the advantages of adaptive statistical information designs for planning exploratory trials. Practical experiences and strategies, as lessons learned from more recent adaptive design proposals, are discussed to pinpoint the improved utility of adaptive design clinical trials and their potential to increase the chance of successful drug development.
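One common ingredient of such sample size re-estimation rules is conditional power at the interim look. The sketch below uses the standard Brownian-motion (B-value) approximation with illustrative inputs; it is not the specific method discussed in the article.

```python
import numpy as np
from scipy import stats

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """Conditional power at an interim look, under a Brownian-motion
    approximation for the accumulating test statistic.

    z_interim : observed Z statistic at the interim analysis
    info_frac : fraction of the planned statistical information accrued (0 < t < 1)
    drift     : assumed expected value of the final Z statistic, computed from a
                hypothesized or interim-observed effect size
    """
    b = z_interim * np.sqrt(info_frac)          # B-value at the interim
    z_crit = stats.norm.isf(alpha)              # final one-sided critical value
    return stats.norm.sf((z_crit - b - drift * (1.0 - info_frac))
                         / np.sqrt(1.0 - info_frac))

# Illustrative numbers: halfway through the planned information with a modest
# interim Z, under the originally assumed drift of 2.8 (about 80% power).
print(conditional_power(z_interim=1.0, info_frac=0.5, drift=2.8))
```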


Subject(s)
Controlled Clinical Trials as Topic/statistics & numerical data , Drugs, Investigational , Models, Statistical , Research Design , Clinical Trials, Phase III as Topic/statistics & numerical data , Sample Size
12.
J Biopharm Stat ; 22(5): 1037-50, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22946948

ABSTRACT

To speed up drug development and allow faster access to medicines for patients globally, it may be desirable to conduct multiregional trials that enroll subjects from many countries around the world under the same protocol. Several statistical methods have been proposed for the design and evaluation of multiregional trials. However, most recent approaches to sample size determination in multiregional trials assume a common treatment effect of the primary endpoint across regions. In practice, differences in treatment effect may be expected because of regional differences (e.g., ethnic differences). In this article, a random effects model allowing heterogeneous treatment effects across regions is proposed for the design and evaluation of multiregional trials. We also consider determination of the number of subjects in a specific region needed to establish consistency of the treatment effect between that region and the entire group.
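A hedged sketch of the random-effects idea: each region's estimated effect is treated as the overall effect plus a region-level random deviation, and the between-region variance is estimated from regional summaries. The DerSimonian-Laird moment estimator below is a standard stand-in; the article's model and consistency criterion may differ, and the numbers are illustrative.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects combination of regional effect estimates and standard errors."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-region variance
    w_re = 1.0 / (ses**2 + tau2)
    overall = np.sum(w_re * effects) / np.sum(w_re)
    return overall, np.sqrt(1.0 / np.sum(w_re)), tau2

# Hypothetical regional effect estimates (e.g., mean differences) and standard errors.
effects = [2.1, 1.4, 0.6, 1.9]
ses = [0.5, 0.6, 0.7, 0.4]
overall, se, tau2 = dersimonian_laird(effects, ses)
print(f"overall effect {overall:.2f} (SE {se:.2f}), between-region variance {tau2:.2f}")
```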


Subject(s)
Multicenter Studies as Topic/methods , Research Design/statistics & numerical data , Algorithms , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Humans , Models, Statistical , Multicenter Studies as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Treatment Outcome
13.
Stat Med ; 30(13): 1519-27, 2011 Jun 15.
Article in English | MEDLINE | ID: mdl-21344470

ABSTRACT

Adaptive designs, or flexible designs in a broader sense, have increasingly been considered in planning pivotal registration clinical trials. Sample size reassessment designs and adaptive selection designs are two such designs that appear in regulatory applications. At the design stage, consideration of sample size reassessment at an interim time point should lead to extensive discussion about how to size the trial appropriately. Careful attention also needs to be paid to how the size of the trial is affected by the requirement that its final p-value meet the specific threshold of a clinically meaningful effect. These issues are not straightforward and are discussed in this work. In a trial design that allows selection between a pre-specified patient subgroup and the initially planned overall patient population based on the accumulating data, there is an issue of what the 'overall' population means. In addition, it is critically important to know how such selection influences the validity of statistical inferences about the potentially modified overall population. This work presents the biases that may be incurred under adaptive patient selection designs.


Subject(s)
Clinical Trials as Topic/methods , Clinical Trials as Topic/standards , Antihypertensive Agents/therapeutic use , Bias , Blood Pressure/drug effects , Clinical Trials as Topic/legislation & jurisprudence , Humans , Hypertension/drug therapy , Patient Selection , Sample Size , United States
15.
J Biopharm Stat ; 21(4): 846-59, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21516573

ABSTRACT

A clinical research program for drug development often consists of a sequence of clinical trials that may begin with uncontrolled and nonrandomized trials, followed by randomized trials or randomized controlled trials. Adaptive designs are not infrequently proposed for use. In the regulatory setting, the success of a drug development program can be defined as approval of the experimental treatment at a specific dose level, including regimen and frequency, based on replicated evidence from at least two confirmatory trials. In the early stages of clinical research, multiplicity issues are very broad. What is the maximum tolerated dose in an adaptive dose-escalation trial? What dose range should be considered in an adaptive dose-ranging trial? What is the minimum effective dose in an adaptive dose-response study, given the tolerability and toxicity observable in short-term or premarketing trials? Is establishing the dose-response relationship more important, or is the ability to select a superior treatment with high probability more important? In the later stages of clinical research, multiplicity problems can be formulated with better focus, depending on whether the study is for exploration, to estimate or select design elements, or for labeling consideration. What is the study objective of an early-phase versus a later-phase adaptive clinical trial? How many doses are to be studied in an early exploratory adaptive trial versus in a confirmatory adaptive trial? Is the intended patient population well defined, or is the applicable patient population to be adaptively selected during the trial because of potential patient and/or disease heterogeneity? Is the primary efficacy endpoint well defined, or is it still under discussion, providing room for adaptation? What are the potential treatment indications that may adaptively lead to an intent-to-treat patient population and the primary efficacy endpoint? In this work we lay out the multiplicity issues with adaptive designs encountered in regulatory applications. For confirmatory adaptive design clinical trials, controlling the studywise type I and type II error rates is of paramount importance. For exploratory adaptive trials, we define the probability of correct selection of design features (e.g., dose or effect size) and the probability of a correct decision for drug development. We assert that maximizing these probabilities is critical to determining whether the drug development program continues and how to plan the confirmatory trials if it does.
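For the exploratory-trial metric mentioned above, a probability of correct dose selection can be approximated by simulation. The sketch below assumes a simple setting (normal outcomes, equal allocation, "correct" meaning the dose with the largest observed mean is also the dose with the largest true mean); it is illustrative rather than the article's formulation.

```python
import numpy as np

rng = np.random.default_rng(11)

def prob_correct_selection(true_means, sd=1.0, n_per_arm=50, n_sim=5000):
    """Monte Carlo estimate of the probability that the arm with the largest
    observed sample mean is the truly best arm."""
    true_means = np.asarray(true_means, float)
    best = np.argmax(true_means)
    hits = 0
    for _ in range(n_sim):
        # draw the vector of arm-level sample means in one shot
        observed = rng.normal(true_means, sd / np.sqrt(n_per_arm))
        hits += int(np.argmax(observed) == best)
    return hits / n_sim

# Hypothetical dose-response: placebo and three doses, the top dose truly best.
print(prob_correct_selection([0.0, 0.2, 0.35, 0.45], sd=1.0, n_per_arm=50))
```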


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Drug Discovery/statistics & numerical data , Research Design/statistics & numerical data , Endpoint Determination , Probability , Sample Size
16.
Ther Innov Regul Sci ; 55(1): 197-211, 2021 01.
Article in English | MEDLINE | ID: mdl-32870460

ABSTRACT

BACKGROUND: Uncertain ascertainment of events in clinical trials has been noted for decades. To correct possible bias, Clinical Endpoint Committees (CECs) have been employed as a critical element of trials to ensure consistent and high-quality endpoint evaluation, especially for cardiovascular endpoints. However, the efficiency and usefulness of adjudication have been debated. METHODS: The multiple imputation (MI) method was proposed to incorporate endpoint event uncertainty. In a simulation conducted to explain this methodology, the dichotomous outcome was imputed each time with subject-specific event probabilities. As the final step, the desired analysis was conducted based on all imputed data. This proposed method was further applied to real trial data from PARADIGM-HF. RESULTS: Compared with the conventional Cox model with adjudicated events only, the Cox MI method had higher power, even with a small number of uncertain events. It yielded more robust inferences regarding treatment effects and required a smaller sample size to achieve the same power. CONCLUSIONS: Instead of using dichotomous endpoint data, the MI method enables incorporation of event uncertainty and eliminates the need for categorizing endpoint events. In future trials, assigning a probability of event occurrence for each event may be preferable to a CEC assigning a dichotomous outcome. Considerable resources could be saved if endpoint events can be identified more simply and in a manner that maintains study power.
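A minimal sketch of the imputation-and-combine workflow described above, using a dichotomous endpoint and a risk difference so the example stays self-contained (the article applies the idea to time-to-event data with a Cox model). Per-subject event probabilities and Rubin's rules are the essential pieces; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def rubin_combine(estimates, variances):
    """Rubin's rules: pooled estimate and standard error across imputations."""
    est = np.mean(estimates)
    within = np.mean(variances)
    between = np.var(estimates, ddof=1)
    total_var = within + (1 + 1 / len(estimates)) * between
    return est, np.sqrt(total_var)

def mi_risk_difference(event_prob, treatment, n_imputations=50):
    """event_prob: per-subject probability that the endpoint event occurred
    (1.0 / 0.0 for clearly adjudicated cases, in between for uncertain ones)."""
    ests, variances = [], []
    for _ in range(n_imputations):
        y = rng.binomial(1, event_prob)                       # impute uncertain events
        p1, p0 = y[treatment == 1].mean(), y[treatment == 0].mean()
        n1, n0 = (treatment == 1).sum(), (treatment == 0).sum()
        ests.append(p1 - p0)
        variances.append(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return rubin_combine(ests, variances)

# Illustrative data: 200 subjects; about 20% of cases flagged uncertain (p = 0.6).
n = 200
treatment = rng.integers(0, 2, n)
event_prob = np.where(rng.random(n) < 0.8,
                      rng.binomial(1, 0.3, n).astype(float), 0.6)
est, se = mi_risk_difference(event_prob, treatment)
print(f"MI risk difference: {est:.3f} (SE {se:.3f})")
```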


Subject(s)
Research Design , Clinical Trials as Topic , Uncertainty
17.
Contemp Clin Trials ; 101: 106244, 2021 02.
Article in English | MEDLINE | ID: mdl-33309946

ABSTRACT

We investigate the selection of critical boundary functions for testing the hypotheses of two time-to-event outcomes, treated either as two primary endpoints or as a primary and a secondary endpoint, in group-sequential clinical trials where (1) the effect sizes of the endpoints are unequal, or (2) one endpoint is for short-term evaluation and the other for long-term evaluation. Bonferroni-Holm and fixed-sequence procedures are considered. We assess the effects of the magnitudes of the hazard ratios and of the correlation between the endpoints on statistical power and provide guidance for these considerations.
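Outside the group-sequential machinery studied in the article, the two multiplicity procedures it names reduce to simple rules for two p-values; the sketch below shows them at a one-sided 0.025 level with illustrative p-values. Bonferroni-Holm tests the smaller p-value at alpha/2 and, if it rejects, the larger at alpha; the fixed-sequence procedure spends the full alpha on a prespecified first endpoint and tests the second only after the first rejects.

```python
def bonferroni_holm_two(p1, p2, alpha=0.025):
    p_small, p_large = sorted((p1, p2))
    reject_small = p_small <= alpha / 2
    reject_large = reject_small and p_large <= alpha
    return {"smaller p rejected": reject_small, "larger p rejected": reject_large}

def fixed_sequence_two(p_first, p_second, alpha=0.025):
    reject_first = p_first <= alpha
    reject_second = reject_first and p_second <= alpha
    return {"first endpoint rejected": reject_first, "second endpoint rejected": reject_second}

print(bonferroni_holm_two(0.011, 0.030))   # only the smaller p-value rejects
print(fixed_sequence_two(0.020, 0.030))    # gatekeeping: the second endpoint fails at 0.025
```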

19.
Pharm Stat ; 9(3): 173-8, 2010.
Article in English | MEDLINE | ID: mdl-20872619

ABSTRACT

Clinical trial strategy, particularly in the development of pharmaceutical products, has recently expanded to a global level, in the sense that multiple geographical regions participate in the trial simultaneously under the same study protocol. The possible benefits of this strategy are obvious, at least from cost and efficiency considerations. The challenges are many, ranging from trial and data quality assurance to statistical methods for the design and analysis of such trials. In many regulatory submissions, the presence of regional differences in the estimated treatment effect, whether differences in magnitude only or in direction, often creates great difficulty in interpreting the global trial results, particularly for acceptance by local regulatory authorities. This article presents a number of useful statistical analysis tools for exploring regional differences and a method that may be worth considering when designing a multi-regional clinical trial.


Subject(s)
Internationality , Multicenter Studies as Topic , Randomized Controlled Trials as Topic , Drug Approval/statistics & numerical data , Factor Analysis, Statistical , Geography , Guidelines as Topic , Humans , Multicenter Studies as Topic/economics , Multicenter Studies as Topic/standards , Multicenter Studies as Topic/statistics & numerical data , Small-Area Analysis
20.
Pharm Stat ; 9(3): 217-29, 2010.
Article in English | MEDLINE | ID: mdl-20872622

ABSTRACT

In recent years, we have seen an increasing trend of foreign data being included in clinical trial data submitted in new drug applications (NDAs) to the US Food and Drug Administration (FDA). To understand the design and analysis characteristics of such data, we studied schizophrenia multi-regional clinical trials (MRCTs). The schizophrenia data set consisted of 12,585 patients from 33 clinical trials, with 63.8% of patients from North America, the largest region. The data set comprised 10 schizophrenia drug programs supporting NDAs submitted to the FDA from December 1993 to December 2005. Two main objectives were pursued. First, we investigated study design issues, including potential heterogeneity of treatment effect (via meta-analysis) and the placebo response pattern over time. Second, we performed empirical modeling in two ways, supervised and unsupervised, to explain the potential impact of baseline covariates on treatment effect in MRCTs. Based on our analysis, placebo response appeared to increase over time, attributable primarily to the US region. On average, the observed treatment effect in the US was generally smaller than in the non-US region. Both supervised and unsupervised empirical modeling selected baseline Positive and Negative Syndrome Scale total score as one of the most important covariates explaining treatment effect. Region also played a role in explaining potential treatment effect heterogeneity. When baseline body weight was considered as a covariate in an empirical model, it alone did not appear to be an important factor in explaining the regional difference.


Subject(s)
Decision Support Techniques , Internationality , Multicenter Studies as Topic , Randomized Controlled Trials as Topic , Research Design , Adult , Antipsychotic Agents/therapeutic use , Drug Approval/statistics & numerical data , Drugs, Investigational , Female , Geography , Humans , Male , Middle Aged , Models, Statistical , Multicenter Studies as Topic/methods , Multicenter Studies as Topic/statistics & numerical data , North America , Psychiatric Status Rating Scales/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Schizophrenia/drug therapy , Treatment Outcome , Young Adult