1.
BMC Med Res Methodol ; 23(1): 2, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36597042

ABSTRACT

BACKGROUND: Due to the high cost and high failure rate of Phase III trials, where a classical group sequential design (GSD) is usually used, seamless Phase II/III designs have become increasingly popular as a way to improve trial efficiency. A potential attraction of a Phase II/III design is that it allows a randomized proof-of-concept stage before committing to the full cost of a Phase III trial. Population selection during the trial allows a trial to adapt and focus investment where it is most likely to provide patient benefit. Previous methods have been developed for this problem when there is a single primary endpoint and two possible populations. METHODS: To find the population that potentially benefits with one or two primary endpoints (e.g., progression-free survival (PFS) and overall survival (OS)), we propose a gated group sequential design for a seamless Phase II/III trial with adaptive population selection. RESULTS: The investigated design controls the familywise error rate and allows multiple interim analyses to enable early stopping for efficacy or futility. Simulations and an illustrative example suggest that the proposed gated group sequential design has more power and requires less time and fewer resources than either the group sequential design or the adaptive design. CONCLUSIONS: By combining the group sequential design and the adaptive design, the gated group sequential design achieves more power and higher efficiency while controlling the familywise error rate. It has the potential to reduce drug development costs and more quickly fulfill unmet medical needs.
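The multiple interim looks described above require an alpha-spending rule so that the familywise error rate stays controlled. As a minimal sketch (this is the generic Lan-DeMets O'Brien-Fleming-type spending function, not the paper's gated design; the look fractions are made up for illustration):

```python
from math import sqrt
from scipy.stats import norm

def obf_alpha_spent(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function: cumulative
    alpha spent by information fraction t (0 < t <= 1)."""
    z = norm.ppf(1 - alpha / 2)
    return 2 * (1 - norm.cdf(z / sqrt(t)))

# cumulative alpha spent at three interim looks and the final analysis
fractions = [0.25, 0.5, 0.75, 1.0]
spent = [obf_alpha_spent(t) for t in fractions]
```

The defining property is that almost no alpha is spent at early looks (making early stopping for efficacy demanding) and the full level is recovered at the final analysis.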


Subject(s)
Research Design , Humans
2.
Int J Cancer ; 151(9): 1602-1610, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35802470

ABSTRACT

Identifying the maximum tolerated dose (MTD) and recommending a Phase II dose for an investigational treatment are crucial steps in cancer drug development. A suboptimal dose often leads to a failed late-stage trial, while an overly toxic dose harms patients. There is a very rich literature on trial designs for dose-finding oncology clinical trials. We propose a novel hybrid design that maximizes the merits and minimizes the limitations of existing designs. It builds on two established dose-finding approaches: a model-assisted design (the modified toxicity probability interval, mTPI) and a dose-toxicity model-based design such as logistic regression, incorporating optimal properties from both. The performance of the hybrid design was tested on a real trial example and through simulation scenarios. The hybrid design controlled overdosing toxicity well and led to a recommended dose closer to the true MTD because of its ability to calibrate for an intermediate dose. The simulation results demonstrated that the proposed hybrid design achieves excellent and robust operating characteristics compared with other existing designs and can be an effective model for determining the MTD and the recommended Phase II dose in oncology dose-finding trials. For practical feasibility, a freely available R Shiny tool was developed to guide clinicians through every step of the dose-finding process.
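The model-assisted half of the hybrid, the modified toxicity probability interval, makes dose decisions by comparing the posterior mass of three toxicity intervals. A minimal sketch of that standard mTPI rule (target rate, interval widths, and the flat Beta(1,1) prior are illustrative defaults, not the paper's hybrid calibration):

```python
from scipy.stats import beta

def mtpi_decision(n_tox, n_treated, p_target=0.3, eps1=0.05, eps2=0.05,
                  prior=(1.0, 1.0)):
    """mTPI-style dose decision: compare the unit probability mass (UPM)
    of the underdosing, proper-dosing, and overdosing intervals under a
    Beta posterior for the toxicity probability."""
    a, b = prior[0] + n_tox, prior[1] + n_treated - n_tox
    post = beta(a, b)
    intervals = {
        "escalate":    (0.0, p_target - eps1),               # underdosing
        "stay":        (p_target - eps1, p_target + eps2),   # proper dosing
        "de-escalate": (p_target + eps2, 1.0),               # overdosing
    }
    upm = {act: (post.cdf(hi) - post.cdf(lo)) / (hi - lo)
           for act, (lo, hi) in intervals.items()}
    return max(upm, key=upm.get)
```

For example, 0 toxicities in 3 patients favors escalation, while 3 of 3 forces de-escalation.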


Subject(s)
Antineoplastic Agents , Neoplasms , Antineoplastic Agents/therapeutic use , Bayes Theorem , Computer Simulation , Dose-Response Relationship, Drug , Humans , Maximum Tolerated Dose , Medical Oncology/methods , Neoplasms/chemically induced , Neoplasms/drug therapy , Research Design
3.
Lifetime Data Anal ; 28(3): 356-379, 2022 07.
Article in English | MEDLINE | ID: mdl-35486260

ABSTRACT

In oncology studies, it is important to understand and characterize disease heterogeneity among patients so that patients can be classified into different risk groups and high-risk patients can be identified at the right time. This information can then be used to identify a more homogeneous patient population for developing precision medicine. In this paper, we propose a mixture survival tree approach for direct risk classification. We assume that patients can be classified into a pre-specified number of risk groups, each with a distinct survival profile. Our proposed tree-based methods estimate latent group membership using an EM algorithm, with the observed-data log-likelihood serving as the splitting criterion in recursive partitioning. The finite-sample performance is evaluated by extensive simulation studies, and the proposed method is illustrated by a case study in breast cancer.
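The core idea of estimating latent risk-group membership by EM can be sketched with a deliberately simplified toy: a two-component exponential mixture fit to uncensored survival times (no tree, no censoring, and exponential components are all simplifying assumptions, not the paper's method):

```python
import numpy as np

def em_exp_mixture(times, n_iter=200):
    """EM for a two-component exponential mixture of survival times:
    a toy version of estimating latent risk-group membership."""
    t = np.asarray(times, dtype=float)
    # crude starting values: fast rate from short times, slow from long
    pi = 0.5
    r1, r2 = 1.0 / np.quantile(t, 0.25), 1.0 / np.quantile(t, 0.75)
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each subject
        d1 = pi * r1 * np.exp(-r1 * t)
        d2 = (1 - pi) * r2 * np.exp(-r2 * t)
        g = d1 / (d1 + d2)
        # M-step: update mixing weight and component rates
        pi = g.mean()
        r1 = g.sum() / (g * t).sum()
        r2 = (1 - g).sum() / ((1 - g) * t).sum()
    return pi, r1, r2
```

The responsibilities `g` play the role of soft risk-group labels; the tree method layers covariate-based recursive partitioning on top of this kind of latent-class estimation.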


Subject(s)
Algorithms , Neoplasms , Computer Simulation , Humans , Likelihood Functions , Research Design
4.
Stat Med ; 40(13): 3181-3195, 2021 06 15.
Article in English | MEDLINE | ID: mdl-33819928

ABSTRACT

In cancer studies, it is important to understand disease heterogeneity among patients so that precision medicine can target high-risk patients in particular at the right time. Many feature variables, such as demographic variables and biomarkers, combined with a patient's survival outcome, can be used to infer such latent heterogeneity. In this work, we propose a mixture model for each patient's latent survival pattern, where the mixing probabilities for the latent groups are modeled through a multinomial distribution. The Bayesian information criterion is used to select the number of latent groups. Furthermore, we incorporate variable selection with the adaptive lasso into inference so that only a few feature variables are selected to characterize the latent heterogeneity. We show that our adaptive lasso estimator has oracle properties when the number of parameters diverges with the sample size. The finite-sample performance is evaluated by a simulation study, and the proposed method is illustrated on two datasets.


Subject(s)
Precision Medicine , Bayes Theorem , Biomarkers , Computer Simulation , Humans , Probability
5.
Contemp Clin Trials ; 99: 106179, 2020 12.
Article in English | MEDLINE | ID: mdl-33086159

ABSTRACT

The phase III, randomized, active-controlled, multicenter, open-label KEYNOTE-183 study (NCT02576977), evaluating pomalidomide and low-dose dexamethasone (standard-of-care [SOC]) with or without pembrolizumab in patients with refractory or relapsed and refractory multiple myeloma (rrMM), was placed on full clinical hold by the US FDA on July 03, 2017 due to an imbalance in the number of deaths between arms. Clinically led subgroup analyses are typically used to shed light on such findings; however, this approach is not always successful. We propose a systematic approach using artificial intelligence tools to identify risk factors and subgroups contributing either to overall death (prognostic) or to the excess deaths observed in the pembrolizumab plus SOC arm (predictive) of the KEYNOTE-183 study. With a data cutoff date of June 02, 2017, we identified plasmacytoma as a prognostic factor and ECOG performance status as a predictive factor of death. In addition, a qualitative interaction was observed between ECOG performance status and treatment arm. The subsequent subgroup analysis based on ECOG performance status confirmed that more deaths were associated with pembrolizumab plus SOC versus SOC alone in patients with ECOG performance status 1.


Subject(s)
Multiple Myeloma , Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Artificial Intelligence , Dexamethasone/therapeutic use , Humans , Multiple Myeloma/drug therapy , Prognosis
6.
BMC Med Res Methodol ; 20(1): 218, 2020 08 27.
Article in English | MEDLINE | ID: mdl-32854619

ABSTRACT

BACKGROUND: The data from immuno-oncology (IO) therapy trials often show delayed effects, cure rates, crossing hazards, or some mixture of these phenomena. The proportional hazards (PH) assumption is therefore often violated, and the commonly used log-rank test can be severely underpowered. In these trials, the conventional hazard ratio may not be a good estimand for describing the treatment effect because it lacks an easily understandable interpretation. To overcome this challenge, restricted mean survival time (RMST) has been strongly recommended for survival analysis in the clinical literature, both because it does not rely on the PH assumption and because it has a more clinically meaningful interpretation. The RMST also aligns well with the estimand framework recommended in ICH E9 (R1) and maintains test/estimation coherency. Currently, the Kaplan-Meier (KM) curve is commonly applied in RMST-related analyses. However, drawbacks of the KM approach, such as the inability to extrapolate beyond the follow-up time and the large variance at time points with few events, can limit the utility of the RMST. METHODS: A dynamic RMST curve using a mixture model is proposed in this paper to fully enhance the RMST method for survival analysis in clinical trials. It is constructed so that the RMST difference or ratio is computed over a range of values of the restriction time τ, tracing out an evolving treatment-effect profile over time. RESULTS: This new dynamic RMST curve overcomes the drawbacks of the KM approach. Its good performance is illustrated through three real examples. CONCLUSIONS: The RMST provides a clinically meaningful and easily interpretable measure for survival clinical trials, and the proposed dynamic RMST approach provides a useful tool for assessing the treatment effect over different time frames. The dynamic RMST curve also allows one to check whether the follow-up time for a study is long enough to demonstrate a treatment difference. Its prediction feature may be used to determine an appropriate time point for an interim analysis, and a data monitoring committee (DMC) can use it as an evaluation tool for study recommendations.
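The KM-based quantity the paper builds on can be sketched directly: RMST(τ) is the area under the Kaplan-Meier curve up to τ, and the "dynamic" view simply evaluates it over a grid of τ values (this sketch is the standard KM-based RMST, not the paper's mixture-model version; ties are handled one subject at a time, which is exact for deaths):

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier product-limit estimator.
    Returns event times (prefixed with 0) and S(t) just after each."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    s, surv_t, surv_s = 1.0, [0.0], [1.0]
    n = len(times)
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i
        if d:  # censored subjects leave the risk set without a factor
            s *= 1 - 1 / at_risk
            surv_t.append(t)
            surv_s.append(s)
    return np.array(surv_t), np.array(surv_s)

def rmst(times, events, tau):
    """RMST(tau) = integral of the KM step function from 0 to tau."""
    st, ss = km_survival(times, events)
    grid = np.clip(np.append(st, tau), None, tau)
    widths = np.diff(grid)  # S is constant between event times
    return float(np.sum(ss[: len(widths)] * widths))

# the "dynamic" profile: RMST evaluated over a range of restriction times
taus = [1.0, 2.0, 3.0]
curve = [rmst([1, 2, 3], [1, 1, 1], tau) for tau in taus]
```

With fully observed event times, RMST at the last event time equals the sample mean, which makes the estimator easy to sanity-check.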


Subject(s)
Immunotherapy , Humans , Proportional Hazards Models , Survival Analysis , Survival Rate , Treatment Outcome
7.
J Biopharm Stat ; 30(5): 783-796, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32589509

ABSTRACT

The Cox proportional hazards (PH) model evaluates the effects of covariates of interest under the PH assumption without specifying the baseline hazard. In clinical trial applications, however, an explicitly estimated hazard or cumulative survival function for each treatment group helps to assess and interpret the meaning of the treatment difference. In this paper, we propose using a flexible mixture model under the PH constraint to fit the underlying survival functions. Simulations show that the proposed mixture PH model performs very similarly to the Cox PH model in terms of hazard ratio estimation, bias, confidence interval coverage, type-I error, and power. Application to several real clinical trial examples demonstrates that the results from this approach are almost identical to those from the Cox PH model, while the explicitly estimated hazard function for each treatment group provides additional useful information and aids the interpretation of hazard comparisons.


Subject(s)
Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Computer Simulation , Data Interpretation, Statistical , Humans , Likelihood Functions , Models, Statistical , Neoplasms/metabolism , Neoplasms/mortality , Neoplasms/therapy , Proportional Hazards Models , Survival Analysis , Time Factors , Treatment Outcome
9.
J Biopharm Stat ; 30(2): 231-243, 2020 03.
Article in English | MEDLINE | ID: mdl-31455199

ABSTRACT

Reference-scaled bioequivalence has been proposed and successfully applied for highly variable products. Its statistical properties have been studied for the commonly used crossover design. However, a crossover design may not be feasible in some applications, such as biosimilar studies, where a parallel design is a more timely and cost-effective choice. In this paper, an approximate upper confidence limit for the linearized reference-scaled bioequivalence criterion under a parallel design is derived. The performance of the approximation is evaluated through simulation. The simulation results show that the approximation performs well, giving reasonable power and a well-controlled type I error.
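One common way to bound the linearized criterion (muT - muR)^2 - theta*sigmaR^2 from above is Howe's approximation; the sketch below adapts it to a parallel design with log-scale data. This is a generic construction under stated assumptions (Welch degrees of freedom, the usual scaled-ABE constant theta = (ln 1.25 / 0.25)^2), not necessarily the paper's derivation:

```python
import numpy as np
from scipy.stats import t as tdist, chi2

def rsabe_upper_bound(xt, xr, theta=(np.log(1.25) / 0.25) ** 2, alpha=0.05):
    """Approximate 1-alpha upper confidence bound for the linearized
    reference-scaled criterion (muT - muR)^2 - theta * sigmaR^2
    from parallel-design log-scale data, via Howe's method."""
    xt, xr = np.asarray(xt, float), np.asarray(xr, float)
    nt, nr = len(xt), len(xr)
    d = xt.mean() - xr.mean()
    st2, sr2 = xt.var(ddof=1), xr.var(ddof=1)
    se2 = st2 / nt + sr2 / nr
    # Welch-Satterthwaite degrees of freedom for the mean difference
    df = se2 ** 2 / ((st2 / nt) ** 2 / (nt - 1) + (sr2 / nr) ** 2 / (nr - 1))
    e1 = d ** 2
    u1 = (abs(d) + tdist.ppf(1 - alpha, df) * np.sqrt(se2)) ** 2
    # upper limit of -theta*sigmaR^2 uses the LOWER limit of sigmaR^2
    e2 = -theta * sr2
    u2 = -theta * (nr - 1) * sr2 / chi2.ppf(1 - alpha, nr - 1)
    return (e1 + e2) + np.sqrt((u1 - e1) ** 2 + (u2 - e2) ** 2)
```

Equivalence is claimed when the returned upper bound is at or below zero.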


Subject(s)
Computer Simulation , Drugs, Generic/pharmacokinetics , Drugs, Generic/standards , Confidence Intervals , Humans , Reference Standards , Therapeutic Equivalency
10.
Pharm Stat ; 18(5): 555-567, 2019 10.
Article in English | MEDLINE | ID: mdl-31037824

ABSTRACT

Time-to-event data are common in clinical trials that evaluate the survival benefit of a new drug, biological product, or device. The commonly used parametric models, including the exponential, Weibull, Gompertz, log-logistic, and log-normal, are simply not flexible enough to capture the complex survival curves observed in clinical and medical research studies. On the other hand, the nonparametric Kaplan-Meier (KM) method is very flexible and captures the various shapes of survival curves well, but it cannot predict future events, such as the time at which a certain number of events will be reached, the number of events at a certain time, or the risk of events (e.g., death) beyond the span of the available trial data. Neither the nonparametric KM method nor the standard parametric distributions can fit survival curves while retaining the characteristics useful for prediction. In this paper, a fully parametric distribution constructed as a mixture of three Weibull components is explored and recommended for fitting survival data; it is as flexible as KM for the observed data but also offers useful features beyond the trial period, such as predicting future events, survival probabilities, and the hazard function.
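The three-component Weibull mixture has a closed-form survival function, S(t) = sum_k w_k * exp(-(t/scale_k)^shape_k), which is what enables extrapolation beyond the observed follow-up. A minimal sketch (the parameter values below are made up for illustration, not fitted estimates):

```python
import numpy as np

def mixture_weibull_survival(t, weights, shapes, scales):
    """Survival function of a Weibull mixture:
    S(t) = sum_k w_k * exp(-(t / scale_k) ** shape_k)."""
    t = np.asarray(t, float)
    s = np.zeros_like(t)
    for w, k, lam in zip(weights, shapes, scales):
        s = s + w * np.exp(-((t / lam) ** k))
    return s

# illustrative (made-up) parameters for a three-component mixture
w, k, lam = [0.5, 0.3, 0.2], [0.8, 1.5, 3.0], [2.0, 10.0, 30.0]
times = np.linspace(0.0, 60.0, 7)
surv = mixture_weibull_survival(times, w, k, lam)
```

Because the weights sum to one, S(0) = 1, and S(t) stays strictly positive at any horizon, so survival probabilities past the end of follow-up remain well-defined, unlike the KM estimate.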


Subject(s)
Clinical Trials as Topic/methods , Models, Statistical , Survival Analysis , Humans , Kaplan-Meier Estimate , Time Factors
11.
Pharm Stat ; 17(5): 570-577, 2018 09.
Article in English | MEDLINE | ID: mdl-29911346

ABSTRACT

With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. Data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support simultaneous global marketing approval. The MRCT can speed up patient enrollment and drug approval, making effective therapies available to patients all over the world at the same time. However, conducting drug development globally poses many operational and scientific challenges. One important question in designing a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches.
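The allocation problem can be made concrete with a simple two-step sketch: size the overall equivalence trial with the standard TOST normal approximation, then split the total across regions. Proportional allocation is used here only as a baseline rule; it is not one of the paper's two proposed approaches:

```python
import math
from scipy.stats import norm

def equivalence_n_per_arm(sigma, margin, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for an equivalence
    (TOST) trial, assuming the true difference is zero."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - (1 - power) / 2)
    return math.ceil(2 * (sigma * z / margin) ** 2)

def allocate(n_total, weights):
    """Split a total sample size across regions in proportion to weights,
    assigning leftover subjects by largest fractional remainder."""
    raw = [n_total * w / sum(weights) for w in weights]
    n = [math.floor(x) for x in raw]
    leftovers = sorted(range(len(raw)), key=lambda i: raw[i] - n[i],
                       reverse=True)
    for i in leftovers[: n_total - sum(n)]:
        n[i] += 1
    return n
```

For sigma = 1, margin 0.5, 5% alpha, and 80% power this gives 69 subjects per arm, which proportional weights (0.5, 0.3, 0.2) would split as 34/21/14.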


Subject(s)
Clinical Trials as Topic/methods , Drug Development/methods , Multicenter Studies as Topic/methods , Biosimilar Pharmaceuticals/administration & dosage , Drug Approval , Humans , Internationality , Research Design , Sample Size , Therapeutic Equivalency , Time Factors
12.
Int J Biostat ; 11(1): 125-33, 2015 May.
Article in English | MEDLINE | ID: mdl-25720100

ABSTRACT

In medicine and other related sciences, clinical or experimental measurements usually serve as the basis for diagnostic, prognostic, therapeutic, and performance evaluations. Examples include assessing the reliability of multiple raters (or measurement methods), assessing whether a local or a central laboratory is suitable for tumor evaluation in a randomized clinical trial (RCT), validating surrogate endpoints in a study, and determining that important outcome measurements are interchangeable among evaluators in an RCT. Even the most elegant study design cannot overcome the damage caused by unreliable measurement. Many methods have been developed to assess the agreement of two measurement methods; however, little attention has been paid to quantifying how good that agreement is. In this paper, analogous to the type I error and power used to describe a hypothesis test, we propose quantifying an agreement assessment using two rates: the discordance rate and the tolerance probability. This approach is demonstrated through examples.
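One plausible reading of the two-rate summary treats discordant pairs as binomial: given n pairs, an observed number of discordances, and a benchmark discordance rate, the tolerance probability is a binomial tail probability (this framing is an assumption for illustration, not the paper's exact definitions):

```python
from scipy.stats import binom

def tolerance_probability(n_pairs, n_discordant, discordance_rate):
    """Probability of observing at most the given number of discordant
    pairs if the true discordance rate equaled `discordance_rate`."""
    return binom.cdf(n_discordant, n_pairs, discordance_rate)
```

A small value says the observed discordances are far fewer than the benchmark rate would predict, which supports a claim of agreement; a large value says the data are consistent with that much discordance.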


Subject(s)
Biometry/methods , Reproducibility of Results , Research Design/standards , Humans
13.
Stat Med ; 32(3): 462-9, 2013 Feb 10.
Article in English | MEDLINE | ID: mdl-22903263

ABSTRACT

To develop a biosimilar product, it is essential to first demonstrate biosimilarity between the proposed biosimilar and the reference product in terms of quality, in a stepwise approach that can then help inform the extent of safety and efficacy data required to establish biosimilarity. These comparability studies should involve direct side-by-side comparisons of the test and reference products. In this paper, we develop a statistical method for unpaired head-to-head quality attribute comparisons. The method uses a plausibility interval, derived by comparing the reference against itself, as the goalpost for claiming comparability. The idea is that any observed difference between the reference and itself should be considered random noise and part of the inherent variability. We illustrate the performance of the proposed method using simulation and real data sets.
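The reference-against-itself idea can be sketched crudely: take pairwise differences between reference lots, treat their spread as the noise level, and set the goalpost symmetric around zero. The multiplier k and the pairwise-difference construction below are assumptions for illustration only, not the paper's derivation:

```python
import numpy as np

def plausibility_interval(ref_lot_means, k=1.645):
    """Sketch of a reference-vs-reference plausibility interval: pairwise
    differences between reference lots are treated as random noise, and
    +/- k * SD of those (sign-symmetrized) differences is the goalpost."""
    x = np.asarray(ref_lot_means, float)
    diffs = (x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]
    diffs = np.concatenate([diffs, -diffs])  # both orderings of each pair
    half_width = k * diffs.std(ddof=1)
    return -half_width, half_width
```

A test-versus-reference mean difference falling inside this interval would then be indistinguishable from reference-to-reference noise.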


Subject(s)
Biosimilar Pharmaceuticals , Drug Evaluation/methods , Algorithms , Drug Evaluation/statistics & numerical data , United States
14.
PDA J Pharm Sci Technol ; 65(1): 55-62, 2011.
Article in English | MEDLINE | ID: mdl-21414940

ABSTRACT

Due to the comparative nature of a bioassay, the relative potency is usually used to describe the potency of a sample. Only when two samples are similar can a valid and meaningful estimate of relative potency be obtained. Thus, assessing similarity is a crucial part of developing a bioanalytical method. The commonly used approaches for assessing similarity focus on the response parameters, such as the slope in the linear case, using either a significance test or an equivalence test. This direct evaluation of the response parameters ignores information about the shape of the curve and possible variance heterogeneity. To overcome this, we propose a method based on the idea of equivalence testing that compares the shapes of the curves directly. The new method first measures the difference in response between the standard sample and the test sample at each concentration (dilution) level and then determines whether the differences are consistent by comparing them to the equivalence limits. The benefits of the new method are investigated by a simulation study. LAY ABSTRACT: Due to the comparative nature of a bioassay, the relative potency is usually used to describe the potency of a sample. Only when two samples are similar can a valid and meaningful estimate of relative potency be obtained. Thus, assessing similarity is a crucial part of developing a bioanalytical method. The commonly used approach focuses on the response parameters, such as the slope in the linear case, which has many drawbacks. To overcome this, we propose a method based on the idea of equivalence testing that compares the shapes of the curves directly. The new method first measures the difference in response between the standard sample and the test sample at each concentration (dilution) level and then determines whether the differences are consistent by comparing them to the equivalence limits.
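The per-level comparison can be sketched as a TOST-style rule: at each concentration level, form a confidence interval for the mean response difference and require it to sit inside the equivalence limits at every level. The pooled-degrees-of-freedom interval and the symmetric limit are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import t as tdist

def curves_similar(std_by_level, test_by_level, limit, alpha=0.05):
    """Shape-comparison sketch: at each concentration (dilution) level,
    the 100*(1-2*alpha)% CI for the mean response difference must fall
    inside (-limit, limit) for similarity to be declared."""
    for s, u in zip(std_by_level, test_by_level):
        s, u = np.asarray(s, float), np.asarray(u, float)
        d = u.mean() - s.mean()
        se = np.sqrt(s.var(ddof=1) / len(s) + u.var(ddof=1) / len(u))
        half = tdist.ppf(1 - alpha, len(s) + len(u) - 2) * se
        if not (-limit < d - half and d + half < limit):
            return False
    return True
```

Because every level must pass, a shape difference at any single dilution is enough to reject similarity.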


Subject(s)
Biological Assay
15.
Pharm Stat ; 9(2): 125-32, 2010.
Article in English | MEDLINE | ID: mdl-19507134

ABSTRACT

It is often necessary to compare two measurement methods in medicine and other experimental sciences, a problem that covers a broad range of data. Many authors have explored ways of assessing the agreement of two sets of measurements, but relatively little attention has been given to determining the sample size for designing an agreement study. In this paper, a method using the interval approach for concordance is proposed to calculate the sample size for an agreement study. The philosophy is that concordance is satisfied when no more than a pre-specified number k of discordances are found in a reasonably large sample of size n, since a discordant pair is much easier to define; the goal is to find such a reasonably large n. The sample size calculation is based on two rates, the discordance rate and the tolerance probability, which in turn can be used to quantify an agreement study. The proposed approach is demonstrated through a real data set.
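One plausible reading of this rule treats discordances as binomial and searches for the smallest n with the desired operating characteristic: if the true discordance rate were at an unacceptable level, the chance of still seeing at most k discordances (and wrongly claiming concordance) should be small. This framing is an assumption for illustration, not the paper's exact formulation:

```python
from scipy.stats import binom

def agreement_sample_size(k, p_unacceptable, tolerance=0.9, n_max=10000):
    """Smallest n such that, at an unacceptable true discordance rate,
    the probability of observing at most k discordances (and wrongly
    claiming agreement) stays below 1 - tolerance."""
    for n in range(k + 1, n_max):
        if binom.cdf(k, n, p_unacceptable) <= 1 - tolerance:
            return n
    raise ValueError("no sample size below n_max satisfies the criterion")
```

The binomial tail is monotone in n for fixed k and p, so the first n that crosses the threshold is the answer.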


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Models, Statistical , Sample Size , Research Design
16.
J Biopharm Stat ; 17(3): 393-405, 2007.
Article in English | MEDLINE | ID: mdl-17479389

ABSTRACT

It is well known that outliers can have a significant effect on the conclusions of a bioavailability/bioequivalence study. Existing approaches for outlier detection are ANOVA-type methods based on distributional assumptions on log-AUC, and they are disconnected from the pharmacokinetics (PK) literature. However, the observations from a bioavailability/bioequivalence study are correlated concentrations, not AUCs; thus, the AUC estimate and the related variance estimate may not be accurate because the correlation is ignored. In this paper, a residual analysis based on the predicted concentrations from a functional linear model, which accounts for the correlation structure of the concentrations, is proposed to detect outliers. With this approach, the distributional assumption is placed on the observed raw concentrations rather than on the summary parameter AUC, and the repeated-measurements nature of the concentration curve is taken into consideration, which is in line with population PK concepts and can yield a more accurate variance estimate. A real data set is used to demonstrate the proposed approach.
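The residual-analysis idea can be sketched in its simplest form: replace the functional linear model with the cross-subject mean curve, standardize residuals per time point, and flag subjects whose largest absolute standardized residual exceeds a cutoff. The mean-curve fit and the cutoff of 3 are simplifying assumptions, not the paper's model:

```python
import numpy as np

def flag_outlier_subjects(conc, cutoff=3.0):
    """Residual-analysis sketch on raw concentrations: standardize each
    subject's deviation from the mean curve per time point and flag
    subjects whose max |standardized residual| exceeds the cutoff."""
    conc = np.asarray(conc, float)       # rows: subjects, cols: time points
    mean_curve = conc.mean(axis=0)       # stand-in for the fitted curve
    sd_curve = conc.std(axis=0, ddof=1)
    z = (conc - mean_curve) / sd_curve
    return np.where(np.abs(z).max(axis=1) > cutoff)[0]
```

Working on the concentration matrix directly, rather than on per-subject AUC summaries, is what keeps the repeated-measurements structure in view.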


Subject(s)
Biological Availability , Data Interpretation, Statistical , Pharmacokinetics , Randomized Controlled Trials as Topic/statistics & numerical data , Therapeutic Equivalency , Analysis of Variance , Antihypertensive Agents/pharmacokinetics , Area Under Curve , Cross-Over Studies , Dosage Forms , Female , Humans , Linear Models , Male , Models, Biological , Models, Statistical , Randomized Controlled Trials as Topic/methods , Research Design
17.
J Chromatogr Sci ; 44(3): 119-22, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16620506

ABSTRACT

The reproducibility of a validated analytical method may require reassessment for various reasons, such as transfer between laboratories or companies, changes in instruments or software platforms (or both), or changes in critical reagents, among others. This paper demonstrates an assay bridging study for evaluating reproducibility. The approach is simple but very informative and offers many advantages over existing approaches.


Subject(s)
Reproducibility of Results , Bias , Chemistry Techniques, Analytical/standards , Regression Analysis
18.
J Biopharm Stat ; 15(2): 195-203, 2005.
Article in English | MEDLINE | ID: mdl-15796289

ABSTRACT

An agreement problem usually involves assessing the concordance of two sets of measurements, and the problem covers a broad range of data. In practice, the observations are often curves rather than the traditional points. In this article, the agreement problem is studied for curved data. Following the rationale used in constructing a correlation coefficient curve under heteroscedasticity, an agreement curve is proposed to measure agreement as a function of the independent variable for curved data. The agreement curve overcomes the drawback of using only a single index to assess the agreement of two measurements, and it covers all situations, including nonconstant means, nonhomogeneous variance, and varying data ranges. A real dataset is used to demonstrate the approach and to show the more accurate assessment and additional information gained when curved data are used.
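A crude way to see agreement as a function of the independent variable is to compute a local agreement index in a sliding window. The sketch below uses Lin's concordance correlation coefficient windowed over sorted x values; the windowing scheme and the choice of CCC as the local index are assumptions for illustration, not the paper's construction:

```python
import numpy as np

def agreement_curve(x, y, window=20):
    """Agreement as a function of the measurement level: Lin's
    concordance correlation coefficient (CCC) in a sliding window
    over x, rather than a single overall index."""
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    centers, ccc = [], []
    for i in range(len(x) - window + 1):
        xs, ys = x[i:i + window], y[i:i + window]
        sxy = np.cov(xs, ys, ddof=1)[0, 1]
        denom = xs.var(ddof=1) + ys.var(ddof=1) + (xs.mean() - ys.mean()) ** 2
        centers.append(xs.mean())
        ccc.append(2 * sxy / denom)
    return np.array(centers), np.array(ccc)
```

Perfect agreement (y identical to x) gives CCC = 1 in every window, while a constant offset pulls every local value below 1, which a single global correlation would partly hide.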


Subject(s)
Data Interpretation, Statistical , Algorithms , Models, Statistical
19.
J Biopharm Stat ; 15(1): 3-15, 2005.
Article in English | MEDLINE | ID: mdl-15702601

ABSTRACT

In many applications, controls are used to monitor a process or experiment and to assess whether the process is in control or the experiment is valid. In this setting, the traditional fixed-effects calibration is usually not adequate, and a mixed-effects model is more appropriate. In this article, a linear mixed-effects calibration model is considered for qualifying an experiment. Two estimation methods for the controls, based on maximum likelihood and restricted maximum likelihood, are proposed. Their bias and mean squared error are studied by simulation, and five different methods for constructing confidence intervals for the controls are compared. A dataset is used to demonstrate the advantages of the mixed-effects model.


Subject(s)
Linear Models , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/statistics & numerical data , Calibration
20.
Stat Med ; 24(6): 883-91, 2005 Mar 30.
Article in English | MEDLINE | ID: mdl-15558699

ABSTRACT

In a traditional pharmacokinetic (PK) bioavailability (BA)/bioequivalence (BE) study, the same number of time points and the same sampling times are used for each subject. Often, an indirect inference is then made on PK parameters such as the area under the plasma concentration curve (AUC), maximum plasma concentration (C(max)), time to maximum plasma concentration (T(max)), or half-life. However, since these PK parameters are summaries of repeated measurements, much information can be lost, and the indirect inferences on them are not always accurate. Taking the repeated measurements of the concentration curve into consideration, a functional linear model has been developed to compare concentration curves directly instead of the PK parameters. Considering the nature of repeated measurements, a multiple testing procedure is proposed to assess the equality of two concentration curves. A real data set is used to demonstrate the proposed procedure.
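The simplest version of a pointwise multiple testing procedure compares the two groups at each sampling time and applies a multiplicity correction across time points. The sketch below uses two-sample t-tests with a Bonferroni correction; this is one simple multiplicity rule for illustration, and the paper's specific procedure may differ:

```python
import numpy as np
from scipy.stats import ttest_ind

def compare_concentration_curves(group_a, group_b, alpha=0.05):
    """Pointwise curve comparison: a two-sample t-test at each sampling
    time, with a Bonferroni correction across the time points.
    Rows are subjects; columns are common sampling times."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    m = a.shape[1]  # number of shared time points
    pvals = np.array([ttest_ind(a[:, j], b[:, j]).pvalue for j in range(m)])
    return pvals, pvals < alpha / m
```

Any rejected time point indicates the two concentration curves differ somewhere along their course, without collapsing the curves into AUC or C(max) first.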


Subject(s)
Pharmaceutical Preparations/blood , Pharmacokinetics , Therapeutic Equivalency , Antihypertensive Agents/blood , Antihypertensive Agents/pharmacokinetics , Area Under Curve , Biological Availability , Cross-Over Studies , Half-Life , Humans , Hypertension/blood , Hypertension/drug therapy , Linear Models , Randomized Controlled Trials as Topic/methods , Receptors, Angiotensin/metabolism