ABSTRACT
Delayed separation of survival curves has been observed in immuno-oncology clinical trials. In this situation, the classic log-rank test can suffer a substantial loss of power. In this paper, we consider a Zmax test, defined as the maximum of the log-rank test and a Fleming-Harrington test. Simulation studies indicate that the Zmax test not only controls the Type I error rate but also maintains good power under different delayed-effect models. The asymptotic properties of the Zmax test are also established, which further supports its robustness. We apply the Zmax test to two data sets reported in recent immuno-oncology clinical trials, in which Zmax exhibits a remarkable improvement over the conventional log-rank test.
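For readers who want the ingredients made concrete, the following is a minimal sketch (ours, not the authors' code) of a standardized weighted log-rank statistic with Fleming-Harrington weights S(t-)^p (1 - S(t-))^q, where p = q = 0 recovers the classic log-rank test and FH(0, 1) up-weights late events. The Zmax described above takes the maximum of the two standardized statistics; its critical value must come from their joint null distribution (e.g., by permutation), not from a single normal quantile.

```python
import numpy as np

def weighted_logrank_z(time, event, group, p=0.0, q=0.0):
    """Standardized weighted log-rank statistic with Fleming-Harrington
    weights S(t-)^p * (1 - S(t-))^q on the pooled Kaplan-Meier curve;
    p = q = 0 gives the classic log-rank test.  Illustrative sketch."""
    time, event, group = map(np.asarray, (time, event, group))
    num = var = 0.0
    surv = 1.0                                  # pooled KM estimate S(t-)
    for t in np.unique(time[event == 1]):       # distinct event times
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        dead = (time == t) & (event == 1)
        d, d1 = dead.sum(), (dead & (group == 1)).sum()
        w = surv ** p * (1.0 - surv) ** q       # FH weight uses S(t-)
        num += w * (d1 - d * n1 / n)            # observed minus expected
        if n > 1:                               # hypergeometric variance
            var += w ** 2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        surv *= 1.0 - d / n                     # update pooled KM
    return num / np.sqrt(var)

def zmax(time, event, group):
    """Max of |log-rank| and |FH(0, 1)| Z-statistics; calibrate its
    critical value from the joint null distribution of the two
    correlated statistics (e.g., via permutation)."""
    z_lr = weighted_logrank_z(time, event, group)           # p = q = 0
    z_fh = weighted_logrank_z(time, event, group, q=1.0)    # FH(0, 1)
    return max(abs(z_lr), abs(z_fh))
```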
Subject(s)
Antineoplastic Agents, Immunological/therapeutic use , Clinical Trials as Topic/statistics & numerical data , Medical Oncology/statistics & numerical data , Neoplasms/drug therapy , Antineoplastic Agents, Immunological/immunology , Clinical Trials as Topic/methods , Humans , Medical Oncology/methods , Neoplasms/epidemiology , Neoplasms/immunology , Survival Rate/trends , Treatment Outcome
ABSTRACT
In this paper, we propose a design that uses a short-term endpoint for accelerated approval at the interim analysis and a long-term endpoint for full approval at the final analysis, with sample size adaptation based on the long-term endpoint. Two sample size adaptation rules are compared: a rule that maintains the conditional power at a prespecified level, and a step-function rule that better addresses the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; an alpha-exhaustive procedure between the endpoints; and an alpha-exhaustive procedure with an improved critical value based on the correlation. The family-wise error rate is proved to be strongly controlled across the two endpoints, the sample size adaptation, and the two analysis time points with the proposed designs. We show that the alpha-exhaustive designs greatly improve power when both endpoints are effective, and that the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings.
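As a concrete illustration of a conditional-power adaptation rule, the sketch below solves for the second-stage per-arm sample size under a generic inverse-normal combination test with prespecified weights. The weights, the cap, and the function names are our assumptions for illustration; the paper's own testing procedures and bias-reducing step-function rule are not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def adapt_n2(z1, t1, delta_hat, target_cp=0.9, alpha=0.025,
             n2_min=10, n2_max=2000):
    """Per-arm second-stage sample size lifting conditional power to
    target_cp under an inverse-normal combination test with weights
    sqrt(t1), sqrt(1 - t1); z1 is the interim Z at information fraction
    t1 and delta_hat the estimated standardized effect.  Generic
    sketch, not the paper's rule."""
    w1, w2 = np.sqrt(t1), np.sqrt(1.0 - t1)
    hurdle = (norm.ppf(1 - alpha) - w1 * z1) / w2   # value Z2 must exceed

    def cp(n2):  # conditional power for per-arm second-stage size n2
        return 1.0 - norm.cdf(hurdle - delta_hat * np.sqrt(n2 / 2.0))

    if delta_hat <= 0 or cp(n2_max) < target_cp:
        return n2_max                               # cap the escalation
    if cp(n2_min) >= target_cp:
        return n2_min
    return int(np.ceil(brentq(lambda n2: cp(n2) - target_cp,
                              n2_min, n2_max)))
```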
Subject(s)
Clinical Trials as Topic/methods , Endpoint Determination/methods , Sample Size , Clinical Trials as Topic/statistics & numerical data , Endpoint Determination/statistics & numerical data , Humans
ABSTRACT
Intravascular ultrasound (IVUS) is a clinical imaging procedure used to assess drug effects on the progression of coronary atherosclerosis in clinical trials. It is an invasive medical procedure for measuring coronary artery atheroma (plaque) volume, and it leads to high missing rates (often over 30%). This paper uses data from a Phase II IVUS clinical trial to explore the missing mechanism of the IVUS endpoint, the percent atheroma volume (PAV). We propose a moving-window method to examine the relationship between continuous covariates, such as a lipid endpoint, and the probability of missing IVUS values, which provides a general approach to exploring missing mechanisms. The moving-window method is more intuitive and provides a fuller picture of the relationship. In the example, some covariates such as lipid measures also have high missing rates after 12 months because of compliance issues, probably caused by fatigue from repeated blood draws. We found that imputing the lipid endpoints by last observation carried forward (LOCF) leads to biologically inexplicable results, whereas multiple imputation of the missing covariates yields a more reasonable conclusion about the IVUS missing mechanism. Age, race, and baseline PAV are identified as key potential contributors to the probability of a missing IVUS endpoint. This finding can be used to reduce missing values in future IVUS trials by setting appropriate inclusion and exclusion criteria at the trial design stage.
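The moving-window idea can be sketched in a few lines: sort subjects by the continuous covariate, slide a fixed-size window across them, and record the observed missing rate within each window. This is one plausible rendering of such a method; the window width, step size, and names are our choices.

```python
import numpy as np

def moving_window_missing_rate(x, missing, width=20, step=5):
    """Slide a window of `width` subjects (sorted by covariate x) in
    increments of `step` and record the observed missing rate in each
    window.  Illustrative sketch of a moving-window estimate of
    P(missing | x)."""
    x, missing = np.asarray(x, dtype=float), np.asarray(missing, dtype=float)
    order = np.argsort(x)
    x, missing = x[order], missing[order]
    centers, rates = [], []
    for start in range(0, len(x) - width + 1, step):
        window = slice(start, start + width)
        centers.append(x[window].mean())     # covariate level of window
        rates.append(missing[window].mean()) # estimated missing rate
    return np.array(centers), np.array(rates)

# Plotting rates against centers for, say, a lipid measure shows how
# the probability of a missing IVUS value varies with the covariate.
```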
Subject(s)
Coronary Artery Disease/diagnostic imaging , Data Interpretation, Statistical , Ultrasonography, Interventional , Aged , Cholesterol, HDL/blood , Coronary Artery Disease/blood , Coronary Artery Disease/drug therapy , Coronary Artery Disease/pathology , Coronary Vessels/diagnostic imaging , Coronary Vessels/drug effects , Coronary Vessels/pathology , Disease Progression , Endpoint Determination , Female , Humans , Male , Middle Aged , Patient Compliance , Plaque, Atherosclerotic/diagnostic imaging , Plaque, Atherosclerotic/etiology , Ultrasonography/adverse effects , Ultrasonography/methods , Ultrasonography, Interventional/adverse effects
ABSTRACT
Cancer treatment started with surgery at least three thousand years ago. Radiation therapy was added in 1896, and chemotherapy began about 50 years later. These "cut, burn, and poison" techniques try to kill cancer cells directly and, until recently, were the main approaches to treating cancer. In the past few years, immunotherapies have revolutionized cancer treatment. Instead of treating the disease, immunotherapies treat the patient with the disease; more precisely, they correct the patient's immune system so that it can fight the cancer over the long term, which makes the cure of metastatic cancers a real possibility. To adapt to the evolution of oncology treatment, clinical trial designs and statistical analysis methodologies must change accordingly in order to bring novel oncology medicines to cancer patients efficiently. For example, one of the major differences between immunotherapies and chemotherapies is that immunotherapies may take longer to show an effect, but the effect generally lasts longer, with some patients cured. Trial design assumptions and adaptation rules (if an adaptive design is used) need to take this delayed-effect and long-term cure phenomenon into account. At the same time, more efficient statistical tests, such as the Fleming-Harrington test and the Zmax test, can be used to improve statistical power over the conventional log-rank test when analyzing time-to-event data that often exhibit non-proportional hazards. This article describes how oncology drug development has evolved over time and how statistical methods have changed accordingly.
Subject(s)
Medical Oncology , Neoplasms , Drug Development , Humans , Immunotherapy , Neoplasms/drug therapy , Research Design
ABSTRACT
Reference ranges (RRs) are frequently used for interpreting laboratory values in clinical trials, assessing the abnormality of laboratory results, and combining results from different laboratories. When a clinical laboratory measure must be derived from other tests, eg, the WBC differential percentage from the WBC count and the WBC differential absolute count, a derivation of the RR may also be required. A naive method for determining RRs calculates the upper and lower limits of the derived test from the upper and lower limits of the measured values, using the same algebraic formula used for the derived measure. This naive method, and any other that does not use probability-based transformations, does not maintain the distributional characteristics of the RRs. RRs derived in such a manner are uninterpretable because they do not contain a specified proportion of the distribution. We propose a probability-based approach for the interconversion of RRs for ratios of 2 log-gaussian analytes. The proposed method gives a simple algebraic formula for calculating the RRs of the derived measures while preserving the probability relationships. A nonparametric method and a parametric method (which log-transforms the data, estimates an RR, and then exponentiates) are provided as comparators. An example comparing the commonly used naive method with the proposed method is provided, using automated leukocyte count data. It provides evidence that the proposed method maintains the distributional characteristics of the transformed RR measures while the naive method does not.
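To make the probability-based idea concrete: a central 100·coverage% RR (L, U) for a log-gaussian analyte determines its log-scale mean and SD, and the RR of the ratio of two such analytes follows from the difference of the log-scale distributions. The sketch below is our illustration, not the paper's formula verbatim; it assumes the RR limits are central-coverage percentiles and, lacking an estimate of the between-analyte correlation, defaults to independence (rho = 0).

```python
import numpy as np
from scipy.stats import norm

def ratio_reference_range(rr_x, rr_y, rho=0.0, coverage=0.95):
    """Probability-based RR for R = X / Y when X and Y are log-gaussian.
    A central RR (L, U) implies log-scale mean mu = (ln L + ln U) / 2
    and sd = (ln U - ln L) / (2 z); then ln R is gaussian with
    mu_R = mu_X - mu_Y and sd_R^2 = sd_X^2 + sd_Y^2 - 2 rho sd_X sd_Y."""
    z = norm.ppf(0.5 + coverage / 2.0)           # 1.96 for 95% coverage

    def log_params(rr):                          # recover mu, sd on log scale
        lo, hi = np.log(rr[0]), np.log(rr[1])
        return (lo + hi) / 2.0, (hi - lo) / (2.0 * z)

    mu_x, sd_x = log_params(rr_x)
    mu_y, sd_y = log_params(rr_y)
    mu_r = mu_x - mu_y
    sd_r = np.sqrt(sd_x**2 + sd_y**2 - 2.0 * rho * sd_x * sd_y)
    return np.exp(mu_r - z * sd_r), np.exp(mu_r + z * sd_r)
```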
Subject(s)
Leukocyte Count/standards , Leukocytes/cytology , Adolescent , Adult , Aged , Aged, 80 and over , Automation , Child , Female , Humans , Leukocyte Count/instrumentation , Leukocyte Count/methods , Logistic Models , Male , Middle Aged , Normal Distribution , Predictive Value of Tests , Probability , Reference Values , Statistics, Nonparametric
ABSTRACT
CC chemokine receptor 2 (CCR2), expressed on the surface of circulating monocytes, and its ligand, monocyte chemoattractant protein-1 (MCP-1; also known as CC-chemokine ligand 2), are present in atherosclerotic plaques and may have important roles in endothelial monocyte recruitment and activation. MLN1202 is a highly specific humanized monoclonal antibody that interacts with CCR2 and inhibits MCP-1 binding. The aim of this randomized, double-blind, placebo-controlled study was to measure reductions in circulating levels of high-sensitivity C-reactive protein, an established biomarker of inflammation associated with coronary artery disease, upon MLN1202 treatment in patients at risk for atherosclerotic cardiovascular disease (≥2 risk factors for atherosclerotic cardiovascular disease and circulating high-sensitivity C-reactive protein >3 mg/L). Additionally, patients were genotyped for the 2518 A→G polymorphism in the promoter of the MCP-1 gene to investigate the correlation between this polymorphism and reduced C-reactive protein levels with MLN1202 treatment. Patients who received MLN1202 exhibited significant decreases in high-sensitivity C-reactive protein levels, beginning at 4 weeks and continuing through 12 weeks after dosing. Patients with A/G or G/G genotypes in the MCP-1 promoter had significantly greater reductions in high-sensitivity C-reactive protein levels than patients with the wild-type A/A genotype. In conclusion, MLN1202 treatment was well tolerated in this patient population and resulted in significant reductions in high-sensitivity C-reactive protein levels.
Subject(s)
Antibodies, Monoclonal/pharmacology , C-Reactive Protein/metabolism , Chemokine CCL2/genetics , Coronary Artery Disease/blood , Coronary Artery Disease/genetics , Polymorphism, Single Nucleotide , Receptors, CCR2/antagonists & inhibitors , Antibodies, Monoclonal, Humanized , Biomarkers/metabolism , Double-Blind Method , Female , Genotype , Humans , Male , Middle Aged , Placebos , Promoter Regions, Genetic , Risk Factors , Statistics, Nonparametric
ABSTRACT
OBJECTIVE: Rapid onset of therapeutic action for antidepressant medication represents a major area of unmet medical need, and any such effects have been difficult to detect using standard study designs and measurement strategies. We conducted a randomized, open-label study with blinded raters, using daily process assessment versus standard weekly assessment, to answer the following study questions: 1) Is it possible to detect an antidepressant response more rapidly with daily assessment than with standard assessment approaches? 2) What is the burden of daily assessment on participants relative to standard clinical assessments? 3) Does the process of completing daily assessments have any effect on clinic-based assessments such as the Hamilton Depression Rating Scale (HAM-D)? METHOD: Seventy-eight outpatients with major depressive disorder who received open-label fluoxetine were randomized to standard weekly clinic assessment or standard weekly clinic assessment plus daily assessment, and were followed for 28 days. Data were collected between September 2002 and August 2003. RESULTS: Daily assessment appeared to have no effect on 17-item HAM-D or Montgomery-Asberg Depression Rating Scale (MADRS) scores obtained in the clinic. Survival analyses revealed that daily diaries detected therapeutic effects more quickly than standard weekly clinic assessments across most endpoints. The perceived burden of study participation was not significantly increased by daily diary completion, nor was it reflected in higher dropout rates. CONCLUSION: Daily process assessment improves the ability to detect an early antidepressant response.
Subject(s)
Antidepressive Agents, Second-Generation/therapeutic use , Depression/drug therapy , Fluoxetine/therapeutic use , Research Design , Adult , Depression/diagnosis , Female , Humans , Male , Middle Aged , Models, Psychological , Neuropsychological Tests , Prospective Studies , Psychometrics , Single-Blind Method , Time Factors , Treatment Outcome
ABSTRACT
BACKGROUND: Cumulative meta-analysis typically involves performing an updated meta-analysis every time new trials are added to a series of similar trials, which by definition involves multiple inspections. Neither the commonly used random effects model nor the conventional group sequential method can control the type I error in many practical situations. In our previous research, Lan et al. (Lan KKG, Hu M-X, Cappelleri JC. Applying the law of iterated logarithm to cumulative meta-analysis of a continuous endpoint. Statistica Sinica 2003; 13: 1135-45) proposed an approach to this problem based on the law of iterated logarithm (LIL) for the continuous case. PURPOSE: This study is an extension and generalization of our previous research to binary outcomes. Although it is based on the same LIL principle, we found the discrete case much more complex, and the results from the continuous case do not apply to the binary case. The simulation study presented here is also more extensive. METHODS: The LIL-based method 'penalizes' the Z-value of the test statistic to account for multiple tests and for the estimation of heterogeneity in treatment effects across studies. It involves an adjustment factor, which is directly related to the control of the type I error and is determined through extensive simulations under various conditions. RESULTS: With an adjustment factor of 2, the LIL-based test statistic controls the overall type I error when the odds ratio or relative risk is the parameter of interest. For the risk difference, the adjustment factor can be reduced to 1.5. More inspections may require a larger adjustment factor, but the required adjustment factor stabilizes after 25 inspections. LIMITATIONS: It would be ideal if the adjustment factor could be obtained theoretically through a statistical model. Unfortunately, real-life data are too complex, and we had to solve the problem through simulation. However, for a large number of inspections, the adjustment factor has a limited effect and the type I error is controlled mainly by the LIL. CONCLUSIONS: The LIL method controls the overall type I error for a very broad range of practical situations with a binary outcome, and the LIL works properly in controlling type I error rates as the number of inspections becomes large.
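One plausible reading of the LIL penalization, in code: divide the cumulative Z-statistic at each inspection by sqrt(adj · ln ln I), with I the accumulated statistical information and adj the simulation-calibrated adjustment factor quoted above. The exact published form may differ in detail, and the floor on ln ln I is our own guard for small-information inspections.

```python
import numpy as np
from scipy.stats import norm

def lil_adjusted_z(z, information, adj=2.0):
    """Penalize the cumulative Z by sqrt(adj * ln ln I), in the spirit
    of the LIL approach described above (adj = 2 for odds ratio or
    relative risk, 1.5 for risk difference, per the abstract).  The
    floors below are our guards, not the paper's."""
    loglog = np.log(np.log(np.maximum(information, np.e)))
    return z / np.sqrt(adj * np.maximum(loglog, 1.0))

# A monitoring rule in this spirit: flag efficacy at inspection k only
# if lil_adjusted_z(z_k, info_k) exceeds norm.ppf(0.975).
```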
Subject(s)
Data Interpretation, Statistical , Endpoint Determination/methods , Meta-Analysis as Topic , Treatment Outcome , Computer Simulation , Decision Support Techniques , Humans , Models, Statistical
ABSTRACT
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model-building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically at most a few models are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well under random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated approach over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC generally selected the best model. We believe that the proposed approach may be a rapid tool for determining which IVIVC model (if any) is most applicable.
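A hypothetical sketch of the automated screening step described above: perturb the observed in vitro release profile, refit each candidate model, and tally how often each wins on AIC. The model parameterizations, starting values, and noise level are our assumptions; the authors' program and the FDA prediction-error calculation are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate in vitro release models for the screening step; standard
# textbook parameterizations, not necessarily the paper's exact forms.
def weibull(t, a, b):
    return 1.0 - np.exp(-(t / a) ** b)

def hixson_crowell(t, k):                     # cube-root dissolution law
    return 1.0 - np.clip(1.0 - k * t, 0.0, None) ** 3

def linear(t, k):
    return np.clip(k * t, 0.0, 1.0)

def aic(y, yhat, n_params):
    """Gaussian AIC computed from the residual sum of squares."""
    n, rss = len(y), float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * n_params

def screen_models(t, frac_released, n_sim=1000, noise_sd=0.02, seed=0):
    """Monte Carlo screen: perturb the observed release profile and
    tally how often each candidate model wins on AIC."""
    t = np.asarray(t, dtype=float)
    frac_released = np.asarray(frac_released, dtype=float)
    rng = np.random.default_rng(seed)
    models = {"weibull": (weibull, [np.median(t), 1.0]),
              "hixson-crowell": (hixson_crowell, [0.5 / t.max()]),
              "linear": (linear, [1.0 / t.max()])}
    wins = dict.fromkeys(models, 0)
    for _ in range(n_sim):
        y = np.clip(frac_released + rng.normal(0.0, noise_sd, len(t)),
                    0.0, 1.0)                 # perturbed profile
        scores = {}
        for name, (f, p0) in models.items():
            try:
                popt, _ = curve_fit(f, t, y, p0=p0, maxfev=5000)
                scores[name] = aic(y, f(t, *popt), len(popt))
            except RuntimeError:              # fit failed to converge
                scores[name] = np.inf
        wins[min(scores, key=scores.get)] += 1
    return {name: count / n_sim for name, count in wins.items()}
```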