Results 1 - 20 of 89
1.
J Biopharm Stat ; : 1-19, 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37819021

ABSTRACT

The development of next-generation sequencing (NGS) opens opportunities for new applications such as liquid biopsy, in which tumor mutation genotypes can be determined by sequencing circulating tumor DNA after blood draws. However, with highly diluted samples like those obtained with liquid biopsy, NGS invariably introduces a certain level of misclassification, even with improved technology. Recently, there has been a high demand to use mutation genotypes as biomarkers for predicting prognosis and treatment selection. Many methods have also been proposed to build classifiers based on multiple loci with machine learning algorithms as biomarkers. How the higher misclassification rate introduced by liquid biopsy will affect the performance of these biomarkers has not been thoroughly investigated. In this paper, we report the results from a simulation study focused on the clinical utility of biomarkers when misclassification is present due to the current technological limit of NGS in the liquid biopsy setting. The simulation covers a range of performance profiles for current NGS platforms with different machine learning algorithms and uses actual patient genotypes. Our results show that, at the high end of the performance spectrum, the misclassification introduced by NGS had very little effect on the clinical utility of the biomarker. However, in more challenging applications with lower accuracy, misclassification could have a notable effect on clinical utility. The pattern of this effect can be complex, especially for machine learning-based classifiers. Our results show that simulation can be an effective tool for assessing different scenarios of misclassification.
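
A rough sense of how such a simulation might be set up is sketched below in Python; it is not the authors' code, and the sample size, number of loci, effect sizes, and NGS error profiles are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): inject per-locus NGS
# misclassification into genotype calls and measure the effect on a
# machine-learning biomarker's discrimination. All settings are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 2000, 10                                   # patients, mutation loci (assumed)
true_geno = rng.binomial(1, 0.3, size=(n, p))     # true mutation calls
effects = rng.normal(0, 1, size=p)                # assumed prognostic effects per locus
outcome = rng.binomial(1, 1 / (1 + np.exp(-(true_geno @ effects - 1.0))))

def add_misclassification(geno, fnr, fpr, rng):
    """Flip calls: mutated -> wild-type with prob fnr, wild-type -> mutated with prob fpr."""
    flip = np.where(geno == 1, rng.random(geno.shape) < fnr, rng.random(geno.shape) < fpr)
    return np.where(flip, 1 - geno, geno)

train, test = slice(0, 1000), slice(1000, None)
for fnr, fpr in [(0.0, 0.0), (0.05, 0.01), (0.20, 0.05)]:   # assumed NGS error profiles
    observed = add_misclassification(true_geno, fnr, fpr, rng)
    clf = LogisticRegression(max_iter=1000).fit(observed[train], outcome[train])
    auc = roc_auc_score(outcome[test], clf.predict_proba(observed[test])[:, 1])
    print(f"FNR={fnr:.2f} FPR={fpr:.2f} -> test AUC {auc:.3f}")
```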

2.
Pharm Stat ; 21(3): 584-598, 2022 05.
Article in English | MEDLINE | ID: mdl-34935280

ABSTRACT

New technologies for novel biomarkers have transformed the field of precision medicine. However, in applications such as liquid biopsy for early tumor detection, the misclassification rates of next generation sequencing and other technologies have become an unavoidable feature of biomarker development. Because initial experiments are usually confined to specific technology choices and application settings, a statistical method that can project the performance metrics of other scenarios with different misclassification rates would be very helpful for planning further biomarker development and future trials. In this article, we describe an approach based on an extended version of simulation extrapolation (SIMEX) to project the performance of biomarkers measured with varying misclassification rates due to different technological or application settings when experimental results are only available from one specific setting. Through simulation studies for logistic regression and proportional hazards models, we show that our proposed method can be used to project the biomarker performance with good precision when switching from one technology or application setting to another. Similar to the original SIMEX model, the proposed method can be implemented with existing software in a straightforward manner. A data analysis example is also presented using a lung cancer data set and performance metrics for two gene panel based biomarkers. Results demonstrate that it is feasible to infer the potential implications of using a range of technologies or application scenarios for biomarkers with limited human trial data.
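
The core SIMEX idea for a misclassified binary marker can be illustrated with a generic toy sketch: add extra misclassification at increasing multiples of an assumed error rate, refit a naive model at each level, and extrapolate the trend back to the error-free point. This is a simplified MC-SIMEX-style illustration, not the extended method of the article; the error rate, effect size, and replicate count are assumptions.

```python
# Toy MC-SIMEX-style sketch for one misclassified binary covariate; it is not
# the extended method of the article. Error rate and effect size are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, pi = 5000, 0.10                         # assumed misclassification probability
x_true = rng.binomial(1, 0.4, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x_true))))   # true log-OR = 1.0
x_obs = np.where(rng.random(n) < pi, 1 - x_true, x_true)        # marker measured with error

def naive_logor(x, y):
    """Log-odds ratio ignoring misclassification (essentially unpenalized fit)."""
    return LogisticRegression(C=1e6, max_iter=1000).fit(x.reshape(-1, 1), y).coef_[0, 0]

# Step 1: re-fit after adding extra misclassification at multiples lambda of pi.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lambdas:
    target = (1 + lam) * pi                # desired total misclassification rate
    q = (target - pi) / (1 - 2 * pi)       # extra flip prob so flips compose to the target
    reps = [naive_logor(np.where(rng.random(n) < q, 1 - x_obs, x_obs), y)
            for _ in range(20)]
    est.append(np.mean(reps))

# Step 2: extrapolate the trend back to lambda = -1, i.e., zero misclassification.
coef = np.polyfit(lambdas, est, deg=2)
print("naive:", round(est[0], 3), " SIMEX-corrected:", round(np.polyval(coef, -1.0), 3))
```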


Subject(s)
Precision Medicine, Research Design, Biomarkers, Computer Simulation, Humans, Proportional Hazards Models
3.
J Biopharm Stat ; 30(2): 294-304, 2020 03.
Article in English | MEDLINE | ID: mdl-31304864

ABSTRACT

The traditional rule-based design, 3 + 3, has been shown to be less likely to achieve the objectives of dose-finding trials when compared with model-based designs. We propose a new rule-based design called i3 + 3, which is based on simple but more advanced rules that account for the variabilities in the observed data. We compare the operating characteristics of the proposed i3 + 3 design with other popular phase I designs by simulation. The i3 + 3 design is far superior to the 3 + 3 design in trial safety and the ability to identify the true MTD. Compared with model-based phase I designs, i3 + 3 also demonstrates comparable performance.
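
For orientation, the i3 + 3 dose-decision rule as commonly described can be written compactly; the target rate and equivalence-interval bounds below are assumed defaults, and additional safety rules (such as dose exclusion) are omitted, so the article remains the definitive reference.

```python
# Sketch of the i3 + 3 dose-decision rule; target rate and interval bounds are
# assumed defaults, and safety additions such as dose exclusion are omitted.
def i3plus3_decision(n_dlt, n_treated, p_target=0.30, eps1=0.05, eps2=0.05):
    lo, hi = p_target - eps1, p_target + eps2      # equivalence interval around the target
    rate = n_dlt / n_treated
    if rate < lo:
        return "escalate"
    if rate <= hi:
        return "stay"
    # Observed rate above the interval: stay if removing one DLT would put the
    # rate below the interval, otherwise de-escalate.
    return "stay" if (n_dlt - 1) / n_treated < lo else "de-escalate"

print(i3plus3_decision(1, 3), i3plus3_decision(2, 3), i3plus3_decision(1, 6))
# -> stay de-escalate escalate
```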


Subject(s)
Clinical Trials, Phase I as Topic/methods, Clinical Trials, Phase I as Topic/statistics & numerical data, Computer Simulation/statistics & numerical data, Drug-Related Side Effects and Adverse Reactions, Research Design/statistics & numerical data, Algorithms, Cohort Studies, Dose-Response Relationship, Drug, Humans, Pharmaceutical Preparations/administration & dosage
4.
J Biopharm Stat ; 29(4): 722-727, 2019.
Article in English | MEDLINE | ID: mdl-31258011

ABSTRACT

While 2-in-1 designs give the flexibility to make a clinical trial either an information-generating Phase 2 trial or a full-scale confirmatory Phase 3 trial, flexible sample size designs can naturally fit into the 2-in-1 design framework. This study shows that the CHW design can be blended into a 2-in-1 design to improve the adaptive performance of the design. Commenting on the usual 2-in-1 design, we demonstrate that the CHW design can achieve the goal of a 2-in-1 design with satisfactory statistical power and an efficient average sample size over a targeted range of the treatment effect.
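
A minimal sketch of the CHW (Cui-Hung-Wang) weighted combination statistic, the ingredient being blended into the 2-in-1 framework, is given below; the stage sizes, interim adaptation rule, and effect size are illustrative assumptions and do not reproduce the design evaluated in the article.

```python
# Sketch of the CHW weighted combination test: stage-wise z-statistics combined
# with pre-specified weights so type I error is preserved after an interim
# sample-size change. Stage sizes, adaptation rule, and effect are assumed.
import numpy as np
from scipy.stats import norm

def chw_statistic(z1, z2, w1):
    """Combine stage-wise z-statistics with fixed weights w1 and 1 - w1."""
    return np.sqrt(w1) * z1 + np.sqrt(1 - w1) * z2

def stage_z(effect, sd, n_per_arm, rng):
    """Two-sample z-statistic for a normal endpoint with known sd."""
    trt = rng.normal(effect, sd, n_per_arm)
    ctl = rng.normal(0.0, sd, n_per_arm)
    return (trt.mean() - ctl.mean()) / (sd * np.sqrt(2 / n_per_arm))

rng = np.random.default_rng(2)
w1 = 100 / 250                       # planned stage-1 information fraction (assumed)
z1 = stage_z(effect=0.25, sd=1.0, n_per_arm=100, rng=rng)
n2 = 300 if z1 < 1.0 else 150        # interim sample-size adaptation (assumed rule)
z2 = stage_z(effect=0.25, sd=1.0, n_per_arm=n2, rng=rng)
z = chw_statistic(z1, z2, w1)
print(f"CHW z = {z:.2f}, one-sided p = {1 - norm.cdf(z):.4f}")
```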


Subject(s)
Research Design, Sample Size
5.
J Biopharm Stat ; 26(1): 37-43, 2016.
Article in English | MEDLINE | ID: mdl-26366624

ABSTRACT

There are several challenging statistical problems identified in the regulatory review of large cardiovascular (CV) clinical outcome trials and central nervous system (CNS) trials. The problems can be common or distinct due to disease characteristics and the differences in trial design elements such as endpoints, trial duration, and trial size. In schizophrenia trials, extensive missing data is a major problem. In Alzheimer trials, the endpoints for assessing symptoms and the endpoints for assessing disease progression are essentially the same; it is difficult to construct a good trial design to evaluate a test drug for its ability to slow disease progression. In CV trials, reliance on a composite endpoint with a low event rate makes the trial size so large that it is infeasible to study the multiple doses necessary to find the right dose for study patients. These are just a few typical problems. In the past decade, adaptive designs were increasingly used in these disease areas, and some challenges arise with respect to that use. Based on our review experiences, group sequential designs (GSDs) have produced many success stories in CV trials and are also increasingly used for developing treatments targeting CNS diseases. There is also a growing trend of using more advanced unblinded adaptive designs for producing efficacy evidence. Many statistical challenges with these kinds of adaptive designs have been identified through our experiences with the review of regulatory applications and are shared in this article.


Subject(s)
Cardiovascular Agents/therapeutic use, Cardiovascular Diseases/drug therapy, Central Nervous System Agents/therapeutic use, Central Nervous System Diseases/drug therapy, Cardiovascular Agents/adverse effects, Cardiovascular Agents/pharmacology, Central Nervous System Agents/adverse effects, Central Nervous System Agents/pharmacology, Clinical Trials as Topic, Humans, Research Design, Treatment Outcome
6.
Biom J ; 58(1): 133-53, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26485117

ABSTRACT

Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the "truth" since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naïve approach. Due to its simplicity, the naïve approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes).
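
As a generic illustration of the shrinkage idea (not the Bayes-DS or Emp-Bayes estimators themselves; the former is implemented in the DSBayes R package), the sketch below shrinks hypothetical subgroup estimates toward their precision-weighted mean under a normal-normal model with a moment estimate of the between-subgroup variance.

```python
# Generic empirical-Bayes shrinkage of subgroup effects under a normal-normal
# model; illustrative only, not the Bayes-DS / Emp-Bayes estimators of the
# article (Bayes-DS is available in the DSBayes R package).
import numpy as np

def eb_shrink(estimates, std_errors):
    """Shrink subgroup estimates toward their precision-weighted mean."""
    est, se = np.asarray(estimates, float), np.asarray(std_errors, float)
    w = 1 / se**2
    grand = np.sum(w * est) / np.sum(w)              # overall (precision-weighted) effect
    # Moment (DerSimonian-Laird-style) estimate of between-subgroup variance tau^2
    q = np.sum(w * (est - grand) ** 2)
    tau2 = max(0.0, (q - (len(est) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    shrink = tau2 / (tau2 + se**2)                   # 0 = shrink fully, 1 = no shrinkage
    return grand + shrink * (est - grand)

# Hypothetical subgroup treatment effects on the log hazard-ratio scale
subgroup_loghr = [-0.45, -0.10, -0.30, 0.05]
subgroup_se = [0.20, 0.25, 0.15, 0.30]
print(np.round(eb_shrink(subgroup_loghr, subgroup_se), 3))
```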


Subject(s)
Biometry/methods, Clinical Trials as Topic/standards, Bayes Theorem, Female, Humans, Likelihood Functions, Male, Multivariate Analysis, Reference Standards, Treatment Outcome
7.
Stat Med ; 34(26): 3461-80, 2015 Nov 20.
Article in English | MEDLINE | ID: mdl-26112381

ABSTRACT

An invited panel session was conducted at the 2012 Joint Statistical Meetings, San Diego, California, USA, to stimulate discussion on multiplicity issues in confirmatory clinical trials for drug development. A total of 11 expert panel members were invited and 9 participated. Prior to the session, a case study was provided to the panel members to facilitate the discussion, focusing on the key components of the study design and multiplicity. The Phase 3 development program for this new experimental treatment was based on a single randomized controlled trial alone. Each panelist was asked to clarify whether he or she responded as a pharmaceutical drug sponsor, an academic panelist, or a health regulatory scientist.


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data, Data Interpretation, Statistical, Drug Discovery/statistics & numerical data, Endpoint Determination/methods, Research Design/statistics & numerical data, Respiratory Distress Syndrome, Newborn/drug therapy, Congresses as Topic, Humans, Infant, Newborn, Treatment Outcome
8.
J Biopharm Stat ; 24(1): 154-67, 2014.
Article in English | MEDLINE | ID: mdl-24392983

ABSTRACT

Randomized controlled trials (RCTs) emphasize the average or overall effect of a treatment (ATE) on the primary endpoint. Even though the ATE provides the best summary of treatment efficacy, it is of critical importance to know whether the treatment is similarly efficacious in important, predefined subgroups. This is why RCTs, in addition to the ATE, also present the results of subgroup analyses for preestablished subgroups. Typically, these are marginal subgroup analyses in the sense that treatment effects are estimated in mutually exclusive subgroups defined by only one baseline characteristic at a time (e.g., men versus women, young versus old). The forest plot is a popular graphical approach for displaying the results of subgroup analysis. These plots were originally used in meta-analysis for displaying the treatment effects from independent studies. Treatment effect estimates of different marginal subgroups are, however, not independent. Correlation between the subgrouping variables should be addressed for proper interpretation of forest plots, especially in large effectiveness trials where one of the goals is to address concerns about the generalizability of findings to various populations. Failure to account for the correlation between the subgrouping variables can result in misleading (confounded) interpretations of subgroup effects. Here we present an approach called standardization, a commonly used technique in epidemiology, that allows for valid comparison of subgroup effects depicted in a forest plot. We present simulation results and a subgroup analysis from parallel-group, placebo-controlled randomized trials of antibiotics for acute otitis media.
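
The confounding problem and the standardization fix can be illustrated with a small synthetic example: two correlated subgrouping variables where the treatment benefit truly depends on only one of them. The data, variable names, and effects below are hypothetical, and the estimator is a generic direct-standardization average, not the authors' exact implementation.

```python
# Synthetic illustration of confounded marginal subgroup effects and a generic
# direct-standardization fix; data, variable names, and effects are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({"old": rng.binomial(1, 0.5, n)})
df["diabetic"] = rng.binomial(1, 0.3 + 0.3 * df["old"])     # correlated with age (assumed)
df["trt"] = rng.binomial(1, 0.5, n)
# Benefit truly depends on diabetes, not age (assumed), so naive age subgroups look different
df["y"] = rng.normal(0.5 * df["trt"] * (1 - 0.6 * df["diabetic"]), 1.0)

def naive_effect(d):
    """Unadjusted treatment-control mean difference."""
    return d.loc[d.trt == 1, "y"].mean() - d.loc[d.trt == 0, "y"].mean()

def standardized_effect(d, subgroup_mask, strata_col="diabetic"):
    """Average cell-specific effects over the whole-trial distribution of the other covariate."""
    weights = d[strata_col].value_counts(normalize=True)
    sub = d[subgroup_mask]
    return sum(weights[s] * naive_effect(sub[sub[strata_col] == s]) for s in weights.index)

for label, mask in [("old", df.old == 1), ("young", df.old == 0)]:
    print(f"{label}: naive {naive_effect(df[mask]):.3f}, "
          f"standardized {standardized_effect(df, mask):.3f}")
```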


Subject(s)
Randomized Controlled Trials as Topic/standards, Age Factors, Algorithms, Anti-Bacterial Agents/therapeutic use, Bias, Child, Child, Preschool, Computer Simulation, Data Interpretation, Statistical, Humans, Infant, Models, Statistical, Otitis Media/drug therapy, Randomized Controlled Trials as Topic/statistics & numerical data, Research Design, Treatment Outcome
9.
J Biopharm Stat ; 24(5): 1059-72, 2014.
Article in English | MEDLINE | ID: mdl-24915027

ABSTRACT

Adaptive designs have attracted a great deal of attention in the clinical trial community. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of the sample size or related statistical information and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly, the increased statistical uncertainty may be impossible to assess.


Subject(s)
Clinical Trials as Topic/statistics & numerical data, Models, Statistical, Research Design, Clinical Trials as Topic/methods, Data Interpretation, Statistical, Humans, Observer Variation, Sample Size, Treatment Outcome
10.
J Biopharm Stat ; 24(1): 19-41, 2014.
Article in English | MEDLINE | ID: mdl-24392976

ABSTRACT

This regulatory research provides possible approaches for improving conventional subgroup analysis in a fixed design setting. The interaction-to-overall effects ratio is recommended in the planning stage for potential predictors whose prevalence is at most 50%; its observed counterpart is recommended in the analysis stage for proper subgroup interpretation when the sample size is planned only to target the overall effect size. We illustrate using regulatory examples and underscore the importance of striving for balance between safety and efficacy when considering a regulatory recommendation of a label restricted to a subgroup. A set of decision rules gives guidance for rigorous subgroup-specific conclusions.
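
One plausible way to compute such a ratio for a binary predictor is sketched below; the article's exact definition and decision thresholds may differ, and the numbers are hypothetical.

```python
# One plausible computation of an interaction-to-overall effects ratio for a
# binary predictor; numbers are hypothetical and the article's exact definition
# and decision thresholds may differ.
def interaction_to_overall_ratio(effect_subgroup, effect_complement, prevalence):
    """Ratio of the subgroup-by-treatment interaction to the overall treatment effect."""
    overall = prevalence * effect_subgroup + (1 - prevalence) * effect_complement
    interaction = effect_subgroup - effect_complement
    return interaction / overall

# Marker-positive patients (30% prevalence) assumed to benefit twice as much
print(round(interaction_to_overall_ratio(2.0, 1.0, prevalence=0.30), 2))   # -> 0.77
```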


Subject(s)
Research Design/legislation & jurisprudence, Biomarkers, Data Interpretation, Statistical, Forecasting, Humans, Patient Safety, Prevalence, Sample Size
11.
J Pharmacokinet Pharmacodyn ; 41(6): 545-52, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25288257

ABSTRACT

Medical-product development has become increasingly challenging and resource-intensive. In 2004, the Food and Drug Administration (FDA) described critical challenges facing medical-product development by establishing the Critical Path Initiative [1]. Priorities identified included the need for improved modeling and simulation tools, further emphasized in FDA's 2011 Strategic Plan for Regulatory Science [Appendix]. In an effort to support and advance model-informed medical-product development (MIMPD), the Critical Path Institute (C-Path) [www.c-path.org], FDA, and the International Society of Pharmacometrics [www.go-isop.org] co-sponsored a workshop in Washington, D.C. on September 26, 2013, to examine integrated approaches to developing and applying MIMPD. The workshop brought together an international group of scientists from industry, academia, FDA, and the European Medicines Agency to discuss MIMPD strategies and their applications. A commentary on the proceedings of that workshop is presented here.


Subject(s)
Drug Discovery/methods, Pharmaceutical Preparations/chemistry, Computer Simulation, Decision Making, Humans, Models, Biological, Models, Theoretical, United States, United States Food and Drug Administration
12.
Radiat Res ; 201(6): 628-646, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38616048

ABSTRACT

There have been a number of reported human exposures to high dose radiation, resulting from accidents at nuclear power plants (e.g., Chernobyl), atomic bombings (Hiroshima and Nagasaki), and mishaps in industrial and medical settings. If absorbed radiation doses are high enough, evolution of acute radiation syndromes (ARS) will likely impact both the bone marrow as well as the gastrointestinal (GI) tract. Damage incurred in the latter can lead to nutrient malabsorption, dehydration, electrolyte imbalance, altered microbiome and metabolites, and impaired barrier function, which can lead to septicemia and death. To prepare for a medical response should such an incident arise, the National Institute of Allergy and Infectious Diseases (NIAID) funds basic and translational research to address radiation-induced GI-ARS, which remains a critical and prioritized unmet need. Areas of interest include identification of targets for damage and mitigation, animal model development, and testing of medical countermeasures (MCMs) to address GI complications resulting from radiation exposure. To appropriately model expected human responses, it is helpful to study analogous disease states in the clinic that resemble GI-ARS, to inform on best practices for diagnosis and treatment, and translate them back to inform nonclinical drug efficacy models. For these reasons, the NIAID partnered with two other U.S. government agencies (the Biomedical Advanced Research and Development Authority, and the Food and Drug Administration), to explore models, biomarkers, and diagnostics to improve understanding of the complexities of GI-ARS and investigate promising treatment approaches. A two-day workshop was convened in August 2022 that comprised presentations from academia, industry, healthcare, and government, and highlighted talks from 26 subject matter experts across five scientific sessions. This report provides an overview of information that was presented during the conference, and important discussions surrounding a broad range of topics that are critical for the research, development, licensure, and use of MCMs for GI-ARS.


Subject(s)
Acute Radiation Syndrome, Biomarkers, Medical Countermeasures, Acute Radiation Syndrome/etiology, Humans, Animals, Gastrointestinal Tract/radiation effects, Gastrointestinal Diseases/etiology
13.
J Nucl Med ; 65(5): 670-678, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38514082

ABSTRACT

Since the development of amyloid tracers for PET imaging, there has been interest in quantifying amyloid burden in the brains of patients with Alzheimer disease. Quantitative amyloid PET imaging is poised to become a valuable approach in disease staging, theranostics, monitoring, and as an outcome measure for interventional studies. Yet, there are significant challenges and hurdles to overcome before it can be implemented into widespread clinical practice. On November 17, 2022, the U.S. Food and Drug Administration, Society of Nuclear Medicine and Molecular Imaging, and Medical Imaging and Technology Alliance cosponsored a public workshop comprising experts from academia, industry, and government agencies to discuss the role of quantitative brain amyloid PET imaging in staging, prognosis, and longitudinal assessment of Alzheimer disease. The workshop discussed a range of topics, including available radiopharmaceuticals for amyloid imaging; the methodology, metrics, and analytic validity of quantitative amyloid PET imaging; its use in disease staging, prognosis, and monitoring of progression; and challenges facing the field. This report provides a high-level summary of the presentations and the discussion.


Subject(s)
Amyloid, Brain, Positron-Emission Tomography, Humans, Positron-Emission Tomography/methods, Brain/diagnostic imaging, Brain/metabolism, Amyloid/metabolism, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/metabolism
14.
Biom J ; 55(3): 420-9, 2013 May.
Article in English | MEDLINE | ID: mdl-23620458

ABSTRACT

Multiple comparisons have drawn a great deal of attention in the evaluation of statistical evidence in clinical trials for regulatory applications. As clinical trial methodology becomes increasingly complex in order to properly take many practical factors into consideration, the multiple testing paradigm widely employed for regulatory applications may not suffice to interpret the results of an individual trial or of multiple trials. In a large outcome trial, an increasing need to study more than one dose complicates the proper application of multiple comparison procedures. Additional challenges surface when a special endpoint, such as mortality, may need to be tested with multiple clinical trials combined, especially under group sequential designs. Another interesting question is how to study mortality or morbidity endpoints together with symptomatic endpoints in an efficient way, where the former type of endpoint is often studied in only one single trial but the latter type is usually studied in at least two independent trials. This article is devoted to discussing the insufficiency of the widely used paradigm of applying only per-trial multiple comparison procedures and to expanding the utility of these procedures to such complex trial designs. A number of viable expanded strategies are stipulated.


Subject(s)
Clinical Trials as Topic/methods, Data Interpretation, Statistical, Clinical Trials as Topic/legislation & jurisprudence, Dose-Response Relationship, Drug, Endpoint Determination/methods, Humans, Research Design
15.
Biom J ; 55(3): 275-93, 2013 May.
Article in English | MEDLINE | ID: mdl-23553537

ABSTRACT

Motivated by a complex study design aiming at a definitive evidential setting, a panel forum among academia, industry, and US regulatory statistical scientists was held at the 7th International Conference on Multiple Comparison Procedures (MCP) to comment on the multiplicity problem. It is well accepted that studywise or familywise type I error rate control is the norm for confirmatory trials. But the criteria beyond a single confirmatory trial remain uncharted territory. The case example describes a Phase III program consisting of two placebo-controlled multiregional clinical trials identical in design intended to support registration for treatment of a chronic condition in the lung. The case presents a sophisticated multiplicity problem on several levels: four primary endpoints, two doses, two studies, two regions with different regulatory requirements, and one major protocol amendment to the original statistical analysis plan, which the panelists had a chance to study before the forum took place. There were differences in professional perspectives among the panelists, laid out by sections. Nonetheless, irrespective of the amendment, it may be arguable whether the two studies are poolable for the analysis of the two prespecified primary endpoints. How should the study finding be reported in a scientific journal if one health authority approves while the other does not? It is tempting to address Phase III program-level multiplicity, motivated by the increasing complexity of the partial hypotheses posed across studies. New thinking on MCP procedures beyond the individual-study level (studywise or familywise, as predefined) and across the multiple-study level (experimentwise and sometimes programwise) will become an important research problem that is expected to face scientific and regulatory challenges.


Subject(s)
Clinical Trials, Phase III as Topic/methods, Data Interpretation, Statistical, Multicenter Studies as Topic/methods, Randomized Controlled Trials as Topic/methods, Humans, Research Design
16.
Biomark Med ; 17(11): 523-531, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37713233

ABSTRACT

The US FDA convened a virtual public workshop with the goals of obtaining feedback on the terminology needed for effective communication of multicomponent biomarkers, discussing the diverse uses of biomarkers observed across the FDA, and identifying common issues. The workshop included keynote and background presentations addressing the stated goals, followed by a series of case studies highlighting FDA-wide and external experience regarding the use of multicomponent biomarkers, which provided context for panel discussions focused on common themes, challenges, and preferred terminology. The final panel discussion integrated the main concepts from the keynote, background presentations, and case studies, laying a preliminary foundation to build consensus around the use and terminology of multicomponent biomarkers.

17.
Stat Med ; 31(25): 3011-23, 2012 Nov 10.
Article in English | MEDLINE | ID: mdl-22927234

ABSTRACT

In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in two-stage studies, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information design. We highlight the substantial risk of planning the sample size for confirmatory trials when the available information is very uninformative and stipulate the advantages of adaptive statistical information designs for planning exploratory trials. Practical experiences and strategies, as lessons learned from more recent adaptive design proposals, are discussed to pinpoint the improved utility of adaptive design clinical trials and their potential to increase the chance of successful drug development.


Subject(s)
Controlled Clinical Trials as Topic/statistics & numerical data, Drugs, Investigational, Models, Statistical, Research Design, Clinical Trials, Phase III as Topic/statistics & numerical data, Sample Size
18.
J Biopharm Stat ; 22(5): 879-93, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22946937

ABSTRACT

For decades, clinical trials have been the primary mechanism for medical products to enter the marketplace. Over more than a decade, globalization of medical product development via a multiregional clinical trial (MRCT) approach has generated greater enthusiasm because of tangible benefits in terms of cost and time for drug development. There are, however, many challenges, including but not limited to design issues, statistical analysis methods, interpretation of extreme region performance, and in-process quality assurance issues. This article presents a number of examples of the regional variability expected versus the precision of treatment effect estimates, which are generally affected by the type of primary efficacy endpoint evaluated. We explore region-driven intrinsic and extrinsic ethnic factors as potential explanations of regional heterogeneity caused by differences in medical practice and/or disease etiology. Bayesian credible intervals may be considered a viable approach to assess the robustness of region-specific treatment effects. Ethnic-sensitive or molecular-sensitive region-driven designs may be explored to prospectively address the potential regional heterogeneity versus the potential predictiveness of causal genetic variants or molecular target biomarkers on treatment effect.


Subject(s)
Ethnicity, Multicenter Studies as Topic/methods, Pharmacogenetics, Algorithms, Bayes Theorem, Biomarkers, Clinical Trials as Topic, Data Interpretation, Statistical, Drug Therapy/methods, Endpoint Determination, Genetic Variation, Humans, Reproducibility of Results, Research Design
19.
J Biopharm Stat ; 22(4): 679-86, 2012.
Article in English | MEDLINE | ID: mdl-22651108

ABSTRACT

Statistical testing in clinical trials can be complex when the statistical distribution of the test statistic involves a nuisance parameter. Some types of nuisance parameters, such as the standard deviation of a continuous response variable, can be handled without too much difficulty. Other types of nuisance parameters, specifically those associated with the main parameter under testing, can be difficult to handle. Without knowledge of the possible value of such a nuisance parameter, the maximum type I error associated with testing the main parameter may occur at an extreme value of the nuisance parameter. A well-known example is the intersection-union test for comparing a combination drug with its two component drugs, where the nuisance parameter is the mean difference between the two components. Knowledge of the possible range of values of this mean difference may help enhance the clinical trial design. For instance, if the interim internal data suggest that this mean difference falls into a particular range, then the sample size may be reallocated after the interim look to possibly improve the efficiency of statistical testing. This research sheds some light on the possible power advantage from such a sample size reallocation at the interim look.
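
The intersection-union ("min") test mentioned above can be sketched generically: the combination is declared effective only if it beats each component at the nominal level. The effect sizes and sample sizes below are assumptions, and the interim sample-size reallocation studied in the article is not implemented.

```python
# Generic sketch of the intersection-union ("min") test for a fixed-dose
# combination; effect sizes and sample sizes are assumed, and the interim
# sample-size reallocation discussed in the article is not implemented.
import numpy as np
from scipy.stats import norm

def min_test_z(y_combo, y_a, y_b):
    """Smaller of the two pairwise z-statistics (combination vs each component)."""
    def z(x, ref):
        se = np.sqrt(x.var(ddof=1) / len(x) + ref.var(ddof=1) / len(ref))
        return (x.mean() - ref.mean()) / se
    return min(z(y_combo, y_a), z(y_combo, y_b))

rng = np.random.default_rng(4)
n = 200
y_a = rng.normal(1.0, 2.0, n)        # component A response (assumed)
y_b = rng.normal(1.4, 2.0, n)        # component B response; the A-B gap is the nuisance
y_combo = rng.normal(2.0, 2.0, n)    # combination response (assumed)
z_min = min_test_z(y_combo, y_a, y_b)
print(f"min z = {z_min:.2f}, reject at one-sided 2.5%: {z_min > norm.ppf(0.975)}")
```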


Subject(s)
Clinical Trials as Topic/statistics & numerical data, Drug Therapy, Combination/statistics & numerical data, Factor Analysis, Statistical, Algorithms, Biometry, Clinical Trials as Topic/methods, Humans, Models, Statistical, Research Design/statistics & numerical data, Sample Size
20.
JCO Precis Oncol ; 6: e2200046, 2022 08.
Article in English | MEDLINE | ID: mdl-36001859

ABSTRACT

PURPOSE: Through Bayesian inference, we propose a method called BayeSize as a reference tool for investigators to assess the sample size and its associated scientific properties for phase I clinical trials. METHODS: BayeSize applies the concept of effect size in dose finding, assuming that the maximum tolerated dose can be identified on the basis of an interval surrounding its true value because of statistical uncertainty. Leveraging a decision framework that involves composite hypotheses, BayeSize uses two types of priors, the fitting prior (for model fitting) and the sampling prior (for data generation), to conduct sample size calculation under constraints on statistical power and type I error. RESULTS: Simulation results showed that BayeSize can provide reliable sample size estimation under constraints on type I/II error rates. CONCLUSION: BayeSize could facilitate phase I trial planning by providing appropriate sample size estimation. Look-up tables and an R Shiny app are provided for practical applications.
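
A generic, simulation-based sketch in the same spirit (not BayeSize itself) is shown below: for candidate sample sizes, a Beta-Binomial posterior is used to check how often a dose at the target toxicity rate would be declared acceptable versus a dose that is too toxic. Priors, interval width, and the declaration threshold are assumptions.

```python
# Generic simulation-based sketch (not BayeSize itself): for candidate phase I
# sample sizes, check how often a Beta-Binomial posterior would declare a dose
# acceptable when it is truly at the target rate versus truly too toxic.
# Priors, interval width, and the declaration threshold are assumptions.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
p_target, half_width = 0.30, 0.10        # MTD interval (0.20, 0.40), assumed
a0, b0 = 0.5, 0.5                        # fitting prior (assumed Jeffreys)

def prob_declare_acceptable(n, true_rate, n_sim=4000, threshold=0.60):
    """P(posterior mass in the MTD interval exceeds the threshold)."""
    dlts = rng.binomial(n, true_rate, n_sim)                 # data generated at true_rate
    post_in = (beta.cdf(p_target + half_width, a0 + dlts, b0 + n - dlts)
               - beta.cdf(p_target - half_width, a0 + dlts, b0 + n - dlts))
    return np.mean(post_in > threshold)

for n in (9, 15, 21, 30):
    at_target = prob_declare_acceptable(n, true_rate=0.30)   # power-like property
    too_toxic = prob_declare_acceptable(n, true_rate=0.50)   # type I error-like property
    print(f"n={n:2d}  declare@target={at_target:.2f}  declare@toxic={too_toxic:.2f}")
```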


Subject(s)
Clinical Trials, Phase I as Topic, Research Design, Bayes Theorem, Humans, Maximum Tolerated Dose, Sample Size