Results 1 - 14 of 14
1.
Stat Med ; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054669

ABSTRACT

In this paper, we review recent advances in statistical methods for the evaluation of the heterogeneity of treatment effects (HTE), including subgroup identification and estimation of individualized treatment regimens, from randomized clinical trials and observational studies. We identify several types of approaches using the features introduced in Lipkovich et al. (Stat Med 2017;36:136-196) that distinguish the recommended principled methods from basic methods for HTE evaluation that typically rely on rules of thumb and general guidelines (methods often referred to as common practices). We discuss the advantages and disadvantages of various principled methods, as well as common measures for evaluating their performance. We use simulated data and a case study based on a historical clinical trial to illustrate several new approaches to HTE evaluation.

2.
Clin Trials ; 20(4): 380-393, 2023 08.
Article in English | MEDLINE | ID: mdl-37203150

ABSTRACT

There has been much interest in the evaluation of heterogeneous treatment effects (HTE), and over the past 10-15 years multiple statistical methods combining ideas from hypothesis testing, causal inference, and machine learning have emerged under the heading of personalized/precision medicine. We discuss new ideas and approaches for evaluating HTE in randomized clinical trials and observational studies using the features introduced earlier by Lipkovich, Dmitrienko, and D'Agostino that distinguish principled methods from simplistic approaches to data-driven subgroup identification and estimation of individual treatment effects, and we use a case study to illustrate these approaches. We identified and provided a high-level overview of several classes of modern statistical approaches for personalized/precision medicine, elucidated the underlying principles and challenges, and compared findings for a case study across different methods. Different approaches to evaluating HTE may produce (and actually produced) highly disparate results when applied to a specific data set. Evaluating HTE with machine learning methods presents special challenges, since most machine learning algorithms are optimized for prediction rather than for estimating causal effects. An additional challenge is that the output of machine learning methods is typically a "black box" that must be transformed into interpretable personalized solutions in order to gain acceptance and usability.


Subject(s)
Precision Medicine , Research Design , Humans , Causality , Machine Learning , Algorithms
3.
Stat Med ; 41(19): 3837-3877, 2022 Aug 30.
Article in English | MEDLINE | ID: mdl-35851717

ABSTRACT

The ICH E9(R1) addendum (2019) proposed principal stratification (PS) as one of five strategies for dealing with intercurrent events. Therefore, understanding the strengths, limitations, and assumptions of PS is important for the broad community of clinical trialists. Many approaches have been developed under the general framework of PS in different areas of research, including experimental and observational studies. These diverse applications have utilized a variety of tools and assumptions, so a need exists to present them in a unifying manner. The goal of this tutorial is threefold. First, we provide a coherent and unifying description of PS. Second, we emphasize that estimation of effects within PS relies on strong assumptions, and we thoroughly examine the consequences of these assumptions in order to understand the situations in which they are reasonable. Finally, we provide an overview of a variety of key methods for PS analysis and use a real clinical trial example to illustrate them. Examples of code for implementing some of these approaches are given in the Supplemental Materials.

4.
J Med Internet Res ; 24(3): e27934, 2022 03 01.
Article in English | MEDLINE | ID: mdl-35230244

ABSTRACT

BACKGROUND: Monitoring eating is central to the care of many conditions such as diabetes, eating disorders, heart disease, and dementia. However, automatic tracking of eating in a free-living environment remains a challenge because of the lack of a mature system and of large-scale, reliable training sets. OBJECTIVE: This study aims to fill this gap through an integrative engineering and machine learning effort, conducting a study of wearable-based eating detection that is large-scale in terms of monitoring hours. METHODS: This prospective, longitudinal study of passively collected data, covering 3828 hours of records, was made possible by programming a digital system that streams diary, accelerometer, and gyroscope data from Apple Watches to iPhones and then transfers the data to the cloud. RESULTS: On the basis of this data collection, we developed deep learning models leveraging spatial and time augmentation and inferring eating at an area under the curve (AUC) of 0.825 within 5 minutes in the general population. In addition, the longitudinal follow-up of the study design encouraged us to develop personalized models that detect eating behavior at an AUC of 0.872. When aggregated to individual meals, the AUC is 0.951. We then prospectively collected an independent validation cohort in a different season of the year and validated the robustness of the models (0.941 for meal-level aggregation). CONCLUSIONS: The accuracy of this model and the data streaming platform promise immediate deployment for monitoring eating in applications such as diabetic integrative care.
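The jump from window-level to meal-level AUC reported above can be understood through the Mann-Whitney formulation of AUC: the probability that a randomly chosen positive example outscores a randomly chosen negative one. The sketch below uses simulated detector scores, not the study's data or models, and shows how averaging window scores over a meal typically raises the AUC by reducing noise:

```python
import random

random.seed(5)

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of AUC: probability that a positive example
    outscores a negative one (ties count half)."""
    wins = sum((p > q) + 0.5 * (p == q)
               for p in scores_pos for q in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical per-window "eating probability" scores from a detector,
# clipped to [0, 1]; eating windows score higher on average
eating = [min(1.0, max(0.0, random.gauss(0.7, 0.2))) for _ in range(300)]
non_eating = [min(1.0, max(0.0, random.gauss(0.4, 0.2))) for _ in range(300)]
window_auc = auc(eating, non_eating)

# Meal-level aggregation: average scores over blocks of 5 windows,
# which shrinks the noise and sharpens the separation
meals = [sum(eating[i:i + 5]) / 5 for i in range(0, 300, 5)]
non_meals = [sum(non_eating[i:i + 5]) / 5 for i in range(0, 300, 5)]
meal_auc = auc(meals, non_meals)
```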


Subject(s)
Machine Learning , Meals , Area Under Curve , Feeding Behavior , Humans , Prospective Studies
6.
J Biopharm Stat ; 28(1): 63-81, 2018.
Article in English | MEDLINE | ID: mdl-29173045

ABSTRACT

The general topic of subgroup identification has attracted much attention in the clinical trial literature due to its important role in the development of tailored therapies and personalized medicine. Subgroup search methods are commonly used in late-phase clinical trials to identify subsets of the trial population with certain desirable characteristics. Post hoc, exploratory subgroup analysis has been criticized for being extremely unreliable. Principled approaches to exploratory subgroup analysis based on recent advances in machine learning and data mining have been developed to address this criticism. These approaches emphasize fundamental statistical principles, including the importance of performing multiplicity adjustments to account for the selection bias inherent in subgroup search. This article provides a detailed review of multiplicity issues arising in exploratory subgroup analysis. Multiplicity corrections in the context of principled subgroup search will be illustrated using the family of SIDES (subgroup identification based on differential effect search) methods. A case study based on a Phase III oncology trial will be presented to discuss the details of subgroup search algorithms with resampling-based multiplicity adjustment procedures.
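The resampling-based multiplicity adjustment can be sketched generically: re-run the entire subgroup search on data with permuted treatment labels, and compare the observed best subgroup effect against the null distribution of the best effect found in each permuted data set. The toy search below over two hypothetical binary covariates illustrates the principle only; it is not the SIDES algorithm itself:

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical null trial (no true subgroup effect); two binary covariates
# define four candidate subgroups to search over
n = 200
cov = [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]
treat = [i % 2 == 0 for i in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

def best_subgroup_effect(labels):
    """Largest absolute treated-minus-control mean difference over the
    candidate subgroups, for a given assignment of treatment labels."""
    effects = []
    for k in (0, 1):
        for level in (True, False):
            idx = [i for i in range(n) if cov[i][k] == level]
            t = [y[i] for i in idx if labels[i]]
            c = [y[i] for i in idx if not labels[i]]
            effects.append(abs(mean(t) - mean(c)))
    return max(effects)

observed = best_subgroup_effect(treat)

# Null distribution: permute treatment labels and redo the whole search,
# so the adjustment accounts for the selection of the "best" subgroup
B = 500
exceed = sum(best_subgroup_effect(random.sample(treat, n)) >= observed
             for _ in range(B))
adjusted_p = (exceed + 1) / (B + 1)
```

Because the maximum is re-computed on every permuted data set, the adjusted p-value reflects the optimism of having searched over all candidate subgroups.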


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data , Endpoint Determination/methods , Patient Selection , Precision Medicine/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Algorithms , Bias , Biomarkers/analysis , Data Interpretation, Statistical , Guidelines as Topic , Humans
7.
Pharm Stat ; 17(6): 685-700, 2018 11.
Article in English | MEDLINE | ID: mdl-30051580

ABSTRACT

This article focuses on 2 objectives in the analysis of efficacy in long-term extension studies of chronic diseases: (1) defining and discussing estimands of interest in such studies and (2) evaluating the performance of several multiple imputation methods that may be useful in estimating some of these estimands. Specifically, 4 estimands are defined and their clinical utility and inferential ramifications discussed. The performance of several multiple imputation methods was evaluated using simulated data. Results suggested that when interest is in a binary outcome derived from an underlying continuous measurement, it is preferable to impute the underlying continuous value, which is subsequently dichotomized, rather than to impute the binary outcome directly. Results also demonstrated that multivariate Gaussian models with Markov chain Monte Carlo imputation and sequential regression have minimal bias and the anticipated confidence interval coverage, even in settings with ordinal data where departures from normality are a concern. These approaches are further illustrated using a long-term extension study in psoriasis.
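The mechanics of the recommendation (impute the underlying continuous value, then dichotomize, rather than imputing the binary outcome directly) can be sketched as follows. This toy example uses a single imputation under missingness completely at random, where both routes happen to be unbiased, so it illustrates only the mechanics, not the advantage the paper's simulations demonstrate in richer settings:

```python
import random
from statistics import mean, stdev

random.seed(2)

# Underlying continuous endpoint; "responder" means value >= 1.0
full = [random.gauss(0.8, 1.0) for _ in range(1000)]
cut = 1.0
true_rate = mean(v >= cut for v in full)

# Drop about 30% of values completely at random
observed = [v for v in full if random.random() >= 0.3]
n_missing = 1000 - len(observed)
mu, sd = mean(observed), stdev(observed)

# Route (a): impute the continuous value (a draw from the fitted normal),
# then dichotomize the completed data
cont_imputed = observed + [random.gauss(mu, sd) for _ in range(n_missing)]
rate_continuous = mean(v >= cut for v in cont_imputed)

# Route (b): impute the binary responder status directly, using the
# observed responder rate
obs_rate = mean(v >= cut for v in observed)
bin_imputed = ([v >= cut for v in observed]
               + [random.random() < obs_rate for _ in range(n_missing)])
rate_binary = mean(bin_imputed)
```

Route (a) keeps the continuous information available to the imputation model, which is what gives it the edge when missingness depends on observed covariates or prior visits.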


Subject(s)
Clinical Trials as Topic , Antibodies, Monoclonal, Humanized/therapeutic use , Computer Simulation , Data Interpretation, Statistical , Humans , Markov Chains , Monte Carlo Method , Psoriasis/drug therapy
8.
Pharm Stat ; 15(3): 216-29, 2016 05.
Article in English | MEDLINE | ID: mdl-26997353

ABSTRACT

Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on piecewise exponential imputation model of survival has some advantages over other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
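A tipping-point analysis of this kind can be sketched as follows: impute the post-withdrawal residual event time for dropouts from an exponential model whose hazard is inflated by a sensitivity parameter delta, and increase delta until the treatment benefit disappears. The hazards, withdrawal pattern, and crude event-rate effect measure below are hypothetical simplifications of the paper's multiple-imputation machinery:

```python
import random

random.seed(3)

# Exponential event times: control hazard 1.0, experimental hazard 0.7
n = 2000
ctrl = [random.expovariate(1.0) for _ in range(n)]
trt = [random.expovariate(0.7) for _ in range(n)]
n_withdraw = int(0.4 * n)  # first 40% of the experimental arm withdraw early

def effect(delta):
    """Difference in event-free proportions at t = 1 after imputing the
    withdrawn subjects' residual time with hazard inflated by delta."""
    surv_trt = 0
    for i, t in enumerate(trt):
        if i < n_withdraw:
            c = min(t, random.uniform(0, 1))          # withdrawal time
            t = c + random.expovariate(0.7 * delta)   # penalized imputation
        surv_trt += t > 1.0
    surv_ctrl = sum(t > 1.0 for t in ctrl)
    return (surv_trt - surv_ctrl) / n

# Scan delta upward until the favorable effect is nullified (or a cap is hit)
delta = 1.0
while effect(delta) > 0 and delta < 20:
    delta += 1.0
tipping_point = delta
```

Robustness is then judged by asking whether a hazard inflation as large as the tipping point is clinically plausible for subjects who withdrew.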


Subject(s)
Clinical Trials as Topic/methods , Endpoint Determination/methods , Models, Statistical , Computer Simulation , Data Interpretation, Statistical , Humans , Research Design , Survival Analysis , Time Factors
9.
Pharm Stat ; 12(6): 337-47, 2013.
Article in English | MEDLINE | ID: mdl-23292975

ABSTRACT

The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians over the past years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and carefully selecting primary analysis methods on the basis of assumptions regarding the missingness mechanism suitable for the study at hand, as well as the need to stress-test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be effectively used for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for missing data on the basis of pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data.
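A very small pattern-mixture sketch: dropouts form their own pattern and are imputed under a clinically specified assumption, here a delta shift relative to completers. The single-imputation, single-arm setup and all the numbers below are deliberate simplifications for illustration (the paper's strategies use multiple imputation and richer pattern models):

```python
import random
from statistics import mean, stdev

random.seed(6)

# Change-from-baseline outcomes for completers (negative = improvement)
completers = [random.gauss(-2.0, 1.0) for _ in range(150)]
n_dropouts = 50

# Clinical assumption for the dropout pattern: their unobserved outcomes
# are delta units worse than those of completers. Delta is chosen on
# clinical grounds, not estimated from the data.
delta = 1.0
mu, sd = mean(completers), stdev(completers)
imputed = [random.gauss(mu + delta, sd) for _ in range(n_dropouts)]

# Overall estimate mixes the two patterns by their observed proportions
overall = mean(completers + imputed)
```

The appeal of the framework is exactly this transparency: the statistical assumption about the dropouts is stated as a clinical quantity (the delta shift) that can be debated and varied.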


Subject(s)
Clinical Trials as Topic/methods , Models, Statistical , Research Design , Data Collection/methods , Data Interpretation, Statistical , Guidelines as Topic , Humans
10.
Digit Biomark ; 7(1): 74-91, 2023.
Article in English | MEDLINE | ID: mdl-37588480

ABSTRACT

Background: Assessment of reliability is one of the key components of the validation process designed to demonstrate that a novel clinical measure assessed by a digital health technology tool is fit-for-purpose in clinical research, care, and decision-making. Reliability assessment contributes to characterization of the signal-to-noise ratio and measurement error and is the first indicator of the potential usefulness of the proposed clinical measure. Summary: Methodologies for reliability analyses are scattered across the literature on validation of patient-reported outcomes (PROs), wet biomarkers, and other measures, yet are equally useful for digital clinical measures. We review a general modeling framework and the statistical metrics typically used for reliability assessments as part of clinical validation. We also present methods for the assessment of agreement and measurement error, alongside modified approaches for categorical measures. We illustrate the discussed techniques using physical activity data from a wearable device with an accelerometer sensor collected from clinical trial participants. Key Messages: This paper provides statisticians and data scientists involved in the development and validation of novel digital clinical measures with an overview of the statistical methodologies and analytical tools for reliability assessment.
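One standard reliability metric in this framework is the intraclass correlation coefficient. The sketch below computes ICC(2,1) in the Shrout-Fleiss taxonomy (two-way random effects, absolute agreement, single measurement) for simulated test-retest step counts; the data-generating numbers are hypothetical, not from the paper's wearable data:

```python
import random
from statistics import mean

random.seed(4)

# Test-retest data: n subjects, each measured on k days by the same device
n, k = 100, 2
subject_true = [random.gauss(5000, 1500) for _ in range(n)]  # stable "trait"
day_shift = [random.gauss(0, 100) for _ in range(k)]         # systematic day effect
x = [[subject_true[i] + day_shift[j] + random.gauss(0, 500)  # measurement noise
      for j in range(k)] for i in range(n)]

# Two-way ANOVA decomposition: subjects (rows), days (columns), residual
grand = mean(v for row in x for v in row)
row_m = [mean(row) for row in x]
col_m = [mean(x[i][j] for i in range(n)) for j in range(k)]
ms_rows = k * sum((m - grand) ** 2 for m in row_m) / (n - 1)
ms_cols = n * sum((m - grand) ** 2 for m in col_m) / (k - 1)
ms_err = sum((x[i][j] - row_m[i] - col_m[j] + grand) ** 2
             for i in range(n) for j in range(k)) / ((n - 1) * (k - 1))

# ICC(2,1): between-subject variance over total variance, charging both
# random day effects and residual error against agreement
icc = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

With between-subject variability dominating the day effect and noise, the ICC lands near 0.9, i.e., most of the observed variance reflects true differences between subjects rather than measurement error.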

11.
Digit Biomark ; 6(3): 83-97, 2022.
Article in English | MEDLINE | ID: mdl-36466953

ABSTRACT

Background: The proliferation and increasing maturity of biometric monitoring technologies allow clinical investigators to measure the health status of trial participants in a more holistic manner, especially outside of traditional clinical settings. This includes capturing meaningful aspects of health in daily living in a more granular and objective manner than traditional tools used in clinical settings. Summary: Within multidisciplinary teams, statisticians and data scientists are increasingly involved in clinical trials that incorporate digital clinical measures. They are called upon to provide input into trial planning, generation of evidence on the clinical validity of novel clinical measures, and evaluation of the adequacy of existing evidence. Analysis objectives related to demonstrating the clinical validity of novel clinical measures differ from the typical objectives, most familiar to statisticians, of demonstrating the safety and efficacy of therapeutic interventions using established measures. Key Messages: This paper discusses key considerations for generating evidence for clinical validity through the lens of the type and intended use of a clinical measure. It also briefly discusses the regulatory pathways through which clinical validity evidence may be reviewed and highlights challenges that investigators may encounter while dealing with data from biometric monitoring technologies.

12.
Ther Innov Regul Sci ; 54(2): 324-341, 2020 03.
Article in English | MEDLINE | ID: mdl-32072573

ABSTRACT

The National Research Council (NRC) Expert Panel Report on Prevention and Treatment of Missing Data in Clinical Trials highlighted the need for clearly defining objectives and estimands. That report sparked considerable discussion and literature on estimands and how to choose them. Importantly, consideration moved beyond missing data to include all postrandomization events that have implications for estimating quantities of interest (intercurrent events, aka ICEs). The ICH E9(R1) draft addendum builds on that research to outline key principles in choosing estimands for clinical trials, primarily with focus on confirmatory trials. This paper provides additional insights, perspectives, details, and examples to help put ICH E9(R1) into practice. Specific areas of focus include how the perspectives of different stakeholders influence the choice of estimands; the role of randomization and the intention-to-treat principle; defining the causal effects of a clearly defined treatment regimen, along with the implications this has for trial design and the generalizability of conclusions; detailed discussion of strategies for handling ICEs along with their implications and assumptions; estimands for safety objectives, time-to-event endpoints, early-phase and one-arm trials, and quality of life endpoints; and realistic examples of the thought process involved in defining estimands in specific clinical contexts.


Subject(s)
Models, Statistical , Research Design , Data Interpretation, Statistical , Quality of Life
13.
Ther Innov Regul Sci ; 54(2): 370-384, 2020 03.
Article in English | MEDLINE | ID: mdl-32072586

ABSTRACT

This paper provides examples of defining estimands in real-world scenarios following ICH E9(R1) guidelines. Detailed discussions on choosing the estimands and estimators can be found in our companion papers. Three scenarios of increasing complexity are illustrated. The first example is a proof-of-concept trial in major depressive disorder where the estimand is chosen to support the sponsor decision on whether to continue development. The second and third examples are confirmatory trials in severe asthma and rheumatoid arthritis respectively. We discuss the intercurrent events expected during each trial and how they can be handled so as to be consistent with the study objectives. The estimands discussed in these examples are not the only acceptable choices for their respective scenarios. The intent is to illustrate the key concepts rather than focus on specific choices. Emphasis is placed on following a study development process where estimands link the study objectives with data collection and analysis in a coherent manner, thereby avoiding disconnect between objectives, estimands, and analyses.


Subject(s)
Asthma , Depressive Disorder, Major , Asthma/drug therapy , Data Interpretation, Statistical , Depressive Disorder, Major/drug therapy , Humans , Research Design
14.
Stat Biopharm Res ; 12(4): 399-411, 2020 Jul 06.
Article in English | MEDLINE | ID: mdl-34191971

ABSTRACT

The COVID-19 pandemic has had, and continues to have, major impacts on planned and ongoing clinical trials. Its effects on trial data create multiple potential statistical issues. The scale of impact is unprecedented, but when viewed individually, many of the issues are well defined and feasible to address. A number of strategies and recommendations are put forward to assess and address issues related to estimands, missing data, the validity and modification of statistical analysis methods, the need for additional analyses, the ability to meet objectives, and overall trial interpretability.
