Results 1 - 20 of 68
1.
J Biopharm Stat ; : 1-14, 2023 May 10.
Article in English | MEDLINE | ID: mdl-37162278

ABSTRACT

A critical task in single-cell RNA sequencing (scRNA-Seq) data analysis is to identify cell types from heterogeneous tissues. While the majority of classification methods demonstrate high performance on scRNA-Seq annotation problems, a robust and accurate solution is needed to generate reliable outcomes for downstream analyses such as marker gene identification, differential expression, and pathway analysis. Because it is hard to establish a universally good metric, no single classification method performs best in all scenarios. In addition, reference and query data in cell classification usually come from different experimental batches, and failure to account for batch effects may lead to misleading conclusions. To overcome this bottleneck, we propose a robust ensemble approach to classify cells, combined with a batch correction method between reference and query data. We simulated four scenarios, ranging from simple to complex batch effects and accounting for varying cell-type proportions, and further tested our approach on both lung and pancreas data. We found improved prediction accuracy and robust performance across simulation scenarios and real data. Incorporating batch effect correction between reference and query, together with the ensemble approach, improves cell-type prediction accuracy while maintaining robustness, as demonstrated on both simulated and real scRNA-Seq data.
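The abstract does not state the ensemble's combination rule; as a hedged illustration of the general idea only, a minimal majority-vote ensemble over per-classifier cell-type labels (all classifier names and data below are hypothetical) could look like:

```python
from collections import Counter

def ensemble_vote(predictions):
    """Combine per-classifier cell-type labels by majority vote.

    predictions: one label list per classifier, aligned by cell.
    Ties go to the label seen first (Counter preserves insertion order).
    """
    n_cells = len(predictions[0])
    consensus = []
    for i in range(n_cells):
        votes = Counter(clf[i] for clf in predictions)
        consensus.append(votes.most_common(1)[0][0])
    return consensus

# Three hypothetical classifiers labeling four cells:
preds = [
    ["T", "B", "NK", "B"],
    ["T", "B", "T",  "B"],
    ["T", "NK", "NK", "B"],
]
print(ensemble_vote(preds))  # → ['T', 'B', 'NK', 'B']
```

A real pipeline would vote over batch-corrected predictions; the voting step itself is unchanged.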

2.
Biometrics ; 2022 Dec 31.
Article in English | MEDLINE | ID: mdl-36585916

ABSTRACT

In recent years, the field of precision medicine has seen many advancements. Significant focus has been placed on creating algorithms to estimate individualized treatment rules (ITRs), which map patient covariates to the space of available treatments with the goal of maximizing patient outcome. Direct learning (D-Learning) is a recent one-step method that estimates the ITR by directly modeling the treatment-covariate interaction. However, when the variance of the outcome is heterogeneous with respect to treatment and covariates, D-Learning does not leverage this structure. Stabilized direct learning (SD-Learning), proposed in this paper, exploits potential heteroscedasticity in the error term through a residual reweighting that models the residual variance via flexible machine learning algorithms such as XGBoost and random forests. We also develop an internal cross-validation scheme that selects the best residual model among competing models. SD-Learning improves the efficiency of D-Learning estimates in binary and multi-arm treatment scenarios. The method is simple to implement and offers an easy way to improve existing algorithms within the D-Learning family, including original D-Learning, angle-based D-Learning (AD-Learning), and robust D-Learning (RD-Learning). We provide theoretical properties and a justification of the optimality of SD-Learning. Head-to-head performance comparisons with D-Learning methods are provided through simulations, which demonstrate improvements in average prediction error (APE), misclassification rate, and empirical value, along with a data analysis of an acquired immunodeficiency syndrome (AIDS) randomized clinical trial.
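As a hedged sketch of why residual-variance reweighting can improve efficiency (this is the statistical intuition, not the paper's actual SD-Learning estimator), compare a plain mean with a precision-weighted mean under heteroscedastic noise; all values below are simulated toy data:

```python
import random
import statistics

def inv_var_mean(values, variances):
    """Precision-weighted (inverse-variance) mean: observations with
    larger residual variance receive proportionally smaller weight."""
    w = [1.0 / v for v in variances]
    return sum(wi * x for wi, x in zip(w, values)) / sum(w)

# Toy heteroscedastic setting: two precise and two noisy measurements
# of the same true value. The weighted estimator is far more stable.
rng = random.Random(0)
true_value, sds = 2.0, [0.1, 0.1, 2.0, 2.0]
plain, weighted = [], []
for _ in range(2000):
    xs = [true_value + rng.gauss(0, s) for s in sds]
    plain.append(statistics.mean(xs))
    weighted.append(inv_var_mean(xs, [s * s for s in sds]))
print(statistics.stdev(plain), statistics.stdev(weighted))
```

In SD-Learning the variances are not known but estimated from residuals by a flexible learner; the weighting principle is the same.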

3.
Stat Med ; 41(4): 719-735, 2022 02 20.
Article in English | MEDLINE | ID: mdl-34786731

ABSTRACT

Statistical methods generating individualized treatment rules (ITRs) often focus on maximizing expected benefit, but these rules may expose patients to excess risk. For instance, aggressive treatment of type 2 diabetes (T2D) with insulin therapies may result in an ITR which controls blood glucose levels but increases rates of hypoglycemia, diminishing the appeal of the ITR. This work proposes two methods to identify risk-controlled ITRs (rcITR), a class of ITR which maximizes a benefit while controlling risk at a prespecified threshold. A novel penalized recursive partitioning algorithm is developed which optimizes an unconstrained, penalized value function. The final rule is a risk-controlled decision tree (rcDT) that is easily interpretable. A natural extension of the rcDT model, risk controlled random forests (rcRF), is also proposed. Simulation studies demonstrate the robustness of rcRF modeling. Three variable importance measures are proposed to further guide clinical decision-making. Both rcDT and rcRF procedures can be applied to data from randomized controlled trials or observational studies. An extensive simulation study interrogates the performance of the proposed methods. A data analysis of the DURABLE diabetes trial in which two therapeutics were compared is additionally presented. An R package implements the proposed methods ( https://github.com/kdoub5ha/rcITR).


Subject(s)
Diabetes Mellitus, Type 2; Precision Medicine; Algorithms; Computer Simulation; Decision Trees; Diabetes Mellitus, Type 2/drug therapy; Humans; Precision Medicine/methods
4.
J Med Internet Res ; 24(3): e27934, 2022 03 01.
Article in English | MEDLINE | ID: mdl-35230244

ABSTRACT

BACKGROUND: Monitoring eating is central to the care of many conditions such as diabetes, eating disorders, heart disease, and dementia. However, automatic tracking of eating in a free-living environment remains a challenge because of the lack of a mature system and of large-scale, reliable training sets. OBJECTIVE: This study aims to fill this gap through an integrative engineering and machine learning effort, conducting a study that is large in terms of monitoring hours for wearable-based eating detection. METHODS: This prospective, longitudinal study with passive data collection, covering 3828 hours of records, was made possible by a digital system that streams diary, accelerometer, and gyroscope data from Apple Watches to iPhones and then transfers the data to the cloud. RESULTS: On the basis of this data collection, we developed deep learning models leveraging spatial and time augmentation that infer eating within 5 minutes at an area under the curve (AUC) of 0.825 in the general population. In addition, the longitudinal follow-up of the study design enabled us to develop personalized models that detect eating behavior at an AUC of 0.872; when aggregated to individual meals, the AUC is 0.951. We then prospectively collected an independent validation cohort in a different season of the year and validated the robustness of the models (AUC 0.941 for meal-level aggregation). CONCLUSIONS: The accuracy of this model and the data streaming platform promise immediate deployment for monitoring eating in applications such as integrative diabetes care.
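For reference, the AUC figures above can be read as the probability that a randomly chosen eating window is scored higher than a randomly chosen non-eating window. A minimal sketch of that rank-based computation (toy labels and scores, not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive is scored above a random
    negative, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels (1 = eating window) and detector scores:
y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.4, 0.3, 0.5, 0.6]
print(auc(y, s))  # 8 of 9 positive-negative pairs correctly ordered
```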


Subject(s)
Machine Learning; Meals; Area Under Curve; Feeding Behavior; Humans; Prospective Studies
5.
Biometrics ; 77(4): 1254-1264, 2021 12.
Article in English | MEDLINE | ID: mdl-32918486

ABSTRACT

One central task in precision medicine is to establish individualized treatment rules (ITRs) for patients with heterogeneous responses to different therapies. Motivated by a randomized clinical trial comparing two drugs, pioglitazone and gliclazide, in type 2 diabetic patients, we consider the problem of utilizing promising candidate biomarkers to improve an existing ITR. This calls for a biomarker evaluation procedure that can gauge the added value of individual biomarkers. We propose an assessment analytic, termed the net benefit index (NBI), that quantifies the contrast between the gain and loss of treatment benefits when a biomarker enters the ITR to reallocate patients between treatments. We optimize reallocation schemes via outcome weighted learning (OWL), from which the optimal treatment group labels are generated by a weighted support vector machine (SVM). To account for sampling uncertainty in assessing a biomarker, we propose an NBI-based test for a significant improvement over the existing ITR, where the empirical null distribution is constructed via stratified permutation by treatment arms. Applying NBI to the motivating diabetes trial, we found that baseline fasting insulin is an important biomarker that improves an existing ITR based only on the patient's baseline fasting plasma glucose (FPG), age, and body mass index (BMI) to reduce FPG over a period of 52 weeks.


Subject(s)
Diabetes Mellitus, Type 2; Precision Medicine; Biomarkers; Diabetes Mellitus, Type 2/drug therapy; Humans; Hypoglycemic Agents/therapeutic use; Learning; Machine Learning; Precision Medicine/methods; Research Design
6.
J Biopharm Stat ; 31(1): 5-13, 2021 01 02.
Article in English | MEDLINE | ID: mdl-32419590

ABSTRACT

Hypoglycemia is a major safety concern for diabetic patients. Hypoglycemic events can be modeled based on time to recurrent events or count data. In this article, we evaluated a gamma frailty model with variance estimated by the inverse of observed Fisher information matrix, a gamma frailty model with the sandwich variance estimator, and a piecewise negative binomial regression model. Simulations showed that the sandwich variance estimator performed better when the frailty model is mis-specified, and the piecewise negative binomial regression sometimes fails to converge. All three methods were applied to a dataset from a clinical trial evaluating insulin treatments.
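The link between the gamma frailty and count-data views of hypoglycemic events can be sketched by simulation: mixing a Poisson rate with a mean-one gamma frailty yields marginally negative binomial, overdispersed counts. A toy illustration (all parameters hypothetical, not from the trial):

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for small rates."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_counts(n, mu, shape, rng):
    """Event counts under a gamma frailty: a mean-one Gamma(shape, 1/shape)
    frailty Z multiplies the Poisson rate mu, so the marginal counts are
    negative binomial and hence overdispersed."""
    counts = []
    for _ in range(n):
        z = rng.gammavariate(shape, 1.0 / shape)
        counts.append(poisson(z * mu, rng))
    return counts

rng = random.Random(42)
c = simulate_counts(5000, mu=2.0, shape=0.5, rng=rng)
# Overdispersion: variance exceeds the mean (a Poisson model has them equal)
print(statistics.mean(c), statistics.variance(c))
```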


Subject(s)
Hypoglycemia; Humans; Hypoglycemia/epidemiology; Models, Statistical; Recurrence
7.
Biometrics ; 76(4): 1075-1086, 2020 12.
Article in English | MEDLINE | ID: mdl-32365232

ABSTRACT

Individualized treatment rules (ITRs) tailor medical treatments according to patient-specific characteristics in order to optimize patient outcomes. Data from randomized controlled trials (RCTs) are used to infer valid ITRs using statistical and machine learning methods. However, RCTs are usually conducted under specific inclusion/exclusion criteria, thus limiting their generalizability to a broader patient population in real-world practice settings. Because electronic health records (EHRs) document treatment prescriptions in the real world, transferring information in EHRs to RCTs, if done appropriately, could potentially improve the performance of ITRs, in terms of precision and generalizability. In this work, we propose a new domain adaptation method to learn ITRs by incorporating information from EHRs. Unless we assume that there is no unmeasured confounding in EHRs, we cannot directly learn the optimal ITR from the combined EHR and RCT data. Instead, we first pretrain "super" features from EHRs that summarize physician treatment decisions and patient observed benefits in the real world, as these are likely to be informative of the optimal ITRs. We then augment the feature space of the RCT and learn the optimal ITRs by stratifying by super features using subjects enrolled in RCT. We adopt Q-learning and a modified matched-learning algorithm for estimation. We present heuristic justification of our method and conduct simulation studies to demonstrate the performance of super features. Finally, we apply our method to transfer information learned from EHRs of patients with type 2 diabetes to learn individualized insulin therapies from RCT data.


Subject(s)
Electronic Health Records; Machine Learning; Algorithms; Humans; Randomized Controlled Trials as Topic; Research Design
8.
Stat Sin ; 30: 1857-1879, 2020.
Article in English | MEDLINE | ID: mdl-33311956

ABSTRACT

Because of the heterogeneity of many chronic diseases, personalized medicine, also known as precision medicine, has drawn increasing attention in the scientific community. One main goal of precision medicine is to develop the most effective tailored therapy for each individual patient. To that end, one needs to incorporate individual characteristics to identify a proper individual treatment rule (ITR), by which suitable treatment assignments can be made to optimize patients' clinical outcomes. For binary treatment settings, outcome weighted learning (OWL) and several of its variations have been proposed recently to estimate the ITR by optimizing the conditional expected outcome given patients' information. However, for multiple-treatment scenarios, it remains unclear how to use OWL effectively. It can be shown that some direct extensions of OWL to multiple treatments, such as one-versus-one and one-versus-rest methods, can yield suboptimal performance. In this paper, we propose a new learning method, named multicategory outcome weighted margin-based learning (MOML), for estimating the ITR with multiple treatments. Our proposed method is very general and covers OWL as a special case. We show Fisher consistency for the estimated ITR and establish convergence rate properties. Variable selection using the sparse l1 penalty is also considered. Analyses of simulated examples and a type 2 diabetes mellitus observational study demonstrate the competitive performance of the proposed method.

9.
Stat Med ; 38(3): 315-325, 2019 02 10.
Article in English | MEDLINE | ID: mdl-30302780

ABSTRACT

The weighted average treatment effect is a causal measure for the comparison of interventions in a specific target population, which may be different from the population where data are sampled from. For instance, when the goal is to introduce a new treatment to a target population, the question is what efficacy (or effectiveness) can be gained by switching patients from a standard of care (control) to this new treatment, for which the average treatment effect for the control estimand can be applied. In this paper, we propose two estimators based on augmented inverse probability weighting to estimate the weighted average treatment effect for a well-defined target population (ie, there exists a predefined target function of covariates that characterizes the population of interest, for example, a function of age to focus on elderly diabetic patients using samples from the US population). The first proposed estimator is doubly robust if the target function is known or can be correctly specified. The second proposed estimator is doubly robust if the target function has a linear dependence on the propensity score, which can be used to estimate the average treatment effect for the treated and the average treatment effect for the control. We demonstrate the properties of the proposed estimators through theoretical proof and simulation studies. We also apply our proposed methods in a comparison of glucagon-like peptide-1 receptor agonists therapy and insulin therapy among patients with type 2 diabetes, using the UK Clinical Practice Research Datalink data.
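As a hedged sketch of the inverse probability weighting building block underlying such estimators (the paper's proposals add an augmentation term for double robustness, omitted here), a normalized (Hájek) IPW contrast with known propensity scores could look like:

```python
def ipw_ate(y, a, ps):
    """Normalized (Hajek) IPW estimate of E[Y(1)] - E[Y(0)].

    y: outcomes, a: treatment indicators (1/0), ps: propensity scores.
    Assumes ps are known or consistently estimated; the outcome-regression
    augmentation that confers double robustness is omitted in this sketch.
    """
    w1 = [ai / p for ai, p in zip(a, ps)]
    w0 = [(1 - ai) / (1 - p) for ai, p in zip(a, ps)]
    m1 = sum(w * yi for w, yi in zip(w1, y)) / sum(w1)
    m0 = sum(w * yi for w, yi in zip(w0, y)) / sum(w0)
    return m1 - m0

# Toy data: with propensity 0.5 everywhere (a balanced randomized trial),
# the estimator reduces to a difference in group means:
print(ipw_ate([3, 1, 2, 0], [1, 0, 1, 0], [0.5] * 4))  # → 2.0
```

A weighted ATE for a target population would additionally multiply each subject's weight by a target function of covariates before normalizing.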


Subject(s)
Data Interpretation, Statistical; Treatment Outcome; Adult; Age Factors; Aged; Diabetes Mellitus, Type 2/drug therapy; Female; Humans; Hypoglycemic Agents/therapeutic use; Male; Middle Aged; Models, Statistical; Probability; Propensity Score
10.
J Biopharm Stat ; 29(2): 287-305, 2019.
Article in English | MEDLINE | ID: mdl-30359554

ABSTRACT

Dose titration is becoming increasingly common as a way to improve drug tolerability and to determine individualized treatment doses, thereby maximizing the benefit to patients. Starting from a lower dose and gradually increasing to a higher dose improves tolerability because the human body may gradually adapt to adverse gastrointestinal effects. Current statistical analyses mostly focus on the outcome at the end-of-study follow-up without considering the longitudinal impact of dose titration on the outcome. A better understanding of the dynamic effect of dose titration over time is important in early-phase clinical development, as it allows modeling the longitudinal trend and predicting the longer-term outcome more accurately. We propose a parametric model, with two empirical methods of modeling the error terms, for a continuous outcome with dose titration. Simulations show that both approaches to modeling the error terms work well. We applied this method to data from several clinical studies with satisfactory results.


Subject(s)
Drug-Related Side Effects and Adverse Reactions/prevention & control; Glucagon-Like Peptides/administration & dosage; Hypoglycemic Agents/administration & dosage; Models, Statistical; Randomized Controlled Trials as Topic/methods; Computer Simulation; Dose-Response Relationship, Drug; Drug Administration Schedule; Drug-Related Side Effects and Adverse Reactions/epidemiology; Glucagon-Like Peptide 1/agonists; Glucagon-Like Peptides/adverse effects; Glucagon-Like Peptides/therapeutic use; Humans; Hypoglycemic Agents/adverse effects; Hypoglycemic Agents/therapeutic use; Randomized Controlled Trials as Topic/statistics & numerical data; Treatment Outcome
11.
Biometrics ; 74(3): 924-933, 2018 09.
Article in English | MEDLINE | ID: mdl-29534296

ABSTRACT

Precision medicine is an emerging scientific topic for disease treatment and prevention that takes into account individual patient characteristics. It is an important direction for clinical research, and many statistical methods have been proposed recently. One of the primary goals of precision medicine is to obtain an optimal individual treatment rule (ITR), which can help make decisions on treatment selection according to each patient's specific characteristics. Recently, outcome weighted learning (OWL) has been proposed to estimate such an optimal ITR in a binary treatment setting by maximizing the expected clinical outcome. However, for ordinal treatment settings, such as individualized dose finding, it is unclear how to use OWL. In this article, we propose a new technique for estimating ITR with ordinal treatments. In particular, we propose a data duplication technique with a piecewise convex loss function. We establish Fisher consistency for the resulting estimated ITR under certain conditions, and obtain the convergence and risk bound properties. Simulated examples and an application to a dataset from a type 2 diabetes mellitus observational study demonstrate the highly competitive performance of the proposed method compared to existing alternatives.


Subject(s)
Models, Statistical; Precision Medicine/methods; Decision Support Techniques; Diabetes Mellitus, Type 2/therapy; Humans; Observational Studies as Topic; Treatment Outcome
12.
Biometrics ; 74(2): 694-702, 2018 06.
Article in English | MEDLINE | ID: mdl-28901017

ABSTRACT

In comparing two treatments with event-time observations, the hazard ratio (HR) estimate is routinely used to quantify the treatment difference. However, this model-dependent estimate may be difficult to interpret clinically, especially when the proportional hazards (PH) assumption is violated. An alternative estimation procedure for treatment efficacy, based on the restricted mean survival time or t-year mean survival time (t-MST), has been discussed extensively in the statistical and clinical literature. On the other hand, a statistical test via the HR or its asymptotically equivalent counterpart, the logrank test, is asymptotically distribution-free. In this article, we assess the relative efficiency of the HR and t-MST tests with respect to statistical power under various PH and non-PH models, both theoretically and empirically. When the PH assumption is valid, the t-MST test performs almost as well as the HR test. For non-PH models, the t-MST test can substantially outperform its HR counterpart. On the other hand, the HR test can be powerful when the true difference between the two survival functions is large at the end, but not at the beginning, of the study. Unfortunately, in this case the HR estimate may not have a simple clinical interpretation for the treatment effect due to the violation of the PH assumption.
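The t-MST quantity discussed above is the area under the Kaplan-Meier survival curve up to the truncation time t. A minimal sketch (toy data; no variance estimation, and the simplest handling of ties):

```python
def rmst(times, events, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier survival curve on [0, tau]. events[i] is 1 for an
    observed event, 0 for censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, prev_t, area = 1.0, 0.0, 0.0
    i = 0
    while i < len(data) and data[i][0] <= tau:
        t = data[i][0]
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        deaths = sum(e for tt, e in data[i:i + ties])
        area += surv * (t - prev_t)       # area before the drop at t
        surv *= 1.0 - deaths / n_at_risk  # Kaplan-Meier step
        n_at_risk -= ties
        prev_t = t
        i += ties
    return area + surv * (tau - prev_t)   # flat tail out to tau

# With no censoring, RMST(tau) is simply the mean of min(T, tau):
print(rmst([1, 2, 3, 4], [1, 1, 1, 1], tau=4))  # → 2.5
```

Unlike the HR, this quantity has a direct reading: average event-free time gained per patient over the first tau time units.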


Subject(s)
Proportional Hazards Models; Survival Analysis; Humans; Observation; Time Factors
13.
Stat Med ; 37(27): 3869-3886, 2018 11 30.
Article in English | MEDLINE | ID: mdl-30014497

ABSTRACT

With advances in drug development, multiple treatments are often available for a single disease, and patients can benefit from taking several treatments simultaneously. For example, patients in the Clinical Practice Research Datalink with chronic diseases such as type 2 diabetes can receive multiple treatments at the same time. It is therefore important to estimate which combination therapy each patient can benefit from the most. Recommending the best treatment combination, however, is a multilabel rather than a single-label classification problem. In this paper, we propose a novel outcome weighted deep learning algorithm to estimate individualized optimal combination therapy. Fisher consistency of the proposed loss function under certain conditions is also provided. In addition, we extend our method to a family of loss functions, which allows adaptive changes based on treatment interactions. We demonstrate the performance of our methods through simulations and real data analysis.


Subject(s)
Algorithms; Drug Therapy, Combination; Machine Learning; Precision Medicine; Statistics as Topic/methods; Treatment Outcome; Decision Support Techniques; Drug Therapy, Combination/methods; Humans; Models, Statistical; Precision Medicine/methods; Stochastic Processes
14.
Stat Med ; 37(25): 3589-3598, 2018 11 10.
Article in English | MEDLINE | ID: mdl-30047148

ABSTRACT

To evaluate the totality of one treatment's benefit/risk profile relative to an alternative treatment via a longitudinal comparative clinical study, the timing and occurrence of multiple clinical events are typically collected during the patient's follow-up. These multiple observations reflect the patient's disease progression/burden over time. The standard practice is to create a composite endpoint from the multiple outcomes, the timing of the occurrence of the first clinical event, to evaluate the treatment via the standard survival analysis techniques. By ignoring all events after the composite outcome, this type of assessment may not be ideal. Various parametric or semiparametric procedures have been extensively discussed in the literature for the purposes of analyzing multiple event-time data. Many existing methods were developed based on extensive model assumptions. When the model assumptions are not plausible, the resulting inferences for the treatment effect may be misleading. In this article, we propose a simple, nonparametric inference procedure to quantify the treatment effect, which has an intuitive clinically meaningful interpretation. We use the data from a cardiovascular clinical trial for heart failure to illustrate the procedure. A simulation study is also conducted to evaluate the performance of the new proposal.


Subject(s)
Data Interpretation, Statistical; Longitudinal Studies; Treatment Outcome; Area Under Curve; Humans; Models, Statistical; Proportional Hazards Models; Randomized Controlled Trials as Topic; Survival Analysis; Time Factors
15.
J Biopharm Stat ; 27(5): 824-833, 2017.
Article in English | MEDLINE | ID: mdl-28001483

ABSTRACT

Understanding the dose-response relationship is a crucial step in drug development. A few parametric methods exist to estimate dose-response curves, such as the Emax model and the logistic model. These parametric models are easy to interpret and hence widely used. However, they often require the inclusion of patients at high dose levels; otherwise, the model parameters cannot be reliably estimated. For robust estimation, nonparametric models are used, but they cannot estimate certain important clinical parameters, such as ED50 and Emax. Furthermore, in many therapeutic areas dose-response curves can be assumed to be nondecreasing functions, which creates an additional challenge for nonparametric methods. In this paper, we propose a new Bayesian isotonic regression dose-response (BIRD) model, which combines the advantages of parametric and nonparametric models. Both ED50 and Emax can be derived from this model. Simulations are provided to evaluate the performance of the BIRD model against two parametric models. We apply this model to a dataset from a diabetes dose-finding study.
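For context, the parametric Emax curve and the nondecreasing constraint central to isotonic regression can be sketched as follows. The pool-adjacent-violators step shown is the classical frequentist isotonic fit, included only to illustrate monotonicity; it is not the paper's Bayesian BIRD procedure, and all parameter values are hypothetical:

```python
def emax(d, e0, emax_, ed50):
    """Hyperbolic Emax dose-response: half of Emax is reached at d = ED50."""
    return e0 + emax_ * d / (ed50 + d)

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y,
    the classical isotonic-regression core (BIRD imposes the same
    monotonicity constraint within a Bayesian model)."""
    blocks = []  # [sum, count] per block; merge while means decrease
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(emax(50, e0=1.0, emax_=8.0, ed50=50))  # → 5.0 (E0 + Emax/2)
print(pava([1, 3, 2, 5]))                    # → [1.0, 2.5, 2.5, 5.0]
```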


Subject(s)
Bayes Theorem; Computer Simulation/statistics & numerical data; Pharmaceutical Preparations/administration & dosage; Dose-Response Relationship, Drug; Humans; Logistic Models
16.
Stat Med ; 35(19): 3285-302, 2016 08 30.
Article in English | MEDLINE | ID: mdl-26892174

ABSTRACT

With new treatments and novel technology available, personalized medicine has become an important part of the new era of medical product development. Traditional statistical methods for personalized medicine and subgroup identification primarily focus on single-treatment or two-arm randomized controlled trials. Motivated by the recent development of the outcome weighted learning framework, we propose an alternative algorithm to search for treatment assignments, which has a connection with subgroup identification problems. Our method focuses on applications from clinical trials and generates easy-to-interpret results. The framework can handle two or more treatments from both randomized controlled trials and observational studies. We implement our algorithm in C++ and connect it with R. Its performance is evaluated by simulations, and we apply our method to a dataset from a diabetes study. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
Observational Studies as Topic; Precision Medicine; Randomized Controlled Trials as Topic; Humans; Research Design
17.
Stat Med ; 35(26): 4837-4855, 2016 11 20.
Article in English | MEDLINE | ID: mdl-27346729

ABSTRACT

We describe and evaluate a regression tree algorithm for finding subgroups with differential treatment effects in randomized trials with multivariate outcomes. The data may contain missing values in the outcomes and covariates, and the treatment variable is not limited to two levels. Simulation results show that the regression tree models have unbiased variable selection and that the estimates of subgroup treatment effects are approximately unbiased. A bootstrap calibration technique is proposed for constructing confidence intervals for the treatment effects. The method is illustrated with data from a longitudinal study comparing two diabetes drugs and from a mammography screening trial comparing two treatments and a control. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
Algorithms; Randomized Controlled Trials as Topic; Data Interpretation, Statistical; Humans; Longitudinal Studies; Treatment Outcome
18.
J Biopharm Stat ; 26(2): 280-98, 2016.
Article in English | MEDLINE | ID: mdl-25437847

ABSTRACT

Diabetes affects an estimated 25.8 million people in the United States and is one of the leading causes of death. A major safety concern in treating diabetes is the occurrence of hypoglycemic events. Despite this concern, the current methods of analyzing hypoglycemic events, including the Wilcoxon rank sum test and negative binomial regression, are not satisfactory. The aim of this article is to propose a new model to analyze hypoglycemic events, with the goal of making it a standard method in industry. Our method is based on a gamma frailty recurrent event model. To make the method broadly accessible to practitioners, this article provides many details of how it works and discusses practical issues, with supporting theoretical proofs. In particular, we make an effort to translate conditions and theorems from abstract counting process and martingale theory into intuitive, clinically meaningful explanations. For example, we provide a simple proof and illustration of the coarsening-at-random condition so that practitioners can easily verify it. Connections to and differences from traditional methods are discussed, and we demonstrate that under certain scenarios the widely used Wilcoxon rank sum test and negative binomial regression cannot control type I error rates, while our proposed method is robust in all these situations. The usefulness of our method is demonstrated through a diabetes dataset that provides new clinical insights into hypoglycemic events.


Subject(s)
Computer Simulation; Hypoglycemia/chemically induced; Hypoglycemic Agents/adverse effects; Models, Statistical; Randomized Controlled Trials as Topic/statistics & numerical data; Data Interpretation, Statistical; Humans; Hypoglycemia/epidemiology; Hypoglycemic Agents/administration & dosage; Hypoglycemic Agents/therapeutic use; Likelihood Functions; Recurrence
19.
Ann Intern Med ; 163(2): 127-34, 2015 Jul 21.
Article in English | MEDLINE | ID: mdl-26054047

ABSTRACT

A noninferiority study is often used to investigate whether a treatment's efficacy or safety profile is acceptable compared with an alternative therapy regarding the time to a clinical event. The empirical quantification of the treatment difference for such a study is routinely based on the hazard ratio (HR) estimate. The HR, which is not a relative risk, may be difficult to interpret clinically, especially when the underlying proportional hazards assumption is violated. The precision of the HR estimate depends primarily on the number of observed events but not directly on exposure times or sample size of the study population. If the event rate is low, the study may require an impractically large number of events to ensure that the prespecified noninferiority criterion for the HR is attainable. This article discusses deficiencies in the current approach for the design and analysis of a noninferiority study. Alternative procedures are provided, which do not depend on any model assumption, to compare 2 treatments. For a noninferiority safety study, the patients' exposure times are more clinically important than the observed number of events. If the patients' exposure times are long enough to evaluate safety reliably, then these alternative procedures can effectively provide clinically interpretable evidence on safety, even with relatively few observed events. These procedures are illustrated with data from 2 studies. One explores the cardiovascular safety of a pain medicine; the second examines the cardiovascular safety of a new treatment for diabetes. These alternative strategies to evaluate safety or efficacy of an intervention lead to more meaningful interpretations of the analysis results than the conventional strategy that uses the HR estimate.


Subject(s)
Clinical Trials as Topic; Drug Therapy; Drug-Related Side Effects and Adverse Reactions; Proportional Hazards Models; Treatment Outcome; Humans; Research Design; Sample Size
20.
Biometrics ; 71(1): 178-187, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25223432

ABSTRACT

Joint models of longitudinal and survival outcomes have been used with increasing frequency in clinical investigations. Correct specification of fixed and random effects is essential for practical data analysis, and simultaneous selection of variables in both the longitudinal and survival components serves as a necessary safeguard against model misspecification. However, variable selection in such models has not been studied, and no computational tools, to the best of our knowledge, have been made available to practitioners. In this article, we describe a penalized likelihood method with adaptive least absolute shrinkage and selection operator (ALASSO) penalty functions for simultaneous selection of fixed and random effects in joint models. To perform selection on the variance components of the random effects, we reparameterize them using a Cholesky decomposition; in doing so, a group-shrinkage penalty function is introduced. To reduce the estimation bias resulting from penalization, we propose a two-stage selection procedure in which the magnitude of the bias is ameliorated in the second stage. The penalized likelihood is approximated by Gaussian quadrature and optimized by an EM algorithm. A simulation study showed excellent selection results in the first stage and small estimation biases in the second stage. To illustrate, we analyzed a longitudinally observed clinical marker and patient survival in a cohort of patients with heart failure.
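The adaptive-lasso idea can be sketched in its simplest, one-step form for an orthonormal design: each coefficient is soft-thresholded with a data-driven weight 1/|β|^γ, so large effects are penalized less than small ones. This is only an illustration of the ALASSO penalty, not the paper's Gaussian-quadrature EM procedure for joint models, and all numbers are hypothetical:

```python
import math

def alasso_soft_threshold(beta_ols, lam, gamma=1.0):
    """One-step adaptive lasso under an orthonormal design: soft-threshold
    each OLS coefficient with weight 1 / |beta|**gamma, shrinking large
    coefficients proportionally less than small ones."""
    out = []
    for b in beta_ols:
        if b == 0.0:
            out.append(0.0)
            continue
        t = lam / abs(b) ** gamma  # adaptive, coefficient-specific threshold
        out.append(math.copysign(max(abs(b) - t, 0.0), b))
    return out

# A large effect survives nearly intact; a small one is zeroed out:
print(alasso_soft_threshold([3.0, -0.5, 0.1], lam=0.2))
```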


Subject(s)
Data Interpretation, Statistical; Heart Failure/blood; Heart Failure/mortality; Longitudinal Studies; Outcome Assessment, Health Care/methods; Survival Analysis; Algorithms; Biomarkers/blood; Computer Simulation; Epidemiologic Methods; Humans; Models, Statistical; Natriuretic Peptide, Brain/blood; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Risk Assessment/methods; Sensitivity and Specificity