Results 1 - 20 of 57
1.
Biometrics ; 79(2): 975-987, 2023 06.
Article in English | MEDLINE | ID: mdl-34825704

ABSTRACT

In many randomized clinical trials of therapeutics for COVID-19, the primary outcome is an ordinal categorical variable, and interest focuses on the odds ratio (OR; active agent vs control) under the assumption of a proportional odds model. Although at the final analysis the outcome will be determined for all subjects, at an interim analysis, the status of some participants may not yet be determined, for example, because ascertainment of the outcome may not be possible until some prespecified follow-up time. Accordingly, the outcome from these subjects can be viewed as censored. A valid interim analysis can be based on data only from those subjects with full follow-up; however, this approach is inefficient, as it does not exploit additional information that may be available on those for whom the outcome is not yet available at the time of the interim analysis. Appealing to the theory of semiparametrics, we propose an estimator for the OR in a proportional odds model with censored, time-lagged categorical outcome that incorporates additional baseline and time-dependent covariate information and demonstrate that it can result in considerable gains in efficiency relative to simpler approaches. A byproduct of the approach is a covariate-adjusted estimator for the OR based on the full data that would be available at a final analysis.
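The proportional odds assumption behind this OR can be illustrated with a short simulation (a hypothetical generative model, not the semiparametric estimator proposed in the article): under the model, the log odds ratio of exceeding any category cutoff is the same constant at every cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
log_or = 0.7                          # true cumulative log odds ratio (hypothetical)
a = rng.integers(0, 2, n)             # 0 = control, 1 = active
# Latent-variable form of the proportional odds model: logistic noise
# plus a constant treatment shift on the log-odds scale.
latent = rng.logistic(size=n) + log_or * a
cuts = np.array([-1.0, 0.0, 1.2])     # arbitrary category boundaries
y = np.searchsorted(cuts, latent)     # ordinal outcome in {0, 1, 2, 3}

def cum_log_or(c):
    """Empirical log odds ratio of P(Y > c), active vs control."""
    p1 = (y[a == 1] > c).mean()
    p0 = (y[a == 0] > c).mean()
    return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

# Under proportional odds all three cumulative log ORs estimate the
# same quantity, here the true value 0.7.
print([round(cum_log_or(c), 2) for c in range(3)])
```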


Subject(s)
COVID-19, Humans, Odds Ratio, Treatment Outcome
2.
NEJM Evid ; 2(3): EVIDctcs2200301, 2023 Mar.
Article in English | MEDLINE | ID: mdl-38320019

ABSTRACT

Monitoring U.S. Government-Supported Covid-19 Vaccine Trials. Operation Warp Speed was a partnership created to accelerate the development of Covid-19 vaccines. The National Institutes of Health oversaw a single data and safety monitoring board charged with reviewing and monitoring all Operation Warp Speed trials. This article describes the challenges faced in monitoring these trials and offers ideas for future similar endeavors.


Subject(s)
COVID-19 Vaccines, COVID-19, United States, Humans, Clinical Trials Data Monitoring Committees, National Institutes of Health (U.S.)
3.
Stat Med ; 41(28): 5517-5536, 2022 12 10.
Article in English | MEDLINE | ID: mdl-36117235

ABSTRACT

The primary analysis in two-arm clinical trials usually involves inference on a scalar treatment effect parameter; for example, depending on the outcome, the difference of treatment-specific means, risk difference, risk ratio, or odds ratio. Most clinical trials are monitored for the possibility of early stopping. Because ordinarily the outcome on any given subject can be ascertained only after some time lag, at the time of an interim analysis, among the subjects already enrolled, the outcome is known for only a subset and is effectively censored for those who have not been enrolled sufficiently long for it to be observed. Typically, the interim analysis is based only on the data from subjects for whom the outcome has been ascertained. A goal of an interim analysis is to stop the trial as soon as the evidence is strong enough to do so, suggesting that the analysis ideally should make the most efficient use of all available data, thus including information on censoring as well as other baseline and time-dependent covariates in a principled way. A general group sequential framework is proposed for clinical trials with a time-lagged outcome. Treatment effect estimators that take account of censoring and incorporate covariate information at an interim analysis are derived using semiparametric theory and are demonstrated to lead to stronger evidence for early stopping than standard approaches. The associated test statistics are shown to have the independent increments structure, so that standard software can be used to obtain stopping boundaries.
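The independent increments structure mentioned above can be checked numerically in a deliberately simplified one-sample setting (hypothetical sample sizes, no censoring or covariates): the correlation between the interim and final z-statistics equals the square root of the information fraction, which is the canonical joint distribution assumed by standard group sequential software.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, reps = 100, 400, 20_000      # interim and final sample sizes (hypothetical)
x = rng.normal(size=(reps, n2))

# Z-statistic for H0: mean = 0 at each look, using all data accrued so far.
z1 = x[:, :n1].mean(axis=1) * np.sqrt(n1)
z2 = x.mean(axis=1) * np.sqrt(n2)

# Canonical joint distribution: corr(Z1, Z2) = sqrt(information fraction).
print(np.corrcoef(z1, z2)[0, 1], np.sqrt(n1 / n2))
```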


Subject(s)
Research Design, Humans, Randomized Controlled Trials as Topic, Odds Ratio
4.
Biometrics ; 78(3): 825-838, 2022 09.
Article in English | MEDLINE | ID: mdl-34174097

ABSTRACT

The COVID-19 pandemic due to the novel coronavirus SARS-CoV-2 has inspired remarkable breakthroughs in the development of vaccines against the virus and the launch of several phase 3 vaccine trials in Summer 2020 to evaluate vaccine efficacy (VE). Trials of vaccine candidates using mRNA delivery systems developed by Pfizer-BioNTech and Moderna have shown substantial VEs of 94-95%, leading the US Food and Drug Administration to issue Emergency Use Authorizations, followed by widespread administration of the vaccines. As the trials continue, a key issue is the possibility that VE may wane over time. Ethical considerations dictate that trial participants be unblinded and those randomized to placebo be offered study vaccine, leading to trial protocol amendments specifying unblinding strategies. Crossover of placebo subjects to vaccine complicates inference on waning of VE. We focus on the particular features of the Moderna trial and propose a statistical framework based on a potential outcomes formulation within which we develop methods for inference on potential waning of VE over time and estimation of VE at any postvaccination time. The framework clarifies assumptions made regarding individual- and population-level phenomena and acknowledges the possibility that subjects who are more or less likely to become infected may be crossed over to vaccine differentially over time. The principles of the framework can be adapted straightforwardly to other trials.
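A common summary definition of VE is one minus a ratio of incidence rates between the vaccinated and placebo arms; a minimal sketch, with hypothetical case counts on the order of those reported by the phase 3 mRNA trials (not the article's time-varying estimand):

```python
def vaccine_efficacy(cases_vax, py_vax, cases_pbo, py_pbo):
    """VE = 1 - ratio of incidence rates (cases per person-year)."""
    irr = (cases_vax / py_vax) / (cases_pbo / py_pbo)
    return 1.0 - irr

# Hypothetical counts: ~10 cases among vaccinees vs ~180 among placebo
# recipients, with equal person-years of follow-up in each arm.
print(f"VE = {vaccine_efficacy(10, 10_000, 180, 10_000):.1%}")  # VE = 94.4%
```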


Subject(s)
COVID-19 Vaccines, COVID-19, COVID-19/prevention & control, Humans, Pandemics/prevention & control, Randomized Controlled Trials as Topic, Research Design, SARS-CoV-2, Vaccine Efficacy
6.
J Thromb Haemost ; 19(9): 2322-2334, 2021 09.
Article in English | MEDLINE | ID: mdl-34060704

ABSTRACT

BACKGROUND: Oral anticoagulation (OAC) in atrial fibrillation (AF) reduces the risk of stroke/systemic embolism (SE). The impact of OAC discontinuation is less well documented. OBJECTIVE: Investigate outcomes of patients prospectively enrolled in the Global Anticoagulant Registry in the Field-Atrial Fibrillation study who discontinued OAC. METHODS: Oral anticoagulation discontinuation was defined as cessation of treatment for ≥7 consecutive days. Adjusted outcome risks were assessed in 23 882 patients with 511 days of median follow-up after discontinuation. RESULTS: Patients who discontinued (n = 3114, 13.0%) had a higher risk (hazard ratio [95% CI]) of all-cause death (1.62 [1.25-2.09]), stroke/systemic embolism (SE) (2.21 [1.42-3.44]) and myocardial infarction (MI) (1.85 [1.09-3.13]) than patients who did not, whether OAC was restarted or not. This higher risk of outcomes after discontinuation was similar for patients treated with vitamin K antagonists (VKAs) and direct oral anticoagulants (DOACs) (p for interactions range = 0.145-0.778). Bleeding history (1.43 [1.14-1.80]), paroxysmal vs. persistent AF (1.15 [1.02-1.29]), emergency room care setting vs. office (1.37 [1.18-1.59]), major, clinically relevant nonmajor, and minor bleeding (10.02 [7.19-13.98], 2.70 [2.24-3.25] and 1.90 [1.61-2.23]), stroke/SE (4.09 [2.55-6.56]), MI (2.74 [1.69-4.43]), and left atrial appendage procedures (4.99 [1.82-13.70]) were predictors of discontinuation. Age (0.84 [0.81-0.88], per 10-year increase), history of stroke/transient ischemic attack (0.81 [0.71-0.93]), diabetes (0.88 [0.80-0.97]), weeks from AF onset to treatment (0.96 [0.93-0.99] per week), and permanent vs. persistent AF (0.73 [0.63-0.86]) were predictors of lower discontinuation rates. CONCLUSIONS: In GARFIELD-AF, the rate of discontinuation was 13.0%. Discontinuation for ≥7 consecutive days was associated with significantly higher all-cause mortality, stroke/SE, and MI risk. 
Caution should be exercised when considering any OAC discontinuation beyond 7 days.


Subject(s)
Atrial Fibrillation, Stroke, Administration, Oral, Anticoagulants/adverse effects, Atrial Fibrillation/complications, Atrial Fibrillation/diagnosis, Atrial Fibrillation/drug therapy, Humans, Registries, Risk Factors, Stroke/diagnosis, Stroke/etiology, Stroke/prevention & control
7.
J Infect Dis ; 224(12): 1995-2000, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34008027

ABSTRACT

To speed the development of vaccines against SARS-CoV-2, the United States Federal Government has funded multiple phase 3 trials of candidate vaccines. A single 11-member data and safety monitoring board (DSMB) monitors all government-funded trials to ensure coordinated oversight, promote harmonized designs, and allow shared insights related to safety across trials. DSMB reviews encompass 3 domains: (1) the conduct of trials, including overall and subgroup accrual and data quality and completeness; (2) safety, including individual events of concern and comparisons by randomized group; and (3) interim analyses of efficacy when event-driven milestones are met. Challenges have included the scale and pace of the trials, the frequency of safety events related to the combined enrollment of over 100 000 participants, many of whom are older adults or have comorbid conditions that place them at independent risk of serious health events, and the politicized environment in which the trials have taken place.


Subject(s)
COVID-19 Vaccines/adverse effects, COVID-19 Vaccines/immunology, COVID-19/prevention & control, Aged, COVID-19 Vaccines/administration & dosage, Humans, SARS-CoV-2, United States, Vaccines
8.
Biometrics ; 74(4): 1180-1192, 2018 12.
Article in English | MEDLINE | ID: mdl-29775203

ABSTRACT

Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented.
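The backward iterative idea can be sketched with a deliberately stripped-down example: randomized treatments, no censoring, exhaustive search over simple threshold rules rather than the article's SVM-based classification, and plain inverse probability weighting rather than the doubly robust augmented estimator. The generative model and the 0.5 thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x1 = rng.uniform(size=n)            # baseline covariate
a1 = rng.integers(0, 2, n)          # stage-1 treatment, randomized 1:1
x2 = rng.uniform(size=n)            # interim covariate
a2 = rng.integers(0, 2, n)          # stage-2 treatment, randomized 1:1
# Outcome: each treatment helps iff its covariate exceeds 0.5, so the
# optimal regime is the pair of threshold rules 1{x1 > 0.5}, 1{x2 > 0.5}.
y = 1 + a1 * (x1 - 0.5) + a2 * (x2 - 0.5) + rng.normal(scale=0.5, size=n)

grid = np.linspace(0.1, 0.9, 81)

def value(follows):
    # With 1:1 randomization, the mean outcome among subjects whose
    # observed treatments agree with a regime is its IPW value estimate.
    return y[follows].mean()

# Backward step 1: best stage-2 threshold rule a2 = 1{x2 > t}.
t2 = max(grid, key=lambda t: value(a2 == (x2 > t)))
d2 = x2 > t2
# Backward step 2: evaluate stage-1 rules among subjects already
# consistent with the estimated stage-2 rule.
t1 = max(grid, key=lambda t: value((a1 == (x1 > t)) & (a2 == d2)))
print(round(t1, 2), round(t2, 2))   # both near the true threshold 0.5
```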


Subject(s)
Biometry/methods, Decision Support Techniques, Outcome Assessment, Health Care/methods, Support Vector Machine, Survival Analysis, Acute Disease, Algorithms, Computer Simulation, Humans, Leukemia, Outcome Assessment, Health Care/standards, Randomized Controlled Trials as Topic
9.
Biometrics ; 74(3): 900-909, 2018 09.
Article in English | MEDLINE | ID: mdl-29359317

ABSTRACT

We consider estimating the effect that discontinuing a beneficial treatment will have on the distribution of a time-to-event clinical outcome, and in particular assessing whether there is a period of time over which the beneficial effect may continue after discontinuation. There are two major challenges. The first is to make a distinction between mandatory discontinuation, where treatment must by necessity be terminated, and optional discontinuation, which is decided by the preference of the patient or physician. The innovation in this article is to cast the intervention in the form of a dynamic regime "terminate treatment optionally at time v unless a mandatory treatment-terminating event occurs prior to v" and consider estimating the distribution of time to event as a function of treatment regime v. The second challenge arises from biases associated with the nonrandom assignment of treatment regimes, because, naturally, optional treatment discontinuation is left to the patient and physician, and so time to discontinuation may depend on the patient's disease status. To address this issue, we develop dynamic-regime Marginal Structural Models and use inverse probability of treatment weighting to estimate the impact of time to treatment discontinuation on a time to event outcome, compared to the effect of not discontinuing treatment. We illustrate our methods using the IMPROVE-IT data on cardiovascular disease.
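A building block of such weighted analyses is a survival curve estimator that accepts subject-level weights. A minimal sketch (the general weighted Kaplan-Meier device, not the article's dynamic-regime estimator; the example data are made up):

```python
import numpy as np

def weighted_km(time, event, w):
    """Kaplan-Meier estimate of S(t) with subject-level weights w,
    e.g. inverse-probability-of-treatment weights for an MSM."""
    s = 1.0
    times, surv = [], []
    for t in np.unique(time[event == 1]):           # ordered event times
        d = w[(time == t) & (event == 1)].sum()     # weighted events at t
        n = w[time >= t].sum()                      # weighted at-risk total
        s *= 1.0 - d / n
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# With unit weights this reduces to the ordinary Kaplan-Meier estimator.
t = np.array([2.0, 3.0, 3.0, 5.0, 8.0])
e = np.array([1, 1, 0, 1, 0])       # 1 = event, 0 = censored
print(weighted_km(t, e, np.ones(5)))
```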


Subject(s)
Survival Analysis, Withholding Treatment/statistics & numerical data, Cardiovascular Diseases/therapy, Computer Simulation, Humans, Kaplan-Meier Estimate, Models, Statistical, Time-to-Treatment
10.
J Am Stat Assoc ; 113(524): 1541-1549, 2018.
Article in English | MEDLINE | ID: mdl-30774169

ABSTRACT

Precision medicine is currently a topic of great interest in clinical and intervention science. A key component of precision medicine is that it is evidence-based, i.e., data-driven, and consequently there has been tremendous interest in estimation of precision medicine strategies using observational or randomized study data. One way to formalize precision medicine is through a treatment regime, which is a sequence of decision rules, one per stage of clinical intervention, that map up-to-date patient information to a recommended treatment. An optimal treatment regime is defined as maximizing the mean of some cumulative clinical outcome if applied to a population of interest. It is well-known that even under simple generative models an optimal treatment regime can be a highly nonlinear function of patient information. Consequently, a focal point of recent methodological research has been the development of flexible models for estimating optimal treatment regimes. However, in many settings, estimation of an optimal treatment regime is an exploratory analysis intended to generate new hypotheses for subsequent research and not to directly dictate treatment to new patients. In such settings, an estimated treatment regime that is interpretable in a domain context may be of greater value than an unintelligible treatment regime built using 'black-box' estimation methods. We propose an estimator of an optimal treatment regime composed of a sequence of decision rules, each expressible as a list of "if-then" statements that can be presented as either a paragraph or as a simple flowchart that is immediately interpretable to domain experts. The discreteness of these lists precludes smooth, i.e., gradient-based, methods of estimation and leads to non-standard asymptotics. Nevertheless, we provide a computationally efficient estimation algorithm, prove consistency of the proposed estimator, and derive rates of convergence. 
We illustrate the proposed methods using a series of simulation examples and application to data from a sequential clinical trial on bipolar disorder.
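The "list of if-then statements" representation can be sketched as ordered (condition, treatment) clauses with a default; the first clause whose condition holds determines the recommendation. The covariate names, thresholds, and treatments below are hypothetical illustrations, not rules estimated by the article.

```python
# A decision rule as an ordered list of if-then clauses; the first
# clause whose condition holds determines the recommended treatment.
rule_list = [
    (lambda p: p["severity"] >= 7, "treatment A"),
    (lambda p: p["age"] < 40 and p["prior_episodes"] == 0, "treatment B"),
]
default = "treatment C"

def recommend(patient):
    for condition, treatment in rule_list:
        if condition(patient):
            return treatment
    return default

print(recommend({"severity": 8, "age": 55, "prior_episodes": 2}))  # treatment A
print(recommend({"severity": 3, "age": 30, "prior_episodes": 0}))  # treatment B
```

Because each clause reads as a plain sentence ("if severity is at least 7, give treatment A"), the whole regime can be shown to clinicians as a paragraph or flowchart, which is exactly the interpretability argument made above.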

11.
Lifetime Data Anal ; 23(4): 585-604, 2017 10.
Article in English | MEDLINE | ID: mdl-27480339

ABSTRACT

A treatment regime at a single decision point is a rule that assigns a treatment, among the available options, to a patient based on the patient's baseline characteristics. The value of a treatment regime is the average outcome of a population of patients if they were all treated in accordance with the treatment regime, where large values are desirable. The optimal treatment regime is the regime that results in the greatest value. Typically, the optimal treatment regime is estimated by positing a regression relationship for the outcome of interest as a function of treatment and baseline characteristics. However, this can lead to suboptimal treatment regimes when the regression model is misspecified. We instead consider value search estimators for the optimal treatment regime, where we directly estimate the value for any treatment regime and then maximize this estimator over a class of regimes. For many studies the primary outcome of interest is survival time, which is often censored. We derive a locally efficient, doubly robust, augmented inverse probability weighted complete case estimator for the value function with censored survival data and study the large sample properties of this estimator. The optimization is realized from a weighted classification perspective that allows us to use available off-the-shelf software. In some studies one treatment may have greater toxicity or side effects, thus we also consider estimating a quality-adjusted optimal treatment regime that allows a patient to trade some additional risk of death in order to avoid the more invasive treatment.
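The core of value search, i.e. directly estimating the value of any candidate regime, can be sketched in its simplest form: an uncensored outcome, 1:1 randomization, and a plain inverse probability weighted estimator rather than the article's augmented, censoring-adjusted version. The generative model is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.uniform(size=n)                  # baseline covariate
a = rng.integers(0, 2, n)                # randomized 1:1, propensity = 0.5
# Reward is larger when treatment matches the covariate: treating is
# best iff x > 0.5 (so the optimal regime is known here).
y = x * a + (1 - x) * (1 - a) + rng.normal(scale=0.2, size=n)

def value(regime):
    """Simple IPW estimate of E[Y] if all patients followed `regime`."""
    follows = a == regime(x)
    return np.mean(follows * y / 0.5)

v_opt = value(lambda x: (x > 0.5).astype(int))   # treat iff x > 0.5
v_all = value(lambda x: np.ones_like(x, int))    # treat everyone
print(v_opt, v_all)     # the threshold regime has the higher value
```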


Subject(s)
Models, Statistical, Survival Analysis, Computer Simulation, Coronary Artery Bypass, Coronary Artery Disease/mortality, Coronary Artery Disease/therapy, Decision Making, Humans, Life Tables, Monte Carlo Method, Percutaneous Coronary Intervention, Treatment Outcome
12.
Ann Am Thorac Soc ; 14(2): 172-181, 2017 02.
Article in English | MEDLINE | ID: mdl-27779905

ABSTRACT

RATIONALE: Lung transplantation is an accepted and increasingly employed treatment for advanced lung diseases, but the anticipated survival benefit of lung transplantation is poorly understood. OBJECTIVES: To determine whether and for which patients lung transplantation confers a survival benefit in the modern era of U.S. lung allocation. METHODS: Data on 13,040 adults listed for lung transplantation between May 2005 and September 2011 were obtained from the United Network for Organ Sharing. A structural nested accelerated failure time model was used to model the survival benefit of lung transplantation over time. The effects of patient, donor, and transplant center characteristics on the relative survival benefit of transplantation were examined. MEASUREMENTS AND MAIN RESULTS: Overall, 73.8% of transplant recipients were predicted to achieve a 2-year survival benefit with lung transplantation. The survival benefit of transplantation varied by native disease group (P = 0.062), with 2-year expected benefit in 39.2 and 98.9% of transplants occurring in those with obstructive lung disease and cystic fibrosis, respectively, and by lung allocation score at the time of transplantation (P < 0.001), with net 2-year benefit in only 6.8% of transplants occurring for lung allocation score less than 32.5 and in 99.9% of transplants for lung allocation score exceeding 40. CONCLUSIONS: A majority of adults undergoing transplantation experience a survival benefit, with the greatest potential benefit in those with higher lung allocation scores or restrictive native lung disease or cystic fibrosis. These results provide novel information to assess the expected benefit of lung transplantation at an individual level and to enhance lung allocation policy.


Subject(s)
Cystic Fibrosis/mortality, Lung Diseases, Obstructive/mortality, Lung Transplantation/mortality, Tissue Donors/statistics & numerical data, Tissue and Organ Procurement, Waiting Lists/mortality, Adult, Cystic Fibrosis/surgery, Female, Health Care Rationing/standards, Humans, Lung Diseases, Obstructive/surgery, Male, Middle Aged, Patient Selection, Registries, Retrospective Studies, Survival Rate, Time Factors, United States/epidemiology, Young Adult
13.
J Biom Biostat ; 7(1)2016 Feb.
Article in English | MEDLINE | ID: mdl-27175309

ABSTRACT

Often, sample size is not fixed by design. A key example is a sequential trial with a stopping rule, where stopping is based on what has been observed at an interim look. While such designs are used for time and cost efficiency, and hypothesis testing theory has been well developed, estimation following a sequential trial is a challenging, still controversial problem. Progress has been made in the literature, predominantly for normal outcomes and/or for a deterministic stopping rule. Here, we place these settings in the broader context of outcomes following an exponential family distribution, with a stochastic stopping rule that includes a deterministic rule and a completely random sample size as special cases. It is shown that the estimation problem is usually simpler than often thought. In particular, it is established that the ordinary sample average is a very sensible choice, contrary to commonly encountered statements. We study (1) the so-called incompleteness property of the sufficient statistics, (2) a general class of linear estimators, and (3) joint and conditional likelihood estimation. Apart from the general exponential family setting, normal and binary outcomes are considered as key examples. While our results hold for a general number of looks, for ease of exposition, we focus on the simple yet generic setting of two possible sample sizes, N=n or N=2n.
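The two-look setting can be simulated to see what is at stake (a hypothetical stopping rule, not one from the article): conditional on the realized sample size the average looks clearly biased, while its marginal bias is an order of magnitude smaller and vanishes asymptotically.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps, mu = 50, 100_000, 0.0       # true mean is 0
stage1 = rng.normal(mu, 1, size=(reps, n))
stage2 = rng.normal(mu, 1, size=(reps, n))

# Stop at N = n when the interim mean is large, else continue to N = 2n.
stop = stage1.mean(axis=1) > 0.2
avg = np.where(stop,
               stage1.mean(axis=1),
               (stage1.sum(axis=1) + stage2.sum(axis=1)) / (2 * n))

print(avg.mean())                           # small positive marginal bias
print(avg[stop].mean(), avg[~stop].mean())  # conditional on N: clearly biased
```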

14.
Stat Med ; 35(8): 1245-56, 2016 Apr 15.
Article in English | MEDLINE | ID: mdl-26506890

ABSTRACT

A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness.


Subject(s)
Randomized Controlled Trials as Topic/statistics & numerical data, Biostatistics, Computer Simulation, Confidence Intervals, Data Interpretation, Statistical, Evidence-Based Practice/statistics & numerical data, Female, Fertility, Humans, Male, Models, Statistical, Pilot Projects, Precision Medicine/statistics & numerical data, Pregnancy, Regression Analysis, Sample Size
15.
Lifetime Data Anal ; 22(2): 280-98, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26025499

ABSTRACT

In randomized clinical trials, the log-rank test is often used to test the null hypothesis of the equality of treatment-specific survival distributions. In observational studies, however, the ordinary log-rank test is no longer guaranteed to be valid. In such studies, we must be cautious about potential confounders; that is, the covariates that affect both the treatment assignment and the survival distribution. In this paper, two cases were considered: the first is when it is believed that all the potential confounders are captured in the primary database, and the second is when a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete case estimators for treatment-specific survival distributions proposed in Bai et al. (Biometrics 69:830-839, 2013) and develop log-rank-type tests in both cases. The consistency and double robustness of the proposed test statistics are shown in simulation studies. These statistics are then applied to the data from the observational study that motivated this research.
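For reference, the classical two-sample log-rank statistic that the article generalizes can be computed from the observed-minus-expected events at each event time (a plain RCT version with no weighting or augmentation; the example data are simulated).

```python
import numpy as np

def logrank(time, event, group):
    """Classical two-sample log-rank statistic (chi-squared, 1 df)."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):        # distinct event times
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()           # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n                     # observed - expected
        if n > 1:                                        # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e**2 / var

# Two arms with genuinely different survival (hazard ratio 2, no censoring).
rng = np.random.default_rng(5)
t = np.concatenate([rng.exponential(1.0, 200), rng.exponential(2.0, 200)])
g = np.repeat([0, 1], 200)
e = np.ones(400, dtype=int)
print(logrank(t, e, g))   # large value: survival clearly differs
```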


Subject(s)
Observational Studies as Topic/statistics & numerical data, Survival Analysis, Computer Simulation, Coronary Artery Disease/mortality, Coronary Artery Disease/therapy, Humans, Models, Statistical, Probability, Proportional Hazards Models, Sampling Studies
16.
Stat Biosci ; 7(2): 187-205, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26478751

ABSTRACT

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

17.
Biometrics ; 71(4): 895-904, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26193819

ABSTRACT

A treatment regime formalizes personalized medicine as a function from individual patient characteristics to a recommended treatment. A high-quality treatment regime can improve patient outcomes while reducing cost, resource consumption, and treatment burden. Thus, there is tremendous interest in estimating treatment regimes from observational and randomized studies. However, the development of treatment regimes for application in clinical practice requires the long-term, joint effort of statisticians and clinical scientists. In this collaborative process, the statistician must integrate clinical science into the statistical models underlying a treatment regime and the clinician must scrutinize the estimated treatment regime for scientific validity. To facilitate meaningful information exchange, it is important that estimated treatment regimes be interpretable in a subject-matter context. We propose a simple, yet flexible class of treatment regimes whose members are representable as a short list of if-then statements. Regimes in this class are immediately interpretable and are therefore an appealing choice for broad application in practice. We derive a robust estimator of the optimal regime within this class and demonstrate its finite sample performance using simulation experiments. The proposed method is illustrated with data from two clinical trials.


Subject(s)
Clinical Protocols, Decision Trees, Biometry/methods, Breast Neoplasms/drug therapy, Clinical Trials as Topic/statistics & numerical data, Computer Simulation, Depression/therapy, Evidence-Based Medicine/statistics & numerical data, Female, Humans, Models, Statistical, Precision Medicine/statistics & numerical data
19.
J Stat Softw ; 56: 2, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24688453

ABSTRACT

Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing whether the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on an NLMM of intravenous drug concentration over time.
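The seminonparametric (SNP) density of Gallant and Nychka takes the form of a squared polynomial times the standard normal density, normalized to integrate to one. A minimal evaluation sketch in Python (illustrating only the density family, not the SAS macro; the coefficient values are arbitrary):

```python
import numpy as np

def snp_density(z, a):
    """Gallant-Nychka SNP density: a squared polynomial with ascending
    coefficients a = (a0, a1, ...) times the N(0,1) density, normalized
    numerically on a wide grid."""
    phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    poly = lambda x: np.polyval(a[::-1], x)      # a0 + a1*x + a2*x^2 + ...
    grid = np.linspace(-10.0, 10.0, 4001)
    step = grid[1] - grid[0]
    const = np.sum(poly(grid) ** 2 * phi(grid)) * step   # normalizing constant
    return poly(z) ** 2 * phi(z) / const

# a = (1,) recovers the standard normal; extra terms add skewness or
# multimodality while keeping smooth, well-behaved tails.
z = np.linspace(-4, 4, 9)
print(snp_density(z, (1.0,)))        # matches the N(0,1) density
print(snp_density(0.0, (1.0, 0.8))) # a skewed alternative
```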

20.
Stat Methods Med Res ; 23(1): 11-41, 2014 Feb.
Article in English | MEDLINE | ID: mdl-22514029

ABSTRACT

The vast majority of settings for which frequentist statistical properties are derived assume a fixed, a priori known sample size. Familiar properties then follow, such as, for example, the consistency, asymptotic normality, and efficiency of the sample average for the mean parameter, under a wide range of conditions. We are concerned here with the alternative situation in which the sample size is itself a random variable which may depend on the data being collected. Further, the rule governing this may be deterministic or probabilistic. There are many important practical examples of such settings, including missing data, sequential trials, and informative cluster size. It is well known that special issues can arise when evaluating the properties of statistical procedures under such sampling schemes, and much has been written about specific areas (Grambsch P. Sequential sampling based on the observed Fisher information to guarantee the accuracy of the maximum likelihood estimator. Ann Stat 1983; 11: 68-77; Barndorff-Nielsen O and Cox DR. The effect of sampling rules on likelihood statistics. Int Stat Rev 1984; 52: 309-326). Our aim is to place these various related examples into a single framework derived from the joint modeling of the outcomes and sampling process and so derive generic results that in turn provide insight, and in some cases practical consequences, for different settings. It is shown that, even in the simplest case of estimating a mean, some of the results appear counterintuitive. In many examples, the sample average may exhibit small sample bias and, even when it is unbiased, may not be optimal. Indeed, there may be no minimum variance unbiased estimator for the mean. Such results follow directly from key attributes such as non-ancillarity of the sample size and incompleteness of the minimal sufficient statistic of the sample size and sample sum. 
Although our results have direct and obvious implications for estimation following group sequential trials, there are also ramifications for a range of other settings, such as random cluster sizes, censored time-to-event data, and the joint modeling of longitudinal and time-to-event data. Here, we use the simplest group sequential setting to develop and explicate the main results. Some implications for random sample sizes and missing data are also considered. Consequences for other related settings will be considered elsewhere.


Subject(s)
Models, Statistical, Sample Size, Likelihood Functions, Probability