Results 1 - 20 of 106
1.
Biostatistics ; 24(2): 502-517, 2023 04 14.
Article in English | MEDLINE | ID: mdl-34939083

ABSTRACT

Cluster randomized trials (CRTs) randomly assign an intervention to groups of individuals (e.g., clinics or communities) and measure outcomes on individuals in those groups. While offering many advantages, this experimental design introduces challenges that are only partially addressed by existing analytic approaches. First, outcomes are often missing for some individuals within clusters. Failing to appropriately adjust for differential outcome measurement can result in biased estimates and inference. Second, CRTs often randomize limited numbers of clusters, resulting in chance imbalances on baseline outcome predictors between arms. Failing to adaptively adjust for these imbalances and other predictive covariates can result in efficiency losses. To address these methodological gaps, we propose and evaluate a novel two-stage targeted minimum loss-based estimator to adjust for baseline covariates in a manner that optimizes precision, after controlling for baseline and postbaseline causes of missing outcomes. Finite sample simulations illustrate that our approach can nearly eliminate bias due to differential outcome measurement, while existing CRT estimators yield misleading results and inferences. Application to real data from the SEARCH community randomized trial demonstrates the gains in efficiency afforded through adaptive adjustment for baseline covariates, after controlling for missingness on individual-level outcomes.
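A much-simplified sketch of the two-stage idea described above (not the authors' targeted minimum loss-based estimator) is given below: in stage 1, each cluster's outcome mean is estimated with inverse-probability-of-measurement weights; in stage 2, the cluster-level summaries are regressed on the treatment indicator and a baseline cluster-level covariate. All column names ('cluster', 'arm', 'x_cluster', 'measured', 'y', 'w') are hypothetical.

# Simplified two-stage analysis of a CRT with missing individual outcomes.
# Stage 1: per-cluster outcome mean, re-weighted for differential measurement.
# Stage 2: cluster-level regression of those means on treatment and a covariate.
# Sketch only; the paper develops a targeted (TMLE) version of this idea.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def stage1_cluster_mean(df_c):
    """IPW-adjusted mean outcome for one cluster.

    Expected columns: 'measured' (0/1), 'y' (NaN when not measured),
    'w' (individual-level predictor of measurement and outcome).
    """
    if df_c["measured"].all():                      # no missingness in this cluster
        return df_c["y"].mean()
    ps = smf.logit("measured ~ w", data=df_c).fit(disp=0)   # P(measured | W)
    p = ps.predict(df_c)
    obs = df_c["measured"] == 1
    return np.average(df_c.loc[obs, "y"], weights=1.0 / p[obs])

def two_stage_estimate(df):
    """One row per individual; 'arm' and 'x_cluster' are constant within cluster."""
    stage1 = (df.groupby(["cluster", "arm", "x_cluster"])
                .apply(stage1_cluster_mean)
                .rename("ybar")
                .reset_index())
    # Stage 2: covariate-adjusted comparison of cluster-level means between arms.
    fit = smf.ols("ybar ~ arm + x_cluster", data=stage1).fit()
    return fit.params["arm"], fit.bse["arm"]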


Subject(s)
Outcome Assessment, Health Care; Research Design; Humans; Randomized Controlled Trials as Topic; Probability; Bias; Cluster Analysis; Computer Simulation
2.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38563531

ABSTRACT

A crossover trial is an efficient trial design when there is no carry-over effect. To reduce the impact of the biological carry-over effect, a washout period is often built into the design. However, the carry-over effect remains an outstanding concern when a washout period is unethical or cannot sufficiently diminish its impact. The latter can occur in comparative effectiveness research, where the carry-over effect is often non-biological but behavioral. In this paper, we investigate the crossover design under a potential outcomes framework with and without the carry-over effect. We find that when the carry-over effect exists and satisfies a sign condition, the basic estimator underestimates the treatment effect, which does not inflate the type I error of one-sided tests but negatively impacts the power. This leads to a power trade-off between the crossover design and the parallel-group design, and we derive the condition under which the crossover design does not lead to type I error inflation and is still more powerful than the parallel-group design. We also develop covariate adjustment methods for crossover trials. We evaluate the performance of the crossover design and covariate adjustment using data from the MTN-034/REACH study.
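For the standard two-period, two-sequence (AB/BA) design, one common form of the basic estimator referred to above is half the difference, across sequences, of the within-subject period-1 minus period-2 change. A minimal sketch with hypothetical column names and made-up numbers:

# Basic AB/BA crossover estimator. When a carry-over effect of the kind
# discussed above is present, this quantity is attenuated toward zero.
import pandas as pd

def basic_crossover_estimate(df):
    """df columns (hypothetical): 'sequence' in {'AB', 'BA'}, 'y1', 'y2'."""
    d = df["y1"] - df["y2"]                      # within-subject period difference
    d_ab = d[df["sequence"] == "AB"].mean()      # treatment given in period 1
    d_ba = d[df["sequence"] == "BA"].mean()      # treatment given in period 2
    return 0.5 * (d_ab - d_ba)                   # estimated treatment effect

toy = pd.DataFrame({
    "sequence": ["AB", "AB", "BA", "BA"],
    "y1": [5.0, 6.0, 2.0, 3.0],
    "y2": [2.0, 3.0, 4.0, 6.0],
})
print(basic_crossover_estimate(toy))             # 0.5 * (3.0 - (-2.5)) = 2.75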


Subject(s)
Research Design; Cross-Over Studies
3.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. The test inversion procedure is computationally expensive, limiting the usage of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu's result to obtain a closed form expression for the randomization-based covariate-adjusted confidence interval and give practitioners a sufficient condition that can be checked using observed data and that guarantees that these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as it takes to calculate the Fisher-exact P-value, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
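For readers unfamiliar with the mechanics, a Monte Carlo approximation to the Fisher-exact P-value for the unadjusted difference-in-means statistic can be coded in a few lines (hypothetical data, complete randomization assumed); the covariate-adjusted statistic and the closed-form confidence interval developed in the paper are not reproduced here.

# Fisher randomization test: re-randomize treatment labels under the sharp null
# of no effect for any unit and compare the observed difference in means with
# its randomization distribution.
import numpy as np

rng = np.random.default_rng(0)

def frt_pvalue(y, a, n_draws=10_000):
    """y: outcomes; a: 0/1 assignments (hypothetical example data below)."""
    obs = y[a == 1].mean() - y[a == 0].mean()
    n_treat = int(a.sum())
    stats = np.empty(n_draws)
    for b in range(n_draws):
        a_star = np.zeros_like(a)
        a_star[rng.choice(len(a), size=n_treat, replace=False)] = 1
        stats[b] = y[a_star == 1].mean() - y[a_star == 0].mean()
    return np.mean(np.abs(stats) >= abs(obs))    # two-sided, Monte Carlo approximation

y = np.array([3.1, 2.8, 4.0, 5.2, 2.5, 3.9, 4.4, 2.9])
a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(frt_pvalue(y, a))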


Subject(s)
Computer Simulation; Confidence Intervals; Humans; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods
4.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38446441

ABSTRACT

Benkeser et al. demonstrate how adjustment for baseline covariates in randomized trials can meaningfully improve precision for a variety of outcome types. Their findings build on a long history, starting in 1932 with R.A. Fisher and including more recent endorsements by the U.S. Food and Drug Administration and the European Medicines Agency. Here, we address an important practical consideration: how to select the adjustment approach-which variables and in which form-to maximize precision, while maintaining Type-I error control. Balzer et al. previously proposed Adaptive Pre-specification within TMLE to flexibly and automatically select, from a prespecified set, the approach that maximizes empirical efficiency in small trials (N < 40). To avoid overfitting with few randomized units, selection was previously limited to working generalized linear models, adjusting for a single covariate. Now, we tailor Adaptive Pre-specification to trials with many randomized units. Using V-fold cross-validation and the estimated influence curve-squared as the loss function, we select from an expanded set of candidates, including modern machine learning methods adjusting for multiple covariates. As assessed in simulations exploring a variety of data-generating processes, our approach maintains Type-I error control (under the null) and offers substantial gains in precision-equivalent to 20%-43% reductions in sample size for the same statistical power. When applied to real data from ACTG Study 175, we also see meaningful efficiency improvements overall and within subgroups.
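To make the selection criterion concrete, the sketch below cross-validates a few candidate adjustment sets using the mean of the squared estimated influence curve as the loss, in the spirit of Adaptive Pre-specification, but with ordinary least-squares working models and a known 1:1 randomization probability rather than the full TMLE machinery of the paper; the candidate set and all variable names are hypothetical.

# V-fold cross-validated selection among candidate covariate-adjustment sets,
# scoring each candidate by the cross-validated mean of its squared estimated
# influence curve (an estimate of the variance of the adjusted estimator).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.model_selection import KFold

CANDIDATES = ["1", "w1", "w1 + w2"]          # candidate adjustment sets (hypothetical)

def influence_curve(df_fit, df_eval, rhs, pi=0.5):
    """Estimated influence curve of the covariate-adjusted ATE on df_eval."""
    m = smf.ols(f"y ~ a + {rhs}", data=df_fit).fit()
    q1 = m.predict(df_eval.assign(a=1))
    q0 = m.predict(df_eval.assign(a=0))
    psi = (q1 - q0).mean()
    a, y = df_eval["a"], df_eval["y"]
    return (a / pi) * (y - q1) - ((1 - a) / (1 - pi)) * (y - q0) + (q1 - q0) - psi

def adaptive_prespecification(df, v=5):
    """df columns (hypothetical): 'y', 'a' (0/1), 'w1', 'w2'."""
    losses = {}
    for rhs in CANDIDATES:
        folds = KFold(n_splits=v, shuffle=True, random_state=0).split(df)
        ics = [influence_curve(df.iloc[tr], df.iloc[te], rhs) for tr, te in folds]
        losses[rhs] = np.mean(np.concatenate([np.asarray(x) ** 2 for x in ics]))
    return min(losses, key=losses.get)       # adjustment set with smallest CV variance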


Subject(s)
Machine Learning; Research Design; United States; Randomized Controlled Trials as Topic; Linear Models; Sample Size
5.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39271117

ABSTRACT

In randomized controlled trials, adjusting for baseline covariates is commonly used to improve the precision of treatment effect estimation. However, covariates often have missing values. Recently, Zhao and Ding studied two simple strategies, the single imputation method and missingness-indicator method (MIM), to handle missing covariates and showed that both methods can provide an efficiency gain compared to not adjusting for covariates. To better understand and compare these two strategies, we propose and investigate a novel theoretical imputation framework termed cross-world imputation (CWI). This framework includes both single imputation and MIM as special cases, facilitating the comparison of their efficiency. Through the lens of CWI, we show that MIM implicitly searches for the optimal CWI values and thus achieves optimal efficiency. We also derive conditions under which the single imputation method, by searching for the optimal single imputation values, can achieve the same efficiency as the MIM. We illustrate our findings through simulation studies and a real data analysis based on the Childhood Adenotonsillectomy Trial. We conclude by discussing the practical implications of our findings.
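The missingness-indicator method itself is simple to state: fill each missing covariate value with an arbitrary constant and add the missingness indicator to the adjustment model. A minimal sketch with hypothetical column names (constant-fill at zero):

# Missingness-indicator method (MIM) for a partially missing baseline covariate
# in a randomized trial: adjust for the constant-filled covariate and for the
# indicator of missingness.
import statsmodels.formula.api as smf

def mim_fit(df):
    """df columns (hypothetical): 'y', 'a' (0/1 treatment), 'x' with NaNs."""
    d = df.copy()
    d["r"] = d["x"].isna().astype(int)       # missingness indicator
    d["x_fill"] = d["x"].fillna(0.0)         # any constant works under MIM
    return smf.ols("y ~ a + x_fill + r", data=d).fit(cov_type="HC2")

# The single-imputation comparator discussed above would instead fill NaNs with
# a chosen value (e.g., the observed mean of x) and omit the indicator r.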


Subject(s)
Computer Simulation; Models, Statistical; Randomized Controlled Trials as Topic; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Humans; Data Interpretation, Statistical; Child; Biometry/methods; Adenoidectomy/statistics & numerical data; Tonsillectomy/statistics & numerical data
6.
Stat Med ; 43(11): 2083-2095, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38487976

ABSTRACT

To obtain valid inference following stratified randomisation, treatment effects should be estimated with adjustment for stratification variables. Stratification sometimes requires categorisation of a continuous prognostic variable (e.g., age), which raises the question: should adjustment be based on randomisation categories or underlying continuous values? In practice, adjustment for randomisation categories is more common. We reviewed trials published in general medical journals and found that none of the 32 trials that stratified randomisation based on a continuous variable adjusted for continuous values in the primary analysis. Using data simulation, this article evaluates the performance of different adjustment strategies for continuous and binary outcomes where the covariate-outcome relationship (via the link function) was either linear or non-linear. Given the utility of covariate adjustment for addressing missing data, we also considered settings with complete or missing outcome data. Analysis methods included linear or logistic regression with no adjustment for the stratification variable, adjustment for randomisation categories, or adjustment for continuous values assuming a linear covariate-outcome relationship or allowing for non-linearity using fractional polynomials or restricted cubic splines. Unadjusted analysis performed poorly throughout. Adjustment approaches that misspecified the underlying covariate-outcome relationship were less powerful and, alarmingly, biased in settings where the stratification variable predicted missing outcome data. Adjustment for randomisation categories tends to involve the highest degree of misspecification, and so should be avoided in practice. To guard against misspecification, we recommend use of flexible approaches such as fractional polynomials and restricted cubic splines when adjusting for continuous stratification variables in randomised trials.
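As a concrete illustration of the recommendation above, the sketch below contrasts adjustment for the randomisation categories with adjustment for the underlying continuous value, the latter either assumed linear or modelled with patsy's natural cubic regression spline basis (cr), which stands in for the restricted cubic splines discussed in the abstract; the variable names and spline degrees of freedom are hypothetical choices.

# Three ways to adjust for a continuous stratification variable (e.g., age)
# after stratified randomisation on its categorised version.
import statsmodels.formula.api as smf

def adjusted_fits(df):
    """df columns (hypothetical): 'y', 'treat' (0/1), 'age', 'age_cat'."""
    # (1) Adjustment for the randomisation categories (most prone to
    #     misspecifying the covariate-outcome relationship):
    m_cat = smf.ols("y ~ treat + C(age_cat)", data=df).fit()
    # (2) Adjustment for the continuous value, assuming linearity:
    m_lin = smf.ols("y ~ treat + age", data=df).fit()
    # (3) Flexible adjustment via a natural cubic spline basis with 4 df:
    m_spl = smf.ols("y ~ treat + cr(age, df=4)", data=df).fit()
    return m_cat, m_lin, m_spl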


Subject(s)
Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Computer Simulation; Linear Models; Data Interpretation, Statistical; Logistic Models; Random Allocation
7.
Stat Med ; 43(2): 201-215, 2024 01 30.
Article in English | MEDLINE | ID: mdl-37933766

ABSTRACT

Generalized linear mixed models (GLMM) are commonly used to analyze clustered data, but when the number of clusters is small to moderate, standard statistical tests may produce elevated type I error rates. Small-sample corrections have been proposed for continuous or binary outcomes without covariate adjustment. However, appropriate tests to use for count outcomes or under covariate-adjusted models remain unknown. An important setting in which this issue arises is in cluster-randomized trials (CRTs). Because many CRTs have just a few clusters (e.g., clinics or health systems), covariate adjustment is particularly critical to address potential chance imbalance and/or low power (e.g., adjustment following stratified randomization or for the baseline value of the outcome). We conducted simulations to evaluate GLMM-based tests of the treatment effect that account for the small (10) or moderate (20) number of clusters under a parallel-group CRT setting across scenarios of covariate adjustment (including adjustment for one or more person-level or cluster-level covariates) for both binary and count outcomes. We find that when the intraclass correlation is non-negligible (≥ 0.01) and the number of covariates is small (≤ 2), likelihood ratio tests with a between-within denominator degree of freedom have type I error rates close to the nominal level. When the number of covariates is moderate (≥ 5), across our simulation scenarios, the relative performance of the tests varied considerably and no method performed uniformly well. Therefore, we recommend adjusting for no more than a few covariates and using likelihood ratio tests with a between-within denominator degree of freedom.


Subject(s)
Research Design; Humans; Cluster Analysis; Randomized Controlled Trials as Topic; Computer Simulation; Linear Models; Sample Size
8.
BMC Med Res Methodol ; 24(1): 32, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341552

ABSTRACT

BACKGROUND: When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution, and generate a covariate-adjusted estimate of the marginal treatment effect. METHODS: The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. RESULTS: We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to those of the standard approach to model-based standardization. CONCLUSION: We demonstrate that multiple imputation can be used to marginalize over a target covariate distribution, providing appropriate inference with a correctly specified parametric outcome model and offering statistical performance comparable to that of the standard approach to model-based standardization.
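For orientation, the "standard approach" that MIM is benchmarked against can be sketched as follows: fit the logistic outcome model, predict every subject's outcome under each treatment, average over the covariate distribution, and use a non-parametric bootstrap for uncertainty. Column names are hypothetical, and the MIM procedure itself (synthetic-data generation plus Bayesian analysis) is not reproduced.

# Model-based standardization (g-computation) for a marginal log odds ratio,
# with maximum likelihood and a non-parametric bootstrap.
import numpy as np
import statsmodels.formula.api as smf

def marginal_log_or(df):
    """df columns (hypothetical): binary 'y', binary 'a', covariates 'x1', 'x2'."""
    m = smf.logit("y ~ a + x1 + x2", data=df).fit(disp=0)
    p1 = m.predict(df.assign(a=1)).mean()    # standardized risk under treatment
    p0 = m.predict(df.assign(a=0)).mean()    # standardized risk under control
    return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

def bootstrap_ci(df, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    draws = [marginal_log_or(df.sample(frac=1, replace=True,
                                       random_state=int(rng.integers(2**31 - 1))))
             for _ in range(n_boot)]
    return np.percentile(draws, [2.5, 97.5])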


Subject(s)
Models, Statistical; Humans; Bayes Theorem; Linear Models; Computer Simulation; Logistic Models; Reference Standards
9.
BMC Med Res Methodol ; 24(1): 186, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39187791

ABSTRACT

BACKGROUND: In long-term follow-up data on patients with malignant tumors, assessing treatment effects requires careful consideration of competing risks. The commonly used cause-specific hazard ratio (CHR) and sub-distribution hazard ratio (SHR) are relative indicators and may present challenges in terms of the proportional hazards assumption and clinical interpretation. Recently, the restricted mean time lost (RMTL) has been recommended as a supplementary measure for better clinical interpretation. Moreover, for observational data in epidemiological and clinical settings, covariate adjustment is crucial for determining the causal effect of treatment because of the influence of confounding factors. METHODS: We construct a covariate-adjusted RMTL estimator based on the inverse probability weighting method and derive its variance to construct interval estimates based on large-sample properties. We use simulation studies to assess the statistical performance of this estimator in various scenarios. In addition, we further consider how treatment effects change over time, constructing a dynamic RMTL difference curve and corresponding confidence bands for the curve. RESULTS: The simulation results demonstrate that the adjusted RMTL estimator exhibits smaller biases than the unadjusted RMTL and provides robust interval estimates in all scenarios. The method was applied to real-world data on cervical cancer patients, revealing improvements in the prognosis of patients with small cell carcinoma of the cervix. The results showed that the protective effect of surgery was significant only in the first 20 months, and the long-term effect was not apparent. Radiotherapy significantly improved patient outcomes during the follow-up period from 17 to 57 months, while radiotherapy combined with chemotherapy significantly improved patient outcomes throughout the entire period. CONCLUSIONS: We propose an approach that is easy to interpret and implement for assessing treatment effects in observational competing-risks data.
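The restricted mean time lost itself is the area under the cause-specific cumulative incidence function up to a horizon tau. A minimal sketch, assuming no censoring so that the cumulative incidence is just the empirical proportion of cause-k events by each time, is given below; the paper's inverse-probability-weighted, covariate-adjusted estimator and its variance are not reproduced.

# Restricted mean time lost to cause k up to horizon tau:
#   RMTL_k(tau) = integral from 0 to tau of F_k(t) dt,
# where F_k is the cumulative incidence function for cause k.
# Sketch assumes no censoring, so F_k(t) = mean(time <= t and cause == k).
import numpy as np

def rmtl(times, causes, k, tau):
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    grid = np.sort(np.unique(np.append(times[times <= tau], [0.0, tau])))
    cif = np.array([np.mean((times <= t) & (causes == k)) for t in grid])
    # Area under the right-continuous step function F_k:
    return float(np.sum(cif[:-1] * np.diff(grid)))

# Example with made-up data: cause 1 is the event of interest, cause 2 competes.
print(rmtl(times=[2, 5, 7, 9, 12], causes=[1, 2, 1, 1, 2], k=1, tau=10))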


Subject(s)
Proportional Hazards Models; Uterine Cervical Neoplasms; Humans; Female; Uterine Cervical Neoplasms/therapy; Observational Studies as Topic/methods; Computer Simulation; Treatment Outcome; Risk Assessment/methods; Risk Assessment/statistics & numerical data
10.
Clin Trials ; 21(4): 399-411, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38825841

ABSTRACT

There has been growing interest in covariate adjustment in the analysis of randomized controlled trials in recent years. For instance, the US Food and Drug Administration recently issued guidance that emphasizes the importance of distinguishing between conditional and marginal treatment effects. Although these effects may sometimes coincide in the context of linear models, this is not typically the case in other settings, and this distinction is often overlooked in clinical trial practice. Considering these developments, this article provides a review of when and how to use covariate adjustment to enhance precision in randomized controlled trials. We describe the differences between conditional and marginal estimands and stress the necessity of aligning statistical analysis methods with the chosen estimand. In addition, we highlight the potential misalignment of commonly used methods in estimating marginal treatment effects. We therefore advocate for the use of the standardization approach, as it can improve efficiency by leveraging the information contained in baseline covariates while remaining robust to model misspecification. Finally, we present practical considerations that have arisen in our respective consultations to further clarify the advantages and limitations of covariate adjustment.
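The conditional/marginal distinction stressed above is easy to demonstrate numerically: in a logistic model, the treatment coefficient (a conditional log odds ratio) generally differs from the standardized marginal log odds ratio even in a randomized trial, because the odds ratio is non-collapsible. A small simulated sketch with arbitrary parameter values:

# Non-collapsibility demo: conditional log OR (logistic regression coefficient)
# versus the marginal log OR obtained by standardization, in a simulated RCT.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)                                   # strong prognostic covariate
a = rng.binomial(1, 0.5, size=n)                         # 1:1 randomization
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * a + 2.0 * x)))    # true conditional log OR = 1.0
df = pd.DataFrame({"y": rng.binomial(1, p), "a": a, "x": x})

m = smf.logit("y ~ a + x", data=df).fit(disp=0)
conditional_log_or = m.params["a"]

p1 = m.predict(df.assign(a=1)).mean()                    # standardized risks
p0 = m.predict(df.assign(a=0)).mean()
marginal_log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

print(conditional_log_or, marginal_log_or)               # the marginal OR is closer to 1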


Subject(s)
Randomized Controlled Trials as Topic; Randomized Controlled Trials as Topic/methods; Humans; Data Interpretation, Statistical; Models, Statistical; Research Design; United States; Linear Models
11.
Clin Trials ; 21(2): 199-210, 2024 04.
Article in English | MEDLINE | ID: mdl-37990575

ABSTRACT

BACKGROUND/AIMS: The stepped-wedge cluster randomized trial (SW-CRT), in which clusters are randomized to a time at which they will transition to the intervention condition - rather than a trial arm - is a relatively new design. SW-CRTs have additional design and analytical considerations compared to conventional parallel arm trials. To inform future methodological development, including guidance for trialists and the selection of parameters for statistical simulation studies, we conducted a review of recently published SW-CRTs. Specific objectives were to describe (1) the types of designs used in practice, (2) adherence to key requirements for statistical analysis, and (3) practices around covariate adjustment. We also examined changes in adherence over time and by journal impact factor. METHODS: We used electronic searches to identify primary reports of SW-CRTs published 2016-2022. Two reviewers extracted information from each trial report and its protocol, if available, and resolved disagreements through discussion. RESULTS: We identified 160 eligible trials, randomizing a median (Q1-Q3) of 11 (8-18) clusters to 5 (4-7) sequences. The majority (122, 76%) were cross-sectional (almost all with continuous recruitment), 23 (14%) were closed cohorts and 15 (9%) open cohorts. Many trials had complex design features such as multiple or multivariate primary outcomes (50, 31%) or time-dependent repeated measures (27, 22%). The most common type of primary outcome was binary (51%); continuous outcomes were less common (26%). The most frequently used method of analysis was a generalized linear mixed model (112, 70%); generalized estimating equations were used less frequently (12, 8%). Among 142 trials with fewer than 40 clusters, only 9 (6%) reported using methods appropriate for a small number of clusters. Statistical analyses clearly adjusted for time effects in 119 (74%), for within-cluster correlations in 132 (83%), and for distinct between-period correlations in 13 (8%). Covariates were included in the primary analysis of the primary outcome in 82 (51%) and were most often individual-level covariates; however, clear and complete pre-specification of covariates was uncommon. Adherence to some key methodological requirements (adjusting for time effects, accounting for within-period correlation) was higher among trials published in higher versus lower impact factor journals. Substantial improvements over time were not observed although a slight improvement was observed in the proportion accounting for a distinct between-period correlation. CONCLUSIONS: Future methods development should prioritize methods for SW-CRTs with binary or time-to-event outcomes, small numbers of clusters, continuous recruitment designs, multivariate outcomes, or time-dependent repeated measures. Trialists, journal editors, and peer reviewers should be aware that SW-CRTs have additional methodological requirements over parallel arm designs including the need to account for period effects as well as complex intracluster correlations.
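Two of the key requirements tallied above (adjusting for time effects and accounting for within-cluster correlation) correspond, for a continuous outcome, to the familiar linear mixed model with fixed period effects and a random cluster intercept. A minimal sketch with hypothetical column names follows; binary outcomes, small numbers of clusters, and distinct between-period correlations require more specialized methods, as the review emphasizes.

# Basic stepped-wedge analysis for a continuous outcome: fixed period (time)
# effects, intervention indicator, and a random intercept for cluster
# (exchangeable within-cluster correlation).
import statsmodels.formula.api as smf

def sw_crt_fit(df):
    """df columns (hypothetical): 'y', 'treat' (0/1), 'period', 'cluster'."""
    model = smf.mixedlm("y ~ treat + C(period)", data=df, groups=df["cluster"])
    return model.fit(reml=True)

# result.params["treat"] is the time-adjusted intervention effect estimate.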


Subject(s)
Research Design; Humans; Cluster Analysis; Randomized Controlled Trials as Topic; Computer Simulation; Linear Models; Sample Size
12.
Clin Trials ; : 17407745231222448, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38305269

ABSTRACT

In randomized clinical trials, analyses of time-to-event data without risk stratification, or with stratification based on pre-selected factors revealed at the end of the trial to be at most weakly associated with risk, are quite common. We caution that such analyses are likely delivering hazard ratio estimates that unwittingly dilute the evidence of benefit for the test relative to the control treatment. To make our case, first, we use a hypothetical scenario to contrast risk-unstratified and risk-stratified hazard ratios. Thereafter, we draw attention to the previously published 5-step stratified testing and amalgamation routine (5-STAR) approach in which a pre-specified treatment-blinded algorithm is applied to survival times from the trial to partition patients into well-separated risk strata using baseline covariates determined to be jointly strongly prognostic for event risk. After treatment unblinding, a treatment comparison is done within each risk stratum and stratum-level results are averaged for overall inference. For illustration, we use 5-STAR to reanalyze data for the primary and key secondary time-to-event endpoints from three published cardiovascular outcomes trials. The results show that the 5-STAR estimate is typically smaller (i.e. more in favor of the test treatment) than the originally reported (traditional) estimate. This is not surprising because 5-STAR mitigates the presumed dilution bias in the traditional hazard ratio estimate caused by no or inadequate risk stratification, as evidenced by two detailed examples. Pre-selection of stratification factors at the trial design stage to achieve adequate risk stratification for the analysis will often be challenging. In such settings, an objective risk stratification approach such as 5-STAR, which is partly aligned with guidance from the US Food and Drug Administration on covariate-adjustment in clinical trials, is worthy of consideration.

13.
Pharm Stat ; 2024 May 19.
Article in English | MEDLINE | ID: mdl-38763917

ABSTRACT

Difference in proportions is frequently used to measure the treatment effect for binary outcomes in randomized clinical trials. The estimation of the difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization or g-computation is a widely used method for covariate adjustment in estimating the unconditional difference in proportions, because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and confidence intervals based on large-sample theories. However, their performance under small sample sizes and model misspecification has not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance the finite sample performance. Extensive simulations are provided to demonstrate the performance of the proposed method, spanning a wide range of sample sizes, randomization ratios, and model specifications. We apply the proposed method to a real data example to illustrate its practical utility.

14.
Biom J ; 66(1): e2200135, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37035941

ABSTRACT

Cluster-randomized trials (CRTs) involve randomizing entire groups of participants-called clusters-to treatment arms but often comprise a limited or fixed number of available clusters. While covariate adjustment can account for chance imbalances between treatment arms and increase statistical efficiency in individually randomized trials, analytical methods for individual-level covariate adjustment in small CRTs have received little attention to date. In this paper, we systematically investigate, through extensive simulations, the operating characteristics of propensity score weighting and multivariable regression as two individual-level covariate adjustment strategies for estimating the participant-average causal effect in small CRTs with a rare binary outcome and identify scenarios where each adjustment strategy has a relative efficiency advantage over the other to make practical recommendations. We also examine the finite-sample performance of the bias-corrected sandwich variance estimators associated with propensity score weighting and multivariable regression for quantifying the uncertainty in estimating the participant-average treatment effect. To illustrate the methods for individual-level covariate adjustment, we reanalyze a recent CRT testing a sedation protocol in 31 pediatric intensive care units.


Subject(s)
Computer Simulation; Child; Humans; Cluster Analysis; Randomized Controlled Trials as Topic; Sample Size; Bias
15.
BMC Genomics ; 24(1): 687, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37974076

ABSTRACT

BACKGROUND: Advances in sequencing technology and cost reductions have enabled the emergence of various statistical methods for RNA-sequencing data, including differential co-expression network analysis (or differential network analysis). A key benefit of this method is that it takes into consideration the interactions between or among genes and does not require established knowledge of biological pathways. To date, no existing software can incorporate covariates that should be adjusted for as confounding factors when performing a differential network analysis. RESULTS: We developed an R package, PRANA, in which a user can easily include multiple covariates. The main R function in this package leverages a novel pseudo-value regression approach for differential network analysis of RNA-sequencing data. The package also provides complementary R functions for extracting adjusted p-values and coefficient estimates of all variables or a specific variable for each gene, as well as for identifying, from the output, the names of genes that are differentially connected (DC, hereafter) between subjects under biologically different conditions. CONCLUSION: We demonstrate the application of this package with real data on chronic obstructive pulmonary disease. PRANA is available through the CRAN repositories under the GPL-3 license: https://cran.r-project.org/web/packages/PRANA/index.html .


Subject(s)
RNA; Software; Humans; Base Sequence; Sequence Analysis, RNA
16.
Biometrics ; 79(2): 975-987, 2023 06.
Article in English | MEDLINE | ID: mdl-34825704

ABSTRACT

In many randomized clinical trials of therapeutics for COVID-19, the primary outcome is an ordinal categorical variable, and interest focuses on the odds ratio (OR; active agent vs control) under the assumption of a proportional odds model. Although at the final analysis the outcome will be determined for all subjects, at an interim analysis, the status of some participants may not yet be determined, for example, because ascertainment of the outcome may not be possible until some prespecified follow-up time. Accordingly, the outcome from these subjects can be viewed as censored. A valid interim analysis can be based on data only from those subjects with full follow-up; however, this approach is inefficient, as it does not exploit additional information that may be available on those for whom the outcome is not yet available at the time of the interim analysis. Appealing to the theory of semiparametrics, we propose an estimator for the OR in a proportional odds model with censored, time-lagged categorical outcome that incorporates additional baseline and time-dependent covariate information and demonstrate that it can result in considerable gains in efficiency relative to simpler approaches. A byproduct of the approach is a covariate-adjusted estimator for the OR based on the full data that would be available at a final analysis.


Subject(s)
COVID-19; Humans; Odds Ratio; Treatment Outcome
17.
Stat Med ; 42(22): 4015-4027, 2023 09 30.
Article in English | MEDLINE | ID: mdl-37455675

ABSTRACT

The receiver operating characteristic (ROC) curve is a popular tool to describe and compare the diagnostic accuracy of biomarkers when a binary-scale gold standard is available. However, there are many examples of diagnostic tests whose gold standards are continuous. Hence, several extensions of the ROC curve have been proposed to evaluate the diagnostic potential of biomarkers when the gold standard is continuous. Moreover, in evaluating these biomarkers, it is often necessary to consider the effects of covariates on the diagnostic accuracy of the biomarker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures, or aspects of specimen handling. Applying covariate adjustment when the gold standard is continuous is challenging and has not been addressed in the literature. To fill the gap, we propose two general testing frameworks to account for the effect of covariates on diagnostic accuracy. Simulation studies are conducted to compare the proposed tests. Data from a study that assessed three types of imaging modalities for detecting neoplastic colon polyps and cancers are used to illustrate the proposed methods.


Subject(s)
Diagnostic Tests, Routine; Humans; Computer Simulation; ROC Curve; Biomarkers
18.
J R Stat Soc Series B Stat Methodol ; 85(2): 356-377, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37593690

ABSTRACT

We present a framework for using existing external data to identify and estimate the relative efficiency of a covariate-adjusted estimator compared to an unadjusted estimator in a future randomized trial. Under certain conditions, these relative efficiencies approximate the ratio of sample sizes needed to achieve a desired power. We develop semiparametrically efficient estimators of the relative efficiencies for several treatment effect estimands of interest with either fully or partially observed outcomes, allowing for the application of flexible statistical learning tools to estimate the nuisance functions. We propose an analytic Wald-type confidence interval and a double bootstrap scheme for statistical inference. We demonstrate the performance of the proposed methods through simulation studies and apply these methods to estimate the efficiency gain of covariate adjustment in COVID-19 therapeutic trials.

19.
J Biopharm Stat ; : 1-14, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37526447

ABSTRACT

Determining clinically meaningful change (CMC) in a patient-reported outcome (PRO) measure is central to its use in gauging how patients feel and function, especially for evaluating a treatment effect. Anchor-based approaches are recommended to estimate a CMC threshold on a PRO measure. Determination of CMC involves linking changes or differences in the target PRO measure to those in an external (anchor) measure that is easier to interpret than, and appreciably associated with, the PRO measure. One type of anchor-based approach for CMC is the "mean change method", where the mean change in score of the target PRO measure within a particular anchor transition level (e.g., one-category improvement) is subtracted from the mean change in score within an adjacent anchor category (e.g., no-change category). In the literature, the mean change method has been applied with and without an adjustment for the baseline scores of the PRO of interest. This article provides the analytic rationale and conceptual justification for keeping the analysis unadjusted and not controlling for baseline PRO scores. Two illustrative examples are highlighted. The current research is essentially a variation of Lord's paradox (where whether to adjust for a baseline variable depends on the research question) placed in a new context. Once the adjustment is made, the resulting CMC estimate reflects an artificial case where the anchor transition levels are forced to have the same average baseline PRO score. The unadjusted estimate acknowledges that the anchor transition levels are naturally occurring (not randomized) groups and thus maintains external validity.
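Computationally, the unadjusted mean change method reduces to a single difference of group means: the mean PRO change in the one-category-improvement anchor group versus the mean PRO change in the adjacent no-change group, with no regression on the baseline PRO score. A tiny sketch with hypothetical labels (the sign convention depends on the PRO's direction of improvement):

# Unadjusted "mean change method" for a clinically meaningful change threshold:
# difference between the mean PRO change in the one-category-improvement anchor
# group and the mean PRO change in the no-change anchor group.
import pandas as pd

def cmc_mean_change(df):
    """df columns (hypothetical): 'pro_change', 'anchor' with labels such as
    'improved_1' and 'no_change'."""
    means = df.groupby("anchor")["pro_change"].mean()
    return means["improved_1"] - means["no_change"]

toy = pd.DataFrame({
    "anchor": ["improved_1"] * 3 + ["no_change"] * 3,
    "pro_change": [8.0, 10.0, 12.0, 1.0, 2.0, 3.0],
})
print(cmc_mean_change(toy))                  # 10.0 - 2.0 = 8.0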

20.
Pharm Stat ; 22(2): 396-407, 2023 03.
Article in English | MEDLINE | ID: mdl-36504179

ABSTRACT

In a randomized controlled trial (RCT), it is possible to improve precision and power and reduce sample size by appropriately adjusting for baseline covariates. There are multiple statistical methods to adjust for prognostic baseline covariates, such as an ANCOVA method. In this paper, we propose a clustering-based stratification method for adjusting for prognostic baseline covariates. Clusters (strata) are formed based only on prognostic baseline covariates, not outcome data nor treatment assignment. Therefore, the clustering procedure can be completed prior to the availability of outcome data. The treatment effect is estimated in each cluster, and the overall treatment effect is derived by combining all cluster-specific treatment effect estimates. The proposed implementation of the procedure is described. Simulation studies and an example are presented.
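A minimal sketch of the procedure described above: build strata from standardized baseline covariates only (here with k-means, before any outcome data are available), estimate the treatment effect within each stratum, and pool the stratum-specific estimates, here with simple inverse-variance weights. The column names, number of strata, and pooling weights are hypothetical choices, and each stratum is assumed to contain patients from both arms.

# Clustering-based stratification: strata are formed from baseline covariates
# only (no outcome data, no treatment assignment); the treatment effect is then
# estimated within each stratum and the estimates are combined.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def clustered_stratification_effect(df, covariate_cols, n_strata=4, seed=0):
    """df columns (hypothetical): 'y', 'treat' (0/1), plus covariate_cols."""
    z = StandardScaler().fit_transform(df[covariate_cols])
    strata = KMeans(n_clusters=n_strata, n_init=10, random_state=seed).fit_predict(z)
    df = df.assign(stratum=strata)
    effects, weights = [], []
    for _, g in df.groupby("stratum"):       # assumes both arms present in each stratum
        y1 = g.loc[g["treat"] == 1, "y"]
        y0 = g.loc[g["treat"] == 0, "y"]
        effects.append(y1.mean() - y0.mean())
        weights.append(1.0 / (y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)))
    effects, weights = np.asarray(effects), np.asarray(weights)
    estimate = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return estimate, se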


Subject(s)
Research Design; Humans; Data Interpretation, Statistical; Randomized Controlled Trials as Topic; Sample Size; Cluster Analysis