Results 1 - 20 of 6,082
1.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. This test-inversion procedure is computationally expensive, limiting the use of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed-form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu's result to obtain a closed-form expression for the randomization-based covariate-adjusted confidence interval, and we give practitioners a sufficient condition that can be checked using observed data and that guarantees that these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as the Fisher-exact P-value itself, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
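
The closed-form intervals above replace the expensive test-inversion step; the basic building block they speed up is the Fisher randomization test itself. As a point of reference, here is a minimal Monte Carlo sketch of that test with the difference-in-means statistic under the sharp null of no effect (the toy data and number of draws are illustrative, and this is not the paper's covariate-adjusted interval):

```python
# Minimal Monte Carlo sketch of a Fisher randomization test using the
# difference-in-means statistic under the sharp null of no effect.
import numpy as np

rng = np.random.default_rng(2024)

def frt_pvalue(y, z, n_draws=10_000):
    """Two-sided randomization p-value for the difference in means.

    y : outcomes; z : 0/1 treatment indicators (completely randomized).
    Under the sharp null, outcomes are fixed and only labels are re-randomized.
    """
    obs = y[z == 1].mean() - y[z == 0].mean()
    null_stats = np.empty(n_draws)
    for b in range(n_draws):
        z_star = rng.permutation(z)          # re-randomize the labels
        null_stats[b] = y[z_star == 1].mean() - y[z_star == 0].mean()
    return np.mean(np.abs(null_stats) >= abs(obs))

# toy data: 12 units, 6 treated
y = np.array([3.1, 2.4, 5.0, 4.2, 1.9, 3.3, 6.1, 5.8, 4.9, 6.5, 5.2, 7.0])
z = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
print(f"Fisher-exact (Monte Carlo) p-value: {frt_pvalue(y, z):.4f}")
```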


Subjects
Computer Simulation; Confidence Intervals; Humans; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods
2.
Stat Med ; 43(19): 3613-3632, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-38880949

ABSTRACT

There is growing interest in platform trials that allow new treatment arms to be added as the trial progresses, as well as treatments to be stopped partway through the trial either for lack of benefit/futility or for superiority. In some situations, platform trials need to guarantee that error rates are controlled. This paper presents a multi-stage design that allows additional arms to be added to a platform trial in a preplanned fashion, while still controlling the family-wise error rate, under the assumption of a known number and timing of treatments to be added and no time trends. A method is given to compute the sample size required to achieve a desired level of power, and we show how the distribution of the sample size and the expected sample size can be found. We focus on power under the least favorable configuration, which is the power of finding the treatment with a clinically relevant effect out of a set of treatments while the rest have an uninteresting treatment effect. A motivating trial is presented that focuses on two settings: the first with a set number of stages per active treatment arm, and the second with a set total number of stages, so that treatments added later get fewer stages. Compared with a Bonferroni correction, the savings in the maximum total sample size are modest in a trial with three arms (<1% of the total sample size). However, the savings are more substantial in trials with more arms.
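
For orientation, the Bonferroni comparator mentioned above corresponds to the familiar per-arm sample-size formula with the one-sided alpha split across the experimental arms. The sketch below shows that comparator calculation only, not the authors' multi-stage design; the effect size, standard deviation, and power are illustrative assumptions:

```python
# Per-arm sample size for a normal-endpoint, two-arm comparison with a
# Bonferroni-adjusted one-sided alpha across K experimental arms; this is
# the simple comparator mentioned in the abstract, not the multi-stage design.
from scipy.stats import norm

def per_arm_n(delta, sigma, alpha=0.025, power=0.9, k_arms=3):
    alpha_adj = alpha / k_arms                 # Bonferroni split of the FWER
    z_a, z_b = norm.ppf(1 - alpha_adj), norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# hypothetical clinically relevant effect of 0.5 standard deviations
print(round(per_arm_n(delta=0.5, sigma=1.0, k_arms=3)))  # ~108 per arm here
```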


Subjects
Research Design; Humans; Sample Size; Computer Simulation; Models, Statistical; Clinical Trials as Topic/methods; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data
3.
Stat Med ; 43(18): 3364-3382, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38844988

ABSTRACT

Adaptive randomized clinical trials are of major interest when dealing with a time-to-event outcome in a prolonged observation window. No consensus exists on how to define stopping boundaries or how to combine p-values or test statistics in the terminal analysis in the case of a frequentist design with sample size adaptation. In a one-sided setting, we compared three frequentist approaches using stopping boundaries relying on α-spending functions and a Bayesian monitoring setting with boundaries based on the posterior distribution of the log-hazard ratio. All designs comprised a single interim analysis with an efficacy stopping rule and the possibility of sample size adaptation at this interim step. The three frequentist approaches were defined by their terminal analysis: combination of stagewise statistics (Wassmer), combination of p-values (Desseaux), or patientwise splitting (Jörgens), and we compared the results with those of the Bayesian monitoring approach (Freedman). These approaches were evaluated in a simulation study and then illustrated on a real dataset from a randomized clinical trial conducted in elderly patients with chronic lymphocytic leukemia. All approaches except the Bayesian monitoring approach controlled the type I error rate, and all yielded satisfactory power. The frequentist approaches performed best in underpowered trials. The power of all the approaches was affected by violation of the proportional hazards (PH) assumption. For adaptive designs with a survival endpoint and a one-sided alternative hypothesis, the Wassmer and Jörgens approaches after sample size adaptation should be preferred, unless violation of PH is suspected.
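
The combination-of-stagewise-statistics idea referenced for the Wassmer approach is commonly implemented as a weighted inverse-normal combination of the stagewise one-sided p-values. A minimal sketch follows; the equal weights, the illustrative p-values, and the comparison against a separately derived α-spending boundary are assumptions, not details taken from the paper:

```python
# Sketch of the inverse-normal combination of stagewise one-sided p-values,
# the kind of terminal-analysis statistic used in Wassmer-type adaptive
# designs; the weights and example p-values are illustrative only.
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine independent stagewise p-values with prefixed weights."""
    z1, z2 = norm.ppf(1 - p1), norm.ppf(1 - p2)
    z_comb = (w1 * z1 + w2 * z2) / (w1**2 + w2**2) ** 0.5
    return z_comb, 1 - norm.cdf(z_comb)

z, p = inverse_normal_combination(p1=0.04, p2=0.03)
# compare z to the stage-2 boundary from the chosen alpha-spending function
print(f"combined Z = {z:.3f}, combined one-sided p = {p:.4f}")
```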


Subjects
Bayes Theorem; Computer Simulation; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Sample Size; Research Design; Endpoint Determination; Leukemia, Lymphocytic, Chronic, B-Cell/drug therapy; Models, Statistical
4.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38861372

ABSTRACT

In many randomized placebo-controlled trials with a biomarker-defined subgroup, the treatment effect in this subgroup is believed to be the same as or larger than that in its complement. These subgroups are often referred to as the biomarker-positive and biomarker-negative subgroups. Most biomarker-stratified pivotal trials aim to demonstrate a significant treatment effect either in the biomarker-positive subgroup or in the overall population. A major shortcoming of this approach is that the treatment can be declared effective in the overall population even though it has no effect in the biomarker-negative subgroup. We use the isotonic assumption about the treatment effects in the two subgroups to construct an efficient way to test for a treatment effect in both the biomarker-positive and biomarker-negative subgroups. A substantial reduction in the required sample size for such a trial compared with existing methods makes evaluating the treatment effect in both subgroups feasible in pivotal trials, especially when the prevalence of the biomarker-positive subgroup is less than 0.5.


Subjects
Biomarkers; Randomized Controlled Trials as Topic; Humans; Biomarkers/analysis; Biomarkers/blood; Randomized Controlled Trials as Topic/statistics & numerical data; Sample Size; Treatment Outcome; Biometry/methods; Computer Simulation; Models, Statistical
5.
BMC Med Res Methodol ; 24(1): 130, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840047

ABSTRACT

BACKGROUND: Faced with the high cost and limited efficiency of classical randomized controlled trials, researchers are increasingly applying adaptive designs to speed up the development of new drugs. However, how adaptive designs are applied in drug randomized controlled trials (RCTs), and whether they are adequately reported, remains unclear. Thus, this study aimed to summarize the epidemiological characteristics of the relevant trials and to assess their reporting quality using the Adaptive designs CONSORT Extension (ACE) checklist. METHODS: We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), and ClinicalTrials.gov from inception to January 2020. We included drug RCTs that explicitly claimed to be adaptive trials or used any type of adaptive design. We extracted the epidemiological characteristics of the included studies to summarize how adaptive designs were applied. We assessed the reporting quality of the trials with the ACE checklist. Univariable and multivariable linear regression models were used to assess the association of four prespecified factors with reporting quality. RESULTS: Our survey included 108 adaptive trials. We found that adaptive designs have been applied increasingly over the years, most commonly in phase II trials (n = 45, 41.7%). The primary reasons for using an adaptive design were to speed up the trial and facilitate decision-making (n = 24, 22.2%), maximize the benefit to participants (n = 21, 19.4%), and reduce the total sample size (n = 15, 13.9%). Group sequential design (n = 63, 58.3%) was the most frequently applied method, followed by adaptive randomization design (n = 26, 24.1%) and adaptive dose-finding design (n = 24, 22.2%). Adherence to the 26 topics of the ACE checklist ranged from 7.4% to 99.1%, with eight topics being adequately reported (i.e., level of adherence ≥ 80%) and eight others being poorly reported (i.e., level of adherence ≤ 30%). In addition, among the seven items specific to adaptive trials, three were poorly reported: accessibility of the statistical analysis plan (n = 8, 7.4%), measures for confidentiality (n = 14, 13.0%), and assessments of similarity between interim stages (n = 25, 23.1%). The mean score on the ACE checklist was 13.9 (standard deviation [SD], 3.5) out of 26. According to our multivariable regression analysis, more recently published trials (estimated β = 0.14, p < 0.01) and multicenter trials (estimated β = 2.22, p < 0.01) were associated with better reporting. CONCLUSION: Adaptive designs have been used increasingly over the years, primarily in early-phase drug trials. However, the reporting quality of adaptive trials is suboptimal, and substantial efforts are needed to improve it.


Subjects
Randomized Controlled Trials as Topic; Research Design; Humans; Research Design/standards; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/standards; Checklist/methods; Checklist/standards; Clinical Trials, Phase II as Topic/methods; Clinical Trials, Phase II as Topic/statistics & numerical data; Clinical Trials, Phase II as Topic/standards
6.
Stat Med ; 43(17): 3313-3325, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-38831520

ABSTRACT

In a multi-center randomized controlled trial (RCT) with competitive recruitment, eligible patients are enrolled sequentially by different study centers and are randomized to treatment groups using the chosen randomization method. Given the stochastic nature of the recruitment process, some centers may enroll more patients than others, and in some instances a center may enroll multiple patients in a row, for example, on a given day. If the study is open-label, the investigators might be able to make intelligent guesses about upcoming treatment assignments in the randomization sequence, even if the trial is centrally randomized and not stratified by center. In this paper, we use enrollment data inspired by a real multi-center RCT to quantify the susceptibility of two restricted randomization procedures, the permuted block design and the big stick design, to selection bias under the convergence strategy of Blackwell and Hodges (1957) applied at the center level. We provide simulation evidence that the expected proportion of correct guesses may be greater than 50% (i.e., an increased risk of selection bias) and depends on the chosen randomization method and the number of study patients recruited by a given center that take consecutive positions on the central allocation schedule. We propose some strategies for ensuring stronger encryption of the randomization sequence to mitigate the risk of selection bias.
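
The convergence strategy is easy to simulate: the guesser always predicts the arm that is currently behind and guesses at random on ties. The sketch below estimates the expected proportion of correct guesses for a permuted block design, under the simplifying assumption that the guesser sees every assignment and knows the block boundaries (a stronger setting than the paper's center-level analysis); the block size and number of blocks are illustrative:

```python
# Sketch: expected proportion of correct guesses under the Blackwell-Hodges
# convergence strategy for a permuted block design with block size 4.
# The guesser predicts the currently under-allocated arm within the block
# and guesses at random when the counts are tied.
import numpy as np

rng = np.random.default_rng(1957)

def simulate_guessing(n_blocks=10, block_size=4, n_sim=5_000):
    correct = 0
    total = n_blocks * block_size * n_sim
    for _ in range(n_sim):
        for _ in range(n_blocks):
            block = rng.permutation([0] * (block_size // 2) + [1] * (block_size // 2))
            n0 = n1 = 0
            for actual in block:
                if n0 == n1:
                    guess = rng.integers(2)      # tie: guess at random
                else:
                    guess = 0 if n0 < n1 else 1  # guess the lagging arm
                correct += guess == actual
                n0 += actual == 0
                n1 += actual == 1
    return correct / total

print(f"expected proportion of correct guesses: {simulate_guessing():.3f}")
```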


Subjects
Multicenter Studies as Topic; Patient Selection; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Computer Simulation; Selection Bias; Models, Statistical
7.
Stat Med ; 43(17): 3326-3352, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-38837431

ABSTRACT

Stepped wedge trials (SWTs) are a type of cluster randomized trial that involve repeated measures on clusters and design-induced confounding between time and treatment. Although mixed models are commonly used to analyze SWTs, they are susceptible to misspecification, particularly for cluster-longitudinal designs such as SWTs. Mixed model estimation leverages both "horizontal" (within-cluster) and "vertical" (between-cluster) information. To use horizontal information in a mixed model, both the mean model and the correlation structure must be correctly specified or accounted for, since time is confounded with treatment and measurements are likely correlated within clusters. Alternative non-parametric methods have been proposed that use only vertical information; these are more robust because between-cluster comparisons in an SWT preserve randomization, but they are not very efficient. We propose a composite likelihood method that focuses on vertical information but has the flexibility to recover efficiency by using additional horizontal information. We compare the properties and performance of various methods, using simulations based on COVID-19 data and a demonstration on the LIRE trial. We found that a vertical composite likelihood model that leverages baseline data is more robust than traditional methods and more efficient than methods that use only vertical information. We hope that these results demonstrate the potential value of model-based vertical methods for SWTs with a large number of clusters, and that these new tools are useful to researchers who are concerned about misspecification of traditional models.
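
To make the vertical/horizontal distinction concrete, a purely vertical analysis compares treated and control clusters within the same period, where randomization alone justifies the contrast. The sketch below implements that idea in its simplest unweighted form; it is only an illustration of vertical information, not the composite likelihood estimator proposed in the paper, and the column names are assumptions:

```python
# Simplified "vertical" analysis of a stepped wedge trial: within each period,
# compare the mean of treated cluster-period means with that of control
# cluster-period means, then average the period-specific contrasts.
import pandas as pd

def vertical_estimate(df):
    """df columns (assumed): cluster, period, treated (0/1), y (outcome)."""
    cp = (df.groupby(["cluster", "period", "treated"], as_index=False)["y"]
            .mean())                                   # cluster-period means
    contrasts = []
    for _, period_df in cp.groupby("period"):
        arms = period_df.groupby("treated")["y"].mean()
        if set(arms.index) == {0, 1}:                  # need both arms present
            contrasts.append(arms.loc[1] - arms.loc[0])
    return sum(contrasts) / len(contrasts)

# usage: vertical_estimate(trial_data) with a long-format cluster-period table
```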


Subjects
Randomized Controlled Trials as Topic; Humans; Likelihood Functions; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Cluster Analysis; Computer Simulation; Models, Statistical; COVID-19; Research Design
8.
J Surg Res ; 300: 33-42, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38795671

ABSTRACT

INTRODUCTION: Loss to follow-up (LTFU) distorts the results of randomized controlled trials (RCTs). Understanding trial characteristics that contribute to LTFU may enable investigators to anticipate the extent of LTFU and plan retention strategies. The objective of this systematic review and meta-analysis was to investigate the extent of LTFU in surgical RCTs and evaluate associations between trial characteristics and LTFU. METHODS: MEDLINE, Embase, and PubMed Central were searched for surgical RCTs published between January 2002 and December 2021 in the 30 highest impact factor surgical journals. Two hundred eligible RCTs were randomly selected. The pooled LTFU rate was estimated using random intercept Poisson regression. Associations between trial characteristics and LTFU were assessed using metaregression. RESULTS: The 200 RCTs included 37,914 participants and 1307 LTFU events. The pooled LTFU rate was 3.10 participants per 100 patient-years (95% confidence interval [CI] 1.85-5.17). Trial characteristics associated with reduced LTFU were standard-of-care outcome assessments (rate ratio [RR] 0.17; 95% CI 0.06-0.48), surgery for transplantation (RR 0.08; 95% CI 0.01-0.43), and surgery for cancer (RR 0.10; 95% CI 0.02-0.53). Increased LTFU was associated with patient-reported outcomes (RR 14.21; 95% CI 4.82-41.91) and follow-up duration ≥ three months (odds ratio 10.09; 95% CI 4.79-21.28). CONCLUSIONS: LTFU in surgical RCTs is uncommon. Participants may be at increased risk of LTFU in trials with outcomes assessed beyond the standard of care, surgical indications other than cancer or transplant, patient-reported outcomes, and longer follow-up. Investigators should consider the impact of design on LTFU and plan retention strategies accordingly.


Subjects
Lost to Follow-Up; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Surgical Procedures, Operative/statistics & numerical data
9.
Stat Med ; 43(15): 2928-2943, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38742595

ABSTRACT

In clinical trials, multiple comparisons arising from various treatments/doses, subgroups, or endpoints are common. Typically, trial teams focus on the comparison showing the largest observed treatment effect, often involving a specific treatment pair and endpoint within a subgroup. These findings frequently lead to follow-up pivotal studies, many of which do not confirm the initial positive results. Selection bias occurs when the most promising treatment, subgroup, or endpoint is chosen for further development, potentially skewing subsequent investigations. Such bias can be defined as the deviation of the observed treatment effects from the underlying truth. In this article, we propose a general and unified Bayesian framework to address selection bias in clinical trials with multiple comparisons. Our approach does not require a priori specification of a parametric distribution for the prior, offering a more flexible and general solution. The proposed method facilitates a more accurate interpretation of clinical trial results by adjusting for such selection bias. Through simulation studies, we compared several methods and demonstrated their superior performance over the normal shrinkage estimator. Based on its performance and flexibility, we recommend using a Bayesian model averaging estimator that averages over Gaussian mixture models as the prior distribution. We applied the method to a multicenter, randomized, double-blind, placebo-controlled study investigating the cardiovascular effects of dulaglutide.


Subjects
Bayes Theorem; Computer Simulation; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Models, Statistical; Double-Blind Method; Selection Bias; Bias; Multicenter Studies as Topic; Clinical Trials as Topic/statistics & numerical data
10.
Stat Med ; 43(16): 3109-3123, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38780538

ABSTRACT

When designing a randomized clinical trial to compare two treatments, the sample size required to achieve the desired power at a specified type 1 error rate depends on the hypothesis testing procedure. With a binary endpoint (e.g., response), the trial results can be displayed in a 2 × 2 table. If the analysis is performed conditional on the total number of positive responses, Fisher's exact test has an actual type 1 error rate less than or equal to the specified nominal level. Alternatively, one can use one of many unconditional "exact" tests that also preserve the type 1 error rate and are less conservative than Fisher's exact test. In particular, the unconditional test of Boschloo is always at least as powerful as Fisher's exact test, leading to smaller required sample sizes for clinical trials. However, many statisticians have argued over the years that the conditional analysis with Fisher's exact test is the only appropriate procedure. Since having smaller clinical trials is an extremely important consideration, we review the general arguments given for the conditional analysis of a 2 × 2 table in the context of a randomized clinical trial. We find the arguments either not relevant in this context or, if relevant, not completely convincing, suggesting that the sample-size advantage of the unconditional tests should lead to their recommended use. We also briefly suggest that, since designers of clinical trials practically always have target null and alternative response rates, this information could be used to improve the power of the unconditional tests.
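
Both tests are available off the shelf, so the practical difference is easy to see on a single table. The sketch below compares them on a hypothetical 2 × 2 table using SciPy's implementations (the counts and the one-sided alternative are illustrative only):

```python
# Comparing the conditional (Fisher) and unconditional (Boschloo) exact tests
# on a hypothetical 2 x 2 table from a small two-arm trial
# (rows = treatment/control, columns = responders/non-responders).
from scipy.stats import fisher_exact, boschloo_exact

table = [[7, 8],    # treatment: 7 responders out of 15
         [2, 13]]   # control:   2 responders out of 15

_, p_fisher = fisher_exact(table, alternative="greater")
res_boschloo = boschloo_exact(table, alternative="greater")

print(f"Fisher's exact p = {p_fisher:.4f}")
print(f"Boschloo exact p = {res_boschloo.pvalue:.4f}")  # typically <= Fisher's
```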


Subjects
Endpoint Determination; Randomized Controlled Trials as Topic; Research Design; Humans; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Sample Size; Endpoint Determination/methods; Models, Statistical; Data Interpretation, Statistical
11.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38801258

ABSTRACT

In comparative studies, covariate balance and sequential allocation schemes have attracted growing academic interest. Although many theoretically justified adaptive randomization methods achieve covariate balance, they often allocate patients in pairs or groups. To better meet practical requirements, in which clinicians cannot, for economic or ethical reasons, wait for other participants before assigning the current patient, we propose a method that randomizes patients individually and sequentially. The proposed method conceptually separates the covariate imbalance, measured by the newly proposed modified Mahalanobis distance, from the marginal imbalance, that is, the sample-size difference between the two groups, and minimizes them in an explicit priority order. Compared with existing sequential randomization methods, the proposed method achieves the best possible covariate balance while maintaining the marginal balance directly, offering more control over the randomization process. We demonstrate the superior performance of the proposed method through a wide range of simulation studies and a real-data analysis, and we establish theoretical guarantees for the proposed method in terms of both the convergence of the imbalance measure and the subsequent treatment effect estimation.
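
For readers new to this family of designs, the sketch below shows the general shape of individual, sequential covariate-balancing randomization: each incoming patient is assigned with high probability to whichever arm would leave the smaller Mahalanobis-type imbalance in covariate means. It is a generic illustration only; the paper's modified Mahalanobis distance, its handling of marginal imbalance, and its priority ordering are not reproduced here, and the biased-coin probability is an assumption:

```python
# Generic sketch of sequential, individually randomized covariate balancing:
# assign the incoming patient with probability 0.75 to the arm that yields
# the smaller Mahalanobis-type imbalance between the two groups' covariate
# means. Not the paper's modified distance or priority ordering.
import numpy as np

rng = np.random.default_rng(0)

def assign(next_x, X, z, p_biased=0.75):
    """next_x: covariates of the new patient; X: past covariates (n x p); z: past 0/1 arms."""
    if len(z) < 2 or z.sum() == 0 or z.sum() == len(z):
        return int(rng.integers(2))          # first patients: fair coin

    def imbalance(z_new):
        z_all = np.append(z, z_new)
        X_all = np.vstack([X, next_x])
        d = X_all[z_all == 1].mean(axis=0) - X_all[z_all == 0].mean(axis=0)
        S_inv = np.linalg.pinv(np.cov(X_all, rowvar=False))
        return float(d @ S_inv @ d)

    better_arm = 1 if imbalance(1) < imbalance(0) else 0
    return better_arm if rng.random() < p_biased else 1 - better_arm
```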


Subjects
Computer Simulation; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Sample Size; Algorithms
12.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742906

ABSTRACT

Semicompeting risks refer to the phenomenon that the terminal event (such as death) can censor the nonterminal event (such as disease progression) but not vice versa. The treatment effect on the terminal event can be delivered either directly following the treatment or indirectly through the nonterminal event. We consider two strategies to decompose the total effect into a direct effect and an indirect effect under the framework of mediation analysis in completely randomized experiments, by adjusting the prevalence and hazard of nonterminal events, respectively. They require slightly different assumptions on cross-world quantities to achieve identifiability. We establish asymptotic properties for the estimated counterfactual cumulative incidences and decomposed treatment effects. We illustrate the subtle difference between these two decompositions through simulation studies and two real-data applications in the Supplementary Materials.


Subjects
Computer Simulation; Humans; Models, Statistical; Risk; Randomized Controlled Trials as Topic/statistics & numerical data; Mediation Analysis; Treatment Outcome; Biometry/methods
13.
BMC Med Res Methodol ; 24(1): 121, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822242

ABSTRACT

BACKGROUND: Inequities in health access and outcomes exist between Indigenous and non-Indigenous populations. Embedded pragmatic randomized controlled trials (ePCTs) can test the real-world effectiveness of health care interventions. Assessing readiness for an ePCT, with tools such as the Readiness Assessment for Pragmatic Trials (RAPT) model, is an important component of trial planning. Although equity must be explicitly incorporated in the design, testing, and widespread implementation of any health care intervention, RAPT does not explicitly consider equity. This study aimed to identify the adaptations necessary to apply the RAPT tool in ePCTs with Indigenous communities. METHODS: In this mixed-methods study, we surveyed and interviewed participants (researchers with experience in research involving Indigenous communities) over three phases (July-December 2022) to explore the appropriateness of the current RAPT domains, recommended adaptations, and new domains that would be appropriate to include. We thematically analyzed responses and used an iterative process to modify RAPT. RESULTS: The 21 participants identified that RAPT needed to be modified to strengthen readiness assessment in Indigenous research. In addition, five new domains were proposed to support Indigenous communities' power within the research processes: Indigenous Data Sovereignty; Acceptability - Indigenous Communities; Risk of Research; Research Team Experience; and Established Partnership. We propose a modified tool, RAPT-Indigenous (RAPT-I), for use in research with Indigenous communities to increase the robustness and cultural appropriateness of readiness assessment for ePCTs. In addition to producing a usable tool, the study outlines a methodological approach to adapting research tools for use in and with Indigenous communities, drawing on the experience of researchers who are part of, and/or working with, Indigenous communities to undertake interventional research, as well as those with expertise in health equity, implementation science, and public health. CONCLUSION: RAPT-I has the potential to provide a useful framework for readiness assessment prior to ePCTs in Indigenous communities. RAPT-I also has potential use by bodies charged with critically reviewing proposed pragmatic research, including funding bodies and ethics review boards.


Subjects
Indigenous Peoples; Pragmatic Clinical Trials as Topic; Humans; Indigenous Peoples/statistics & numerical data; Pragmatic Clinical Trials as Topic/methods; Health Services, Indigenous/standards; Surveys and Questionnaires; Research Design; Health Services Accessibility/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data
14.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819309

ABSTRACT

Doubly adaptive biased coin design (DBCD), a response-adaptive randomization scheme, aims to skew subject assignment probabilities based on accrued responses for ethical considerations. Recent years have seen substantial advances in understanding DBCD's theoretical properties, assuming correct model specification for the responses. However, concerns have been raised about the impact of model misspecification on its design and analysis. In this paper, we assess the robustness to both design model misspecification and analysis model misspecification under DBCD. On one hand, we confirm that the consistency and asymptotic normality of the allocation proportions can be preserved, even when the responses follow a distribution other than the one imposed by the design model during the implementation of DBCD. On the other hand, we extensively investigate three commonly used linear regression models for estimating and inferring the treatment effect, namely difference-in-means, analysis of covariance (ANCOVA) I, and ANCOVA II. By allowing these regression models to be arbitrarily misspecified, thereby not reflecting the true data generating process, we derive the consistency and asymptotic normality of the treatment effect estimators evaluated from the three models. The asymptotic properties show that the ANCOVA II model, which takes covariate-by-treatment interaction terms into account, yields the most efficient estimator. These results can provide theoretical support for using DBCD in scenarios involving model misspecification, thereby promoting the widespread application of this randomization procedure.
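
As background for the design being studied, the next-assignment probability in a two-arm DBCD is usually written with the Hu-Zhang allocation function, which pulls the allocation back toward a target proportion estimated from accrued responses. The sketch below shows only that allocation function; the tuning parameter, the example numbers, and the mention of Neyman allocation as a possible target are assumptions rather than details from the paper:

```python
# Two-arm doubly adaptive biased coin design: the Hu-Zhang allocation function
# skews the next assignment probability toward the target allocation rho,
# pulling harder the further the current arm-1 proportion x drifts from rho.
# gamma controls the strength of the correction; rho itself would be
# re-estimated from accrued responses, e.g., via Neyman allocation.
def dbcd_prob_arm1(x, rho, gamma=2.0):
    """P(next patient -> arm 1) given current arm-1 proportion x and target rho."""
    if x in (0.0, 1.0):               # degenerate start: fall back to the target
        return rho
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# if arm 1 is under-represented (x = 0.40) relative to the target (rho = 0.50),
# the next assignment favors arm 1
print(round(dbcd_prob_arm1(x=0.40, rho=0.50, gamma=2.0), 3))  # ~0.692
```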


Subjects
Models, Statistical; Random Allocation; Humans; Computer Simulation; Randomized Controlled Trials as Topic/statistics & numerical data; Linear Models; Biometry/methods; Data Interpretation, Statistical; Bias; Analysis of Variance; Research Design
15.
Crit Care ; 28(1): 184, 2024 05 28.
Article in English | MEDLINE | ID: mdl-38807143

ABSTRACT

BACKGROUND: The use of composite outcome measures (COM) in clinical trials is increasing. Whilst their use is associated with benefits, several limitations have been highlighted, and there is limited literature exploring their use within critical care. The primary aim of this study was to evaluate the use of COM in high-impact critical care trials, and to compare study parameters (including sample size, statistical significance, and consistency of effect estimates) in trials using composite versus non-composite outcomes. METHODS: A systematic review of 16 high-impact journals was conducted. Randomised controlled trials published between 2012 and 2022 that reported a patient-important outcome and involved critical care patients were included. RESULTS: In total, 8271 trials were screened and 194 were included. Overall, 39.1% of trials used a COM, and this proportion increased over time. Of those using a COM, only 52.6% explicitly described the outcome as composite. The median number of components was 2 (IQR 2-3). Trials using a COM recruited fewer participants (409 (198.8-851.5) vs 584 (300-1566); p = 0.004), and their use was not associated with increased rates of statistical significance (19.7% vs 17.8%, p = 0.380). Predicted effect sizes were overestimated in all but 6 trials. For studies using a COM, the effect estimates were consistent across all components in 43.4% of trials. 93% of COM included components that were not patient-important. CONCLUSIONS: COM are increasingly used in critical care trials; however, effect estimates are frequently inconsistent across COM components, confounding the interpretation of outcomes. The use of COM was associated with smaller sample sizes and no increased likelihood of statistically significant results. Many of the limitations inherent to the use of COM are relevant to critical care research.


Subjects
Critical Care; Outcome Assessment, Health Care; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Critical Care/methods; Critical Care/statistics & numerical data; Critical Care/standards; Outcome Assessment, Health Care/statistics & numerical data; Outcome Assessment, Health Care/methods; Outcome Assessment, Health Care/standards; Journal Impact Factor
16.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38804219

ABSTRACT

Sequential multiple assignment randomized trials (SMARTs) are the gold standard for estimating optimal dynamic treatment regimes (DTRs), but they are costly and require a large sample size. We introduce the multi-stage augmented Q-learning estimator (MAQE) to improve the efficiency of estimation of optimal DTRs by augmenting SMART data with observational data. Our motivating example comes from the Back Pain Consortium, where one of the overarching aims is to learn how to tailor treatments for chronic low back pain to individual patient phenotypes, knowledge that is currently lacking clinically. The Consortium-wide collaborative SMART and the observational studies within the Consortium collect data on the same participant phenotypes, treatments, and outcomes at multiple time points, so the two sources can easily be integrated. Previously published single-stage augmentation methods for integrating trial and observational study (OS) data were adapted to estimate optimal DTRs from SMARTs using Q-learning. Simulation studies show that the MAQE, which integrates phenotype, treatment, and outcome information from multiple studies over multiple time points, estimates the optimal DTR more accurately and has a higher average value than a comparable Q-learning estimator without augmentation. We demonstrate that this improvement is robust to a wide range of trial and OS sample sizes, the addition of noise variables, and effect sizes.
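
The comparator mentioned at the end, Q-learning without augmentation, is itself easy to sketch: fit a regression at the last stage, back up the value of the best decision as a pseudo-outcome, and fit the earlier stage. The two-stage sketch below shows that baseline with linear working models on SMART-style data; the column names are hypothetical and the observational-data augmentation that defines the MAQE is deliberately omitted:

```python
# Plain two-stage Q-learning on SMART-style data with linear working models;
# the paper's MAQE additionally augments each stage with observational data,
# which is omitted here. Assumed columns: x1, x2 are stage-wise covariates,
# a1, a2 are treatments coded -1/+1, y is the final outcome.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def two_stage_q_learning(df):
    # Stage 2: regress the outcome on stage-2 state, treatment, and interaction
    X2 = np.column_stack([df.x1, df.a1, df.x2, df.a2, df.x2 * df.a2])
    q2 = LinearRegression().fit(X2, df.y)

    # Pseudo-outcome: value of the best stage-2 decision for each subject
    best = np.maximum(
        q2.predict(np.column_stack([df.x1, df.a1, df.x2, np.ones(len(df)), df.x2])),
        q2.predict(np.column_stack([df.x1, df.a1, df.x2, -np.ones(len(df)), -df.x2])),
    )

    # Stage 1: regress the pseudo-outcome on stage-1 state, treatment, interaction
    X1 = np.column_stack([df.x1, df.a1, df.x1 * df.a1])
    q1 = LinearRegression().fit(X1, best)
    return q1, q2   # decision rules: pick the treatment maximizing each fitted Q
```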


Subjects
Computer Simulation; Low Back Pain; Observational Studies as Topic; Randomized Controlled Trials as Topic; Humans; Observational Studies as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/statistics & numerical data; Low Back Pain/therapy; Sample Size; Treatment Outcome; Models, Statistical; Biometry/methods
17.
Stat Methods Med Res ; 33(6): 1021-1042, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38676367

ABSTRACT

We propose a novel framework based on the RuleFit method to estimate heterogeneous treatment effects in randomized clinical trials. The proposed method estimates a rule ensemble comprising a set of prognostic rules, a set of prescriptive rules, and the linear effects of the original predictor variables. The prescriptive rules provide an interpretable description of the heterogeneous treatment effect. Because the proposed model includes a prognostic term, each selected prescriptive rule represents a heterogeneous treatment effect separated from other effects. We confirmed through numerical simulations that the performance of the proposed method is equivalent to that of other ensemble learning methods, and we demonstrate its interpretation using a real-data application.


Subjects
Models, Statistical; Randomized Controlled Trials as Topic; Humans; Prognosis; Randomized Controlled Trials as Topic/statistics & numerical data; Computer Simulation; Treatment Outcome; Algorithms; Causality; Treatment Effect Heterogeneity
18.
Stat Med ; 43(13): 2622-2640, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38684331

ABSTRACT

Longitudinal clinical trials in which recurrent event endpoints are of interest are commonly subject to missing event data. Primary analyses in such trials are often performed assuming events are missing at random, and sensitivity analyses are necessary to assess the robustness of the primary analysis conclusions to the missing-data assumptions. Control-based imputation is an attractive approach in superiority trials for imposing conservative assumptions on how data may be missing not at random. A popular approach to implementing control-based assumptions for recurrent events is multiple imputation (MI), but Rubin's variance estimator is often biased for the true sampling variability of the point estimator in the control-based setting. We propose distributional imputation (DI) with a corresponding wild bootstrap variance estimation procedure for control-based sensitivity analyses of recurrent events. We apply control-based DI to a type 1 diabetes trial. In the application and in simulation studies, DI produced more reasonable standard error estimates than MI with Rubin's combining rules in control-based sensitivity analyses of recurrent events.
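
For context, the MI comparator works roughly as follows: unobserved follow-up is imputed from a model fitted to the control arm, and the per-imputation estimates are pooled with Rubin's rules. The sketch below is a deliberately simplified copy-reference version for event counts under a Poisson working model; it is not the paper's distributional imputation or its wild bootstrap variance estimator, and the input arrays, number of imputations, and working model are assumptions:

```python
# Simplified copy-reference multiple imputation for recurrent event counts:
# unobserved follow-up in either arm is imputed from the control-arm Poisson
# rate, and Rubin's rules combine the per-imputation rate-ratio estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def control_based_mi(events, observed_t, planned_t, arm, n_imp=20):
    """events, observed_t, planned_t, arm: NumPy arrays, one entry per subject."""
    lam_ctrl = events[arm == 0].sum() / observed_t[arm == 0].sum()
    est, var = [], []
    for _ in range(n_imp):
        extra = rng.poisson(lam_ctrl * (planned_t - observed_t))   # impute the gap
        y = events + extra
        X = sm.add_constant(arm)
        fit = sm.GLM(y, X, family=sm.families.Poisson(),
                     offset=np.log(planned_t)).fit()
        est.append(fit.params[1]); var.append(fit.bse[1] ** 2)
    qbar, ubar, b = np.mean(est), np.mean(var), np.var(est, ddof=1)
    return np.exp(qbar), ubar + (1 + 1 / n_imp) * b   # rate ratio, Rubin variance (log scale)
```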


Subjects
Computer Simulation; Humans; Diabetes Mellitus, Type 1/drug therapy; Data Interpretation, Statistical; Models, Statistical; Recurrence; Longitudinal Studies; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Bias; Clinical Trials as Topic/statistics & numerical data
19.
J Clin Epidemiol ; 170: 111365, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38631528

ABSTRACT

OBJECTIVES: To describe statistical tools available for assessing the publication integrity of groups of randomized controlled trials (RCTs). STUDY DESIGN AND SETTING: Narrative review. RESULTS: Freely available statistical tools have been developed that compare the observed distributions of baseline variables with the distributions that would be expected if randomization had been carried out successfully. For continuous variables, the tools assess baseline means, baseline P values, and the occurrence of identical means and/or standard deviations. For categorical variables, they assess baseline P values, frequency counts for individual or all variables, and the numbers of trial participants randomized or withdrawing, and they compare reported P values with independently calculated ones. The tools have been used to identify publication integrity concerns in RCTs from individual groups, and they have performed at an acceptable level in discriminating intentionally fabricated baseline summary data from genuine RCTs. The tools can be used when concerns have been raised about RCTs from an individual or group and the whole body of their work is being examined, or when conducting systematic reviews, and they could be adapted to aid screening of RCTs at journal submission. CONCLUSION: Statistical tools are useful for assessing the publication integrity of groups of RCTs.
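
One of the simplest checks in this family compares baseline P values, recomputed from the reported per-arm summary statistics, with the uniform distribution expected under successful randomization. The sketch below is an illustration in that spirit rather than a reimplementation of any specific published tool; the input arrays are hypothetical and a Kolmogorov-Smirnov test is used as one possible uniformity check:

```python
# Illustrative check in the spirit of the tools described: baseline p-values
# computed from reported summary statistics (mean, SD, n per arm) should be
# roughly uniform across variables/trials if randomization succeeded; a
# Kolmogorov-Smirnov test flags marked departures.
import numpy as np
from scipy import stats

def baseline_pvalues(m1, s1, n1, m2, s2, n2):
    """Welch t-test p-values from per-arm summary statistics (vectorized)."""
    se2 = s1**2 / n1 + s2**2 / n2
    t = (m1 - m2) / np.sqrt(se2)
    df = se2**2 / ((s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1))
    return 2 * stats.t.sf(np.abs(t), df)

def uniformity_check(pvals):
    """KS test of the baseline p-values against the Uniform(0, 1) expectation."""
    return stats.kstest(pvals, "uniform")

# usage: uniformity_check(baseline_pvalues(m1, s1, n1, m2, s2, n2))
```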


Subjects
Randomized Controlled Trials as Topic; Randomized Controlled Trials as Topic/standards; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Humans; Data Interpretation, Statistical; Publishing/standards; Research Design/standards; Publication Bias/statistics & numerical data
20.
BMC Med Res Methodol ; 24(1): 101, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689224

ABSTRACT

BACKGROUND: Vaccine efficacy (VE) assessed in a randomized controlled clinical trial can be affected by demographic, clinical, and other subject-specific characteristics evaluated as baseline covariates. Understanding the effect of covariates on efficacy is key to decisions by vaccine developers and public health authorities. METHODS: This work evaluates how including correlate of protection (CoP) data in a logistic regression affects its performance in identifying statistically and clinically significant covariates, in settings typical of a phase 3 vaccine trial. The proposed approach uses CoP data and covariate data as predictors of the clinical outcome (diseased versus non-diseased) and is compared with a logistic regression (without CoP data) that relates vaccination status and covariate data to the clinical outcome. RESULTS: Clinical trial simulations, in which the true relationship between CoP data and clinical outcome probability is a sigmoid function, show that use of CoP data increases the positive predictive value for detection of a covariate effect. If the true relationship is characterized by a decreasing convex function, use of CoP data does not substantially change the positive or negative predictive value. In either scenario, vaccine efficacy is estimated more precisely (i.e., confidence intervals are narrower) in covariate-defined subgroups if CoP data are used, implying that using CoP data increases the ability to determine the clinical significance of baseline covariate effects on efficacy. CONCLUSIONS: This study proposes and evaluates a novel approach for assessing baseline demographic covariates potentially affecting VE. Results show that the proposed approach can sensitively and specifically identify potentially important covariates and provides a method for evaluating their likely clinical significance in terms of predicted impact on vaccine efficacy. It further shows that inclusion of CoP data can enable more precise VE estimation, thus enhancing study power and/or efficiency and providing better information to support health policy and development decisions.
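
The two model forms being compared reduce to a pair of standard logistic regressions: one with the vaccination arm indicator plus a baseline covariate, and one in which the arm indicator is replaced by the CoP measurement. The sketch below shows only that pairing; the data frame and its column names (diseased, vaccinated, titer, age) are hypothetical:

```python
# Minimal sketch of the two model forms being compared: a conventional
# logistic regression of disease status on vaccination arm and a baseline
# covariate, versus a model that replaces the arm indicator with the
# correlate-of-protection (CoP) measurement.
import statsmodels.formula.api as smf

def fit_both(df):
    """df: pandas DataFrame with columns (assumed) diseased (0/1), vaccinated (0/1), titer, age."""
    m_arm = smf.logit("diseased ~ vaccinated + age", data=df).fit(disp=False)
    m_cop = smf.logit("diseased ~ titer + age", data=df).fit(disp=False)
    return m_arm, m_cop

# compare the covariate (age) coefficient and its confidence interval across
# the two fits, e.g. m_cop.conf_int().loc["age"]
```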


Subjects
Vaccine Efficacy; Humans; Logistic Models; Vaccine Efficacy/statistics & numerical data; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Vaccination/statistics & numerical data; Vaccination/methods; Vaccines/therapeutic use; Demography/statistics & numerical data; Computer Simulation; Clinical Trials, Phase III as Topic/statistics & numerical data; Clinical Trials, Phase III as Topic/methods