Results 1 - 20 of 75
1.
BMC Med Res Methodol ; 24(1): 124, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831421

ABSTRACT

BACKGROUND: Multi-arm multi-stage (MAMS) randomised trial designs have been proposed to evaluate multiple research questions in the confirmatory setting. In designs with several interventions, such as the 8-arm 3-stage ROSSINI-2 trial for preventing surgical wound infection, there are likely to be strict limits on the number of individuals that can be recruited or the funds available to support the protocol. These limitations may mean that not all research treatments can continue to accrue the required sample size for the definitive analysis of the primary outcome measure at the final stage. In these cases, an additional treatment selection rule can be applied at the early stages of the trial to restrict the maximum number of research arms that can progress to the subsequent stage(s). This article provides guidelines on how to implement treatment selection within the MAMS framework. It explores the impact of treatment selection rules, interim lack-of-benefit stopping boundaries and the timing of treatment selection on the operating characteristics of the MAMS selection design. METHODS: We outline the steps to design a MAMS selection trial. Extensive simulation studies are used to explore the maximum/expected sample sizes, familywise type I error rate (FWER), and overall power of the design under both binding and non-binding interim stopping boundaries for lack-of-benefit. RESULTS: Pre-specification of a treatment selection rule reduces the maximum sample size by approximately 25% in our simulations. The familywise type I error rate of a MAMS selection design is smaller than that of the standard MAMS design with similar design specifications without the additional treatment selection rule. In designs with strict selection rules - for example, when only one research arm is selected from 7 arms - the final stage significance levels can be relaxed for the primary analyses to ensure that the overall type I error for the trial is not underspent. 
When conducting treatment selection from several treatment arms, it is important to select a large enough subset of research arms (that is, more than one research arm) at early stages to maintain the overall power at the pre-specified level. CONCLUSIONS: Multi-arm multi-stage selection designs gain efficiency over the standard MAMS design by reducing the overall sample size. Diligent pre-specification of the treatment selection rule, final stage significance level and interim stopping boundaries for lack-of-benefit are key to controlling the operating characteristics of a MAMS selection design. We provide guidance on these design features to ensure control of the operating characteristics.
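The effect of an unadjusted selection rule on error rates can be illustrated with a deliberately naive simulation (this is a sketch, not the MAMS machinery of the paper): select the best of K research arms at an interim look, then test only that arm against control on the full data with no multiplicity or selection adjustment. Function and parameter names are illustrative; outcomes are assumed normal with unit variance.

```python
import numpy as np
from scipy.stats import norm

def mams_selection_fwer(K=7, n1=50, n_total=200, alpha=0.025, nsim=10000, seed=1):
    """Monte Carlo estimate of the type I error of a naive two-stage
    'select the best of K arms at interim, test it against control at the
    end' design, simulated under the global null (no arm works)."""
    rng = np.random.default_rng(seed)
    crit = norm.ppf(1 - alpha)          # one-sided final-stage critical value
    n2 = n_total - n1
    rejections = 0
    for _ in range(nsim):
        ctrl1 = rng.normal(0.0, 1.0, n1)
        arms1 = rng.normal(0.0, 1.0, (K, n1))
        # Interim z-statistic of each research arm vs control
        z1 = (arms1.mean(axis=1) - ctrl1.mean()) / np.sqrt(2.0 / n1)
        best = int(np.argmax(z1))       # selection rule: keep one arm only
        # Stage 2: only the control and the selected arm continue to accrue
        ctrl = np.concatenate([ctrl1, rng.normal(0.0, 1.0, n2)])
        sel = np.concatenate([arms1[best], rng.normal(0.0, 1.0, n2)])
        z_final = (sel.mean() - ctrl.mean()) / np.sqrt(2.0 / n_total)
        rejections += z_final > crit
    return rejections / nsim
```

Because the interim data of the selected arm are reused in the final test, this naive procedure rejects more often than its nominal one-sided level, which is one reason selection designs must pre-specify their final-stage significance levels and stopping boundaries.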


Subjects
Randomized Controlled Trials as Topic, Research Design, Humans, Randomized Controlled Trials as Topic/methods, Sample Size, Patient Selection
2.
Oecologia ; 205(2): 257-269, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38806949

ABSTRACT

Community weighted means (CWMs) are widely used to study the relationship between community-level functional traits and environment. For certain null hypotheses, CWM-environment relationships assessed by linear regression or ANOVA and tested by standard parametric tests are prone to inflated Type I error rates. Previous research has found that this problem can be solved by permutation tests (i.e., the max test). A recent extension of the CWM approach allows the inclusion of intraspecific trait variation (ITV) by the separate calculation of fixed, site-specific, and intraspecific CWMs. The question is whether the same Type I error rate inflation exists for the relationship between environment and site-specific or intraspecific CWM. Using simulated and real-world community datasets, we show that site-specific CWM-environment relationships also have an inflated Type I error rate, and that this inflation is negatively related to the relative magnitude of ITV. In contrast, for intraspecific CWM-environment relationships, standard parametric tests have the correct Type I error rate, although somewhat reduced statistical power. We introduce an ITV-extended version of the max test, which solves the inflation problem for site-specific CWM-environment relationships and, without considering ITV, becomes equivalent to the "original" max test used for the CWM approach. We show that this new ITV-extended max test works well across the full possible magnitude of ITV on both simulated and real-world data. Most real datasets probably do not have intraspecific trait variation large enough to alleviate the problem of inflated Type I error rates, and published studies may therefore report overly optimistic significance results.
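A minimal sketch of the permutation "max test" for a CWM-environment relationship, assuming fixed (species-mean) traits only: the reported p-value is the maximum of a site-based permutation p (permuting the environmental variable across sites) and a species-based permutation p (permuting trait values among species and recomputing the CWMs). The ITV extension described above is not implemented here; all names are illustrative.

```python
import numpy as np

def cwm_max_test(traits, comp, env, n_perm=999, seed=0):
    """Permutation max test for a CWM-environment correlation (sketch).
    traits: (S,) species trait values; comp: (N, S) community composition
    (rows = sites, abundances); env: (N,) environmental variable.
    Returns max(p_site, p_species)."""
    rng = np.random.default_rng(seed)

    def corr_stat(t, e):
        cwm = comp @ t / comp.sum(axis=1)   # abundance-weighted mean trait per site
        return abs(np.corrcoef(cwm, e)[0, 1])

    obs = corr_stat(traits, env)
    # Site-based test: permute the environmental values across sites
    p_site = (1 + sum(corr_stat(traits, rng.permutation(env)) >= obs
                      for _ in range(n_perm))) / (n_perm + 1)
    # Species-based test: permute trait values among species, recompute CWMs
    p_spec = (1 + sum(corr_stat(rng.permutation(traits), env) >= obs
                      for _ in range(n_perm))) / (n_perm + 1)
    return max(p_site, p_spec)
```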


Subjects
Ecosystem
3.
J Biopharm Stat ; : 1-14, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515269

ABSTRACT

In recent years, clinical trials utilizing a two-stage seamless adaptive trial design have become very popular in drug development. A typical example is a phase 2/3 adaptive trial design, which consists of two stages: stage 1 is a phase 2 dose-finding study and stage 2 is a phase 3 efficacy confirmation study. Depending upon whether or not the target patient population, study objectives, and study endpoints are the same at different stages, Chow (2020) classified two-stage seamless adaptive designs into eight categories. In practice, standard statistical methods for group sequential designs with one planned interim analysis are often, incorrectly, applied directly for data analysis. In this article, following ideas proposed by Chow and Lin (2015) and Chow (2020), a statistical method for the analysis of a two-stage seamless adaptive trial design with different study endpoints and a shifted target patient population is discussed under the fundamental assumption that the study endpoints have a known relationship. The proposed analysis method should be useful both in clinical trials with protocol amendments and in clinical trials with disease progression utilizing a two-stage seamless adaptive trial design.

4.
Biom J ; 66(1): e2200322, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38063813

ABSTRACT

Bayesian clinical trials can benefit from available historical information through the specification of informative prior distributions. Concerns are, however, often raised about the potential for prior-data conflict and the impact of Bayes test decisions on frequentist operating characteristics, with particular attention assigned to inflation of type I error (TIE) rates. This motivates the development of principled borrowing mechanisms that strike a balance between frequentist and Bayesian decisions. Ideally, the trust assigned to historical information defines the degree of robustness to prior-data conflict one is willing to sacrifice. However, such a relationship is often not directly available when explicitly considering inflation of TIE rates. We build on available literature relating frequentist and Bayesian test decisions, and investigate a rationale for inflation of the TIE rate which explicitly and linearly relates the amount of borrowing to the amount of TIE rate inflation in one-arm studies. A novel dynamic borrowing mechanism tailored to hypothesis testing is additionally proposed. We show that, while dynamic borrowing precludes a simple closed-form TIE rate computation, an explicit upper bound can still be enforced. Connections with the robust mixture prior approach, particularly in relation to the choice of the mixture weight and robust component, are made. Simulations are performed to show the properties of the approach for normal and binomial outcomes, and an exemplary application is demonstrated in a case study.


Subjects
Statistical Models, Research Design, Bayes Theorem, Computer Simulation
5.
Biom J ; 66(1): e2200312, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38285403

ABSTRACT

To accelerate a randomized controlled trial, historical control data may be used after ensuring little heterogeneity between the historical and current trials. The test-then-pool approach is a simple frequentist borrowing method that assesses the similarity between historical and current control data using a two-sided test. A limitation of the conventional test-then-pool method is the inability to control the type I error rate and power for the primary hypothesis separately and flexibly for heterogeneity between trials. This is because the two-sided test focuses on the absolute value of the mean difference between the historical and current controls. In this paper, we propose a new test-then-pool method that splits the two-sided hypothesis of the conventional method into two one-sided hypotheses. Testing each one-sided hypothesis with different significance levels allows for the separate control of the type I error rate and power for heterogeneity between trials. We also propose a significance-level selection approach based on the maximum type I error rate and the minimum power. The proposed method prevented a decrease in power even when there was heterogeneity between trials while controlling type I error at a maximum tolerable type I error rate larger than the targeted type I error rate. The application of depression trial data and hypothetical trial data further supported the usefulness of the proposed method.

6.
Biostatistics ; 23(1): 328-344, 2022 01 13.
Article in English | MEDLINE | ID: mdl-32735010

ABSTRACT

Bayesian clinical trials allow taking advantage of relevant external information through the elicitation of prior distributions, which influence Bayesian posterior parameter estimates and test decisions. However, incorporation of historical information can have harmful consequences on the trial's frequentist (conditional) operating characteristics in case of inconsistency between the prior information and the newly collected data. A compromise between meaningful incorporation of historical information and strict control of frequentist error rates is therefore often sought. Our aim is thus to review and investigate the rationale and consequences of different approaches to relaxing strict frequentist control of error rates from a Bayesian decision-theoretic viewpoint. In particular, we define an integrated risk which incorporates losses arising from testing, estimation, and sampling. A weighted combination of the integrated risk addends arising from testing and estimation allows moving smoothly between these two targets. Furthermore, we explore different possible elicitations of the test error costs, leading to test decisions based either on posterior probabilities, or solely on Bayes factors. Sensitivity analyses are performed following the convention which makes a distinction between the prior of the data-generating process and the analysis prior adopted to fit the data. Simulations in the case of normal and binomial outcomes, and an application to a one-arm proof-of-concept trial, exemplify how such an analysis can be conducted to explore the sensitivity of the integrated risk, the operating characteristics, and the optimal sample size to prior-data conflict. Robust analysis prior specifications, which gradually discount potentially conflicting prior information, are also included for comparison. Guidance with respect to cost elicitation, particularly in the context of a Phase II proof-of-concept trial, is provided.


Subjects
Statistical Models, Research Design, Bayes Theorem, Clinical Trials as Topic, Humans, Sample Size
7.
Stat Sci ; 38(4): 557-575, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-38223302

ABSTRACT

Modern data analysis frequently involves large-scale hypothesis testing, which naturally gives rise to the problem of maintaining control of a suitable type I error rate, such as the false discovery rate (FDR). In many biomedical and technological applications, an additional complexity is that hypotheses are tested in an online manner, one-by-one over time. However, traditional procedures that control the FDR, such as the Benjamini-Hochberg procedure, assume that all p-values are available to be tested at a single time point. To address these challenges, a new field of methodology has developed over the past 15 years showing how to control error rates for online multiple hypothesis testing. In this framework, hypotheses arrive in a stream, and at each time point the analyst decides whether to reject the current hypothesis based both on the evidence against it, and on the previous rejection decisions. In this paper, we present a comprehensive exposition of the literature on online error rate control, with a review of key theory as well as a focus on applied examples. We also provide simulation results comparing different online testing algorithms and an up-to-date overview of the many methodological extensions that have been proposed.
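For contrast with the online setting, the offline Benjamini-Hochberg step-up procedure mentioned above can be stated in a few lines; its defining limitation here is that it needs all the p-values at once, which is exactly what online procedures relax.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean rejection mask.
    Rejects the k smallest p-values, where k is the largest index i with
    p_(i) <= q * i / m on the sorted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```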

8.
Stat Med ; 42(14): 2475-2495, 2023 06 30.
Article in English | MEDLINE | ID: mdl-37005003

ABSTRACT

Platform trials evaluate multiple experimental treatments under a single master protocol, where new treatment arms are added to the trial over time. Given the multiple treatment comparisons, there is the potential for inflation of the overall type I error rate, which is complicated by the fact that the hypotheses are tested at different times and are not necessarily pre-specified. Online error rate control methodology provides a possible solution to the problem of multiplicity for platform trials where a relatively large number of hypotheses are expected to be tested over time. In the online multiple hypothesis testing framework, hypotheses are tested one-by-one over time, where at each time-step an analyst decides whether to reject the current null hypothesis without knowledge of future tests but based solely on past decisions. Methodology has recently been developed for online control of the false discovery rate as well as the familywise error rate (FWER). In this article, we describe how to apply online error rate control to the platform trial setting, present extensive simulation results, and give some recommendations for the use of this new methodology in practice. We show that the algorithms for online error rate control can have a substantially lower FWER than uncorrected testing, while still achieving noticeable gains in power when compared with the use of a Bonferroni correction. We also illustrate how online error rate control would have impacted a currently ongoing platform trial.
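The simplest online FWER procedure is alpha-spending against a summable sequence: hypothesis i is tested at level α·γ_i with Σγ_i = 1, so a union bound caps the FWER at α no matter how many hypotheses eventually arrive, and each decision uses no knowledge of future tests. This is a baseline sketch, not the specific algorithms compared in the article.

```python
import math

def online_alpha_spending(pvals, alpha=0.05):
    """Online FWER control by alpha-spending (sketch): the i-th hypothesis,
    in arrival order, is tested at level alpha * gamma_i, where
    gamma_i = 6 / (pi^2 * i^2) sums to 1 over all i. By a union bound the
    familywise error rate is at most alpha under any dependence."""
    decisions = []
    for i, p in enumerate(pvals, start=1):
        gamma_i = 6 / (math.pi ** 2 * i ** 2)
        decisions.append(p <= alpha * gamma_i)
    return decisions
```

Because the per-test levels shrink quickly, this baseline loses power for late-arriving hypotheses, which is what motivates the adaptive online procedures studied in the article.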


Subjects
Research Design, Humans, Statistical Data Interpretation, Computer Simulation
9.
Clin Trials ; 20(1): 71-80, 2023 02.
Article in English | MEDLINE | ID: mdl-36647713

ABSTRACT

BACKGROUND: Multi-arm multi-stage trials are an efficient, adaptive approach for testing many treatments simultaneously within one protocol. In settings where numbers of patients available to be entered into trials and resources might be limited, such as primary postpartum haemorrhage, it may be necessary to select a pre-specified subset of arms at interim stages even if they are all showing some promise against the control arm. This will put a limit on the maximum number of patients required and reduce the associated costs. Motivated by the World Health Organization Refractory HaEmorrhage Devices trial in postpartum haemorrhage, we explored the properties of such a selection design in a randomised phase III setting and compared it with other alternatives. The objectives are: (1) to investigate how the timing of treatment selection affects the operating characteristics; (2) to explore the use of an information-rich (continuous) intermediate outcome to select the best-performing arm, out of four treatment arms, compared with using the primary (binary) outcome for selection at the interim stage; and (3) to identify factors that can affect the efficiency of the design. METHODS: We conducted simulations based on the refractory haemorrhage devices multi-arm multi-stage selection trial to investigate the impact of the timing of treatment selection and applying an adaptive allocation ratio on the probability of correct selection, overall power and familywise type I error rate. Simulations were also conducted to explore how other design parameters will affect both the maximum sample size and trial timelines. RESULTS: The results indicate that the overall power of the trial is bounded by the probability of 'correct' selection at the selection stage. The results showed that good operating characteristics are achieved if the treatment selection is conducted at around 17% of information time. 
Our results also showed that although randomising more patients to research arms before selection will increase the probability of selecting correctly, this will not increase the overall efficiency of the (selection) design compared with the fixed allocation ratio of 1:1 to all arms throughout. CONCLUSIONS: Multi-arm multi-stage selection designs are efficient and flexible with desirable operating characteristics. We give guidance on many aspects of these designs including selecting the intermediate outcome measure, the timing of treatment selection, and choosing the operating characteristics.


Subjects
Postpartum Hemorrhage, Research Design, Female, Humans, Postpartum Hemorrhage/therapy, Sample Size, Patient Selection, Outcome Assessment (Health Care)
10.
BMC Med Res Methodol ; 22(1): 273, 2022 10 17.
Article in English | MEDLINE | ID: mdl-36253728

ABSTRACT

BACKGROUND: Functional connectivity (FC) studies are often performed to discern different patterns of brain connectivity networks between healthy and patient groups. Since many neuropsychiatric disorders are related to changes in these patterns, accurate modelling of FC data can provide useful information about disease pathologies. However, analysing functional connectivity data faces several challenges, including the correlations of the connectivity edges associated with network topological characteristics, the large number of parameters in the covariance matrix, and accounting for heterogeneity across subjects. METHODS: This study provides a new statistical approach to compare FC networks between subgroups that considers the network topological structure of brain regions and subject heterogeneity. RESULTS: Under the identity-scaled heterogeneity structure, power exceeded 0.90 at a sample size of 25, regardless of the degree of correlation, the heterogeneity, and the number of regions, and remained above 0.80 even with a small sample size and high correlation. In most scenarios, the type I error was close to 0.05. Moreover, the application of this model to real data related to autism was also investigated, which indicated no significant difference in FC networks between healthy and patient individuals. CONCLUSIONS: The results from simulated data indicated that the proposed model has high power and near-nominal type I error rates in most scenarios.


Subjects
Brain, Magnetic Resonance Imaging, Brain/pathology, Computer Simulation, Humans, Magnetic Resonance Imaging/methods
11.
Pharm Stat ; 21(5): 1058-1073, 2022 09.
Article in English | MEDLINE | ID: mdl-35191605

ABSTRACT

Clinical trials usually recruit volunteers over a period of time, so the data accumulate steadily. Traditionally, the sample size of a trial is determined in advance and data are collected before analysis proceeds. Over the past decades, many strategies have been proposed, with rigorous theoretical grounding, to conduct sample size re-estimation. However, these methodologies have not been well extended to trials with adaptive designs. Therefore, we aim to fill the gap by proposing a sample size re-estimation procedure for response-adaptive randomized trials. For ethical and economic reasons, we use multiple stopping criteria with the allowance of early termination. Statistical inference is studied for hypothesis testing under the doubly-adaptive biased coin design. We also prove that the test statistics for the two stages are asymptotically independent and normally distributed, even though dependency exists between the stages. We find that under our methods, compared to a fixed sample size design and other commonly used randomization procedures: (1) power is increased for all scenarios with an adjusted sample size; (2) sample size is reduced by up to 40% when the treatment effect is underestimated; (3) the duration of trials is shortened. These advantages are evidenced by numerical studies and real examples.


Subjects
Statistical Models, Research Design, Statistical Data Interpretation, Humans, Randomized Controlled Trials as Topic, Sample Size
12.
Stat Med ; 40(12): 2839-2858, 2021 05 30.
Article in English | MEDLINE | ID: mdl-33733513

ABSTRACT

Covariate-adaptive randomization (CAR) procedures have been developed in clinical trials to mitigate the imbalance of treatments among covariates. In recent years, an increasing number of trials have started to use CAR for the advantages in statistical efficiency and enhancing credibility. At the same time, sample size re-estimation (SSR) has become a common technique in industry to reduce time and cost while maintaining a good probability of success. Despite the widespread popularity of combining CAR designs with SSR, few researchers have investigated this combination theoretically. More importantly, the existing statistical inference must be adjusted to protect the desired type I error rate when a model that omits some covariates is used. In this article, we give a framework for the application of SSR in CAR trials and study the underlying theoretical properties. We give the adjusted test statistic and derive the sample size calculation formula under the CAR setting. We can tackle the difficulties caused by the adaptive features in CAR and prove the asymptotic independence between stages. Numerical studies are conducted under multiple parameter settings and scenarios that are commonly encountered in practice. The results show that all advantages of CAR and SSR can be preserved and further improved in terms of power and sample size.


Subjects
Research Design, Statistical Data Interpretation, Random Allocation, Randomized Controlled Trials as Topic, Sample Size
13.
Stat Med ; 40(23): 4947-4960, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34111902

ABSTRACT

Response adaptive randomization (RAR) is appealing from methodological, ethical, and pragmatic perspectives in the sense that subjects are more likely to be randomized to better performing treatment groups based on accumulating data. However, applications of RAR in confirmatory drug clinical trials with multiple active arms have been limited, largely because of its complexity and the lack of control over randomization ratios to the different treatment groups. To address these issues, we propose a Response Adaptive Block Randomization (RABR) design allowing arbitrarily prespecified randomization ratios for the control and high-performing groups to meet clinical trial objectives. We show the validity of the conventional unweighted test in RABR with a controlled type I error rate, based on the weighted combination test for sample size adaptive designs, without invoking a large-sample approximation. The advantages of the proposed RABR in robustly reaching the target final sample size to meet regulatory requirements and in increasing statistical power, as compared with the popular Doubly Adaptive Biased Coin Design, are demonstrated by statistical simulations and a practical clinical trial design example.


Subjects
Research Design, Humans, Random Allocation, Sample Size
14.
Stat Med ; 40(27): 6133-6149, 2021 11 30.
Article in English | MEDLINE | ID: mdl-34433225

ABSTRACT

In clinical trials, sample size re-estimation is often conducted at interim. The purpose is to determine whether the study will achieve its objectives if the treatment effect observed at interim persists until the end of the study. A traditional approach is to conduct a conditional power analysis for sample size based only on the observed treatment effect. This approach, however, does not take into consideration the variabilities of (i) the observed (estimated) treatment effect and (ii) the observed (estimated) variability associated with the treatment effect. Thus, the resultant re-estimated sample sizes may not be robust and hence may not be reliable. In this article, two methods are proposed, namely, the adjusted effect size (AES) approach and the iterated expectation/variance (IEV) approach, which account for the variability associated with the observed responses at interim. The proposed methods provide interval estimates of the sample size required for the intended trial, which is useful for making critical go/no-go decisions. Statistical properties of the proposed methods are evaluated in terms of control of the type I error rate and statistical power. The results show that the traditional approach performs poorly in controlling type I error inflation, whereas the IEV approach has the best performance in most cases. Additionally, all re-estimation approaches keep the statistical power above 80%; in particular, the IEV approach's statistical power, using an adjusted significance level, is above 95%. However, the IEV approach may lead to a greater increase in sample size when detecting a smaller effect size. In general, the IEV approach is effective when the effect size is large; otherwise, the AES approach is more suitable for controlling the type I error rate and keeping power above 80% with a more reasonable re-estimated sample size.
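The traditional approach criticised above can be written down directly: plug the interim estimates of the treatment effect and standard deviation into the fixed-design normal-approximation sample-size formula, ignoring their sampling variability (which is exactly what the AES and IEV approaches try to repair). A minimal sketch for a two-arm comparison of means:

```python
from scipy.stats import norm

def reestimated_n_per_arm(delta_hat, sd_hat, alpha=0.025, power=0.9):
    """Naive sample size re-estimation (sketch): the fixed-design formula
    n = 2 * sd^2 * (z_{1-alpha} + z_{power})^2 / delta^2 per arm, with the
    interim point estimates delta_hat and sd_hat treated as if known."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return 2 * (sd_hat * (z_a + z_b) / delta_hat) ** 2
```

Because `delta_hat` enters squared in the denominator, small random fluctuations in the interim effect translate into large swings in the re-estimated sample size, illustrating why a point estimate alone makes the procedure unreliable.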


Subjects
Clinical Trials as Topic, Research Design, Humans, Sample Size
15.
Biostatistics ; 20(3): 400-415, 2019 07 01.
Article in English | MEDLINE | ID: mdl-29547966

ABSTRACT

We consider the problem of Bayesian sample size determination for a clinical trial in the presence of historical data that inform the treatment effect. Our broadly applicable, simulation-based methodology provides a framework for calibrating the informativeness of a prior while simultaneously identifying the minimum sample size required for a new trial such that the overall design has appropriate power to detect a non-null treatment effect and reasonable type I error control. We develop a comprehensive strategy for eliciting null and alternative sampling prior distributions which are used to define Bayesian generalizations of the traditional notions of type I error control and power. Bayesian type I error control requires that a weighted-average type I error rate not exceed a prespecified threshold. We develop a procedure for generating an appropriately sized Bayesian hypothesis test using a simple partial-borrowing power prior which summarizes the fraction of information borrowed from the historical trial. We present results from simulation studies that demonstrate that a hypothesis test procedure based on this simple power prior is as efficient as those based on more complicated meta-analytic priors, such as normalized power priors or robust mixture priors, when all are held to precise type I error control requirements. We demonstrate our methodology using a real data set to design a follow-up clinical trial with time-to-event endpoint for an investigational treatment in high-risk melanoma.
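For a binomial endpoint, a partial-borrowing power prior has a closed conjugate form: the historical likelihood is raised to a power a0 in [0, 1], so the historical trial contributes roughly a0 · n_hist effective observations, with a0 = 0 ignoring it and a0 = 1 pooling it fully. A minimal sketch (the abstract's time-to-event setting is more involved; names are illustrative):

```python
from scipy.stats import beta

def power_prior_posterior(x, n, x_hist, n_hist, a0, a=1.0, b=1.0):
    """Posterior for a binomial response rate under a partial-borrowing
    power prior (sketch). Starting from a Beta(a, b) initial prior, the
    historical data (x_hist successes of n_hist) are discounted by a0
    before the current data (x of n) are added."""
    post_a = a + a0 * x_hist + x
    post_b = b + a0 * (n_hist - x_hist) + (n - x)
    return beta(post_a, post_b)   # frozen scipy Beta distribution
```

Varying `a0` over [0, 1] moves the posterior mean smoothly from the current-data-only estimate toward the pooled estimate, which is the borrowing fraction that the simulation-based design calibration above selects.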


Subjects
Biostatistics/methods, Clinical Trials as Topic, Statistical Models, Research Design, Bayes Theorem, Computer Simulation, Humans, Melanoma/drug therapy, Sample Size
16.
Anal Biochem ; 599: 113680, 2020 06 15.
Article in English | MEDLINE | ID: mdl-32194076

ABSTRACT

The Empirical Statistical Model (ESM) for decoy library searching fused the expected amino acid sequences of 18 non-human protein standards to a human decoy library. The ESM assumed a priori that the standards were pure, such that only the 18 nominal proteins were true positives and all other proteins were false positives; that there was no overlap between the peptides of non-human and human proteins; and that the score distribution of individual peptides would resolve true positive results from false positive results or noise. The results of random and independent sampling by LC-ESI-MS/MS indicated that the fundamental assumptions of the ESM were not in good agreement with the actual purity of the commercial test standards, and so the method showed a 99.7% false negative rate. The ESM for decoy library searching apparently showed poor agreement with SDS-PAGE using silver staining, goodness of fit of MS/MS spectra by X!TANDEM, FDR correction by Benjamini and Hochberg, and comparison to the observation frequency of null random MS/MS spectra, all of which confirmed that the standards contain hundreds of proteins with a low FDR of primary structural identification. The protein observation frequency increased with abundance, and the log10 precursor intensity distributions were Gaussian and nearly ideal for relative quantification.


Subjects
Protein Databases, Proteins/standards, Animals, Humans, Reference Standards, Tandem Mass Spectrometry
17.
Biometrics ; 76(2): 630-642, 2020 06.
Article in English | MEDLINE | ID: mdl-31631321

ABSTRACT

In this paper, we propose a Bayesian design framework for a biosimilars clinical program that entails conducting concurrent trials in multiple therapeutic indications to establish equivalent efficacy for a proposed biologic compared to a reference biologic in each indication to support approval of the proposed biologic as a biosimilar. Our method facilitates information borrowing across indications through the use of a multivariate normal correlated parameter prior (CPP), which is constructed from easily interpretable hyperparameters that represent direct statements about the equivalence hypotheses to be tested. The CPP accommodates different endpoints and data types across indications (eg, binary and continuous) and can, therefore, be used in a wide context of models without having to modify the data (eg, rescaling) to provide reasonable information-borrowing properties. We illustrate how one can evaluate the design using Bayesian versions of the type I error rate and power with the objective of determining the sample size required for each indication such that the design has high power to demonstrate equivalent efficacy in each indication, reasonably high power to demonstrate equivalent efficacy simultaneously in all indications (ie, globally), and reasonable type I error control from a Bayesian perspective. We illustrate the method with several examples, including designing biosimilars trials for follicular lymphoma and rheumatoid arthritis using binary and continuous endpoints, respectively.


Subjects
Bayes Theorem, Biosimilar Pharmaceuticals/pharmacology, Biosimilar Pharmaceuticals/pharmacokinetics, Clinical Trials as Topic/methods, Clinical Trials as Topic/statistics & numerical data, Rheumatoid Arthritis/drug therapy, Rheumatoid Arthritis/metabolism, Biometry, Computer Simulation, Endpoint Determination/statistics & numerical data, Humans, Linear Models, Follicular Lymphoma/drug therapy, Follicular Lymphoma/metabolism, Statistical Models, Multivariate Analysis, Sample Size, Therapeutic Equivalency
18.
Biometrics ; 76(4): 1262-1272, 2020 12.
Article in English | MEDLINE | ID: mdl-31883270

ABSTRACT

Quantitative traits analyzed in Genome-Wide Association Studies (GWAS) are often nonnormally distributed. For such traits, association tests based on standard linear regression are subject to reduced power and inflated type I error in finite samples. Applying the rank-based inverse normal transformation (INT) to nonnormally distributed traits has become common practice in GWAS. However, the different variations on INT-based association testing have not been formally defined, and guidance is lacking on when to use which approach. In this paper, we formally define and systematically compare the direct (D-INT) and indirect (I-INT) INT-based association tests. We discuss their assumptions, underlying generative models, and connections. We demonstrate that the relative powers of D-INT and I-INT depend on the underlying data generating process. Since neither approach is uniformly most powerful, we combine them into an adaptive omnibus test (O-INT). O-INT is robust to model misspecification, protects the type I error, and is well powered against a wide range of nonnormally distributed traits. Extensive simulations were conducted to examine the finite sample operating characteristics of these tests. Our results demonstrate that, for nonnormally distributed traits, INT-based tests outperform the standard untransformed association test, both in terms of power and type I error rate control. We apply the proposed methods to GWAS of spirometry traits in the UK Biobank. O-INT has been implemented in the R package RNOmni, which is available on CRAN.
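The rank-based INT itself is a one-liner: replace each observation by the standard normal quantile of its (offset) rank. A common choice is the Blom offset c = 3/8; D-INT then regresses the transformed trait on genotype, while I-INT transforms residuals (not shown). A minimal sketch:

```python
import numpy as np
from scipy.stats import norm, rankdata

def inverse_normal_transform(y, c=3 / 8):
    """Rank-based inverse normal transformation (sketch, Blom offset c=3/8):
    maps the ranks of y onto standard normal quantiles, so the output is
    approximately N(0, 1) whatever the original trait distribution."""
    r = rankdata(y)                                  # ranks 1..n (ties averaged)
    return norm.ppf((r - c) / (len(y) + 1 - 2 * c))  # offset ranks -> quantiles
```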


Subjects
Genome-Wide Association Study , Models, Genetic , Linear Models , Phenotype
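The rank-based INT at the heart of the tests described above can be sketched in a few lines. The following Python is an illustration only, not the RNOmni implementation: the function names `rank_inverse_normal` and `d_int_test` are ours, and the Blom offset (c = 3/8) is just the common convention.

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_inverse_normal(y, offset=3.0 / 8.0):
    """Rank-based inverse normal transformation (INT) of a trait vector.

    Each value is replaced by the standard-normal quantile of its
    offset-adjusted rank; offset=3/8 is the common Blom convention.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    ranks = rankdata(y)  # average ranks for ties
    return norm.ppf((ranks - offset) / (n - 2.0 * offset + 1.0))

def d_int_test(y, g):
    """Direct INT (D-INT) sketch: regress INT(y) on genotype g and
    return the slope and its z-score from simple linear regression."""
    z = rank_inverse_normal(y)
    g = np.asarray(g, dtype=float)
    gc = g - g.mean()                     # center genotype
    beta = gc @ z / (gc @ gc)             # least-squares slope
    resid = z - z.mean() - beta * gc      # residuals with intercept
    se = np.sqrt(resid @ resid / (z.size - 2) / (gc @ gc))
    return beta, beta / se
```

The indirect approach (I-INT) would instead regress the raw trait on covariates first and apply the transformation to the residuals; the omnibus O-INT combines the two.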
19.
Stat Med ; 39(20): 2655-2670, 2020 09 10.
Article in English | MEDLINE | ID: mdl-32432805

ABSTRACT

Between-group comparison based on the restricted mean survival time (RMST) is gaining attention as an alternative to the conventional logrank/hazard ratio approach for time-to-event outcomes in randomized controlled trials (RCTs). The validity of the commonly used nonparametric inference procedure for RMST has been well supported by large sample theories. However, we sometimes encounter cases with a small sample size in practice, where we cannot rely on the large sample properties. Generally, the permutation approach can be useful to handle these situations in RCTs. However, a numerical issue arises when implementing permutation tests for difference or ratio of RMST from two groups. In this article, we discuss the numerical issue and consider six permutation methods for comparing survival time distributions between two groups using RMST in the RCT setting. We conducted extensive numerical studies and assessed type I error rates of these methods. Our numerical studies demonstrated that the inflation of the type I error rate of the asymptotic methods is not negligible when the sample size is small, and that all six permutation methods are workable solutions. Although some permutation methods became a little conservative, no remarkable inflation of the type I error rate was observed. We recommend using permutation tests instead of the asymptotic tests, especially when the sample size is less than 50 per arm.


Subjects
Survival Rate , Humans , Proportional Hazards Models , Randomized Controlled Trials as Topic , Sample Size
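As a sketch of the quantities involved (not the authors' code), the following Python computes RMST as the area under a Kaplan-Meier curve up to a truncation time tau, and a label-permutation p-value for the between-group difference. The add-one p-value and carrying the curve forward past the last observed time are illustrative conventions; the function names are ours.

```python
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve from 0 to tau (curve carried forward past the last time)."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    order = np.argsort(time)
    t, e = time[order], event[order]
    n = t.size
    surv, rmst, prev = 1.0, 0.0, 0.0
    for i in range(n):
        rmst += surv * (min(t[i], tau) - prev)  # area of current step
        prev = min(t[i], tau)
        if t[i] >= tau:
            return rmst
        if e[i]:                                # event: KM step down
            surv *= 1.0 - 1.0 / (n - i)
    return rmst + surv * (tau - prev)

def rmst_diff_perm_test(time, event, group, tau, n_perm=1000, seed=0):
    """Two-sided permutation p-value for the difference in RMST,
    re-randomising group labels over the fixed (time, event) pairs."""
    rng = np.random.default_rng(seed)
    time, event = np.asarray(time, float), np.asarray(event, bool)
    group = np.asarray(group)
    def diff(g):
        return (km_rmst(time[g == 1], event[g == 1], tau)
                - km_rmst(time[g == 0], event[g == 0], tau))
    obs = diff(group)
    hits = sum(abs(diff(rng.permutation(group))) >= abs(obs)
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0
```

The numerical issue the abstract refers to arises when a permuted arrangement leaves the RMST ill-defined in one group (for example, when the KM curve is not estimable out to tau); this sketch does not address that and simply carries the last KM value forward.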
20.
BMC Med Res Methodol ; 20(1): 4, 2020 01 07.
Article in English | MEDLINE | ID: mdl-31910813

ABSTRACT

BACKGROUND: There is a growing interest in the use of Bayesian adaptive designs in late-phase clinical trials. This includes the use of stopping rules based on Bayesian analyses in which the frequentist type I error rate is controlled as in frequentist group-sequential designs. METHODS: This paper presents a practical comparison of Bayesian and frequentist group-sequential tests. Focussing on the setting in which data can be summarised by normally distributed test statistics, we evaluate and compare boundary values and operating characteristics. RESULTS: Although Bayesian and frequentist group-sequential approaches are based on fundamentally different paradigms, in a single arm trial or two-arm comparative trial with a prior distribution specified for the treatment difference, Bayesian and frequentist group-sequential tests can have identical stopping rules if particular critical values with which the posterior probability is compared or particular spending function values are chosen. If the Bayesian critical values at different looks are restricted to be equal, O'Brien and Fleming's design corresponds to a Bayesian design with an exceptionally informative negative prior, Pocock's design to a Bayesian design with a non-informative prior, and frequentist designs with a linear alpha spending function are very similar to Bayesian designs with slightly informative priors. This contrasts with the setting of a comparative trial with independent prior distributions specified for treatment effects in different groups. In this case Bayesian and frequentist group-sequential tests cannot have the same stopping rule, as the Bayesian stopping rule depends on the observed means in the two groups and not just on their difference. In this setting the Bayesian test can only be guaranteed to control the type I error for a specified range of values of the control group treatment effect. 
CONCLUSIONS: Comparison of frequentist and Bayesian designs can encourage careful thought about design parameters and help to ensure appropriate design choices are made.


Subjects
Bayes Theorem , Clinical Trials as Topic , Research Design , Humans
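The correspondence described above can be sketched for a single normally distributed interim statistic. This is a minimal illustration under a conjugate-normal setup, not the paper's code; the function names and the parameterisation (information `info`, prior on the treatment difference `delta`) are ours. With a flat prior, the posterior probability that delta exceeds zero reduces to Phi(z), so an equal Bayesian critical value at every look yields a constant z-boundary, i.e. a Pocock-type rule; an informative negative prior pushes the implied boundary upward, toward O'Brien-Fleming-like early conservatism.

```python
import math
from scipy.stats import norm

def posterior_prob_positive(z, info, prior_mean=0.0, prior_var=None):
    """P(delta > 0 | data) for an interim z-statistic.

    The interim estimate dhat = z / sqrt(info) is modelled as
    N(delta, 1/info); delta has a N(prior_mean, prior_var) prior,
    or a flat (non-informative) prior when prior_var is None.
    """
    dhat = z / math.sqrt(info)
    if prior_var is None:               # flat prior
        post_mean, post_var = dhat, 1.0 / info
    else:                               # conjugate normal update
        post_var = 1.0 / (info + 1.0 / prior_var)
        post_mean = post_var * (info * dhat + prior_mean / prior_var)
    return norm.cdf(post_mean / math.sqrt(post_var))

def equivalent_z_boundary(gamma, info, prior_mean=0.0, prior_var=None):
    """z threshold at which the posterior probability equals gamma,
    i.e. the frequentist stopping boundary implied by the Bayesian rule.
    With a flat prior this is simply Phi^{-1}(gamma) at every look."""
    if prior_var is None:
        return norm.ppf(gamma)
    post_var = 1.0 / (info + 1.0 / prior_var)
    post_sd = math.sqrt(post_var)
    # Solve post_mean = post_sd * Phi^{-1}(gamma) for z.
    target = post_sd * norm.ppf(gamma) - post_var * prior_mean / prior_var
    return target * (info + 1.0 / prior_var) / math.sqrt(info)
```

For example, `equivalent_z_boundary(0.99, 100.0, prior_mean=-0.5, prior_var=0.04)` is well above `norm.ppf(0.99)`: the sceptical negative prior demands stronger interim evidence before stopping.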