Results 1 - 20 of 23
1.
Stat Med ; 43(11): 2239-2262, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38545961

ABSTRACT

A coordinated testing policy is an essential tool for responding to emerging epidemics, as was seen with COVID-19. However, it is very difficult to agree on the best policy when there are multiple conflicting objectives. A key objective is minimizing cost, which is why pooled testing (a method that pools samples taken from multiple individuals and analyzes them with a single diagnostic test) has been suggested. In this article, we present results from an extensive and realistic simulation study comparing testing policies based on individually testing subjects with symptoms (a policy resembling the UK strategy at the start of the COVID-19 pandemic), individually testing randomly selected subjects, and testing random pools of subjects. To compare these testing methods, a dynamic model composed of a relationship network and an extended SEIR model is used. In contrast to most existing literature, testing capacity is treated as fixed and limited rather than unbounded. This article then explores the impact of the proportion of symptomatic infections on the expected performance of testing policies. Symptomatic testing performs better than pooled testing unless a low proportion of infections are symptomatic. Additionally, we include the novel feature of non-compliance with testing and perform a sensitivity analysis for different compliance assumptions. Our results suggest that for the pooled testing scheme to be superior to testing symptomatic people individually, only a small proportion of the population (>10%) needs to not comply with the testing procedure.
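As background for why pooling saves tests at low prevalence, here is a minimal sketch of the classical two-stage (Dorfman) pooled-testing calculation. This is purely illustrative and is not the network/SEIR simulation model used in the article; the function name and parameter values are ours.

```python
# Expected tests per person under two-stage (Dorfman) pooling with a
# perfect test: one test per pool of size n, plus n individual follow-up
# tests whenever the pool is positive, i.e. with probability 1-(1-p)^n.
def tests_per_person(p: float, n: int) -> float:
    return 1 / n + (1 - (1 - p) ** n)

for p in (0.01, 0.05, 0.20):
    best = min(range(2, 51), key=lambda n: tests_per_person(p, n))
    print(f"prevalence {p:.2f}: best pool size {best}, "
          f"{tests_per_person(p, best):.3f} tests/person vs 1.000 individual")
```

At 1% prevalence, pools of around 11 need roughly 0.2 tests per person; as prevalence grows the advantage shrinks, consistent with the trade-offs the simulation study explores.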


Subjects
COVID-19 Testing, COVID-19, Computer Simulation, Humans, COVID-19/diagnosis, COVID-19/epidemiology, COVID-19 Testing/methods, COVID-19 Testing/statistics & numerical data, Pandemics, Models, Statistical, SARS-CoV-2, Health Policy, United Kingdom/epidemiology
2.
BMC Infect Dis ; 23(1): 900, 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38129789

ABSTRACT

BACKGROUND: There is evidence that during the COVID-19 pandemic, a number of patient and healthcare worker (HCW) infections were nosocomial. Various measures were put in place to try to reduce these infections, including developing asymptomatic PCR (polymerase chain reaction) testing schemes for healthcare workers. Regularly testing all healthcare workers requires many tests, while reducing this number by testing only some healthcare workers can result in undetected cases. An efficient way to test as many individuals as possible with a limited testing capacity is to pool multiple samples to be analysed with a single test (known as pooled testing). METHODS: Two different pooled testing schemes for asymptomatic testing are evaluated using an individual-based model representing the transmission of SARS-CoV-2 in a 'typical' English hospital. We adapt the modelling to reflect two scenarios: a) a retrospective look at earlier SARS-CoV-2 variants under lockdown or social restrictions, and b) transitioning back to 'normal life' without lockdown and with the Omicron variant. The two pooled testing schemes analysed differ in the population that is eligible for testing. In the 'ward' testing scheme, only healthcare workers who work on a single ward are eligible; in the 'full' testing scheme, all healthcare workers are eligible, including those who move across wards. Both pooled schemes are compared against the baseline scheme, which tests only symptomatic healthcare workers. RESULTS: Including a pooled asymptomatic testing scheme is found to have a modest (albeit statistically significant) effect, reducing the total number of nosocomial healthcare worker infections by about 2% in both the lockdown and non-lockdown settings. However, this reduction must be balanced against the increase in cost and in healthcare worker isolations. Ward and full testing reduce HCW infections similarly, but ward testing costs much less. We also consider the use of lateral flow devices (LFDs) for follow-up testing. Using LFDs reduces cost and time, but LFDs have a different error profile from PCR tests. CONCLUSIONS: Whether a PCR-only or a PCR-and-LFD ward testing scheme is chosen depends on the metrics of most interest to policy makers, the virus prevalence and whether there is a lockdown.


Subjects
COVID-19, Cross Infection, Humans, COVID-19/diagnosis, COVID-19/epidemiology, COVID-19/prevention & control, Retrospective Studies, Hospitals, Health Personnel, Cross Infection/diagnosis, Cross Infection/epidemiology, Cross Infection/prevention & control
4.
Trials ; 24(1): 640, 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37798805

ABSTRACT

In the UK, the Medicines and Healthcare products Regulatory Agency consulted on proposals "to improve and strengthen the UK clinical trials legislation to help us make the UK the best place to research and develop safe and innovative medicines". The purpose of the consultation was to help finalise the proposals and contribute to the drafting of secondary legislation. We discussed these proposals as members of the Trials Methodology Research Partnership Adaptive Designs Working Group, which is jointly funded by the Medical Research Council and the National Institute for Health and Care Research. Two topics arose frequently in the discussion: the emphasis on legislation, and the absence of questions on data sharing. In our opinion, the proposals rely heavily on legislation to change practice. However, clinical trials are heterogeneous, and as a result some trials will struggle to comply with all of the proposed legislation. Furthermore, adaptive design clinical trials are even more heterogeneous than their non-adaptive counterparts, and face more challenges. Consequently, it is possible that increased legislation could have a greater negative impact on adaptive designs than on non-adaptive designs. Overall, we are sceptical that the introduction of legislation will achieve the desired outcomes, with some exceptions. Meanwhile, the topic of data sharing - making anonymised individual-level clinical trial data available to other investigators for further use - is entirely absent from the proposals and the consultation in general. However, as an aspect of the wider concept of open science and reproducible research, data sharing is an increasingly important part of clinical trials. Its benefits include faster innovation, improved surveillance of drug safety and effectiveness, and reduced participant exposure to unnecessary risk. A number of UK-focused documents already discuss and encourage data sharing, for example the Concordat on Open Research Data and the Medical Research Council's Data Sharing Policy. We strongly suggest that data sharing should be the norm rather than the exception, and hope that the forthcoming proposals on clinical trials invite discussion on this important topic.


Subjects
Information Dissemination, Research Design, Humans, Delivery of Health Care
5.
Stat Sci ; 38(2): 185-208, 2023 May.
Article in English | MEDLINE | ID: mdl-37324576

ABSTRACT

Response-Adaptive Randomization (RAR) is part of a wider class of data-dependent sampling algorithms, for which clinical trials are typically used as a motivating application. In that context, patient allocation to treatments is determined by randomization probabilities that change based on the accrued response data in order to achieve experimental goals. RAR has received abundant theoretical attention from the biostatistical literature since the 1930s and has been the subject of numerous debates. In the last decade, it has received renewed consideration from the applied and methodological communities, driven by well-known practical examples and its widespread use in machine learning. Papers on the subject present different views on its usefulness that are not easy to reconcile; this work aims to address that gap by providing a unified, broad and fresh review of methodological and practical issues to consider when debating the use of RAR in clinical trials.
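One concrete example of an RAR rule is Thompson sampling for a two-armed trial with binary outcomes. The sketch below is illustrative only (the review covers a much broader class of procedures), and all names and settings are ours.

```python
import random

# Thompson sampling: allocate each patient to an arm with probability
# equal to the posterior probability that the arm is best, implemented
# by drawing once from each arm's Beta posterior and taking the max.
def thompson_trial(p_true=(0.3, 0.5), n_patients=100, seed=1):
    random.seed(seed)
    success, failure = [1, 1], [1, 1]   # Beta(1, 1) priors on each arm
    allocations = [0, 0]
    for _ in range(n_patients):
        draws = [random.betavariate(success[a], failure[a]) for a in (0, 1)]
        arm = draws.index(max(draws))
        allocations[arm] += 1
        outcome = random.random() < p_true[arm]   # observe binary response
        success[arm] += outcome
        failure[arm] += 1 - outcome
    return allocations

print(thompson_trial())  # allocation drifts towards the better arm
```

Such skewed allocation is exactly what drives the ethical appeal of RAR, and also the inferential concerns (e.g., type I error inflation) debated in the literature.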

6.
Stat Med ; 42(14): 2496-2520, 2023 06 30.
Article in English | MEDLINE | ID: mdl-37021359

ABSTRACT

In adaptive clinical trials, the conventional end-of-trial point estimate of a treatment effect is prone to bias, that is, a systematic tendency to deviate from its true value. As stated in recent FDA guidance on adaptive designs, it is desirable to report estimates of treatment effects that reduce or remove this bias. However, it may be unclear which of the available estimators are preferable, and their use remains rare in practice. This article is the second in a two-part series that studies the issue of bias in point estimation for adaptive trials. Part I provided a methodological review of approaches to remove or reduce the potential bias in point estimation for adaptive designs. In part II, we discuss how bias can affect standard estimators and assess the negative impact this can have. We review current practice for reporting point estimates and illustrate the computation of different estimators using a real adaptive trial example (including code), which we use as a basis for a simulation study. We show that while on average the values of these estimators can be similar, for a particular trial realization they can give noticeably different values for the estimated treatment effect. Finally, we propose guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design. The issue of bias should be considered throughout the whole lifecycle of an adaptive design, with the estimation strategy prespecified in the statistical analysis plan. When available, unbiased or bias-reduced estimates are to be preferred.
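A toy simulation (our own, not the trial example analysed in the article) shows where this bias comes from: if the better-performing of two identical arms is selected at an interim analysis, the conventional end-of-trial mean of the selected arm overshoots the true effect.

```python
import random, statistics

# Two-stage design: run two arms with true effect 0, keep the arm with
# the larger stage-1 mean, add stage-2 data, then report the naive mean.
def selected_arm_estimate(mu=0.0, n1=50, n2=50):
    stage1 = [[random.gauss(mu, 1) for _ in range(n1)] for _ in (0, 1)]
    best = max((0, 1), key=lambda a: statistics.mean(stage1[a]))
    stage2 = [random.gauss(mu, 1) for _ in range(n2)]
    return statistics.mean(stage1[best] + stage2)

random.seed(7)
estimates = [selected_arm_estimate() for _ in range(2000)]
print(f"true effect 0.0, mean naive estimate {statistics.mean(estimates):.3f}")
# systematically positive: the conventional estimator is biased upwards
```

The bias-adjusted estimators discussed in the article target exactly this systematic overshoot.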


Subjects
Research Design, Humans, Computer Simulation, Bias
7.
Stat Methods Med Res ; 32(6): 1193-1202, 2023 06.
Article in English | MEDLINE | ID: mdl-37021480

ABSTRACT

Response-adaptive randomization allows the probabilities of allocating patients to treatments in a clinical trial to change based on the previously observed response data, in order to achieve different experimental goals. One concern over the use of such designs in practice, particularly from a regulatory viewpoint, is controlling the type I error rate. To address this, Robertson and Wason (Biometrics, 2019) proposed methodology that guarantees familywise error rate control for a large class of response-adaptive designs by re-weighting the usual z-test statistic. In this article, we propose a conceptually simpler improvement of their method for the setting where patients are allocated to the experimental treatment arms in blocks (i.e., groups) using response-adaptive randomization. We show that the modified method guarantees non-negative weights for the contribution of each block of data to the adjusted test statistics, and that it can also provide a substantial power advantage in practice.
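The exact re-weighting is specific to the method; as a generic sketch of the underlying combination-test idea, blockwise z-statistics can be merged with prespecified non-negative weights (the weighting shown here is our illustration, not the paper's procedure):

```latex
\[
  Z_{\mathrm{adj}}
  \;=\;
  \frac{\sum_{j=1}^{J} w_j Z_j}{\sqrt{\sum_{j=1}^{J} w_j^2}},
  \qquad w_j \ge 0,
\]
```

where $Z_j$ is the z-statistic from block $j$. With weights fixed in advance, $Z_{\mathrm{adj}}$ is standard normal under the null whatever allocation probabilities the response-adaptive rule chose between blocks; the paper's contribution concerns keeping the (data-dependent) weights non-negative while preserving error control.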


Subjects
Research Design, Humans, Random Allocation, Probability
8.
Stat Med ; 42(14): 2475-2495, 2023 06 30.
Article in English | MEDLINE | ID: mdl-37005003

ABSTRACT

Platform trials evaluate multiple experimental treatments under a single master protocol, where new treatment arms are added to the trial over time. Given the multiple treatment comparisons, there is the potential for inflation of the overall type I error rate, which is complicated by the fact that the hypotheses are tested at different times and are not necessarily pre-specified. Online error rate control methodology provides a possible solution to the problem of multiplicity for platform trials where a relatively large number of hypotheses are expected to be tested over time. In the online multiple hypothesis testing framework, hypotheses are tested one-by-one over time, where at each time-step an analyst decides whether to reject the current null hypothesis without knowledge of future tests but based solely on past decisions. Methodology has recently been developed for online control of the false discovery rate as well as the familywise error rate (FWER). In this article, we describe how to apply online error rate control to the platform trial setting, present extensive simulation results, and give some recommendations for the use of this new methodology in practice. We show that the algorithms for online error rate control can have a substantially lower FWER than uncorrected testing, while still achieving noticeable gains in power when compared with the use of a Bonferroni correction. We also illustrate how online error rate control would have impacted a currently ongoing platform trial.
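The simplest member of this framework is an online Bonferroni rule (often called Alpha-spending in the online testing literature), sketched below; the error-budget sequence is our illustrative choice.

```python
import math

# Online FWER control by a union bound: test hypothesis i at level
# alpha * gamma_i, where the gamma_i sum to (at most) 1 over an
# infinite stream. Here gamma_i = 6 / (pi^2 * i^2).
def online_bonferroni(pvals, alpha=0.05):
    decisions = []
    for i, p in enumerate(pvals, start=1):
        level = alpha * 6 / (math.pi ** 2 * i ** 2)
        decisions.append(p <= level)
    return decisions

print(online_bonferroni([0.0001, 0.01, 0.0004, 0.2]))
# [True, False, True, False] -- each decision uses only past information
```

Procedures that reinvest unspent error budget improve on this union bound, which is the source of the power gains over the Bonferroni correction noted in the abstract.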


Subjects
Research Design, Humans, Data Interpretation, Statistical, Computer Simulation
9.
Stat Med ; 42(2): 122-145, 2023 01 30.
Article in English | MEDLINE | ID: mdl-36451173

ABSTRACT

Recent FDA guidance on adaptive clinical trial designs defines bias as "a systematic tendency for the estimate of treatment effect to deviate from its true value," and states that it is desirable to obtain and report estimates of treatment effects that reduce or remove this bias. The conventional end-of-trial point estimates of the treatment effects are prone to bias in many adaptive designs, because they do not take into account the potential and realized trial adaptations. While much of the methodological development on adaptive designs has tended to focus on control of type I error rates and power considerations, the question of biased estimation has, in contrast, received relatively little attention. This article is the first in a two-part series that studies the issue of potential bias in point estimation for adaptive trials. Part I provides a comprehensive review of the methods to remove or reduce the potential bias in point estimation of treatment effects for adaptive designs, while part II illustrates how to implement these in practice and proposes a set of guidelines for trial statisticians. The methods reviewed in this article can be broadly classified into unbiased and bias-reduced estimation, and we also provide a classification of estimators by the type of adaptive design. We compare the proposed methods, highlight available software and code, and discuss potential methodological gaps in the literature.


Subjects
Research Design, Software, Humans, Bias
10.
Bioinformatics ; 39(1)2023 01 01.
Article in English | MEDLINE | ID: mdl-36326442

ABSTRACT

MOTIVATION: While classical approaches for controlling the false discovery rate (FDR) of RNA sequencing (RNAseq) experiments have been well described, modern research workflows and growing databases enable a new paradigm of controlling the FDR globally across past, present and future RNAseq experiments. The simplest analysis strategy, which analyses each RNAseq experiment separately and applies an FDR correction method, can lead to inflation of the overall FDR. We propose applying recently developed methodology for online multiple hypothesis testing to control the global FDR in a principled way across multiple RNAseq experiments. RESULTS: We show that repeated application of classical offline approaches gives variable control of the global FDR of RNAseq experiments over time. We demonstrate that online FDR algorithms are a principled way to control the FDR. Furthermore, in certain simulation scenarios, we observe empirically that online approaches have power comparable to repeated offline approaches. AVAILABILITY AND IMPLEMENTATION: The onlineFDR package is freely available at http://www.bioconductor.org/packages/onlineFDR. Additional code used for the simulation studies can be found at https://github.com/latlio/onlinefdr_rnaseq_simulation. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Algorithms, Software, Computer Simulation, Sequence Analysis, RNA/methods, Base Sequence
11.
Stat Sci ; 38(4): 557-575, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-38223302

ABSTRACT

Modern data analysis frequently involves large-scale hypothesis testing, which naturally gives rise to the problem of maintaining control of a suitable type I error rate, such as the false discovery rate (FDR). In many biomedical and technological applications, an additional complexity is that hypotheses are tested in an online manner, one-by-one over time. However, traditional procedures that control the FDR, such as the Benjamini-Hochberg procedure, assume that all p-values are available to be tested at a single time point. To address these challenges, a new field of methodology has developed over the past 15 years showing how to control error rates for online multiple hypothesis testing. In this framework, hypotheses arrive in a stream, and at each time point the analyst decides whether to reject the current hypothesis based both on the evidence against it, and on the previous rejection decisions. In this paper, we present a comprehensive exposition of the literature on online error rate control, with a review of key theory as well as a focus on applied examples. We also provide simulation results comparing different online testing algorithms and an up-to-date overview of the many methodological extensions that have been proposed.
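Among the first such procedures is the LOND rule ("significance Levels based On Number of Discoveries") of Javanmard and Montanari, whose level for the i-th test grows with the discoveries made so far. A minimal sketch (the decaying budget sequence is our illustrative choice):

```python
import math

# LOND: reject the i-th null if p_i <= beta_i * (D + 1), where D is the
# number of discoveries so far and the beta_i sum to alpha over the stream.
def lond(pvals, alpha=0.05):
    discoveries, decisions = 0, []
    for i, p in enumerate(pvals, start=1):
        beta_i = alpha * 6 / (math.pi ** 2 * i ** 2)   # sums to alpha
        reject = p <= beta_i * (discoveries + 1)
        discoveries += reject
        decisions.append(bool(reject))
    return decisions

print(lond([0.001, 0.0002, 0.04, 0.3, 0.0001]))
# [True, True, False, False, True]
```

Unlike the Benjamini-Hochberg procedure, each decision here depends only on the past, so hypotheses can keep arriving indefinitely.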

12.
PLoS One ; 17(9): e0274272, 2022.
Article in English | MEDLINE | ID: mdl-36094920

ABSTRACT

When comparing the performance of multi-armed bandit algorithms, the potential impact of missing data is often overlooked. In practice, missing data also affect implementation, where the simplest way to proceed is to continue to sample according to the original bandit algorithm while ignoring missing outcomes. We investigate the impact of this approach on the performance of several bandit algorithms through an extensive simulation study, assuming the rewards are missing at random. We focus on two-armed bandit algorithms with binary outcomes in the context of patient allocation for clinical trials with relatively small sample sizes, although our results apply to other applications of bandit algorithms where missing data are expected to occur. We assess the resulting operating characteristics, including the expected reward, considering different probabilities of missingness in both arms. The key finding of our work is that when using the simplest strategy of ignoring missing data, the impact on the expected performance of multi-armed bandit strategies varies according to the way these strategies balance the exploration-exploitation trade-off. Algorithms geared towards exploration continue to assign samples to the arm with more missing responses, which, being perceived as the arm with less observed information, is deemed more appealing by the algorithm than it would otherwise be. In contrast, algorithms geared towards exploitation rapidly assign a high value to samples from the arms with a currently high mean, irrespective of the level of observations per arm. Furthermore, for algorithms focusing more on exploration, we illustrate that the problem of missing responses can be alleviated using a simple mean imputation approach, as sketched below.
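The sketch below (our own illustration, not the paper's simulation study) contrasts the two strategies on an epsilon-greedy rule: ignore missing outcomes entirely, or impute each missing reward with the arm's current observed mean.

```python
import random

# Two-armed epsilon-greedy bandit with binary rewards missing at random.
def run(miss_prob=(0.0, 0.4), p_true=(0.5, 0.5), n=200, eps=0.1,
        impute=False, seed=3):
    random.seed(seed)
    rewards = [[], []]                        # recorded rewards per arm
    alloc = [0, 0]                            # allocations per arm
    for _ in range(n):
        means = [sum(r) / len(r) if r else 0.5 for r in rewards]
        arm = (random.randrange(2) if random.random() < eps
               else means.index(max(means)))  # explore vs exploit
        alloc[arm] += 1
        reward = float(random.random() < p_true[arm])
        if random.random() < miss_prob[arm]:  # outcome goes missing
            if impute:
                rewards[arm].append(means[arm])   # mean imputation
            # else: simplest strategy, record nothing for this patient
        else:
            rewards[arm].append(reward)
    return alloc

print("ignore:", run(impute=False), " impute:", run(impute=True))
```

Varying `miss_prob` shows how asymmetric missingness can distort allocation under the ignore strategy even when the true response rates are identical.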


Subjects
Algorithms, Computer Simulation, Humans, Research, Reward
13.
Stat Methods Med Res ; 31(11): 2104-2121, 2022 11.
Article in English | MEDLINE | ID: mdl-35876412

ABSTRACT

Covariate adjustment via a regression approach is known to increase the precision of statistical inference when fixed trial designs are employed in randomized controlled studies. When an adaptive multi-arm design with the ability to select treatments is employed, it is unclear how covariate adjustment affects various aspects of the study. We consider the design framework that relies on pre-specified treatment selection rule(s) and a combination test approach for hypothesis testing. Our primary goal is to evaluate the impact of covariate adjustment on adaptive multi-arm designs with treatment selection; our secondary goal is to show how the Uniformly Minimum Variance Conditionally Unbiased Estimator can be extended analytically to account for covariate adjustment. We find that adjusting for different sets of covariates can lead to different treatment selection outcomes and hence different probabilities of rejecting hypotheses. Nevertheless, we do not see any negative impact on the control of the familywise error rate when covariates are included in the analysis model. Adjusting for covariates that are moderately or highly correlated with the outcome brings various benefits to the analysis of the design, whereas including covariates that are uncorrelated with the outcome has negligible impact. Overall, pre-specification of covariate adjustment is recommended for the analysis of adaptive multi-arm designs with treatment selection. Having the statistical analysis plan in place prior to the interim and final analyses is crucial, especially when a non-collapsible measure of treatment effect is considered in the trial.
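The precision gain from adjustment can be seen in a toy fixed-design simulation (ours, not the paper's adaptive setting): regressing the outcome on treatment plus a prognostic covariate removes the covariate's share of the outcome variance from the treatment-effect estimator.

```python
import numpy as np

# Compare the spread of the unadjusted difference-in-means estimator with
# the covariate-adjusted regression estimator over repeated trials.
rng = np.random.default_rng(0)
reps, n = 2000, 200
unadj, adj = [], []
for _ in range(reps):
    t = rng.integers(0, 2, n)           # 1:1 randomization indicator
    x = rng.normal(size=n)              # prognostic baseline covariate
    y = 0.3 * t + 0.8 * x + rng.normal(size=n)
    unadj.append(y[t == 1].mean() - y[t == 0].mean())
    X = np.column_stack([np.ones(n), t, x])
    adj.append(np.linalg.lstsq(X, y, rcond=None)[0][1])   # coef on t
print(f"SD unadjusted: {np.std(unadj):.3f}  SD adjusted: {np.std(adj):.3f}")
```

With a covariate this strongly correlated with the outcome, the adjusted estimator's standard deviation is markedly smaller; with an uncorrelated covariate the two are essentially identical, matching the abstract's findings.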


Subjects
Research Design, Probability, Treatment Outcome, Patient Selection, Computer Simulation
14.
Stat Med ; 41(5): 877-890, 2022 02 28.
Article in English | MEDLINE | ID: mdl-35023184

ABSTRACT

Adapting the final sample size of a trial to the evidence accruing during the trial is a natural way to address planning uncertainty. Since the sample size is usually determined by an argument based on the power of the trial, an interim analysis raises the question of how the final sample size should be determined conditional on the accrued information. To this end, we first review and compare common approaches to estimating conditional power, which is often used in heuristic sample size recalculation rules. We then discuss the connection between heuristic sample size recalculation and optimal two-stage designs, demonstrating that the latter is the superior approach in a fully preplanned setting. Hence, unplanned design adaptations should only be conducted in reaction to new trial-external evidence, operational needs to deviate from the originally chosen design, or post hoc changes in the optimality criterion, but not in reaction to trial-internal data. We show that commonly discussed sample size recalculation rules lead to paradoxical adaptations, where an initially planned optimal design is not invariant under the adaptation rule even if the planning assumptions do not change. Finally, we propose two alternative ways of reacting to newly emerging trial-external evidence that are consistent with the originally planned design and avoid such inconsistencies.
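For reference, the standard conditional-power formula for a one-sided z-test (a textbook sketch, not specific to any one recalculation rule reviewed here): with interim statistic $z_1$ at information level $I_1$, final analysis at information $I_2$, and assumed drift $\theta$, independent increments give

```latex
\[
  \mathrm{CP}_{\theta}(z_1)
  = \Phi\!\left(
      \frac{z_1\sqrt{I_1} + \theta\,(I_2 - I_1) - z_{1-\alpha}\sqrt{I_2}}
           {\sqrt{I_2 - I_1}}
    \right).
\]
```

Heuristic recalculation rules pick the second-stage information $I_2 - I_1$ so that this quantity reaches a target such as 80%; whether $\theta$ is the planning alternative or the interim estimate is precisely where the approaches compared in the article differ.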


Subjects
Friends, Research Design, Humans, Sample Size, Uncertainty
15.
Am Stat ; 75(4): 424-432, 2021.
Article in English | MEDLINE | ID: mdl-34992303

ABSTRACT

Sample size derivation is a crucial element of planning any confirmatory trial. The required sample size is typically derived based on constraints on the maximal acceptable Type I error rate and minimal desired power. Power depends on the unknown true effect and tends to be calculated either for the smallest relevant effect or a likely point alternative. The former might be problematic if the minimal relevant effect is close to the null, thus requiring an excessively large sample size, while the latter is dubious since it does not account for the a priori uncertainty about the likely alternative effect. A Bayesian perspective on sample size derivation for a frequentist trial can reconcile arguments about the relative a priori plausibility of alternative effects with ideas based on the relevance of effect sizes. Many suggestions as to how such "hybrid" approaches could be implemented in practice have been put forward. However, key quantities are often defined in subtly different ways in the literature. Starting from the traditional entirely frequentist approach to sample size derivation, we derive consistent definitions for the most commonly used hybrid quantities and highlight connections, before discussing and demonstrating their use in sample size derivation for clinical trials.
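One commonly used hybrid quantity (the article notes that definitions vary subtly across the literature) is the prior-averaged power, often called assurance or probability of success:

```latex
\[
  \mathrm{PoS}(n) \;=\; \int \mathrm{Power}_n(\theta)\,\pi(\theta)\,\mathrm{d}\theta,
\]
```

where $\mathrm{Power}_n(\theta)$ is the frequentist power curve of the planned test at sample size $n$ and $\pi$ is a prior encoding the a priori uncertainty about the effect; $n$ is then chosen so that $\mathrm{PoS}(n)$ clears a threshold.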

16.
Pharm Stat ; 20(1): 109-116, 2021 01.
Article in English | MEDLINE | ID: mdl-32790026

ABSTRACT

Multi-arm trials are an efficient way of simultaneously testing several experimental treatments against a shared control group. As well as reducing the sample size required compared to running each trial separately, they have important administrative and logistical advantages. There has been debate over whether multi-arm trials should correct for the fact that multiple null hypotheses are tested within the same experiment. Previous opinions have ranged from no correction being required to a stringent correction (controlling the probability of making at least one type I error) being needed, with regulators arguing for the latter in confirmatory settings. In this article, we propose that controlling the false-discovery rate (FDR) is a suitable compromise, with an appealing interpretation in multi-arm clinical trials. We investigate the properties of the different correction methods in terms of the positive and negative predictive value (respectively, how confident we are that a recommended treatment is effective and that a non-recommended treatment is ineffective), varying the number of arms and the proportion of treatments that are truly effective. Controlling the FDR provides good properties: it retains the high positive predictive value of FWER correction in situations where a low proportion of treatments is effective, and it also has a good negative predictive value in situations where a high proportion of treatments is effective. In a multi-arm trial testing distinct treatment arms, we recommend that sponsors and trialists consider use of the FDR.
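The classical way to control the FDR across the m treatment-vs-control comparisons of a multi-arm trial is the Benjamini-Hochberg step-up procedure, sketched here with illustrative p-values:

```python
# Benjamini-Hochberg: sort the m p-values, find the largest rank k with
# p_(k) <= k * alpha / m, and reject the k smallest p-values.
def benjamini_hochberg(pvals, alpha=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))
# [True, True, False, False]: two arms recommended at FDR level 0.05
```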


Subjects
Research Design, Control Groups, Data Interpretation, Statistical, Humans, Probability, Sample Size
18.
Stat Med ; 39(23): 3135-3155, 2020 10 15.
Article in English | MEDLINE | ID: mdl-32557848

ABSTRACT

When simultaneously testing multiple hypotheses, the usual approach in the context of confirmatory clinical trials is to control the familywise error rate (FWER), which bounds the probability of making at least one false rejection. In many trial settings, these hypotheses will additionally have a hierarchical structure that reflects the relative importance and links between different clinical objectives. The graphical approach of Bretz et al. (2009) is a flexible and easily communicable way of controlling the FWER while respecting complex trial objectives and multiple structured hypotheses. However, the FWER can be a very stringent criterion that leads to procedures with low power, and may not be appropriate in exploratory trial settings. This motivates controlling generalized error rates, particularly when the number of hypotheses tested is no longer small. We consider the generalized familywise error rate (k-FWER), which is the probability of making k or more false rejections, as well as the tail probability of the false discovery proportion (FDP), which is the probability that the proportion of false rejections is greater than some threshold. We also consider asymptotic control of the false discovery rate, which is the expectation of the FDP. In this article, we show how to control these generalized error rates when using the graphical approach and its extensions. We demonstrate the utility of the resulting graphical procedures on three clinical trial case studies.
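For concreteness, with $R$ total rejections of which $V$ are false, the error rates considered are (standard definitions, summarized here for reference):

```latex
\[
  k\text{-FWER} = \Pr(V \ge k), \qquad
  \Pr(\mathrm{FDP} > \gamma) \ \text{with}\ \mathrm{FDP} = \frac{V}{\max(R, 1)}, \qquad
  \mathrm{FDR} = \mathbb{E}[\mathrm{FDP}].
\]
```

Setting $k = 1$ recovers the usual FWER, and controlling the FDP tail at threshold $\gamma$ means guaranteeing $\Pr(\mathrm{FDP} > \gamma) \le \alpha$.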


Subjects
Research Design, Humans, Probability
19.
Commun Stat Theory Methods ; 48(3): 616-627, 2019.
Article in English | MEDLINE | ID: mdl-31217751

ABSTRACT

To efficiently and completely correct for selection bias in adaptive two-stage trials, uniformly minimum variance conditionally unbiased estimators (UMVCUEs) have been derived for trial designs with normally distributed data. However, a common assumption is that the variances are known exactly, which is unlikely to be the case in practice. We extend the work of Cohen and Sackrowitz (Statistics & Probability Letters, 8(3):273-278, 1989), who proposed a UMVCUE for the best-performing candidate in the normal setting with a common unknown variance. Our extension allows for multiple selected candidates, as well as unequal stage-one and stage-two sample sizes.

20.
Bioinformatics ; 35(20): 4196-4199, 2019 10 15.
Article in English | MEDLINE | ID: mdl-30873526

ABSTRACT

SUMMARY: In many areas of biological research, hypotheses are tested in a sequential manner, without access to future P-values or even the number of hypotheses to be tested. A key setting where this online hypothesis testing occurs is publicly available data repositories, where the family of hypotheses to be tested continually grows as new data accumulate over time. Recently, Javanmard and Montanari proposed the first procedures that control the false discovery rate (FDR) for online hypothesis testing. We present an R package, onlineFDR, which implements these procedures and provides wrapper functions to apply them to a historic dataset or a growing data repository. AVAILABILITY AND IMPLEMENTATION: The R package is freely available through Bioconductor (http://www.bioconductor.org/packages/onlineFDR). SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Software