Results 1 - 20 of 151
1.
Stat Med ; 43(15): 2944-2956, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38747112

ABSTRACT

Sample size formulas have been proposed for comparing two sensitivities (specificities) in the presence of verification bias under a paired design. However, the existing sample size formulas involve lengthy calculations of derivatives and are too complicated to implement. In this paper, we propose alternative sample size formulas for each of three existing tests: two Wald tests and one weighted McNemar's test. The proposed sample size formulas are more intuitive and simpler to implement than their existing counterparts. Furthermore, by comparing the sample sizes calculated based on the three tests, we show that the three tests have similar sample size requirements, even though the weighted McNemar's test uses only the data from discordant pairs whereas the two Wald tests also use the additional data from concordant pairs.


Subjects
Sensitivity and Specificity, Sample Size, Humans, Statistical Models, Bias, Computer Simulation
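The verification-bias-adjusted formulas themselves are not given in the abstract, but the general shape of a McNemar-type paired sample size calculation can be sketched as follows. This is a minimal illustration using the classical normal-approximation formula for paired proportions, not the authors' proposed formulas; all numeric inputs are hypothetical:

```python
import math
from statistics import NormalDist

def mcnemar_pairs(p12, p21, alpha=0.05, power=0.80):
    """Approximate number of pairs for McNemar's test, given the two
    discordant-cell probabilities p12 and p21 (normal approximation)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    p_disc = p12 + p21            # total discordance probability
    delta = p12 - p21             # difference in marginal proportions
    n = (z_a * math.sqrt(p_disc)
         + z_b * math.sqrt(p_disc - delta ** 2)) ** 2 / delta ** 2
    return math.ceil(n)

print(mcnemar_pairs(0.15, 0.05))  # → 155
```

Larger discordant-cell differences shrink the required number of pairs, which is why the discordant-pair probabilities dominate such designs.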
2.
Stat Med ; 43(2): 358-378, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38009329

ABSTRACT

Individually randomized group treatment (IRGT) trials, in which the clustering of outcomes is induced by group-based treatment delivery, are increasingly popular in public health research. IRGT trials frequently incorporate longitudinal measurements, and proper sample size calculations should account for correlation structures reflecting both the treatment-induced clustering and the repeated outcome measurements. Given the relatively sparse literature on designing longitudinal IRGT trials, we propose sample size procedures for continuous and binary outcomes based on the generalized estimating equations approach, employing block exchangeable correlation structures with different correlation parameters for the treatment arm and the control arm, and surveying five marginal mean models with different assumptions about the time effect: no time effect with constant treatment effect, linear time with constant treatment effect, categorical time with constant treatment effect, linear time by treatment interaction, and categorical time by treatment interaction. Closed-form sample size formulas, which depend on the eigenvalues of the correlation matrices, are derived for continuous outcomes; detailed numerical sample size procedures are proposed for binary outcomes. Through simulations, we demonstrate that the empirical power agrees well with the predicted power, for as few as eight groups formed in the treatment arm, when data are analyzed using the matrix-adjusted estimating equations for the correlation parameters with a bias-corrected sandwich variance estimator.


Subjects
Statistical Models, Research Design, Humans, Sample Size, Bias, Cluster Analysis, Computer Simulation
3.
J Dairy Sci ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38876218

ABSTRACT

This research introduces a systematic framework for calculating sample size in studies focusing on enteric methane (CH4, g/kg of DMI) yield reduction in dairy cows. Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we conducted a comprehensive search across the Web of Science, Scopus, and PubMed Central databases for studies published from 2012 to 2023. The inclusion criteria were: studies reporting CH4 yield and its variability in dairy cows, employing specific experimental designs (Latin Square Design (LSD), Crossover Design, Randomized Complete Block Design (RCBD), and Repeated Measures Design) and measurement methods (open-circuit respirometry chambers (RC), the GreenFeed system, and the sulfur hexafluoride tracer technique), conducted in Canada, the United States and Europe. A total of 150 studies, which included 177 reports, met our criteria and were included in the database. Our methodology for using the database for sample size calculations began by defining 6 CH4 yield reduction levels (5, 10, 15, 20, 30, and 50%). Utilizing an adjusted Cohen's f formula and a power analysis, we calculated the sample sizes required for these reductions in balanced LSD and RCBD reports from studies involving 3 or 4 treatments. The results indicate that within-subject studies (i.e., LSD) require smaller sample sizes to detect CH4 yield reductions compared with between-subject studies (i.e., RCBD). Although experiments using RC typically require fewer individuals due to their higher accuracy, our results demonstrate that this expected advantage is not evident in reports from RCBD studies with 4 treatments. A key innovation of this research is the development of a web-based tool that simplifies the process of sample size calculation (samplesizecalculator.ucdavis.edu). Developed using Python, this tool leverages the extensive database to provide tailored sample size recommendations for specific experimental scenarios. It ensures that experiments are adequately powered to detect meaningful differences in CH4 emissions, thereby contributing to the scientific rigor of studies in this critical area of environmental and agricultural research. With its user-friendly interface and robust backend calculations, this tool represents a significant advancement in the methodology for planning and executing CH4 emission studies in dairy cows, aligning with global efforts toward sustainable agricultural practices and environmental conservation.

4.
Pharm Stat ; 23(1): 46-59, 2024.
Article in English | MEDLINE | ID: mdl-38267827

ABSTRACT

Count outcomes are collected in clinical trials for new drug development in several therapeutic areas, and the event rate is commonly used as a single primary endpoint. Count outcomes whose variance is greater than their mean are termed overdispersed; thus, count outcomes are assumed to follow a negative binomial distribution. However, in clinical trials for treating asthma and chronic obstructive pulmonary disease (COPD), a regulatory agency has suggested that a continuous endpoint related to lung function must be evaluated as a primary endpoint in addition to the event rate. The two co-primary endpoints to be evaluated thus comprise an overdispersed count outcome and a continuous outcome. Some researchers have proposed sample size calculation methods in the context of co-primary endpoints for various outcome types. However, methodologies for sample size calculation in trials with two co-primary endpoints comprising overdispersed count and continuous outcomes, required when planning clinical trials for treating asthma and COPD, remain to be proposed. In this study, we aimed to develop a hypothesis-testing method and a corresponding sample size calculation method for two co-primary endpoints comprising overdispersed count and continuous outcomes. In a simulation, we demonstrated that the proposed sample size calculation method has adequate power accuracy. In addition, we illustrated an application of the proposed sample size calculation method to a placebo-controlled Phase 3 trial for patients with COPD.


Subjects
Asthma, Chronic Obstructive Pulmonary Disease, Humans, Sample Size, Asthma/drug therapy, Chronic Obstructive Pulmonary Disease/diagnosis, Chronic Obstructive Pulmonary Disease/drug therapy, Binomial Distribution, Computer Simulation
5.
Pharm Stat ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509020

ABSTRACT

In randomised controlled trials, the outcome of interest could be recurrent events, such as hospitalisations for heart failure. If mortality rates are non-negligible, both recurrent events and competing terminal events need to be addressed when formulating the estimand, and statistical analysis is no longer trivial. In order to design future trials with primary recurrent event endpoints subject to competing risks, it is necessary to be able to perform power calculations to determine sample sizes. This paper introduces a simulation-based approach for power estimation based on a proportional means model for recurrent events and a proportional hazards model for terminal events. The simulation procedure is presented along with a discussion of what the user needs to specify to use the approach. The method is flexible and based on marginal quantities that are easy to specify. However, the method omits a certain type of dependence; this is explored in a sensitivity analysis, which suggests that the power is robust despite this. Data from a randomised controlled trial, LEADER, are used as the basis for generating data for a future trial. Finally, potential power gains of recurrent event methods over first-event methods are discussed.
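The simulation idea described here can be illustrated with a deliberately simplified sketch: exponential terminal events, Poisson recurrent events while at risk, and a per-arm rate-ratio test. This is not the paper's proportional means/proportional hazards machinery, and every numeric input below is hypothetical:

```python
import math
import random
from statistics import NormalDist

def simulated_power(n_per_arm, event_rate, rate_ratio, death_rate,
                    death_hr, followup, alpha=0.05, n_sim=500, seed=7):
    """Estimate power to detect a recurrent-event rate ratio when an
    exponential terminal event truncates follow-up (crude sketch)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sim):
        counts, exposure = [0, 0], [0.0, 0.0]
        arms = [(event_rate, death_rate),
                (event_rate * rate_ratio, death_rate * death_hr)]
        for arm, (lam, mu) in enumerate(arms):
            for _ in range(n_per_arm):
                at_risk = min(rng.expovariate(mu), followup)
                exposure[arm] += at_risk
                t = rng.expovariate(lam)      # recurrent-event process
                while t <= at_risk:
                    counts[arm] += 1
                    t += rng.expovariate(lam)
        if min(counts) == 0:
            continue                          # cannot form the test
        log_rr = math.log((counts[1] / exposure[1]) /
                          (counts[0] / exposure[0]))
        se = math.sqrt(1 / counts[0] + 1 / counts[1])
        if abs(log_rr) / se > z_crit:
            hits += 1
    return hits / n_sim
```

With, say, 100 subjects per arm, a control rate of one event per year over two years of follow-up, and a rate ratio of 0.6, the estimated power is high, while under the null (rate ratio 1) the rejection rate stays near the nominal level.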

6.
Biom J ; 66(5): e202300167, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988194

ABSTRACT

In the individual stepped-wedge randomized trial (ISW-RT), subjects are allocated to sequences, each sequence being defined by a control period followed by an experimental period. The total follow-up time is the same for all sequences, but the duration of the control and experimental periods varies among sequences. To our knowledge, unlike for stepped-wedge cluster randomized trials (SW-CRTs), there is no validated sample size calculation formula for ISW-RTs. The objective of this study was to adapt the formula used for SW-CRTs to the case of individual randomization and to validate this adaptation using a Monte Carlo simulation study. The proposed sample size calculation formula for an ISW-RT design yielded satisfactory empirical power for most scenarios, except those with operating characteristic values near the boundary (i.e., the smallest possible number of periods, or a very high or very low autocorrelation coefficient). Overall, the results provide useful insights into sample size calculation for ISW-RTs.


Subjects
Monte Carlo Method, Randomized Controlled Trials as Topic, Sample Size, Humans, Biometry/methods
7.
Eur J Neurosci ; 58(1): 2339-2360, 2023 07.
Article in English | MEDLINE | ID: mdl-37143185

ABSTRACT

The main reasons for the low reliability of results from preclinical studies are the lack of prior sample size calculations and poor experimental design. Here, we demonstrate how the tools of meta-analysis can be implemented to tackle these issues. We conducted a systematic search to identify controlled studies testing established migraine treatments in the electrophysiological model of trigeminovascular nociception (EMTVN). Drug effects on the two outcomes, dural stimulation-evoked responses and ongoing neuronal activity, were analysed separately using a three-level model with robust variance estimation. According to the meta-analysis, which included 21 experiments in rats reported in 13 studies, these drugs significantly reduced trigeminovascular nociceptive traffic, affecting both outcomes. Based on the estimated effect sizes and outcome variance, we provide guidance on sample sizes that allow such effects to be detected with sufficient power in future experiments. Considering the methodological features that potentially influence the results and the main sources of statistical bias in the included studies, we discuss the translational potential of the EMTVN and the steps needed to improve it. We believe that the presented approach can be used for design optimization in research with other animal models and as such deserves further validation.


Subjects
Migraine Disorders, Nociception, Rats, Animals, Nociception/physiology, Reproducibility of Results, Neurons/physiology, Migraine Disorders/drug therapy
8.
Biometrics ; 79(4): 3701-3714, 2023 12.
Article in English | MEDLINE | ID: mdl-37612246

ABSTRACT

The restricted mean time in favor (RMT-IF) of treatment has just been added to the analytic toolbox for composite endpoints of recurrent events and death. To help practitioners design new trials based on this method, we develop tools to calculate the sample size and power. Specifically, we formulate the outcomes as a multistate Markov process with a sequence of transient states for recurrent events and an absorbing state for death. The transition intensities, in this case the instantaneous risks of another nonfatal event or death, are assumed to be time-homogeneous but nonetheless allowed to depend on the number of past events. Using the properties of Coxian distributions, we derive the RMT-IF effect size under the alternative hypothesis as a function of the treatment-to-control intensity ratios along with the baseline intensities, the latter of which can be easily estimated from historical data. We also reduce the variance of the nonparametric RMT-IF estimator to calculable terms under a standard set-up for censoring. Simulation studies show that the resulting formulas provide accurate approximation to the sample size and power in realistic settings. For illustration, a past cardiovascular trial with recurrent-hospitalization and mortality outcomes is analyzed to generate the parameters needed to design a future trial. The procedures are incorporated into the rmt package along with the original methodology on the Comprehensive R Archive Network (CRAN).


Subjects
Hospitalization, Research Design, Humans, Sample Size, Computer Simulation, Time Factors
9.
Stat Med ; 42(16): 2777-2796, 2023 07 20.
Article in English | MEDLINE | ID: mdl-37094566

ABSTRACT

Micro-randomized trials (MRTs) are a novel experimental design for developing mobile health interventions. Participants are repeatedly randomized in an MRT, resulting in longitudinal data with time-varying treatments. Causal excursion effects are the main quantities of interest in MRT primary and secondary analyses. We consider MRTs where the proximal outcome is binary and the randomization probability is constant or time-varying but not data-dependent. We develop a sample size formula for detecting a nonzero marginal excursion effect. We prove that the formula guarantees power under a set of working assumptions. We demonstrate via simulation that violations of certain working assumptions do not affect the power, and for those that do, we point out the direction in which the power changes. We then propose practical guidelines for using the sample size formula. As an illustration, the formula is used to size an MRT on interventions for excessive drinking. The sample size calculator is implemented in R package MRTSampleSizeBinary and an interactive R Shiny app. This work can be used in trial planning for a wide range of MRTs with binary proximal outcomes.


Subjects
Research Design, Humans, Sample Size, Randomized Controlled Trials as Topic, Computer Simulation
10.
Stat Med ; 42(10): 1480-1491, 2023 05 10.
Article in English | MEDLINE | ID: mdl-36808736

ABSTRACT

A multi-arm trial allows simultaneous comparison of multiple experimental treatments with a common control and provides a substantial efficiency advantage compared to the traditional randomized controlled trial. Many novel multi-arm multi-stage (MAMS) clinical trial designs have been proposed. However, a major hurdle to adopting the group sequential MAMS routinely is the computational effort of obtaining total sample size and sequential stopping boundaries. In this paper, we develop a group sequential MAMS trial design based on the sequential conditional probability ratio test. The proposed method provides analytical solutions for futility and efficacy boundaries to an arbitrary number of stages and arms. Thus, it avoids complicated computational effort for the methods proposed by Magirr et al. Simulation results showed that the proposed method has several advantages compared to the methods implemented in R package MAMS by Magirr et al.


Subjects
Research Design, Humans, Patient Selection, Sample Size, Computer Simulation
11.
Stat Appl Genet Mol Biol ; 21(1)2022 01 01.
Article in English | MEDLINE | ID: mdl-36215429

ABSTRACT

Batch effect Reduction of mIcroarray data with Dependent samples usinG Empirical Bayes (BRIDGE) is a recently developed statistical method to address the issue of batch effect correction in batch-confounded microarray studies with dependent samples. The key component of the BRIDGE methodology is the use of samples run as technical replicates in two or more batches, "bridging samples", to inform batch effect correction/attenuation. While previously published results indicate a relationship between the number of bridging samples, M, and the statistical power of downstream statistical testing on the batch-corrected data, there is as yet no formal statistical framework or user-friendly software for estimating M to achieve a specific statistical power for hypothesis tests conducted on the batch-corrected data. To fill this gap, we developed pwrBRIDGE, a simulation-based approach to estimate the bridging sample size, M, in batch-confounded longitudinal microarray studies. To illustrate the use of pwrBRIDGE, we consider a hypothetical, longitudinal batch-confounded study whose goal is to identify Alzheimer's disease (AD) progression-associated genes from amnestic mild cognitive impairment (aMCI) to AD in human blood after a 5-year follow-up. pwrBRIDGE helps researchers design and plan batch-confounded microarray studies with dependent samples to avoid over- or under-powered studies.


Subjects
Software, Bayes Theorem, Humans, Longitudinal Studies, Microarray Analysis, Sample Size
12.
BMC Med Res Methodol ; 23(1): 90, 2023 04 11.
Article in English | MEDLINE | ID: mdl-37041459

ABSTRACT

Indirect standardization, and its associated parameter the standardized incidence ratio, is a commonly used tool in hospital profiling for comparing the incidence of negative outcomes between an index hospital and a larger population of reference hospitals, while adjusting for confounding covariates. In statistical inference for the standardized incidence ratio, traditional methods often assume the covariate distribution of the index hospital to be known. This assumption severely compromises one's ability to compute required sample sizes for high-powered indirect standardization, as in contexts where sample size calculation is desired, there are usually no means of knowing this distribution. This paper presents novel statistical methodology to perform sample size calculation for the standardized incidence ratio without knowing the covariate distribution of the index hospital and without collecting information from the index hospital to estimate this covariate distribution. We apply our methods in simulation studies and to real hospitals, assessing their capabilities both in isolation and in comparison with the traditional assumptions of indirect standardization.


Subjects
Hospitals, Humans, Sample Size, Computer Simulation, Reference Standards
13.
BMC Med Res Methodol ; 23(1): 274, 2023 11 21.
Article in English | MEDLINE | ID: mdl-37990159

ABSTRACT

BACKGROUND: For certain conditions, treatments aim to lessen deterioration over time. A trial outcome could be change in a continuous measure, analysed using a random slopes model with a different slope in each treatment group. A sample size for a trial with a particular schedule of visits (e.g. annually for three years) can be obtained using a two-stage process. First, relevant (co-) variances are estimated from a pre-existing dataset e.g. an observational study conducted in a similar setting. Second, standard formulae are used to calculate sample size. However, the random slopes model assumes linear trajectories with any difference in group means increasing proportionally to follow-up time. The impact of these assumptions failing is unclear. METHODS: We used simulation to assess the impact of a non-linear trajectory and/or non-proportional treatment effect on the proposed trial's power. We used four trajectories, both linear and non-linear, and simulated observational studies to calculate sample sizes. Trials of this size were then simulated, with treatment effects proportional or non-proportional to time. RESULTS: For a proportional treatment effect and a trial visit schedule matching the observational study, powers are close to nominal even for non-linear trajectories. However, if the schedule does not match the observational study, powers can be above or below nominal levels, with the extent of this depending on parameters such as the residual error variance. For a non-proportional treatment effect, using a random slopes model can lead to powers far from nominal levels. CONCLUSIONS: If trajectories are suspected to be non-linear, observational data used to inform power calculations should have the same visit schedule as the proposed trial where possible. Additionally, if the treatment effect is expected to be non-proportional, the random slopes model should not be used. 
A model allowing trajectories to vary freely over time could be used instead, either as a second line analysis method (bearing in mind that power will be lost) or when powering the trial.


Subjects
Sample Size, Humans, Computer Simulation
14.
Clin Trials ; 20(5): 473-478, 2023 10.
Article in English | MEDLINE | ID: mdl-37144615

ABSTRACT

BACKGROUND: The sample size calculation is an important step in designing randomised controlled trials. For a trial comparing a control and an intervention group, where the outcome is binary, the sample size calculation requires choosing values for the anticipated event rates in both the control and intervention groups (the effect size), and the error rates. The Difference ELicitation in TriAls guidance recommends that the effect size should be both realistic, and clinically important to stakeholder groups. Overestimating the effect size leads to sample sizes that are too small to reliably detect the true population effect size, which in turn results in low achieved power. In this study, we use the Delphi approach to gain consensus on what the minimum clinically important effect size is for Balanced-2, a randomised controlled trial comparing processed electroencephalogram-guided 'light' to 'deep' general anaesthesia on the incidence of postoperative delirium in older adults undergoing major surgery. METHODS: Delphi rounds were conducted using electronic surveys. Surveys were administered to two stakeholder groups: specialist anaesthetists from a general adult department in Auckland City Hospital, New Zealand (Group 1), and specialist anaesthetists with expertise in clinical research, identified from the Australian and New Zealand College of Anaesthetist's Clinical Trials Network (Group 2). A total of 187 anaesthetists were invited to participate (81 from Group 1 and 106 from Group 2). Results from each Delphi round were summarised and presented in subsequent rounds until consensus was reached (>70% agreement). RESULTS: The overall response rate for the first Delphi survey was 47% (88/187). The median minimum clinically important effect size was 5.0% (interquartile range: 5.0-10.0) for both stakeholder groups. The overall response rate for the second Delphi survey was 51% (95/187). 
Consensus was reached after the second round, as 74% of respondents in Group 1 and 82% of respondents in Group 2 agreed with the median effect size. The combined minimum clinically important effect size across both groups was 5.0% (interquartile range: 3.0-6.5). CONCLUSIONS: This study demonstrates that surveying stakeholder groups using a Delphi process is a simple way of defining a minimum clinically important effect size, which aids the sample size calculation and determines whether a randomised study is feasible.


Subjects
Delphi Technique, Humans, Aged, Australia, Sample Size, Surveys and Questionnaires, Consensus, Randomized Controlled Trials as Topic
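To see how an elicited minimum clinically important difference of 5 percentage points feeds into a sample size, one can apply the standard pooled two-proportion formula. The 20% baseline incidence below is a hypothetical placeholder, not a figure from the Balanced-2 trial:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions
    (pooled normal approximation)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.20, 0.15))  # → 906
```

The small absolute difference drives the large per-group requirement, which is exactly why overestimating the effect size leads to underpowered trials.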
15.
Pharm Stat ; 22(4): 707-720, 2023.
Article in English | MEDLINE | ID: mdl-37114714

ABSTRACT

Conditional (European Medicines Agency) or accelerated (U.S. Food and Drug Administration) approval of drugs allows earlier access to promising new treatments that address unmet medical needs. Certain post-marketing requirements must typically be met in order to obtain full approval, such as conducting a new post-market clinical trial. We study the applicability of the recently developed harmonic mean χ²-test to this conditional or accelerated approval framework. The proposed approach can be used both to support the design of the post-market trial and the analysis of the combined evidence provided by both trials. Other methods considered are the two-trials rule, Fisher's criterion and Stouffer's method. In contrast to some of the traditional methods, the harmonic mean χ²-test always requires a post-market clinical trial. If the p-value from the pre-market clinical trial is ≪ 0.025, a smaller sample size for the post-market clinical trial is needed than with the two-trials rule. For illustration, we apply the harmonic mean χ²-test to a drug which received conditional (and later full) market licensing by the EMA. A simulation study is conducted to study the operating characteristics of the harmonic mean χ²-test and two-trials rule in more detail. We finally investigate the applicability of these two methods to compute the power at interim of an ongoing post-market trial. These results are expected to aid in the design and assessment of the required post-market studies in terms of the level of evidence required for full approval.


Subjects
Drug Approval, Humans, Sample Size, United States, Clinical Trials as Topic
16.
Biom J ; 65(7): e2000326, 2023 10.
Article in English | MEDLINE | ID: mdl-37309256

ABSTRACT

The increasing interest in subpopulation analysis has led to the development of various new trial designs and analysis methods in the fields of personalized medicine and targeted therapies. In this paper, subpopulations are defined in terms of an accumulation of disjoint population subsets and will therefore be called composite populations. The proposed trial design is applicable to any set of composite populations, considering normally distributed endpoints and random baseline covariates. Treatment effects for composite populations are tested by combining p-values, calculated on the subset levels, using the inverse normal combination function to generate test statistics for those composite populations, while the closed testing procedure accounts for multiple testing. Critical boundaries for intersection hypothesis tests are derived using multivariate normal distributions, reflecting the joint distribution of composite population test statistics given no treatment effect exists. For sample size calculation and sample size recalculation, multivariate normal distributions are derived which describe the joint distribution of composite population test statistics under an assumed alternative hypothesis. Simulations demonstrate the absence of any practically relevant inflation of the type I error rate. The target power after sample size recalculation is typically met or close to being met.


Subjects
Research Design, Sample Size, Clinical Trials as Topic
17.
Biom J ; 65(8): e2300123, 2023 12.
Article in English | MEDLINE | ID: mdl-37377083

ABSTRACT

The formula of Fleiss and Cuzick (1979) for estimating the intraclass correlation coefficient is applied to simplify sample size calculation for clustered data with a binary outcome. It is demonstrated that this approach reduces the complexity of the sample size calculation to the determination of the null and alternative hypotheses and the formulation of the quantitative influence of belonging to the same cluster on the probability of therapy success.


Subjects
Research Design, Sample Size, Probability, Cluster Analysis
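The role of the intraclass correlation in such calculations can be made concrete with the familiar design-effect inflation for clustered binary data, 1 + (m − 1)ρ, where m is the cluster size and ρ the intraclass correlation. The sketch below uses this textbook adjustment with hypothetical inputs; it is not the Fleiss–Cuzick estimator itself:

```python
from math import ceil
from statistics import NormalDist

def clustered_n_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Two-proportion sample size inflated by the design effect
    1 + (m - 1) * icc for clusters of average size m."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    n_ind = ((z_a + z_b) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)
    deff = 1 + (m - 1) * icc       # design effect
    return ceil(n_ind * deff)      # subjects per arm

print(clustered_n_per_arm(0.3, 0.2, 10, 0.05))  # → 422
```

Even a modest within-cluster correlation (ρ = 0.05) inflates the requirement by 45% for clusters of size 10, which is why the quantitative influence of cluster membership must be specified up front.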
18.
Int Ophthalmol ; 43(8): 2999-3010, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36917324

ABSTRACT

PURPOSE: Randomized Controlled Trials (RCTs) are considered the gold standard for the practice of evidence-based medicine. The purpose of this study is to systematically assess the reporting of sample size calculations in ophthalmology RCTs in 5 leading journals over a 20-year period. Reviewing sample size calculations in ophthalmology RCTs will shed light on the methodological quality of RCTs and, by extension, on the validity of published results. METHODS: The MEDLINE database was searched to identify full reports of RCTs in the journals Ophthalmology, JAMA Ophthalmology, American Journal of Ophthalmology, Investigative Ophthalmology and Visual Science, and British Journal of Ophthalmology between January and December of the years 2000, 2010 and 2020. Screening identified 559 articles out of which 289 met the inclusion criteria for this systematic review. Data regarding sample size calculation reporting and trial characteristics was extracted for each trial by independent investigators. RESULTS: In 2020, 77.9% of the RCTs reported sample size calculations as compared with 37% in 2000 (p < 0.001) and 60.7% in 2010 (p = 0.012). Studies reporting all necessary parameters for sample size recalculation increased significantly from 17.2% in 2000 to 39.3% in 2010 and 43.0% in 2020 (p < 0.001). Reporting of funding was greater in 2020 (98.8%) compared with 2010 (89.3%) and 2000 (53.1%). Registration in a clinical trials database occurred more frequently in 2020 (94.2%) compared to 2000 (1.2%; p < 0.001) and 2010 (68%; p < 0.001). In 2020, 38.4% of studies reported different sample sizes in the online registry from the published article. Overall, the most studied area in 2000 was glaucoma (29.6% of RCTs), whereas in 2010 and 2020, it was retina (40.2 and 37.2% of the RCTs, respectively). The number of patients enrolled in a study and the number of eyes studied was significantly greater in 2020 compared to 2000 and 2010 (p < 0.001). 
CONCLUSION: Sample size calculation reporting in ophthalmology RCTs has improved significantly between the years 2000 and 2020 and is comparable to other fields in medicine. However, reporting of certain parameters remains inconsistent with current publication guidelines.


Subjects
Ophthalmology, Humans, Sample Size, Randomized Controlled Trials as Topic, Evidence-Based Medicine
19.
Stat Med ; 41(18): 3627-3641, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35596691

ABSTRACT

Stepped wedge designs are an increasingly popular variant of longitudinal cluster randomized trial designs, and roll out interventions across clusters in a randomized, but step-wise fashion. In the standard stepped wedge design, assumptions regarding the effect of time on outcomes may require that all clusters start and end trial participation at the same time. This would require ethics approvals and data collection procedures to be in place in all clusters before a stepped wedge trial can start in any cluster. Hence, although stepped wedge designs are useful for testing the impacts of many cluster-based interventions on outcomes, there can be lengthy delays before a trial can commence. In this article, we introduce "batched" stepped wedge designs. Batched stepped wedge designs allow clusters to commence the study in batches, instead of all at once, allowing for staggered cluster recruitment. Like the stepped wedge, the batched stepped wedge rolls out the intervention to all clusters in a randomized and step-wise fashion: a series of self-contained stepped wedge designs. Provided that separate period effects are included for each batch, software for standard stepped wedge sample size calculations can be used. With this time parameterization, in many situations including when linear models are assumed, sample size calculations reduce to the setting of a single stepped wedge design with multiple clusters per sequence. In these situations, sample size calculations will not depend on the delays between the commencement of batches. Hence, the power of batched stepped wedge designs is robust to unexpected delays between batches.


Subjects
Research Design, Cluster Analysis, Humans, Linear Models, Sample Size
20.
Stat Med ; 41(20): 4022-4033, 2022 09 10.
Article in English | MEDLINE | ID: mdl-35688463

ABSTRACT

Selection trials are used to compare potentially active experimental treatments without a control arm. While sample size calculation methods exist for binary endpoints, no such methods are available for time-to-event endpoints, even though these are ubiquitous in clinical trials. Recent selection trials have begun using progression-free survival as their primary endpoint, but have dichotomized it at a specific time point for sample size calculation and analysis. This changes the clinical question and may reduce power to detect a difference between the arms. In this article, we develop the theory for sample size calculation in selection trials where the time-to-event endpoint is assumed to follow an exponential or Weibull distribution. We provide a free web application for sample size calculation, as well as an R package, that researchers can use in the design of their studies.


Subjects
Research Design, Humans, Patient Selection, Randomized Controlled Trials as Topic, Sample Size
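The core quantity in a selection design, the probability of correctly picking the best arm, can be approximated by simulation even without closed-form theory. The sketch below assumes exponential event times with no censoring, which is far simpler than the paper's setting; the medians and sample size are hypothetical:

```python
import math
import random

def prob_correct_selection(n_per_arm, medians, n_sim=2000, seed=3):
    """Probability that the arm with the longest true median survival
    also shows the largest observed mean (exponential, no censoring)."""
    rng = random.Random(seed)
    rates = [math.log(2) / med for med in medians]  # exponential rates
    best = max(range(len(medians)), key=lambda i: medians[i])
    correct = 0
    for _ in range(n_sim):
        obs_means = [sum(rng.expovariate(r) for _ in range(n_per_arm))
                     / n_per_arm for r in rates]
        if max(range(len(obs_means)), key=lambda i: obs_means[i]) == best:
            correct += 1
    return correct / n_sim
```

In practice one would increase n_per_arm until the estimated probability of correct selection clears a design target such as 0.90.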