Results 1 - 20 of 37
1.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. The test inversion procedure is computationally expensive, limiting the use of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed-form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu to obtain a closed-form expression for the randomization-based covariate-adjusted confidence interval and give practitioners a sufficient condition, checkable from observed data, that guarantees these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as the Fisher-exact P-value itself, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
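As a toy illustration of the Fisher randomization test described in this abstract, the difference-in-means statistic can be enumerated over all treatment assignments of the same size. The data below are hypothetical, not from the article, and this is a generic sketch rather than the authors' implementation:

```python
import itertools
import numpy as np

def fisher_randomization_pvalue(y, treat):
    """Exact Fisher randomization p-value for the sharp null of no effect,
    using the difference-in-means test statistic and enumerating every
    assignment with the same number of treated units."""
    y = np.asarray(y, dtype=float)
    t = np.asarray(treat, dtype=bool)
    observed = y[t].mean() - y[~t].mean()
    n, n_t = len(y), int(t.sum())
    hits = total = 0
    for combo in itertools.combinations(range(n), n_t):
        mask = np.zeros(n, dtype=bool)
        mask[list(combo)] = True
        total += 1
        if abs(y[mask].mean() - y[~mask].mean()) >= abs(observed) - 1e-12:
            hits += 1
    return hits / total

# Hypothetical toy outcomes -- not data from the article
y = [1.2, 3.4, 2.8, 0.5, 4.1, 3.9]
treat = [0, 1, 1, 0, 1, 0]
p = fisher_randomization_pvalue(y, treat)
```

With 6 units and 3 treated there are only 20 possible assignments, so the p-value is an exact multiple of 1/20; the closed-form results discussed in the abstract avoid repeating such enumerations across candidate effect sizes.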


Subjects
Computer Simulation , Confidence Intervals , Humans , Biometry/methods , Statistical Models , Statistical Data Interpretation , Random Allocation , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods
2.
Ann Bot ; 131(4): 555-568, 2023 04 28.
Article in English | MEDLINE | ID: mdl-36794962

ABSTRACT

BACKGROUND: Relative growth rate (RGR) has a long history of use in biology. In its logged form, RGR = ln[(M + ΔM)/M], where M is size of the organism at the commencement of the study, and ΔM is new growth over time interval Δt. It illustrates the general problem of comparing non-independent (confounded) variables, e.g. (X + Y) vs. X. Thus, RGR depends on what starting M(X) is used even within the same growth phase. Equally, RGR lacks independence from its derived components, net assimilation rate (NAR) and leaf mass ratio (LMR), as RGR = NAR × LMR, so that they cannot legitimately be compared by standard regression or correlation analysis. FINDINGS: The mathematical properties of RGR exemplify the general problem of 'spurious' correlations that compare expressions derived from various combinations of the same component terms X and Y. This is particularly acute when X >> Y, the variance of X or Y is large, or there is little range overlap of X and Y values among datasets being compared. Relationships (direction, curvilinearity) between such confounded variables are essentially predetermined and so should not be reported as if they are a finding of the study. Standardizing by M rather than time does not solve the problem. We propose the inherent growth rate (IGR), lnΔM/lnM, as a simple, robust alternative to RGR that is independent of M within the same growth phase. CONCLUSIONS: Although the preferred alternative is to avoid the practice altogether, we discuss cases where comparing expressions with components in common may still have utility. These may provide insights if (1) the regression slope between pairs yields a new variable of biological interest, (2) the statistical significance of the relationship remains supported using suitable methods, such as our specially devised randomization test, or (3) multiple datasets are compared and found to be statistically different. 
Distinguishing true biological relationships from spurious ones, which arise from comparing non-independent expressions, is essential when dealing with derived variables associated with plant growth analyses.
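The two growth-rate expressions defined above can be compared numerically. The sizes below are hypothetical, and the power-law exponent k = 0.9 is an illustrative assumption (under new growth of the form dM = M^k, IGR recovers k for any starting size, while RGR does not):

```python
import math

def rgr(m_start, m_end):
    """Relative growth rate in its logged form: RGR = ln((M + dM)/M)."""
    return math.log(m_end / m_start)

def igr(m_start, m_end):
    """Inherent growth rate proposed in the article: IGR = ln(dM) / ln(M)."""
    return math.log(m_end - m_start) / math.log(m_start)

# Hypothetical power-law growth dM = M**k for two starting sizes
k = 0.9
small, large = 10.0, 1000.0
igr_small = igr(small, small + small**k)
igr_large = igr(large, large + large**k)
rgr_small = rgr(small, small + small**k)
rgr_large = rgr(large, large + large**k)
```

Here IGR equals 0.9 for both starting sizes, whereas RGR differs between them, illustrating the dependence of RGR on the starting M that the article criticizes.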


Subjects
Plant Development , Plant Leaves
3.
Biom J ; 65(7): e2200082, 2023 10.
Article in English | MEDLINE | ID: mdl-37199702

ABSTRACT

We propose a method to construct simultaneous confidence intervals for a parameter vector by inverting a series of randomization tests (RTs). The randomization tests are facilitated by an efficient multivariate Robbins-Monro procedure that takes the correlation information of all components into account. The estimation method does not require any distributional assumption about the population other than the existence of second moments. The resulting simultaneous confidence intervals are not necessarily symmetric about the point estimate of the parameter vector but possess the property of equal tails in all dimensions. In particular, we present the construction for the mean vector of a single population and for the difference between the mean vectors of two populations. An extensive simulation study provides numerical comparisons with four alternative methods. We illustrate the application of the proposed method by testing bioequivalence with multiple endpoints on real data.


Subjects
Therapeutic Equivalency , Confidence Intervals , Random Allocation , Computer Simulation
4.
Behav Res Methods ; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38082114

ABSTRACT

Single-case experimental design (SCED) data can be analyzed following different approaches. One of the first historically proposed options is randomization tests, benefiting from the inclusion of randomization in the design: a desirable methodological feature. Randomization tests have become more feasible with the availability of computational resources, and such tests have been proposed for all major types of SCEDs: multiple-baseline, reversal/withdrawal, alternating treatments, and changing criterion designs. The focus of the current text is on the last of these, given that they have not been the subject of any previous simulation study. Specifically, we estimate type I error rates and statistical power for two different randomization procedures applicable to changing criterion designs: phase change moment randomization and blocked alternating criterion randomization. We include different series lengths, numbers of phases, levels of autocorrelation, and degrees of random variability. The results suggest that type I error rates are generally controlled and that sufficient power can be achieved with as few as 28-30 measurements for independent data, although more measurements are needed in the case of positive autocorrelation. The presence of a reversal to a previous criterion level is beneficial. R code is provided for carrying out randomization tests following the two randomization procedures.

5.
Stat Med ; 41(10): 1862-1883, 2022 05 10.
Article in English | MEDLINE | ID: mdl-35146788

ABSTRACT

A practical limitation of cluster randomized controlled trials (cRCTs) is that the number of available clusters may be small, resulting in an increased risk of baseline imbalance under simple randomization. Constrained randomization overcomes this issue by restricting the allocation to a subset of randomization schemes where sufficient overall covariate balance across comparison arms is achieved. However, for multi-arm cRCTs, several design and analysis issues pertaining to constrained randomization have not been fully investigated. Motivated by an ongoing multi-arm cRCT, we elaborate the method of constrained randomization and provide a comprehensive evaluation of the statistical properties of model-based and randomization-based tests under both simple and constrained randomization designs in multi-arm cRCTs, with varying combinations of design and analysis-based covariate adjustment strategies. In particular, as randomization-based tests have not been extensively studied in multi-arm cRCTs, we additionally develop most-powerful randomization tests under the linear mixed model framework for our comparisons. Our results indicate that under constrained randomization, both model-based and randomization-based analyses could gain power while preserving nominal type I error rate, given proper analysis-based adjustment for the baseline covariates. Randomization-based analyses, however, are more robust against violations of distributional assumptions. The choice of balance metrics and candidate set sizes and their implications on the testing of the pairwise and global hypotheses are also discussed. Finally, we caution against the design and analysis of multi-arm cRCTs with an extremely small number of clusters, due to insufficient degrees of freedom and the tendency to obtain an overly restricted randomization space.


Subjects
Research Design , Cluster Analysis , Humans , Random Allocation , Randomized Controlled Trials as Topic
6.
J Biopharm Stat ; 32(3): 441-449, 2022 05 04.
Article in English | MEDLINE | ID: mdl-35666618

ABSTRACT

Randomization-based inference is a useful alternative to traditional population model-based methods. In trials with missing data, multiple imputation is often used. We describe how to construct a randomization test in clinical trials where multiple imputation is used for handling missing data. We illustrate the proposed methodology using Fisher's combining function applied to individual scores in two post-traumatic stress disorder trials.
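Fisher's combining function mentioned here is, in its generic form, T = -2 Σ ln p_i, which is chi-square distributed with 2k degrees of freedom under the null for k independent p-values. The sketch below implements only that generic form (the article's application to individual scores across multiply imputed datasets may differ), exploiting the closed-form chi-square tail for even degrees of freedom:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combining function: T = -2 * sum(ln p_i) is chi-square with
    2k degrees of freedom under the null for k independent p-values.
    Returns the combined p-value."""
    k = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    # The chi-square survival function has a closed form for even df = 2k:
    # P(X > t) = exp(-t/2) * sum_{j=0}^{k-1} (t/2)^j / j!
    half = t / 2.0
    return math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))
```

For a single p-value the function returns it unchanged, and several small p-values combine into a much smaller one, which is the behavior exploited when pooling evidence across imputations.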


Subjects
Statistical Data Interpretation , Humans , Random Allocation
7.
Behav Res Methods ; 54(4): 1701-1714, 2022 08.
Article in English | MEDLINE | ID: mdl-34608614

ABSTRACT

Researchers conducting small-scale cluster randomized controlled trials (RCTs) during the pilot testing of an intervention often look for evidence of promise to justify an efficacy trial. We developed a method to test for intervention effects that is adaptive (i.e., responsive to data exploration), requires few assumptions, and is statistically valid (i.e., controls the type I error rate), by adapting masked visual analysis techniques to cluster RCTs. We illustrate the creation of masked graphs and their analysis using data from a pilot study in which 15 high school programs were randomly assigned to either business as usual or an intervention developed to promote psychological and academic well-being in 9th grade students in accelerated coursework. We conclude that in small-scale cluster RCTs there can be benefits of testing for effects without a priori specification of a statistical model or test statistic.


Subjects
Statistical Models , Research Design , Cluster Analysis , Humans , Randomized Controlled Trials as Topic
8.
Stat Med ; 39(20): 2655-2670, 2020 09 10.
Article in English | MEDLINE | ID: mdl-32432805

ABSTRACT

Between-group comparison based on the restricted mean survival time (RMST) is gaining attention as an alternative to the conventional logrank/hazard ratio approach for time-to-event outcomes in randomized controlled trials (RCTs). The validity of the commonly used nonparametric inference procedure for RMST is well supported by large-sample theory. However, we sometimes encounter cases with a small sample size in practice, where we cannot rely on large-sample properties. Generally, the permutation approach can be useful for handling these situations in RCTs. However, a numerical issue arises when implementing permutation tests for the difference or ratio of RMST between two groups. In this article, we discuss this numerical issue and consider six permutation methods for comparing survival time distributions between two groups using RMST in the RCT setting. We conducted extensive numerical studies and assessed the type I error rates of these methods. Our numerical studies demonstrated that the inflation of the type I error rate of the asymptotic methods is not negligible when the sample size is small, and that all six permutation methods are workable solutions. Although some permutation methods became slightly conservative, no remarkable inflation of the type I error rates was observed. We recommend using permutation tests instead of asymptotic tests, especially when the sample size is less than 50 per arm.
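A minimal sketch of an RMST permutation test, under the simplifying assumption of complete (uncensored) data so that RMST reduces to the sample mean of min(T, tau); real analyses integrate the Kaplan-Meier curve, and the survival times below are hypothetical:

```python
import numpy as np

def rmst(times, tau):
    """Restricted mean survival time up to tau. With no censoring this is
    the sample mean of min(T, tau); with censoring one would integrate the
    Kaplan-Meier curve instead."""
    return float(np.minimum(np.asarray(times, dtype=float), tau).mean())

def rmst_permutation_pvalue(t1, t2, tau, n_perm=2000, seed=0):
    """Two-sided Monte Carlo permutation p-value for the difference in RMST
    between two arms, obtained by shuffling the pooled group labels."""
    rng = np.random.default_rng(seed)
    observed = rmst(t1, tau) - rmst(t2, tau)
    pooled = np.concatenate([np.asarray(t1, float), np.asarray(t2, float)])
    n1 = len(t1)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = rmst(pooled[:n1], tau) - rmst(pooled[n1:], tau)
        if abs(stat) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# Hypothetical, clearly separated survival times in two small arms
t1 = np.linspace(1, 5, 15)
t2 = np.linspace(6, 10, 15)
p = rmst_permutation_pvalue(t1, t2, tau=8.0)
```

The add-one correction in the returned p-value is one simple way to avoid reporting an exact zero from a Monte Carlo reference distribution.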


Subjects
Survival Rate , Humans , Proportional Hazards Models , Randomized Controlled Trials as Topic , Sample Size
9.
Stat Med ; 39(21): 2843-2854, 2020 Sep 20.
Article in English | MEDLINE | ID: mdl-32491198

ABSTRACT

Randomization-based interval estimation takes into account the particular randomization procedure in the analysis and preserves the confidence level even in the presence of heterogeneity. It is distinguished from population-based confidence intervals with respect to three aspects: definition, computation, and interpretation. The article contributes to the discussion of how to construct a confidence interval for a treatment difference from randomization tests when analyzing data from randomized clinical trials. The discussion covers (i) the definition of a confidence interval for a treatment difference in randomization-based inference, (ii) computational algorithms for efficiently approximating the endpoints of an interval, and (iii) evaluation of statistical properties (ie, coverage probability and interval length) of randomization-based and population-based confidence intervals under a selected set of randomization procedures when assuming heterogeneity in patient outcomes. The method is illustrated with a case study.
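Test inversion of this kind can be sketched as follows: for each candidate treatment difference delta, subtract delta from the treated outcomes, run the randomization test, and keep every delta that is not rejected. This grid-search toy (with hypothetical data) illustrates the definition; the article is about efficient algorithms for locating the interval endpoints without such brute force:

```python
import itertools
import numpy as np

def frt_pvalue_shift(y, treat, delta):
    """Randomization-test p-value for H0: additive treatment effect = delta.
    Delta is removed from treated outcomes, then all equal-sized assignments
    are enumerated with the difference-in-means statistic."""
    y0 = np.asarray(y, dtype=float).copy()
    t = np.asarray(treat, dtype=bool)
    y0[t] -= delta
    n, n_t = len(y0), int(t.sum())
    obs = y0[t].mean() - y0[~t].mean()
    hits = total = 0
    for combo in itertools.combinations(range(n), n_t):
        mask = np.zeros(n, dtype=bool)
        mask[list(combo)] = True
        total += 1
        if abs(y0[mask].mean() - y0[~mask].mean()) >= abs(obs) - 1e-12:
            hits += 1
    return hits / total

def invert_frt_ci(y, treat, grid, alpha=0.05):
    """Confidence set by test inversion: all deltas on the grid that the
    randomization test fails to reject at level alpha."""
    kept = [d for d in grid if frt_pvalue_shift(y, treat, d) > alpha]
    return min(kept), max(kept)

# Hypothetical small trial with 5 treated and 5 control units
y = [2.1, 4.8, 5.3, 1.9, 6.0, 2.5, 4.4, 1.4, 5.9, 2.2]
treat = [0, 1, 1, 0, 1, 0, 1, 0, 1, 0]
obs_diff = (np.asarray(y)[np.asarray(treat, bool)].mean()
            - np.asarray(y)[~np.asarray(treat, bool)].mean())
grid = obs_diff + np.linspace(-6, 6, 49)
lo, hi = invert_frt_ci(y, treat, grid)
```

By construction the observed difference-in-means is never rejected (the shifted statistic is exactly zero there), so it always lies inside the inverted interval.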


Subjects
Algorithms , Research Design , Confidence Intervals , Humans , Probability , Random Allocation , Randomized Controlled Trials as Topic
10.
Behav Res Methods ; 52(3): 1355-1370, 2020 06.
Article in English | MEDLINE | ID: mdl-31898296

ABSTRACT

Single-case experiments have become increasingly popular in psychological and educational research. However, the analysis of single-case data is often complicated by the frequent occurrence of missing or incomplete data. If missingness or incompleteness cannot be avoided, it becomes important to know which strategies are optimal, because the presence of missing data or inadequate data handling strategies may lead to experiments no longer "meeting standards" set by, for example, the What Works Clearinghouse. For the examination and comparison of strategies to handle missing data, we simulated complete datasets for ABAB phase designs, randomized block designs, and multiple-baseline designs. We introduced different levels of missingness in the simulated datasets by randomly deleting 10%, 30%, and 50% of the data. We evaluated the type I error rate and statistical power of a randomization test for the null hypothesis that there was no treatment effect under these different levels of missingness, using different strategies for handling missing data: (1) randomizing a missing-data marker and calculating all reference statistics only for the available data points, (2) estimating the missing data points by single imputation using the state space representation of a time series model, and (3) multiple imputation based on regressing the available data points on preceding and succeeding data points. The results are conclusive for the conditions simulated: The randomized-marker method outperforms the other two methods in terms of statistical power in a randomization test, while keeping the type I error rate under control.


Subjects
Research Design , Statistical Data Interpretation , Random Allocation
11.
Behav Res Methods ; 52(2): 654-666, 2020 04.
Article in English | MEDLINE | ID: mdl-31270794

ABSTRACT

Multilevel models (MLMs) have been proposed in single-case research, to synthesize data from a group of cases in a multiple-baseline design (MBD). A limitation of this approach is that MLMs require several statistical assumptions that are often violated in single-case research. In this article we propose a solution to this limitation by presenting a randomization test (RT) wrapper for MLMs that offers a nonparametric way to evaluate treatment effects, without making distributional assumptions or an assumption of random sampling. We present the rationale underlying the proposed technique and validate its performance (with respect to Type I error rate and power) as compared to parametric statistical inference in MLMs, in the context of evaluating the average treatment effect across cases in an MBD. We performed a simulation study that manipulated the numbers of cases and of observations per case in a dataset, the data variability between cases, the distributional characteristics of the data, the level of autocorrelation, and the size of the treatment effect in the data. The results showed that the power of the RT wrapper is superior to the power of parametric tests based on F distributions for MBDs with fewer than five cases, and that the Type I error rate of the RT wrapper is controlled for bimodal data, whereas this is not the case for traditional MLMs.


Subjects
Statistical Models , Computer Simulation , Monte Carlo Method , Multilevel Analysis , Random Allocation , Statistical Distributions
12.
Stata J ; 19(4): 803-819, 2019 Dec 01.
Article in English | MEDLINE | ID: mdl-32565746

ABSTRACT

Permutation tests are useful in stepped-wedge trials to provide robust statistical tests of intervention-effect estimates. However, the Stata command permute does not produce valid tests in this setting because individual observations are not exchangeable. We introduce the swpermute command that permutes clusters to sequences to maintain exchangeability. The command provides additional functionality to aid users in performing analyses of stepped-wedge trials. In particular, we include the option "withinperiod" that performs the specified analysis separately in each period of the study with the resulting period-specific intervention-effect estimates combined as a weighted average. We also include functionality to test non-zero null hypotheses to aid the construction of confidence intervals. Examples of the application of swpermute are given using data from a trial testing the impact of a new tuberculosis diagnostic test on bacterial confirmation of a tuberculosis diagnosis.
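The exchangeability idea behind swpermute — permuting whole clusters to sequences, optionally with a within-period statistic — can be sketched in Python. This only illustrates the concept, not the Stata command; the stepped-wedge layout and the cluster-period outcomes below are hypothetical:

```python
import itertools
import numpy as np

def within_period_stat(y, x):
    """Weighted average across periods of the intervention-minus-control
    difference in cluster means; periods in which every cluster is in the
    same condition drop out (no within-period comparison exists)."""
    diffs, weights = [], []
    for p in range(y.shape[1]):
        on, off = y[x[:, p] == 1, p], y[x[:, p] == 0, p]
        if len(on) and len(off):
            diffs.append(on.mean() - off.mean())
            weights.append(len(on) + len(off))
    return float(np.average(diffs, weights=weights))

def cluster_permutation_pvalue(y, design, assignment):
    """Permute whole clusters to sequences: clusters, not individual
    observations, are the exchangeable units in a stepped-wedge trial."""
    obs = within_period_stat(y, design[assignment])
    hits = total = 0
    for perm in set(itertools.permutations(assignment)):
        total += 1
        if abs(within_period_stat(y, design[list(perm)])) >= abs(obs) - 1e-12:
            hits += 1
    return hits / total

# Hypothetical stepped-wedge layout: 3 sequences x 4 periods, 6 clusters
design = np.array([[0, 1, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 0, 1]])
assignment = [0, 0, 1, 1, 2, 2]          # sequence index of each cluster
y = np.array([[3.0, 4.1, 4.3, 4.2],      # cluster-period mean outcomes
              [2.9, 4.0, 4.4, 4.1],
              [3.1, 3.2, 4.2, 4.3],
              [3.0, 3.1, 4.1, 4.4],
              [2.8, 3.0, 3.1, 4.2],
              [3.2, 3.1, 2.9, 4.3]])
p = cluster_permutation_pvalue(y, design, assignment)
```

With 6 clusters split 2/2/2 across sequences there are 90 distinct assignments, so the p-value is an exact multiple of 1/90 and the all-control and all-intervention periods contribute nothing to the statistic.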

13.
Behav Res Methods ; 51(6): 2454-2476, 2019 12.
Article in English | MEDLINE | ID: mdl-30022457

ABSTRACT

Single-case experimental designs (SCEDs) are increasingly used in fields such as clinical psychology and educational psychology for the evaluation of treatments and interventions in individual participants. The AB phase design, also known as the interrupted time series design, is one of the most basic SCEDs used in practice. Randomization can be included in this design by randomly determining the start point of the intervention. In this article, we first introduce this randomized AB phase design and review its advantages and disadvantages. Second, we present some data-analytical possibilities and pitfalls related to this design and show how the use of randomization tests can mitigate or remedy some of these pitfalls. Third, we demonstrate that the Type I error of randomization tests in randomized AB phase designs is under control in the presence of unexpected linear trends in the data. Fourth, we report the results of a simulation study investigating the effect of unexpected linear trends on the power of the randomization test in randomized AB phase designs. The implications of these results for the analysis of randomized AB phase designs are discussed. We conclude that randomized AB phase designs are experimentally valid, but that the power of these designs is sufficient only for large treatment effects and large sample sizes. For small treatment effects and small sample sizes, researchers should turn to more complex phase designs, such as randomized ABAB phase designs or randomized multiple-baseline designs.
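The start-point randomization test for a randomized AB phase design can be sketched as follows, with a hypothetical series and the simple difference in phase means as the statistic:

```python
import numpy as np

def ab_randomization_pvalue(y, observed_start, possible_starts):
    """Randomization test for a randomized AB phase design: the intervention
    start point was drawn at random from `possible_starts`, and the test
    statistic is the B-minus-A difference in phase means."""
    y = np.asarray(y, dtype=float)

    def stat(s):
        return y[s:].mean() - y[:s].mean()

    obs = stat(observed_start)
    stats = [stat(s) for s in possible_starts]
    hits = sum(abs(t) >= abs(obs) - 1e-12 for t in stats)
    return hits / len(stats)

# Hypothetical series of 20 observations with a level shift at point 10,
# where the start point was randomized over points 5..15 (11 options)
y = [0.0] * 10 + [5.0] * 10
p = ab_randomization_pvalue(y, observed_start=10, possible_starts=range(5, 16))
```

Because only 11 start points were admissible, the smallest attainable p-value is 1/11 — a concrete illustration of why the abstract notes that power in such designs is limited unless effects and series are large.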


Subjects
Behavioral Research/methods , Interrupted Time Series Analysis , Research Design , Humans , Random Allocation , Sample Size , Scientific Experimental Error
14.
Ecology ; 99(1): 148-157, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29065214

ABSTRACT

Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. 
In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance.


Subjects
Ecosystem , Fires , Ecology , Europe , Stochastic Processes
15.
Adv Exp Med Biol ; 1082: 123-144, 2018.
Article in English | MEDLINE | ID: mdl-30357718

ABSTRACT

This chapter considers the fundamental concepts of probability theory and applied statistics in epidemiology, including the biostatistical concepts and measures used in genetic association and familial aggregation studies: additional approaches in familial aggregation studies; twin studies; adoption studies; inbreeding studies; the randomization test; segregation, linkage, and association studies; genome-wide association studies (GWAS); and big data and human genomics.


Subjects
Human Genetics , Statistics as Topic , Adoption , Biometry , Genetic Linkage , Genetic Predisposition to Disease , Genome-Wide Association Study , Humans , Inbreeding , Twin Studies as Topic
16.
Behav Res Methods ; 50(2): 557-575, 2018 04.
Article in English | MEDLINE | ID: mdl-28389851

ABSTRACT

The conditional power (CP) of the randomization test (RT) was investigated in a simulation study in which three different single-case effect size (ES) measures were used as the test statistics: the mean difference (MD), the percentage of nonoverlapping data (PND), and the nonoverlap of all pairs (NAP). Furthermore, we studied the effect of the experimental design on the RT's CP for three different single-case designs with rapid treatment alternation: the completely randomized design (CRD), the randomized block design (RBD), and the restricted randomized alternation design (RRAD). As a third goal, we evaluated the CP of the RT for three types of simulated data: data generated from a standard normal distribution, data generated from a uniform distribution, and data generated from a first-order autoregressive Gaussian process. The results showed that the MD and NAP perform very similarly in terms of CP, whereas the PND performs substantially worse. Furthermore, the RRAD yielded marginally higher power in the RT, followed by the CRD and then the RBD. Finally, the power of the RT was almost unaffected by the type of the simulated data. On the basis of the results of the simulation study, we recommend at least 20 measurement occasions for single-case designs with a randomized treatment order that are to be evaluated with an RT using a 5% significance level. Furthermore, we do not recommend use of the PND, because of its low power in the RT.
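The two nonoverlap effect sizes named in this abstract have standard definitions that can be sketched directly (toy phase data; the mean difference needs no illustration):

```python
from itertools import product

def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the share of all (baseline, treatment)
    pairs in which the treatment point beats the baseline point,
    with ties counted as half."""
    pairs = list(product(baseline, treatment))
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)

def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data: the share of treatment points
    exceeding the single highest baseline point."""
    ceiling = max(baseline)
    return sum(t > ceiling for t in treatment) / len(treatment)

# Hypothetical phase data from a single case
baseline, treatment = [1, 3, 2], [2, 4, 5]
```

Because PND conditions on a single extreme baseline point, it discards most of the pairwise ordering information that NAP retains, which is consistent with its poorer power as a randomization-test statistic reported above.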


Subjects
Computer Simulation , Monte Carlo Method , Random Allocation , Algorithms , Humans , Normal Distribution , Research Design
17.
Stat Med ; 36(17): 2735-2749, 2017 Jul 30.
Article in English | MEDLINE | ID: mdl-28480546

ABSTRACT

The spatial relative risk function is a useful tool for describing geographical variation in disease incidence. We consider the problem of comparing relative risk functions between two time periods, with the idea of detecting alterations in the spatial pattern of disease risk irrespective of whether there has been a change in the overall incidence rate. Using case-control datasets for each period, we use kernel smoothing methods to derive a test statistic based on the difference between the log-relative risk functions, which we term the log-relative risk ratio. For testing a null hypothesis of an unchanging spatial pattern of risk, we show how p-values can be computed using both randomization methods and an asymptotic normal approximation. The methodology is applied to data on campylobacteriosis from 2006 to 2013 in a region of New Zealand. We find clear evidence of a change in the spatial pattern of risk between those years, which can be explained by differences between urban and rural communities in their response to a public health initiative. Copyright © 2017 John Wiley & Sons, Ltd.


Subjects
Epidemiologic Methods , Risk , Spatial Analysis , Campylobacter Infections/epidemiology , Case-Control Studies , Computer Simulation , Geography , Humans , Incidence , New Zealand/epidemiology , Random Allocation
18.
Mol Biol Evol ; 32(7): 1895-906, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25771196

ABSTRACT

Rates and timescales of viral evolution can be estimated using phylogenetic analyses of time-structured molecular sequences. This involves the use of molecular-clock methods, calibrated by the sampling times of the viral sequences. However, the spread of these sampling times is not always sufficient to allow the substitution rate to be estimated accurately. We conducted Bayesian phylogenetic analyses of simulated virus data to evaluate the performance of the date-randomization test, which is sometimes used to investigate whether time-structured data sets have temporal signal. An estimate of the substitution rate passes this test if its mean does not fall within the 95% credible intervals of rate estimates obtained using replicate data sets in which the sampling times have been randomized. We find that the test sometimes fails to detect rate estimates from data with no temporal signal. This error can be minimized by using a more conservative criterion, whereby the 95% credible interval of the estimate with correct sampling times should not overlap with those obtained with randomized sampling times. We also investigated the behavior of the test when the sampling times are not uniformly distributed throughout the tree, which sometimes occurs in empirical data sets. The test performs poorly in these circumstances, such that a modification to the randomization scheme is needed. Finally, we illustrate the behavior of the test in analyses of nucleotide sequences of cereal yellow dwarf virus. Our results validate the use of the date-randomization test and allow us to propose guidelines for interpretation of its results.


Subjects
Luteoviridae/classification , Phylogeny , Random Allocation , Calibration , Computer Simulation , Time Factors
19.
Stat Med ; 35(14): 2315-27, 2016 06 30.
Article in English | MEDLINE | ID: mdl-26787557

ABSTRACT

Minimization, a dynamic allocation method, is gaining popularity especially in cancer clinical trials. Aiming to achieve balance on all important prognostic factors simultaneously, this procedure can lead to a substantial reduction in covariate imbalance compared with conventional randomization in small clinical trials. While minimization has generated enthusiasm, some controversy exists over the proper analysis of such a trial. Critics argue that standard testing methods that do not account for the dynamic allocation algorithm can lead to invalid statistical inference. Acknowledging this limitation, the International Conference on Harmonization E9 guideline suggests that 'the complexity of the logistics and potential impact on analyses be carefully evaluated when considering dynamic allocation'. In this article, we investigate the proper analysis approaches to inference in a minimization design for both continuous and time-to-event endpoints and evaluate the validity and power of these approaches under a variety of scenarios both theoretically and empirically. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.


Subjects
Statistical Models , Neoplasms/therapy , Randomized Controlled Trials as Topic/statistics & numerical data , Algorithms , Biostatistics , Computer Simulation , Humans , Lung Neoplasms/therapy , Proportional Hazards Models , Reproducibility of Results
20.
Stat Med ; 34(11): 1904-11, 2015 May 20.
Article in English | MEDLINE | ID: mdl-25630496

ABSTRACT

Vaccine benefit is usually twofold: (i) prevent a disease or, failing that, (ii) diminish the severity of a disease. To assess vaccine effect, we propose two adaptive tests. The weighted two-part test is a combination of two statistics, one on disease incidence and one on disease severity. More weight is given to the statistic with the larger a priori effect size, and the weights are determined to maximize testing power. The randomized test applies to the scenario where the total number of infections is relatively small. It uses information on disease severity to bolster power while preserving disease incidence as the primary interest. Properties of the proposed tests are explored asymptotically and by numerical studies. Although motivated by vaccine studies, the proposed tests apply to any trials that involve both binary and continuous outcomes for evaluating treatment effect.


Subjects
AIDS Vaccines , HIV Infections/epidemiology , HIV Infections/prevention & control , HIV-1 , Statistical Models , Randomized Controlled Trials as Topic , Endpoint Determination , Humans , Incidence , Research Design , Severity of Illness Index , Viral Load