Results 1 - 20 of 67
1.
J Econom; 243(1-2), 2024 Jul.
Article in English | MEDLINE | ID: mdl-39372141

ABSTRACT

This paper considers the problem of making inferences about the effects of a program on multiple outcomes when the assignment of treatment status is imperfectly randomized. By imperfect randomization we mean that treatment status is reassigned after an initial randomization on the basis of characteristics that may be observed or unobserved by the analyst. We develop a partial identification approach to this problem that makes use of information limiting the extent to which randomization is imperfect to show that it is still possible to make nontrivial inferences about the effects of the program in such settings. We consider a family of null hypotheses in which each null hypothesis specifies that the program has no effect on one of several outcomes of interest. Under weak assumptions, we construct a procedure for testing this family of null hypotheses in a way that controls the familywise error rate (the probability of even one false rejection) in finite samples. We develop our methodology in the context of a reanalysis of the HighScope Perry Preschool program. We find statistically significant effects of the program on a number of different outcomes of interest, including outcomes related to criminal activity for males and females, even after accounting for the imperfection of the randomization and the multiplicity of null hypotheses.
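
Finite-sample familywise error control by permutation, as described above, can be illustrated with a generic max-statistic procedure in the spirit of Westfall and Young. This is a simplified stand-in, not the authors' partial-identification method, and all names below are hypothetical:

```python
import random
import statistics

def maxT_permutation_pvalues(treat, control, n_perm=2000, seed=0):
    """Max-statistic permutation p-values for multiple outcomes.

    treat, control: lists of per-subject outcome tuples (one entry per
    outcome).  Permuting subjects jointly across all outcomes and
    comparing each observed statistic with the permutation distribution
    of the maximum controls the familywise error rate in finite samples.
    """
    rng = random.Random(seed)
    k = len(treat[0])
    pooled = treat + control
    n_t = len(treat)

    def stats(a, b):
        # absolute difference in group means, one value per outcome
        return [abs(statistics.fmean(x[j] for x in a)
                    - statistics.fmean(x[j] for x in b))
                for j in range(k)]

    observed = stats(treat, control)
    exceed = [0] * k
    for _ in range(n_perm):
        rng.shuffle(pooled)
        max_stat = max(stats(pooled[:n_t], pooled[n_t:]))
        for j in range(k):
            if max_stat >= observed[j]:
                exceed[j] += 1
    return [(e + 1) / (n_perm + 1) for e in exceed]
```

With a strong effect on the first outcome and a null second outcome, the procedure rejects only the first hypothesis while keeping the chance of any false rejection at the nominal level.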

2.
Epilepsy Behav; 157: 109869, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38851125

ABSTRACT

People with epilepsy often suffer from comorbid psychiatric disorders, which negatively affect their quality of life. Emotion regulation is an important cognitive process that is impaired in individuals with psychiatric disorders, such as depression. Adults with epilepsy also show difficulties in emotion regulation, particularly during later-stage, higher-order cognitive processing. Yet, the spatiotemporal and frequency correlates of these functional brain deficits in epilepsy remain unknown, as does the nature of these deficits in adolescent epilepsy. Here, we aim to elucidate the spatiotemporal profile of emotional conflict processing in adolescents with epilepsy, relative to controls, using magnetoencephalography (MEG), and to relate these findings to anxiety and depression symptom severity assessed with self-report scales. We hypothesized that adolescents with epilepsy would show blunted brain activity during emotional conflict, relative to controls, in the posterior parietal, prefrontal and cingulate cortices, given their role in explicit and implicit regulation around the participant response (500-1000 ms). We analyzed MEG recordings from 53 adolescents (28 with epilepsy [14 focal, 14 generalized], 25 controls) during an emotional conflict task. We showed that while controls exhibited behavioral interference to emotional conflict, adolescents with epilepsy failed to exhibit this normative response-time pattern. Adolescents with epilepsy showed blunted brain responses to emotional conflict in brain regions related to error evaluation and learning around the average response time (500-700 ms), and in regions involved in decision making during post-response monitoring (800-1000 ms). Interestingly, behavioral patterns and psychiatric symptom severity varied between epilepsy subgroups, wherein those with focal epilepsy showed preserved response-time interference. Thus, brain responses were regressed on depression and anxiety levels for each epilepsy subgroup separately. Analyses revealed that underactivation in error evaluation regions (500-600 ms) predicted anxiety and depression in focal epilepsy, while regions related to learning (600-700 ms) predicted anxiety in generalized epilepsy, suggesting differential mechanisms of dysfunction in these subgroups. Despite similar rates of anxiety and depression across the groups, adolescents with epilepsy still exhibited deficits in emotional conflict processing in brain and behavioral responses. This suggests that these deficits may exist independently of psychopathology and may stem from underlying dysfunctions that predispose these individuals to develop both disorders. Findings such as these may provide potential targets for future research and therapies.


Subjects
Conflict, Psychological; Epilepsy; Magnetoencephalography; Humans; Adolescent; Male; Female; Epilepsy/physiopathology; Epilepsy/psychology; Epilepsy/complications; Brain/physiopathology; Emotions/physiology; Depression/physiopathology; Depression/psychology; Anxiety/physiopathology; Anxiety/psychology; Reaction Time/physiology; Psychiatric Status Rating Scales; Brain Mapping
3.
Psychophysiology; 61(4): e14478, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37937898

ABSTRACT

Parkinson's disease (PD) has been associated with greater total power in canonical frequency bands (i.e., alpha, beta) of the resting electroencephalogram (EEG). However, PD has also been associated with a reduction in the proportion of total power across all frequency bands. This discrepancy may be explained by aperiodic activity (exponent and offset) present across all frequency bands. Here, we examined differences in the eyes-open (EO) and eyes-closed (EC) resting EEG of PD participants (N = 26) on and off medication, and age-matched healthy controls (CTL; N = 26). We extracted power from canonical frequency bands using traditional methods (total alpha and beta power) and extracted separate parameters for periodic (parameterized alpha and beta power) and aperiodic activity (exponent and offset). Cluster-based permutation tests over spatial and frequency dimensions indicated that total alpha and beta power, and aperiodic exponent and offset were greater in PD participants, independent of medication status. After removing the exponent and offset, greater alpha power in PD (vs. CTL) was only present in EO recordings, and no reliable differences in beta power were observed. Differences between PD and CTL in the resting EEG are likely driven by aperiodic activity, suggestive of greater relative inhibitory neural activity and greater neuronal spiking. Our findings suggest that resting EEG activity in PD is characterized by medication-invariant differences in aperiodic activity that are independent of the increase in alpha power with EO. This highlights the importance of considering aperiodic activity contributions to the neural correlates of brain disorders.
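
The aperiodic exponent and offset discussed above can be illustrated with a deliberately simplified fit: a straight-line regression of log power on log frequency. Real spectral parameterization (e.g., tools such as specparam/FOOOF) also models the periodic alpha and beta peaks that sit above this fit; this sketch only recovers the aperiodic component:

```python
import math

def fit_aperiodic(freqs, powers):
    """Fit the aperiodic 1/f component of a power spectrum.

    Regresses log10(power) on log10(frequency): the intercept is the
    aperiodic 'offset' and the negative slope is the 'exponent'.
    Periodic peaks (alpha, beta) would appear as residuals above the
    fitted line.
    """
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in powers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    offset = my - slope * mx
    return offset, -slope  # exponent is the negative slope
```

For a pure power-law spectrum power = 10^offset * f^(-exponent), the fit recovers both parameters exactly.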


Subjects
Parkinson Disease; Humans; Electroencephalography; Rest/physiology
4.
Curr Protoc; 3(11): e931, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37988228

ABSTRACT

Genome-wide association studies (GWAS) have successfully identified numerous common variants involved in complex diseases, but these findings explain only limited heritability. Advances in high-throughput sequencing technology have made it possible to assess the contribution of rare variants to common diseases. However, the study of rare variants introduces challenges because of their low frequency: well-established common-variant methods are underpowered to identify rare variants in GWAS. To address this challenge, several new methods have been developed to examine the role of rare variants in complex diseases. These approaches are based on testing the aggregate effect of multiple rare variants in a predefined genetic region. Provided here is an overview of statistical approaches, along with protocols explaining step-by-step analysis of aggregation tests, with hands-on R scripts, in four categories: burden tests, adaptive burden tests, variance-component tests, and combined tests. Also explained are the concepts of rare variants, permutation tests, kernel methods, and genetic variant annotation. Finally, we discuss the related topics of bioinformatics tools for annotation, family-based designs for rare-variant analysis, population stratification adjustment, and meta-analysis. © 2023 The Authors. Current Protocols published by Wiley Periodicals LLC.
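
The burden-test idea named above can be caricatured in a few lines: collapse each individual's rare-allele counts in a region into one burden score, then assess the case-control difference by permuting phenotype labels. The protocol itself works with R scripts; this Python sketch with hypothetical names is only illustrative:

```python
import random
import statistics

def burden_permutation_test(genotypes, is_case, n_perm=1000, seed=0):
    """Illustrative burden-style aggregation test.

    genotypes: per-individual lists of rare-allele counts for the
    variants in one genetic region.  Each individual's counts are
    collapsed into a single burden score, and the case/control
    difference in mean burden is assessed by permuting the phenotype
    labels.
    """
    rng = random.Random(seed)
    burden = [sum(g) for g in genotypes]

    def mean_diff(labels):
        cases = [b for b, y in zip(burden, labels) if y]
        ctrls = [b for b, y in zip(burden, labels) if not y]
        return abs(statistics.fmean(cases) - statistics.fmean(ctrls))

    observed = mean_diff(is_case)
    labels = list(is_case)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if mean_diff(labels) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Adaptive burden, variance-component (kernel) and combined tests replace the simple sum with weighted or variance-based aggregation, but the permutation machinery is the same.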


Subjects
Disease; Genetic Variation; Genome-Wide Association Study; Genome-Wide Association Study/methods; Disease/genetics
5.
Stat Methods Med Res; 32(3): 465-473, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36624622

ABSTRACT

Clustered survival data frequently occur in biomedical research and clinical trials. Log-rank tests are commonly used for two-sample comparisons of clustered data. We use block Efron's biased-coin randomization to assign patients to treatment groups in a clinical trial by forcing a sequential experiment to be balanced. In this article, the p-values of the null permutation distribution of log-rank tests for clustered data are approximated via the double saddlepoint approximation method. Comprehensive numerical studies are carried out to assess the accuracy of the saddlepoint approximation, which demonstrates much greater accuracy than the asymptotic normal approximation.
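
The permutation distribution being approximated here can also be sampled directly by Monte Carlo when samples are small. The sketch below permutes group labels for an ordinary (non-clustered) log-rank numerator; it ignores both the clustering and the saddlepoint machinery of the article, and is meant only to show what the null permutation distribution of a log-rank test is:

```python
import random

def logrank_stat(times, events, group):
    """Observed-minus-expected log-rank numerator for group 1.

    times: event/censoring times; events: 1 if the time is an event,
    0 if censored; group: 0/1 group labels.
    """
    stat = 0.0
    for t in sorted({u for u, d in zip(times, events) if d}):
        at_risk = [g for u, d, g in zip(times, events, group) if u >= t]
        n = len(at_risk)
        n1 = sum(1 for g in at_risk if g == 1)
        d_t = sum(1 for u, d in zip(times, events) if u == t and d)
        d1 = sum(1 for u, d, g in zip(times, events, group)
                 if u == t and d and g == 1)
        stat += d1 - d_t * n1 / n
    return stat

def permutation_logrank_pvalue(times, events, group, n_perm=1000, seed=0):
    """Monte-Carlo approximation to the permutation p-value of the
    two-sample log-rank test (shuffling group labels)."""
    rng = random.Random(seed)
    observed = abs(logrank_stat(times, events, group))
    labels = list(group)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if abs(logrank_stat(times, events, labels)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

The saddlepoint approach delivers essentially the same tail probabilities analytically, without the Monte Carlo sampling error of this sketch.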


Subjects
Biomedical Research; Research Design; Humans
6.
J Biopharm Stat; 33(2): 210-219, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35980127

ABSTRACT

Clustered data frequently occur in biomedical research and clinical trials. Log-rank tests are widely used for two-sample comparisons of clustered data. The randomized block design and the truncated binomial design are used to force balance in clinical trials and to reduce selection bias. In this paper, clustered survival data are randomized by generalized randomized block design, and the clustered data in each block are subsequently randomized by truncated binomial design. The p-values of the null permutation distribution of log-rank tests for clustered data are then approximated via the double saddlepoint approximation method. Comprehensive numerical studies are carried out to assess the accuracy of the saddlepoint approximation, which offers much greater accuracy than the asymptotic normal approximation.


Subjects
Biometry; Randomized Controlled Trials as Topic; Humans
7.
Behav Ecol Sociobiol; 76(11): 151, 2022.
Article in English | MEDLINE | ID: mdl-36325506

ABSTRACT

The non-independence of social network data is a cause for concern among behavioural ecologists conducting social network analysis. This has led to the adoption of several permutation-based methods for testing common hypotheses. One of the most common types of analysis is nodal regression, where the relationships between node-level network metrics and nodal covariates are analysed using a permutation technique known as node-label permutations. We show that, contrary to accepted wisdom, node-label permutations do not automatically account for the non-independences assumed to exist in network data, because regression-based permutation tests still assume exchangeability of residuals. The same assumption also applies to the quadratic assignment procedure (QAP), a permutation-based method often used for conducting dyadic regression. We highlight that node-label permutations produce the same p-values as equivalent parametric regression models, but that in the presence of non-independence, parametric regression models can also produce accurate effect size estimates. We also note that QAP only controls for a specific type of non-independence between edges that are connected to the same nodes, and that appropriate parametric regression models are also able to account for this type of non-independence. Based on this, we suggest that standard parametric models could be used in the place of permutation-based methods. Moving away from permutation-based methods could have several benefits, including reducing over-reliance on p-values, generating more reliable effect size estimates, and facilitating the adoption of causal inference methods and alternative types of statistical analysis. Supplementary Information: The online version contains supplementary material available at 10.1007/s00265-022-03254-x.
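
A node-label permutation test for nodal regression, the analysis discussed above, can be sketched as follows. As the authors note, it rests on the same exchangeability assumption as the parametric model, which is why the two approaches give matching p-values; names here are hypothetical:

```python
import random
import statistics

def node_label_permutation_test(metric, covariate, n_perm=2000, seed=0):
    """Node-label permutation test for a nodal regression.

    metric: a node-level network metric (e.g., strength or degree);
    covariate: a node-level attribute.  The observed correlation is
    compared with the distribution obtained by shuffling the covariate
    across nodes, which assumes the node labels are exchangeable.
    """
    rng = random.Random(seed)

    def corr(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    observed = abs(corr(metric, covariate))
    shuffled = list(covariate)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(corr(metric, shuffled)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

A parametric regression of metric on covariate would test the same null under the same exchangeable-residuals assumption, but additionally yields an effect size estimate, which is the article's argument for preferring it.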

8.
Entropy (Basel); 24(9), 2022 Sep 02.
Article in English | MEDLINE | ID: mdl-36141120

ABSTRACT

In this study, we focus on mixed data, which are either observations of univariate random variables that can be quantitative or qualitative, or observations of multivariate random variables such that each variable can include both quantitative and qualitative components. We first propose a novel method, called CMIh, to estimate conditional mutual information, taking advantage of previously proposed approaches for qualitative and quantitative data. We then introduce a new local permutation test, called LocAT (local adaptive test), which is well adapted to mixed data. Our experiments illustrate the good behaviour of CMIh and LocAT, and show their respective abilities to accurately estimate conditional mutual information and to detect conditional (in)dependence for mixed data.

9.
Trials; 23(1): 762, 2022 Sep 08.
Article in English | MEDLINE | ID: mdl-36076295

ABSTRACT

BACKGROUND: The HEALing (Helping to End Addiction Long-termSM) Communities Study (HCS) is a multi-site parallel group cluster randomized wait-list comparison trial designed to evaluate the effect of the Communities That Heal (CTH) intervention compared to usual care on opioid overdose deaths. Covariate-constrained randomization (CCR) was applied to balance the community-level baseline covariates in the HCS. The purpose of this paper is to evaluate the performance of model-based tests and permutation tests in the HCS setting. We conducted a simulation study to evaluate type I error rates and power for model-based and permutation tests for the multi-site HCS as well as for a subgroup analysis of a single state (Massachusetts). We also investigated whether the maximum degree of imbalance in the CCR design has an impact on the performance of the tests. METHODS: The primary outcome, the number of opioid overdose deaths, is count data assessed at the community level that will be analyzed using a negative binomial regression model. We conducted a simulation study to evaluate the type I error rates and power for 3 tests: (1) Wald-type t-test with small-sample corrected empirical standard error estimates, (2) Wald-type z-test with model-based standard error estimates, and (3) permutation test with test statistics calculated by the difference in average residuals for the two groups. RESULTS: Our simulation results demonstrated that Wald-type t-tests with small-sample corrected empirical standard error estimates from the negative binomial regression model maintained proper type I error. Wald-type z-tests with model-based standard error estimates were anti-conservative. Permutation tests preserved type I error rates if the constrained space was not too small. For all tests, the power was high to detect the hypothesized 40% reduction in opioid overdose deaths for the intervention vs. comparison group both for the overall HCS and the subgroup analysis of Massachusetts (MA). 
CONCLUSIONS: Based on the results of our simulation study, the Wald-type t-test with small-sample corrected empirical standard error estimates from a negative binomial regression model is a valid and appropriate approach for analyzing cluster-level count data from the HEALing Communities Study. TRIAL REGISTRATION: ClinicalTrials.gov; Identifier: NCT04111939.


Subjects
Opiate Overdose; Computer Simulation; Humans; Massachusetts; Models, Statistical; Random Allocation
10.
Entropy (Basel); 24(8), 2022 Aug 03.
Article in English | MEDLINE | ID: mdl-36010735

ABSTRACT

A new nonparametric test of equality of two densities is investigated. The test statistic is an average of log-Bayes factors, each of which is constructed from a kernel density estimate. Prior densities for the bandwidths of the kernel estimates are required, and it is shown how to choose priors so that the log-Bayes factors can be calculated exactly. Critical values of the test statistic are determined by a permutation distribution, conditional on the data. An attractive property of the methodology is that a critical value of 0 leads to a test for which both type I and II error probabilities tend to 0 as sample sizes tend to ∞. Existing results on Kullback-Leibler loss of kernel estimates are crucial to obtaining these asymptotic results, and also imply that the proposed test works best with heavy-tailed kernels. Finite sample characteristics of the test are studied via simulation, and extensions to multivariate data are straightforward, as illustrated by an application to bivariate connectionist data.
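
The permutation machinery described above (critical values from the permutation distribution, conditional on the data) can be sketched with a simpler plug-in statistic: the average absolute difference between the two kernel density estimates stands in for the article's average log-Bayes factor, and the bandwidth is fixed rather than integrated over a prior:

```python
import math
import random

def kde(points, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    c = 1.0 / (len(points) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points)

def density_permutation_test(a, b, h=0.5, n_perm=300, seed=0):
    """Permutation test of equality of two densities.

    The statistic is the average absolute difference between the two
    kernel density estimates, evaluated at the pooled data points;
    critical values come from the permutation distribution obtained by
    reassigning the pooled observations to the two samples.
    """
    rng = random.Random(seed)
    pooled = a + b
    n_a = len(a)
    grid = list(pooled)  # evaluate the density difference at the data

    def stat(xs, ys):
        return sum(abs(kde(xs, g, h) - kde(ys, g, h)) for g in grid) / len(grid)

    observed = stat(a, b)
    perm = list(pooled)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if stat(perm[:n_a], perm[n_a:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

The article's exact log-Bayes-factor calculation and its heavy-tailed-kernel recommendation refine this basic recipe; the permutation step is the same.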

11.
Front Microbiol; 13: 914429, 2022.
Article in English | MEDLINE | ID: mdl-35928167

ABSTRACT

Diversity analysis is a de facto standard procedure for most existing microbiome studies. Nevertheless, diversity metrics can be insensitive to changes in community composition (identities). For example, if species A (e.g., a beneficial microbe) is replaced by an equal number of species B (e.g., an opportunistic pathogen), the diversity metric may not change, but the community composition has changed. Shared species analysis (SSA) is a computational technique that can discern changes of community composition by detecting the increase/decrease of shared species between two sets of microbiome samples, and it should be more sensitive than standard diversity analysis in discerning changes in microbiome structures. Here, we investigated the effects of ethnicity and lifestyle in China on the structure of Chinese gut microbiomes by reanalyzing the datasets of a large Chinese cohort with 300+ individuals covering the 7 largest Chinese ethnic groups (>95% of the Chinese population). We found: (i) Regarding lifestyle, SSA revealed significant differences in community composition in 100% of pair-wise comparisons at all taxon levels except the phylum level (29%), but diversity analysis only revealed 14-29% pair-wise differences in community diversity across all four taxon levels. (ii) Regarding ethnicity, SSA revealed 100% pair-wise differences in community composition at all levels except the phylum level (48-62%), but diversity analysis only revealed 5-57% differences in community diversity across all four taxon levels. (iii) Ethnicity seems to have more prevalent effects on community structures than lifestyle does. (iv) Community structures of the gut microbiomes are more stable at the phylum level than at the other three levels. (v) SSA is more powerful than diversity analysis in detecting changes of community structures; furthermore, SSA can produce lists of unique and shared OTUs. (vi) Finally, we performed stochasticity analysis to mechanistically interpret the observed differences revealed by the SSA and diversity analyses.

12.
Methods Ecol Evol; 13(1): 144-156, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35873757

ABSTRACT

Permutation tests are widely used to test null hypotheses with animal social network data, but suffer from high rates of type I and II error when the permutations do not properly simulate the intended null hypothesis. Two common types of permutations each have limitations. Pre-network (or datastream) permutations can be used to control 'nuisance effects' like spatial, temporal or sampling biases, but only when the null hypothesis assumes random social structure. Node (or node-label) permutation tests can test null hypotheses that include nonrandom social structure, but only when nuisance effects do not shape the observed network. We demonstrate one possible solution addressing these limitations: using pre-network permutations to adjust the values for each node or edge before conducting a node permutation test. We conduct a range of simulations to estimate error rates caused by confounding effects of social or non-social structure in the raw data. Regressions on simulated datasets suggest that this 'double permutation' approach is less likely to produce elevated error rates relative to using only node permutations, pre-network permutations or node permutations with simple covariates, which all exhibit elevated type I errors under at least one set of simulated conditions. For example, in scenarios where type I error rates from pre-network permutation tests exceed 30%, the error rates from double permutation remain at 5%. The double permutation procedure provides one potential solution to issues arising from elevated type I and type II error rates when testing null hypotheses with social network data. We also discuss alternative approaches that can provide robust inference, including fitting mixed effects models, restricted node permutations, testing multiple null hypotheses and splitting large datasets to generate replicated networks. Finally, we highlight ways that uncertainty can be explicitly considered and carried through the analysis.
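
A toy version of the double permutation procedure might look like the following. It uses checkerboard swaps on a group-by-individual sighting matrix as the pre-network (datastream) step, subtracts the resulting expected node strengths to get adjusted values, and then runs a node-label permutation test on those residuals. This is a deliberately simplified data model, not the authors' implementation:

```python
import random
import statistics

def strength(matrix, j):
    """Co-occurrence 'strength' of individual j: total groupmates
    summed over all sighting groups (rows) containing j."""
    return sum(sum(row) - 1 for row in matrix if row[j])

def double_permutation_test(matrix, trait, n_pre=100, n_node=400, seed=0):
    """Sketch of a double-permutation nodal test.

    matrix: group-by-individual 0/1 incidence data; trait: one value
    per individual.  Step 1 uses swaps that preserve each group's size
    and each individual's number of sightings to estimate expected
    strengths under random social structure; step 2 is a node-label
    permutation test on the observed-minus-expected strengths.
    """
    rng = random.Random(seed)
    n_ind = len(matrix[0])
    observed = [strength(matrix, j) for j in range(n_ind)]

    # Step 1: pre-network permutations via checkerboard swaps.
    m = [row[:] for row in matrix]
    expected = [0.0] * n_ind
    for _ in range(n_pre):
        for _ in range(20 * n_ind):
            r1, r2 = rng.randrange(len(m)), rng.randrange(len(m))
            c1, c2 = rng.randrange(n_ind), rng.randrange(n_ind)
            if m[r1][c1] == m[r2][c2] == 1 and m[r1][c2] == m[r2][c1] == 0:
                m[r1][c1] = m[r2][c2] = 0
                m[r1][c2] = m[r2][c1] = 1
        for j in range(n_ind):
            expected[j] += strength(m, j) / n_pre
    residual = [o - e for o, e in zip(observed, expected)]

    # Step 2: node-label permutation test on the adjusted values.
    def corr(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sxx * syy) ** 0.5

    obs = abs(corr(residual, trait))
    labels = list(trait)
    hits = 0
    for _ in range(n_node):
        rng.shuffle(labels)
        if abs(corr(residual, labels)) >= obs:
            hits += 1
    return (hits + 1) / (n_node + 1)
```

Adjusting by the pre-network expectation removes sampling structure before the node permutation, which is the mechanism the simulations above credit for the reduced type I error.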

13.
Eval Health Prof; 45(1): 8-21, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35245983

ABSTRACT

In this article, we present single-case causal mediation analysis as the application of causal mediation analysis to data collected within a single-case experiment. This method combines the focus on the individual with the focus on mechanisms of change, rendering it a promising approach for both mediation and single-case researchers. For this purpose, we propose a new method based on time-discrete state-space modeling to estimate the direct and indirect treatment effects. We demonstrate how to estimate the model for a single-case experiment on stress and craving in a routine alcohol consumer before and after an imposed period of abstinence. Furthermore, we present a simulation study that examines the estimation and testing of the standardized indirect effect. All parameters used to generate the data were recovered with acceptable precision. We use maximum likelihood and permutation procedures to calculate p-values and standard errors of the parameter estimates. The new method is promising for testing mediated effects in single-case experimental designs. We further discuss limitations of the new method with respect to causal inference, as well as more technical concerns, such as the choice of the time lags between the measurements.


Subjects
Models, Statistical; Research Design; Causality; Computer Simulation; Humans
14.
Eval Health Prof; 45(1): 54-65, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35209736

ABSTRACT

In response to the importance of individual-level effects, the purpose of this paper is to describe the new randomization permutation (RP) test for a mediation mechanism for a single subject. We extend seminal work on permutation tests for individual-level data by proposing a test for mediation for one person. The method requires random assignment to the levels of the treatment variable at each measurement occasion, and repeated measures of the mediator and outcome from one subject. If several assumptions are met, the process by which a treatment changes an outcome can be statistically evaluated for a single subject, using the permutation mediation test method and the permutation confidence interval method for residuals. A simulation study evaluated the statistical properties of the new method suggesting that at least eight repeated measures are needed to control Type I error rates and larger sample sizes are needed for power approaching .8 even for large effects. The RP mediation test is a promising method for elucidating intraindividual processes of change that may inform personalized medicine and tailoring of process-based treatments for one subject.
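
The ingredients of the RP test described above (randomized treatment at each occasion, repeated measures of the mediator and outcome from one subject, and a permutation null for the indirect effect) can be caricatured as follows. Note the simplification relative to the published method: re-randomizing the treatment sequence here only varies the X-to-M slope, and serial dependence is ignored:

```python
import random
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def single_case_mediation_test(x, m, y, n_perm=1000, seed=0):
    """Permutation test of the indirect effect a*b for one subject.

    x: randomized treatment at each measurement occasion; m, y:
    repeated measures of the mediator and outcome.  The indirect
    effect is the product of the X->M slope (a) and the M->Y slope
    (b); its null distribution is built by re-randomizing the
    treatment sequence, which here only perturbs the a path.
    """
    rng = random.Random(seed)
    observed = abs(slope(x, m) * slope(m, y))
    xs = list(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(xs)
        if abs(slope(xs, m) * slope(m, y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

With around eight or more occasions, as the simulation study above suggests, such a permutation null can hold its type I error rate, while power for the product term grows with the series length.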


Subjects
Research Design; Computer Simulation; Humans; Random Allocation
15.
J Biopharm Stat; 32(5): 641-651, 2022 Sep 03.
Article in English | MEDLINE | ID: mdl-35068340

ABSTRACT

Left-truncated data are constructed from the time of an initial event and the time of the event of interest. That is to say, each individual represented by such data is exposed to an event prior to the event of interest, and both times are recorded. Weighted log-rank testing is commonly used for such data. In this paper, a saddlepoint approximation method is provided for computing p-values of the permutation distribution of tests from the weighted log-rank testing in the presence of left-truncated data and Wei's urn design. A simulation study is used to assess the efficiency of the saddlepoint approximation. The accuracy of the saddlepoint approximation in comparison to the normal approximation enables us to compute accurate confidence intervals for the treatment effect.


Subjects
Research Design; Computer Simulation; Confidence Intervals; Humans
16.
Behav Res Methods; 53(6): 2712-2724, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34050436

ABSTRACT

The recent replication crisis has led to a number of ad hoc suggestions to decrease the chance of making false positive findings. Among them, Johnson (Proceedings of the National Academy of Sciences, 110, 19313-19317, 2013) and Benjamin et al. (Nature Human Behaviour, 2, 6-10, 2018) recommend using the significance level of α = 0.005 (0.5%) as opposed to the conventional 0.05 (5%) level. Even though their suggestion is easy to implement, it is unclear whether or not the commonly used statistical tests are robust and/or powerful at such a small significance level. Therefore, the main aim of our study is to investigate the robustness and power curve behaviors of independent (unpaired) two-sample tests for metric and ordinal data at nominal significance levels of α = 0.005 and α = 0.05. Through an extensive simulation study, it is found that the permutation versions of the Welch t-test and the Brunner-Munzel test are particularly robust and powerful, while the commonly used two-sample tests that utilize the t-distribution tend to be either liberal or conservative, and have peculiar power curve behaviors under skewed distributions with variance heterogeneity.
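
A permutation version of the Welch t-test, one of the two tests found above to be robust at α = 0.005, can be sketched as:

```python
import math
import random
import statistics

def welch_t(a, b):
    """Welch t statistic (does not assume equal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return ((statistics.fmean(a) - statistics.fmean(b))
            / math.sqrt(va / len(a) + vb / len(b)))

def permutation_welch_test(a, b, n_perm=1999, seed=0):
    """Permutation Welch t-test.

    Recomputing the studentized statistic on each relabeling keeps the
    test robust under variance heterogeneity; the permutation null
    replaces the t-distribution, which is what makes the test behave
    well even at strict levels such as alpha = 0.005.
    """
    rng = random.Random(seed)
    observed = abs(welch_t(a, b))
    pooled = a + b
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(welch_t(pooled[:n_a], pooled[n_a:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Because the smallest attainable p-value is 1/(n_perm + 1), testing at α = 0.005 requires at least a few thousand permutations.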


Subjects
False Positive Reactions; Models, Statistical; Statistical Distributions; Computer Simulation; Humans; Probability
17.
G3 (Bethesda); 11(4), 2021 Apr 15.
Article in English | MEDLINE | ID: mdl-33638985

ABSTRACT

Quantitative trait loci (QTL) hotspots (genomic locations enriched in QTL) are a common and notable feature when collecting many QTL for various traits in many areas of biological study. QTL hotspots are important and attractive since they are highly informative and may harbor genes for the quantitative traits. So far, the current statistical methods for QTL hotspot detection use either the individual-level data from genetical genomics experiments or the summarized data from public QTL databases to proceed with the detection analysis. These methods may suffer from the problems of ignoring the correlation structure among traits, neglecting the magnitude of LOD scores for the QTL, or paying a very high computational cost, which often lead, respectively, to the detection of excessive spurious hotspots, failure to discover biologically interesting hotspots composed of a small-to-moderate number of QTL with strong LOD scores, and computational intractability during the detection process. In this article, we describe a statistical framework that can handle both types of data and address all of these problems at once for QTL hotspot detection. Our statistical framework operates directly on the QTL matrix, and hence has a very low computational cost, and is deployed to take advantage of the QTL mapping results for assisting the detection analysis. Two special devices, trait grouping and the top γn,α profile, are introduced into the framework. Trait grouping attempts to group the traits controlled by closely linked or pleiotropic QTL into the same trait groups, and randomly allocates these QTL together across the genomic positions separately by trait group to account for the correlation structure among traits, so as to obtain much stricter thresholds and dismiss spurious hotspots. The top γn,α profile is designed to outline the LOD-score pattern of QTL in a hotspot across the different hotspot architectures, so that it can serve to identify and characterize the types of QTL hotspots with varying sizes and LOD-score distributions. Real examples, numerical analysis, and a simulation study are performed to validate our statistical framework, investigate its detection properties, and compare it with the current methods in QTL hotspot detection. The results demonstrate that the proposed statistical framework can effectively accommodate the correlation structure among traits, identify the types of hotspots, and still keep the notable features of easy implementation and fast computation for practical QTL hotspot detection.


Subjects
Quantitative Trait Loci; Chromosome Mapping; Computer Simulation; Lod Score; Phenotype
18.
J Biopharm Stat; 31(3): 352-361, 2021 May 04.
Article in English | MEDLINE | ID: mdl-33347337

ABSTRACT

The weighted log-rank class is a widely used class of two-sample tests for clustered data. Clustered data with censored failure times often arise in tumorigenicity investigations and clinical trials. The randomized block design is an important design that reduces both unintentional bias and selection bias. Accordingly, the p-values of the null permutation distribution of the weighted log-rank class for clustered data are approximated using the double saddlepoint approximation technique. Comprehensive simulation studies are carried out to appraise the accuracy of the saddlepoint approximation. This approximation exhibits a significant improvement in precision over the asymptotic approximation, which motivates us to determine approximate confidence intervals for the treatment effect.


Subjects
Biometry; Computer Simulation; Humans; Survival Analysis
19.
Stat Med; 39(28): 4187-4200, 2020 Dec 10.
Article in English | MEDLINE | ID: mdl-32794222

ABSTRACT

Generalized additive models (GAMs) with bivariate smoothers are frequently used to map geographic disease risks in epidemiology studies. A challenge in identifying health disparities has been the lack of intuitive and computationally feasible methods to assess whether the pattern of spatial effects varies over time. In this research, we accommodate time-stratified smoothers within the GAM framework to estimate time-specific spatial risk patterns while borrowing information from confounding effects across time. A backfitting algorithm for model estimation is proposed, along with a permutation testing framework for assessing temporal heterogeneity of geospatial risk patterns across two or more time points. Simulation studies show that our proposed permuted mean squared difference (PMSD) test performs well with respect to type I error and power in various settings when compared with existing methods. The proposed model and PMSD test are used to examine geospatial risk patterns of patent ductus arteriosus (PDA) in the state of Massachusetts over 2003-2009. We show that there is variation over time in the spatial pattern of PDA risk, adjusting for other known risk factors, suggesting the presence of potential time-varying and space-related risk factors beyond those adjusted for.


Subjects
Algorithms; Computer Simulation; Humans
20.
Am J Epidemiol; 189(11): 1324-1332, 2020 Nov 02.
Article in English | MEDLINE | ID: mdl-32648891

ABSTRACT

Randomized controlled trials are crucial for the evaluation of interventions such as vaccinations, but the design and analysis of these studies during infectious disease outbreaks is complicated by statistical, ethical, and logistical factors. Attempts to resolve these complexities have led to the proposal of a variety of trial designs, including individual randomization and several types of cluster randomization designs: parallel-arm, ring vaccination, and stepped wedge designs. Because of the strong time trends present in infectious disease incidence, however, methods generally used to analyze stepped wedge trials might not perform well in these settings. Using simulated outbreaks, we evaluated various designs and analysis methods, including recently proposed methods for analyzing stepped wedge trials, to determine the statistical properties of these methods. While new methods for analyzing stepped wedge trials can provide some improvement over previous methods, we find that they still lag behind parallel-arm cluster-randomized trials and individually randomized trials in achieving adequate power to detect intervention effects. We also find that these methods are highly sensitive to the weighting of effect estimates across time periods. Despite the value of new methods, stepped wedge trials still have statistical disadvantages compared with other trial designs in epidemic settings.


Subjects
Biometry/methods; Data Interpretation, Statistical; Disease Outbreaks/statistics & numerical data; Models, Statistical; Randomized Controlled Trials as Topic/statistics & numerical data; Cluster Analysis; Computer Simulation; Humans; Randomized Controlled Trials as Topic/methods; Research Design