Results 1 - 20 of 31
1.
Biom J ; 66(1): e2300077, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857533

ABSTRACT

P-values that are derived from continuously distributed test statistics are typically uniformly distributed on (0,1) under least favorable parameter configurations (LFCs) in the null hypothesis. Conservativeness of a p-value P (meaning that P is under the null hypothesis stochastically larger than uniform on (0,1)) can occur if the test statistic from which P is derived is discrete, or if the true parameter value under the null is not an LFC. To deal with both of these sources of conservativeness, we present two approaches utilizing randomized p-values. We illustrate their effectiveness for testing a composite null hypothesis under a binomial model. We also give an example of how the proposed p-values can be used to test a composite null in group testing designs. We find that the proposed randomized p-values are less conservative compared to nonrandomized p-values under the null hypothesis, but that they are stochastically not smaller under the alternative. The problem of establishing the validity of randomized p-values has received attention in previous literature. We show that our proposed randomized p-values are valid under various discrete statistical models, which are such that the distribution of the corresponding test statistic belongs to an exponential family. The behavior of the power function for the tests based on the proposed randomized p-values as a function of the sample size is also investigated. Simulations and a real data example are used to compare the different considered p-values.
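The randomized p-value construction for a discrete test statistic, as discussed in this abstract, can be sketched for the binomial example as follows (an illustrative sketch only; the function names and interface are hypothetical, and the paper's exact construction may differ):

```python
import random
from math import comb

def binom_pmf(k, n, p):
    """Binomial(n, p) probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def randomized_p_value(t, n, p0, u=None):
    """Randomized p-value for H0: p <= p0 vs H1: p > p0, given an
    observed count t from a Binomial(n, p) experiment.
    P = Pr(T > t) + U * Pr(T = t), evaluated at p = p0 (the LFC).
    Under p = p0 this is exactly uniform on (0, 1), which removes the
    conservativeness caused by the discreteness of T."""
    if u is None:
        u = random.random()  # external uniform randomization
    tail = sum(binom_pmf(k, n, p0) for k in range(t + 1, n + 1))
    atom = binom_pmf(t, n, p0)
    return tail + u * atom
```

With u = 1 this reduces to the usual conservative p-value Pr(T >= t); with u = 0 it gives the anti-conservative Pr(T > t).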


Subjects
Statistical Models , Sample Size
2.
Stat Med ; 42(17): 2944-2961, 2023 07 30.
Article in English | MEDLINE | ID: mdl-37173292

ABSTRACT

Modern high-throughput biomedical devices routinely produce data on a large scale, and the analysis of high-dimensional datasets has become commonplace in biomedical studies. However, given thousands or tens of thousands of measured variables in these datasets, extracting meaningful features poses a challenge. In this article, we propose a procedure to evaluate the strength of the associations between a nominal (categorical) response variable and multiple features simultaneously. Specifically, we propose a framework of large-scale multiple testing under arbitrary correlation dependency among test statistics. First, marginal multinomial regressions are performed for each feature individually. Second, we use an approach of multiple marginal models for each baseline-category pair to establish asymptotic joint normality of the stacked vector of the marginal multinomial regression coefficients. Third, we estimate the (limiting) covariance matrix between the estimated coefficients from all marginal models. Finally, our approach approximates the realized false discovery proportion of a thresholding procedure for the marginal p-values for each baseline-category logit pair. The proposed approach offers a sensible trade-off between the expected numbers of true and false findings. Furthermore, we demonstrate a practical application of the method on hyperspectral imaging data. This dataset is obtained by a matrix-assisted laser desorption/ionization (MALDI) instrument. MALDI demonstrates tremendous potential for clinical diagnosis, particularly for cancer research. In our application, the nominal response categories represent cancer (sub-)types.


Subjects
Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry , Humans , Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry/methods , Statistics as Topic
3.
Biom J ; 65(2): e2100328, 2023 02.
Article in English | MEDLINE | ID: mdl-36029271

ABSTRACT

Large-scale hypothesis testing has become a ubiquitous problem in high-dimensional statistical inference, with broad applications in various scientific disciplines. One relevant application is constituted by imaging mass spectrometry (IMS) association studies, where a large number of tests are performed simultaneously in order to identify molecular masses that are associated with a particular phenotype, for example, a cancer subtype. Mass spectra obtained from matrix-assisted laser desorption/ionization (MALDI) experiments are dependent when considered as statistical quantities. False discovery proportion (FDP) estimation and control under arbitrary dependency structure among test statistics is an active topic in modern multiple testing research. In this context, we are concerned with the evaluation of associations between the binary outcome variable (describing the phenotype) and multiple predictors derived from MALDI measurements. We propose an inference procedure in which the correlation matrix of the test statistics is utilized. The approach is based on multiple marginal models. Specifically, we fit a marginal logistic regression model for each predictor individually. Asymptotic joint normality of the stacked vector of the marginal regression coefficients is established under standard regularity assumptions, and their (limiting) correlation matrix is estimated. The proposed method extracts common factors from the resulting empirical correlation matrix. Finally, we estimate the realized FDP of a thresholding procedure for the marginal p-values. We demonstrate a practical application of the proposed workflow to MALDI IMS data in an oncological context.


Subjects
Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry , Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry/methods
4.
Biom J ; 64(2): 384-409, 2022 02.
Article in English | MEDLINE | ID: mdl-33464615

ABSTRACT

We are concerned with testing replicability hypotheses for many endpoints simultaneously. This constitutes a multiple test problem with composite null hypotheses. Traditional p-values, which are computed under least favorable parameter configurations (LFCs), are over-conservative in the case of composite null hypotheses. As demonstrated in prior work, this poses severe challenges in the multiple testing context, especially when one goal of the statistical analysis is to estimate the proportion π0 of true null hypotheses. Randomized p-values have been proposed to remedy this issue. In the present work, we discuss the application of randomized p-values in replicability analysis. In particular, we introduce a general class of statistical models for which valid, randomized p-values can be calculated easily. By means of computer simulations, we demonstrate that their usage typically leads to a much more accurate estimation of π0 than the LFC-based approach. Finally, we apply our proposed methodology to a real data example from genomics.


Subjects
Genomics , Statistical Models , Computer Simulation
5.
Biom J ; 61(1): 40-61, 2019 01.
Article in English | MEDLINE | ID: mdl-30003587

ABSTRACT

Multivariate multiple test procedures have received growing attention recently. This is due to the fact that data generated by modern applications typically are high-dimensional, but possess pronounced dependencies due to the technical mechanisms involved in the experiments. Hence, it is possible and often necessary to exploit these dependencies in order to achieve reasonable power. In the present paper, we express dependency structures in the most general manner, namely, by means of copula functions. One class of nonparametric copula estimators is constituted by Bernstein copulae. We extend previous statistical results regarding bivariate Bernstein copulae to the multivariate case and study their impact on multiple tests. In particular, we utilize them to derive asymptotic confidence regions for the family-wise error rate (FWER) of multiple test procedures that are empirically calibrated by making use of Bernstein copulae approximations of the dependency structure among the test statistics. This extends a similar approach by Stange et al. (2015) in the parametric case. A simulation study quantifies the gain in FWER level exhaustion and, consequently, power that can be achieved by exploiting the dependencies, in comparison with common threshold calibrations like the Bonferroni or Sidák corrections. Finally, we demonstrate an application of the proposed methodology to real-life data from insurance.


Subjects
Biometry/methods , Calibration , Multivariate Analysis , Nonparametric Statistics
6.
Biom J ; 64(2): 197, 2022 02.
Article in English | MEDLINE | ID: mdl-35152458
7.
Bioinformatics ; 31(22): 3577-83, 2015 Nov 15.
Article in English | MEDLINE | ID: mdl-26249812

ABSTRACT

MOTIVATION: When analyzing a case group of patients with ultra-rare disorders, the ethnicities are often diverse and the data quality might vary. The population substructure in the case group as well as the heterogeneous data quality can cause substantial inflation of test statistics and result in spurious associations in case-control studies if not properly adjusted for. Existing techniques to correct for confounding effects were developed especially for common variants and are not applicable to rare variants. RESULTS: We analyzed strategies to select suitable controls for cases, based on similarity metrics that vary in their weighting schemes. We simulated different disease entities on real exome data and show that a similarity-based selection scheme can help to reduce false positive associations and to optimize the performance of the statistical tests. Especially when data quality as well as ethnicities vary a lot in the case group, a matching approach that puts more weight on rare variants shows the best performance. We reanalyzed collections of unrelated patients with Kabuki make-up syndrome, Hyperphosphatasia with Mental Retardation syndrome, and Catel-Manzke syndrome, for which the disease genes were recently described. We show that rare variant association tests are more sensitive and specific in identifying the disease gene than intersection filters and should thus be considered as a favorable approach in analyzing even small patient cohorts. AVAILABILITY AND IMPLEMENTATION: Datasets used in our analysis are available at ftp://ftp.1000genomes.ebi.ac.uk./vol1/ftp/ CONTACT: peter.krawitz@charite.de SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Genetic Association Studies , Genetic Variation , Case-Control Studies , Data Accuracy , Disease/genetics , Ethnicity/genetics , Humans , ROC Curve , DNA Sequence Analysis
8.
Stat Appl Genet Mol Biol ; 14(4): 347-60, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26215535

ABSTRACT

Genetic association studies lead to simultaneous categorical data analysis. The sample for every genetic locus consists of a contingency table containing the numbers of observed genotype-phenotype combinations. Under case-control design, the row counts of every table are identical and fixed, while column counts are random. The aim of the statistical analysis is to test independence of the phenotype and the genotype at every locus. We present an objective Bayesian methodology for these association tests, which relies on the conjugacy of Dirichlet and multinomial distributions. Being based on the likelihood principle, the Bayesian tests avoid looping over all tables with given marginals. Making use of data generated by The Wellcome Trust Case Control Consortium (WTCCC), we illustrate that the ordering of the Bayes factors shows a good agreement with that of frequentist p-values. Furthermore, we deal with specifying prior probabilities for the validity of the null hypotheses, by taking linkage disequilibrium structure into account and exploiting the concept of effective numbers of tests. Application of a Bayesian decision theoretic multiple test procedure to the WTCCC data illustrates the proposed methodology. Finally, we discuss two methods for reconciling frequentist and Bayesian approaches to the multiple association test problem.
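The Dirichlet-multinomial conjugacy underlying these Bayes factors can be sketched for a single 2 × C case-control table as follows (a sketch under symmetric Dirichlet priors; the paper's exact prior specification may differ, and the function names are hypothetical):

```python
from math import lgamma, exp

def log_mv_beta(alpha):
    """Log multivariate beta: log B(a) = sum_i log G(a_i) - log G(sum_i a_i)."""
    return sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))

def bayes_factor_homogeneity(cases, controls, prior=1.0):
    """Bayes factor for H1 (cases and controls have different genotype
    distributions) against H0 (one common distribution), for a 2 x C
    table with fixed row totals and symmetric Dirichlet(prior) priors.
    By conjugacy, the multinomial coefficients cancel, leaving
    BF10 = B(a + n1) B(a + n2) / (B(a) B(a + n1 + n2))."""
    a = [prior] * len(cases)
    pooled = [x + y for x, y in zip(cases, controls)]
    log_bf = (log_mv_beta([p + x for p, x in zip(a, cases)])
              + log_mv_beta([p + x for p, x in zip(a, controls)])
              - log_mv_beta(a)
              - log_mv_beta([p + x for p, x in zip(a, pooled)]))
    return exp(log_bf)
```

BF10 > 1 indicates evidence for an association (different genotype distributions in cases and controls), in line with the likelihood principle: no looping over tables with fixed marginals is needed.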


Subjects
Bayes Theorem , Genetic Association Studies , Algorithms , Alleles , Computer Simulation , Genetic Loci , Genotype , Humans , Genetic Models , Statistical Models , Phenotype
9.
Stat Appl Genet Mol Biol ; 14(5): 497-505, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26506100

ABSTRACT

We are concerned with statistical inference for 2 × C × K contingency tables in the context of genetic case-control association studies. Multivariate methods based on asymptotic Gaussianity of vectors of test statistics require information about the asymptotic correlation structure among these test statistics under the global null hypothesis. In the case of C=2, we show that for a wide variety of test statistics this asymptotic correlation structure is given by the standardized linkage disequilibrium matrix of the K loci under investigation. Three popular choices of test statistics are discussed for illustration. In the case of C=3, the standardized composite linkage disequilibrium matrix is the limiting correlation matrix of the K locus-specific Cochran-Armitage trend test statistics.


Subjects
Genetic Association Studies , Algorithms , Case-Control Studies , Statistical Data Interpretation , Genetic Loci , Genetic Predisposition to Disease , Humans , Linkage Disequilibrium , Genetic Models , Statistical Models , Normal Distribution , Odds Ratio
10.
Nucleic Acids Res ; 40(6): 2426-31, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22127862

ABSTRACT

With the availability of next-generation sequencing (NGS) technology, it is expected that sequence variants may be called on a genomic scale. Here, we demonstrate that a deeper understanding of the distribution of the variant call frequencies at heterozygous loci in NGS data sets is a prerequisite for sensitive variant detection. We model the crucial steps in an NGS protocol as a stochastic branching process and derive a mathematical framework for the expected distribution of alleles at heterozygous loci before measurement, that is, before sequencing. We confirm our theoretical results by analyzing technical replicates of human exome data and demonstrate that the variance of allele frequencies at heterozygous loci is higher than expected under a simple binomial distribution. Due to this high variance, mutation callers relying on binomially distributed priors are less sensitive for heterozygous variants that deviate strongly from the expected mean frequency. Our results also indicate that error rates can be reduced to a greater degree by technical replicates than by increasing sequencing depth.
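The overdispersion phenomenon described here can be illustrated with a beta-binomial model, in which the per-locus allele fraction itself varies around 1/2 — a simplified stand-in for the branching-process model of the paper (illustrative only; the parameter values are arbitrary):

```python
import random

def binomial_draw(n, p, rng):
    """Number of reads carrying one allele at a locus of depth n."""
    return sum(1 for _ in range(n) if rng.random() < p)

def beta_binomial_draw(n, a, b, rng):
    """Beta-binomial: the per-locus allele fraction itself varies,
    mimicking the extra variance that the branching-process model
    predicts for heterozygous loci."""
    p = rng.betavariate(a, b)
    return binomial_draw(n, p, rng)

rng = random.Random(1)
depth = 100  # reads covering a heterozygous locus
draws_bin = [binomial_draw(depth, 0.5, rng) for _ in range(2000)]
draws_bb = [beta_binomial_draw(depth, 5, 5, rng) for _ in range(2000)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

For a pure binomial, the variance at depth 100 is 100 · 0.5 · 0.5 = 25; the beta-binomial draws show a much larger empirical variance, which is the kind of deviation the paper reports in real exome replicates.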


Subjects
Gene Frequency , High-Throughput Nucleotide Sequencing , DNA Sequence Analysis , Alleles , Exome , Heterozygote , Humans , Stochastic Processes
11.
Stat Appl Genet Mol Biol ; 11(4)2012 Jul 27.
Article in English | MEDLINE | ID: mdl-22850061

ABSTRACT

We study exact tests for (2 x 2) and (2 x 3) contingency tables, in particular exact chi-squared tests and exact tests of Fisher type. In practice, these tests are typically carried out without randomization, leading to reproducible results but not exhausting the significance level. We discuss how this can lead to methodological and practical issues in a multiple testing framework when many tables are under consideration simultaneously, as in genetic association studies. Realized randomized p-values are proposed as a solution that is especially useful for data-adaptive (plug-in) procedures. These p-values allow the proportion of true null hypotheses to be estimated much more accurately than their non-randomized counterparts. Moreover, we address the problem of positively correlated p-values for association by considering techniques to reduce multiplicity by estimating the "effective number of tests" from the correlation structure. An algorithm is provided that bundles all these aspects, efficient computer implementations are made available, a small-scale simulation study is presented, and two real data examples are shown.
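The "effective number of tests" idea mentioned above is typically computed from the eigenvalues of the correlation matrix of the test statistics; one widely used estimator of Cheverud/Nyholt type can be sketched as follows (a sketch only, not necessarily the exact formula used in the paper):

```python
def effective_number_of_tests(eigenvalues):
    """Meff = 1 + (M - 1) * (1 - Var(lambda) / M), where lambda are the
    eigenvalues of an M x M correlation matrix (their mean is 1, since
    the trace equals M). Independent tests give Meff = M; perfectly
    correlated tests give Meff = 1."""
    m = len(eigenvalues)
    var_lam = sum((lam - 1.0) ** 2 for lam in eigenvalues) / (m - 1)
    return 1.0 + (m - 1) * (1.0 - var_lam / m)
```

A Bonferroni-type correction then divides the significance level by Meff instead of by the nominal number of tests M, reducing the multiplicity burden when tests are positively correlated.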


Subjects
Algorithms , Genetic Association Studies , Case-Control Studies , Chi-Square Distribution , Computational Biology/methods , Computational Biology/standards , Computer Simulation , Gene Expression Profiling , Genetic Association Studies/statistics & numerical data , Genetic Markers/physiology , High-Throughput Screening Assays/methods , High-Throughput Screening Assays/statistics & numerical data , Humans , Random Allocation , Research Design
12.
Biom J ; 55(3): 463-77, 2013 May.
Article in English | MEDLINE | ID: mdl-23378199

ABSTRACT

Connecting multiple testing with binary classification, we derive a false discovery rate-based classification approach for two-class mixture models, where the available data (represented as feature vectors) for each individual comparison take values in R^d for some d ≥ 1 and may exhibit certain forms of autocorrelation. This generalizes previous findings for the independent case in dimension d = 1. Two resulting classification procedures are described which allow for incorporating prior knowledge about class probabilities and for user-supplied weighting of the severity of misclassifying a member of the "0"-class as "1" and vice versa. The key mathematical tools employed are multivariate estimation methods for probability density functions or density ratios. We compare the two algorithms with respect to their theoretical properties and their performance in practice. Computer simulations indicate that both can successfully be applied to autocorrelated time series data with moving average structure. Our approach was inspired by, and its practicability will be demonstrated on, applications from the field of brain-computer interfacing and the processing of electroencephalography data.


Subjects
Algorithms , Statistical Data Interpretation , Statistical Models , Brain/physiology , Computer Simulation , Electroencephalography/methods , False Positive Reactions , Humans , Computer-Assisted Signal Processing
13.
PLoS One ; 18(2): e0280503, 2023.
Article in English | MEDLINE | ID: mdl-36724145

ABSTRACT

We present a new approach to modeling the future development of extreme temperatures globally and on the time-scale of several centuries by using non-stationary generalized extreme value distributions in combination with logistic functions. The statistical models we propose are applied to annual maxima of daily temperature data from fully coupled climate models spanning the years 1850 through 2300. They enable us to investigate how extremes will change depending on the geographic location not only in terms of the magnitude, but also in terms of the timing of the changes. We find that in general, changes in extremes are stronger and more rapid over land masses than over oceans. In addition, our statistical models allow for changes in the different parameters of the fitted generalized extreme value distributions (a location, a scale and a shape parameter) to take place independently and at varying time periods. Different statistical models are presented and the Bayesian Information Criterion is used for model selection. It turns out that in most regions, changes in mean and variance take place simultaneously while the shape parameter of the distribution is predicted to stay constant. In the Arctic region, however, a different picture emerges: There, climate variability is predicted to increase rather quickly in the second half of the twenty-first century, probably due to the melting of ice, whereas changes in the mean values take longer and come into effect later.
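The combination of a generalized extreme value distribution with a logistic transition in its location parameter, as described above, can be sketched like this (a minimal sketch; the symbols and parameter values are illustrative, not those fitted in the paper):

```python
from math import exp

def logistic(t, t0, k):
    """Logistic transition from 0 to 1, centered at t0 with steepness k."""
    return 1.0 / (1.0 + exp(-k * (t - t0)))

def gev_cdf(x, mu, sigma, xi):
    """CDF of the generalized extreme value distribution with
    location mu, scale sigma, and shape xi."""
    if abs(xi) < 1e-12:  # Gumbel limit
        return exp(-exp(-(x - mu) / sigma))
    z = 1.0 + xi * (x - mu) / sigma
    if z <= 0.0:
        return 0.0 if xi > 0 else 1.0  # outside the support
    return exp(-z ** (-1.0 / xi))

def nonstationary_exceedance(x, year, mu0, dmu, t0, k, sigma, xi):
    """P(annual maximum > x) in a given year, with the location
    parameter following a logistic transition from mu0 to mu0 + dmu."""
    mu = mu0 + dmu * logistic(year, t0, k)
    return 1.0 - gev_cdf(x, mu, sigma, xi)
```

As the logistic transition shifts the location parameter upward over the centuries, the exceedance probability of a fixed temperature threshold increases; scale and shape could be given their own transitions in the same way.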


Subjects
Climate , Statistical Models , Temperature , Bayes Theorem , Climate Change
14.
Neuroimage ; 56(2): 387-99, 2011 May 15.
Article in English | MEDLINE | ID: mdl-21172442

ABSTRACT

Machine learning and pattern recognition algorithms have in recent years developed into a workhorse of brain imaging and the computational neurosciences, as they are instrumental for mining vast amounts of neural data of ever-increasing measurement precision and for detecting minuscule signals in an overwhelming noise floor. They provide the means to decode and characterize task-relevant brain states and to distinguish them from non-informative brain signals. While this machinery has undoubtedly helped to gain novel biological insights, it also holds the danger of potential unintentional misuse. Ideally, machine learning techniques should be usable by any non-expert; unfortunately, they typically are not. Overfitting and other pitfalls may occur and lead to spurious and nonsensical interpretations. The goal of this review is therefore to provide an accessible and clear introduction to the strengths, and also the inherent dangers, of machine learning usage in the neurosciences.


Subjects
Algorithms , Artificial Intelligence , Brain , Computational Biology/methods , Neurological Models , Humans
15.
Neuroimage ; 54(2): 851-9, 2011 Jan 15.
Article in English | MEDLINE | ID: mdl-20832477

ABSTRACT

We propose a novel approach to solving the electro-/magnetoencephalographic (EEG/MEG) inverse problem which is based upon a decomposition of the current density into a small number of spatial basis fields. It is designed to recover multiple sources of possibly different extent and depth, while being invariant with respect to phase angles and rotations of the coordinate system. We demonstrate the method's ability to reconstruct simulated sources of random shape and show that the accuracy of the recovered sources can be increased, when interrelated field patterns are co-localized. Technically, this leads to large-scale mathematical problems, which are solved using recent advances in convex optimization. We apply our method for localizing brain areas involved in different types of motor imagery using real data from Brain-Computer Interface (BCI) sessions. Our approach based on single-trial localization of complex Fourier coefficients yields class-specific focal sources in the sensorimotor cortices.


Subjects
Brain Mapping/methods , Brain/physiology , Electroencephalography , Magnetoencephalography , Computer-Assisted Signal Processing , Algorithms , Humans
16.
Neuroimage ; 51(4): 1303-9, 2010 Jul 15.
Article in English | MEDLINE | ID: mdl-20303409

ABSTRACT

Brain-computer interfaces (BCIs) allow a user to control a computer application by brain activity as measured, e.g., by electroencephalography (EEG). After about 30 years of BCI research, the success of control achieved by means of a BCI system still varies greatly between subjects. For about 20% of potential users, the obtained accuracy does not reach the level criterion, meaning that BCI control is not accurate enough to control an application. The determination of factors that may serve to predict BCI performance, and the development of methods to quantify a predictor value from psychological and/or physiological data, serve two purposes: a better understanding of the 'BCI-illiteracy phenomenon', and avoidance of a costly and eventually frustrating training procedure for participants who might not obtain BCI control. Furthermore, such predictors may lead to approaches to counteract BCI illiteracy. Here, we propose a neurophysiological predictor of BCI performance which can be determined from a two-minute recording of a 'relax with eyes open' condition using two Laplacian EEG channels. A correlation of r = 0.53 between the proposed predictor and BCI feedback performance was obtained on a large database with N = 80 BCI-naive participants in their first session with the Berlin brain-computer interface (BBCI) system, which operates on modulations of sensorimotor rhythms (SMRs).


Subjects
Electroencephalography , Motor Cortex/physiology , Somatosensory Cortex/physiology , User-Computer Interface , Adult , Algorithms , Artifacts , Psychological Biofeedback , Calibration , Computer Literacy , Cues (Psychology) , Statistical Data Interpretation , Female , Functional Laterality/physiology , Hand/innervation , Hand/physiology , Humans , Male , Photic Stimulation , Psychomotor Performance/physiology
17.
Stat Med ; 29(22): 2347-58, 2010 Sep 30.
Article in English | MEDLINE | ID: mdl-20641143

ABSTRACT

We study the link between two quality measures of SNP (single nucleotide polymorphism) data in genome-wide association (GWA) studies, namely per-SNP call rates (CR) and p-values for testing Hardy-Weinberg equilibrium (HWE). The aim is to improve these measures by applying methods based on realized randomized p-values, the false discovery rate, and estimates of the proportion of false hypotheses. While exact non-randomized conditional p-values for testing HWE cannot be recommended for estimating the proportion of false hypotheses, their realized randomized counterparts should be used. P-values corresponding to the asymptotic unconditional chi-square test lead to reasonable estimates only if SNPs with low minor allele frequency are excluded. We provide an algorithm to compute the probability that SNPs violate HWE given the observed CR, which yields an improved measure of data quality. The proposed methods are applied to SNP data from the KORA (Cooperative Health Research in the Region of Augsburg, Southern Germany) 500K project, a GWA study in a population-based sample genotyped by Affymetrix GeneChip 500K arrays using the calling algorithm BRLMM 1.4.0. We show that all SNPs with CR = 100 per cent are nearly in perfect HWE, which suggests that the population meets the conditions required for HWE, at least for these SNPs. Moreover, we show that the proportion of SNPs not in HWE increases with decreasing CR. We conclude that using a single threshold for judging HWE p-values without taking the CR into account is problematic. Instead, we recommend a stratified analysis with respect to CR.


Subjects
Bayes Theorem , Statistical Data Interpretation , Genome-Wide Association Study/methods , Single Nucleotide Polymorphism/genetics , Adult , Aged , Cardiovascular Diseases/genetics , Gene Frequency , Genotype , Germany , Humans , Middle Aged
18.
Brain Topogr ; 23(2): 186-93, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20162347

ABSTRACT

One crucial question in the design of electroencephalogram (EEG)-based brain-computer interface (BCI) experiments is the selection of EEG channels. While a setup with few channels is more convenient and requires less preparation time, a dense placement of electrodes provides more detailed information and hence could lead to better classification performance. Here, we investigate this question for a specific setting: a BCI that uses the popular CSP algorithm to classify voluntary modulations of sensorimotor rhythms (SMR). In a first approach, 13 different fixed channel configurations are compared to the full one consisting of 119 channels. The configuration with 48 channels turned out to be the best one, while configurations with fewer channels, from 32 down to 8, performed not significantly worse than the best configuration in cases where only few training trials are available. In a second approach, an optimal channel configuration is obtained by an iterative procedure in the spirit of stepwise variable selection with nonparametric multiple comparisons. Surprisingly, this second approach selected a setting with 22 channels centered over the motor areas. Thanks to the acquisition of a large data set recorded from 80 novice participants using 119 EEG channels, the results of this study can be expected to have a high degree of generalizability.


Subjects
Brain/physiology , Electroencephalography/instrumentation , Electroencephalography/methods , User-Computer Interface , Adult , Algorithms , Feedback , Female , Humans , Imagination/physiology , Male , Motor Activity/physiology , Motor Cortex/physiology , Computer-Assisted Signal Processing
19.
Pain Med ; 10(2): 393-400, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19207236

ABSTRACT

OBJECTIVE: The prevalence of neuropathic pain in prediabetes and the associated risk factors in the general population are not known. The aim of this study was to determine the prevalence and risk factors of neuropathic pain in subjects with diabetes, impaired fasting glucose (IFG), impaired glucose tolerance (IGT), or normal glucose tolerance (NGT). DESIGN: Survey of neuropathic painful polyneuropathy assessed by the Michigan Neuropathy Screening Instrument using its pain-relevant questions and an examination score cutpoint >2 in a diabetic and control population. An oral glucose tolerance test was performed in the control subjects. SETTING: Population of the city of Augsburg and two surrounding counties. PATIENTS: Subjects with diabetes (N = 195) and controls matched for age and sex (N = 198) from the population-based MONItoring trends and determinants in CArdiovascular/Cooperative Research in the Region of Augsburg (MONICA/KORA) Augsburg Surveys S2 and S3 aged 25-74 years. RESULTS: Among the controls, 46 (23.2%) had IGT (either isolated or combined with IFG), 71 (35.9%) had isolated IFG, and 81 had NGT. The prevalence (95% confidence interval) of neuropathic pain was 13.3 (8.9-18.9)% in the diabetic subjects, 8.7 (2.4-20.0)% in those with IGT, 4.2 (0.9-11.9)% in those with IFG, and 1.2 (0.03-6.7)% in those with NGT (overall P = 0.003). In the entire population (N = 393), age, weight, peripheral arterial disease (PAD), and diabetes were risk factors significantly associated with neuropathic pain, while in the diabetic group, these factors were age, weight, and PAD (all P < 0.05). CONCLUSIONS: The prevalence of neuropathic pain is two- to threefold increased in subjects with IGT and diabetes compared with those with isolated IFG. Apart from diabetes, the predominant risk factors are age, obesity, and PAD.


Subjects
Diabetes Complications/epidemiology , Neuralgia/epidemiology , Prediabetic State/complications , Adult , Aged , Body Mass Index , Female , Glucose Intolerance/complications , Glucose Tolerance Test , Humans , Male , Middle Aged , Neuralgia/etiology , Peripheral Vascular Diseases/complications , Prevalence , Risk Factors
20.
PLoS One ; 11(2): e0149016, 2016.
Article in English | MEDLINE | ID: mdl-26914144

ABSTRACT

Signal detection in functional magnetic resonance imaging (fMRI) inherently involves the problem of testing a large number of hypotheses. A popular strategy to address this multiplicity is control of the false discovery rate (FDR). In this work we consider the case where prior knowledge is available to partition the set of all hypotheses into disjoint subsets or families, e.g., by a priori knowledge of the functionality of certain regions of interest. If the proportion of true null hypotheses differs between families, this structural information can be used to increase statistical power. We propose a two-stage multiple test procedure which first excludes those families from the analysis for which there is no strong evidence of containing true alternatives. We show control of the family-wise error rate at this first stage of testing. Then, at the second stage, we proceed to test the hypotheses within each non-excluded family and obtain asymptotic control of the FDR within each family. Our main mathematical result is that this two-stage strategy implies asymptotic control of the FDR with respect to all hypotheses. In simulations we demonstrate the increased power of this new procedure in comparison with established procedures in situations with highly unbalanced families. Finally, we apply the proposed method to simulated and to real fMRI data.
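The two-stage logic described in this abstract — screen families first, then control the FDR within the surviving families — can be sketched as follows (the screening rule shown here, a Bonferroni test on each family's minimum p-value, is a simplification; the paper's first-stage test may differ, and the function names are hypothetical):

```python
def benjamini_hochberg(pvals, q):
    """Indices of hypotheses rejected by the Benjamini-Hochberg
    step-up procedure at FDR level q."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank  # largest rank passing the step-up criterion
    return set(order[:k])

def two_stage_family_test(families, alpha_screen, q):
    """Stage 1: keep only families whose minimum p-value survives a
    Bonferroni test at level alpha_screen (FWER control across the
    screen). Stage 2: run BH at level q within every kept family."""
    kept = {name: ps for name, ps in families.items()
            if min(ps) <= alpha_screen / len(ps)}
    return {name: benjamini_hochberg(ps, q) for name, ps in kept.items()}
```

Families with no strong evidence are dropped entirely, so the remaining per-family BH runs spend the FDR budget only where true alternatives are plausible — the source of the power gain with unbalanced families.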


Subjects
Computer-Assisted Image Processing/methods , Magnetic Resonance Imaging , Algorithms , Brain/physiology , False Positive Reactions , Nonlinear Dynamics