Results 1 - 20 of 650
1.
Am J Hum Genet; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39362217

ABSTRACT

Recent positive selection can result in an excess of long identity-by-descent (IBD) haplotype segments overlapping a locus. The statistical methods that we propose here address three major objectives in studying selective sweeps: scanning for regions of interest, identifying possible sweeping alleles, and estimating a selection coefficient s. First, we implement a selection scan to locate regions with excess IBD rates. Second, we estimate the allele frequency and location of an unknown sweeping allele by aggregating over variants that are more abundant in an inferred outgroup with excess IBD rate versus the rest of the sample. Third, we propose an estimator for the selection coefficient and quantify uncertainty using the parametric bootstrap. Comparing against state-of-the-art methods in extensive simulations, we show that our methods are more precise at estimating s when s≥0.015. We also show that our 95% confidence intervals contain s in nearly 95% of our simulations. We apply these methods to study positive selection in European ancestry samples from the Trans-Omics for Precision Medicine project. We analyze eight loci where IBD rates are more than four standard deviations above the genome-wide median, including LCT where the maximum IBD rate is 35 standard deviations above the genome-wide median. Overall, we present robust and accurate approaches to study recent adaptive evolution without knowing the identity of the causal allele or using time series data.
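
A minimal R sketch of the parametric-bootstrap step for the selection coefficient, assuming hypothetical helpers simulate_ibd() (simulates IBD data under a given s) and estimate_s() (the point estimator); neither name comes from the paper:

# Parametric-bootstrap percentile CI for a selection coefficient s.
# simulate_ibd() and estimate_s() are hypothetical placeholders, not the
# authors' functions.
boot_ci_s <- function(s_hat, n_boot = 500, level = 0.95) {
  s_star <- replicate(n_boot, estimate_s(simulate_ibd(s = s_hat)))
  alpha <- 1 - level
  quantile(s_star, probs = c(alpha / 2, 1 - alpha / 2))
}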

2.
Theor Popul Biol; 155: 1-9, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38000513

ABSTRACT

By quantifying key life history parameters of a population, such as its growth rate, longevity, and generation time, researchers and administrators can obtain valuable insights into its dynamics. Although point estimates of demographic parameters have been available since the inception of demography as a scientific discipline, the construction of confidence intervals has typically relied on approximations through series expansions or on computationally intensive techniques. This study introduces the first mathematical expressions for calculating confidence intervals for the aforementioned life history traits when individuals are unidentifiable and the data are presented as a life table. The key finding is the accurate estimation of the confidence interval for r, the instantaneous growth rate, which is tested using Monte Carlo simulations with four arbitrary discrete distributions. In comparison with the bootstrap method, the proposed interval construction method proves more efficient, particularly for experiments with a total offspring size below 400. We discuss how to handle cases where data are organized in extended life tables or as a matrix of vital rates. We have developed and provided accompanying code to facilitate these computations.
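
For context, the instantaneous growth rate r of a life table is the root of the Euler-Lotka equation, sum_x e^(-r x) l_x m_x = 1; a base-R sketch with illustrative life-table values (not the paper's data or interval method):

# Point estimate of r as the root of the Euler-Lotka equation.
# x: age classes; lx: survivorship; mx: fecundity (illustrative values).
x  <- 0:4
lx <- c(1.00, 0.80, 0.55, 0.30, 0.10)
mx <- c(0.0, 1.2, 1.8, 1.0, 0.3)
euler_lotka <- function(r) sum(exp(-r * x) * lx * mx) - 1
r_hat <- uniroot(euler_lotka, interval = c(-2, 2))$root
r_hat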


Subjects
Longevity, Population Growth, Humans, Confidence Intervals, Population Dynamics, Life Tables
3.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39282732

ABSTRACT

We develop a methodology for valid inference after variable selection in logistic regression when the responses are partially observed, that is, when one observes a set of error-prone testing outcomes instead of the true values of the responses. Aiming at selecting important covariates while accounting for missing information in the response data, we apply the expectation-maximization algorithm to compute maximum likelihood estimators subject to LASSO penalization. Subsequent to variable selection, we make inferences on the selected covariate effects by extending post-selection inference methodology based on the polyhedral lemma. Empirical evidence from our extensive simulation study suggests that our post-selection inference results are more reliable than those from naive inference methods that use the same data to perform variable selection and inference without adjusting for variable selection.


Subjects
Algorithms, Computer Simulation, Likelihood Functions, Humans, Logistic Models, Statistical Data Interpretation, Biometry/methods, Statistical Models
4.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536746

ABSTRACT

The paper extends the empirical likelihood (EL) approach of Liu et al. to a new and very flexible family of latent class models for capture-recapture data, also allowing for serial dependence on previous capture history, conditionally on latent type and covariates. The EL approach makes it possible to estimate the overall population size directly, rather than by summing estimates conditional on covariate configurations. A Fisher-scoring algorithm for maximum likelihood estimation is proposed, and a more efficient alternative to the traditional EL approach for estimating the non-parametric component is introduced; this allows us to show that the mapping between the non-parametric distribution of the covariates and the probabilities of being never captured is one-to-one and strictly increasing. Asymptotic results are outlined, and a procedure for constructing profile likelihood confidence intervals for the population size is presented. Two examples based on real data illustrate the proposed approach, and a simulation study indicates that, when estimating the overall undercount, the method proposed here is substantially more efficient than the one based on conditional maximum likelihood estimation, especially when the sample size is not sufficiently large.


Subjects
Statistical Models, Likelihood Functions, Computer Simulation, Population Density, Sample Size
5.
Stat Med; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39080846

ABSTRACT

We often estimate a parameter of interest ψ when the identifying conditions involve a finite-dimensional nuisance parameter θ ∈ ℝ^d. Examples from causal inference are inverse probability weighting, marginal structural models, and structural nested models, which all lead to unbiased estimating equations. This article presents a consistent sandwich estimator for the variance of estimators ψ̂ that solve unbiased estimating equations involving θ, where θ is itself estimated by solving unbiased estimating equations. The article also presents four additional results for settings where θ̂ solves (partial) score equations and ψ does not depend on θ. This covers many causal inference settings where θ describes the treatment probabilities, missing data settings where θ describes the missingness probabilities, and measurement error settings where θ describes the error distribution. The four additional results are: (1) counter-intuitively, the asymptotic variance of ψ̂ is typically smaller when θ is estimated; (2) if estimating θ is ignored, the sandwich estimator for the variance of ψ̂ is conservative; (3) a consistent sandwich estimator for the variance of ψ̂ is provided; (4) if ψ̂ with the true θ plugged in is efficient, the asymptotic variance of ψ̂ does not depend on whether θ is estimated. To illustrate, we use observational data to calculate confidence intervals for (1) the effect of cazavi versus colistin on bacterial infections and (2) how the effect of antiretroviral treatment depends on its initiation time in HIV-infected patients.
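
A minimal base-R sketch (not the paper's code) of the stacked-estimating-equation sandwich for an inverse-probability-weighted mean, with the propensity parameters θ estimated jointly; the data-generating model is illustrative, and the numDeriv package supplies the numerical Jacobian:

# Sandwich variance for psi (IPW mean of Y under treatment) stacking the
# logistic score equations for theta with the IPW estimating equation.
library(numDeriv)
set.seed(1)
n <- 500
x <- rnorm(n)
a <- rbinom(n, 1, plogis(0.5 * x))   # treatment indicator
y <- 1 + x + a + rnorm(n)            # outcome
# Rows = subjects, columns = stacked estimating equations (theta0, theta1, psi).
ee <- function(par) {
  theta <- par[1:2]; psi <- par[3]
  p <- plogis(theta[1] + theta[2] * x)
  cbind(a - p, (a - p) * x, a * y / p - psi)
}
fit <- glm(a ~ x, family = binomial)          # solves the theta equations
psi_hat <- mean(a * y / fitted(fit))          # solves the psi equation
par_hat <- c(coef(fit), psi_hat)
A <- jacobian(function(p) colMeans(ee(p)), par_hat)  # "bread"
B <- crossprod(ee(par_hat)) / n                      # "meat"
V <- solve(A) %*% B %*% t(solve(A)) / n              # sandwich
sqrt(V[3, 3])   # SE of psi_hat accounting for estimation of theta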

6.
Stat Med; 43(8): 1577-1603, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38339872

ABSTRACT

Due to the dependency structure in the sampling process, adaptive trial designs create challenges in point and interval estimation and in the calculation of P-values. Optimal adaptive designs, which are designs where the parameters governing the adaptivity are chosen to maximize some performance criterion, suffer from the same problem. Various analysis methods which are able to handle this dependency structure have already been developed. In this work, we aim to give a comprehensive summary of these methods and show how they can be applied to the class of designs with planned adaptivity, of which optimal adaptive designs are an important member. The defining feature of these kinds of designs is that the adaptive elements are completely prespecified. This allows for explicit descriptions of the calculations involved, which makes it possible to evaluate different methods in a fast and accurate manner. We will explain how to do so, and present an extensive comparison of the performance characteristics of various estimators between an optimal adaptive design and its group-sequential counterpart.


Subjects
Research Design, Humans, Confidence Intervals, Sample Size
7.
BMC Med Res Methodol; 24(1): 256, 2024 Oct 29.
Article in English | MEDLINE | ID: mdl-39472775

ABSTRACT

BACKGROUND: Dichotomisation of statistical significance, rather than interpretation of effect sizes supported by confidence intervals, is a long-standing problem. METHODS: We distributed an online survey to clinical trial statisticians across the UK, Australia and Canada, asking about their experiences, perspectives and practices with respect to the interpretation of statistical findings from randomised trials. We report a descriptive analysis of the closed-ended questions and a thematic analysis of the open-ended questions. RESULTS: We obtained 101 responses across a broad range of career stages (24% professors; 51% senior lecturers; 22% junior statisticians) and areas of work (28% early phase trials; 44% drug trials; 38% health service trials). The majority (93%) believed that statistical findings should be interpreted by considering the (minimal) clinical importance of treatment effects, but many (61%) said quantifying clinically important effect sizes was difficult, and fewer (54%) followed this approach in practice. Thematic analysis identified several barriers to forming a consensus on the statistical interpretation of study findings, including the dynamics within teams, lack of knowledge or difficulties in communicating that knowledge, and external pressures. External pressures included the pressure to publish definitive findings and statistical review, which can sometimes be unhelpful but at times can be a saving grace. The concept of the minimally important difference was identified as a particularly poorly defined, even nebulous, construct which lies at the heart of much disagreement and confusion in the field. CONCLUSION: The majority of participating statisticians believed that it is important to interpret statistical findings in light of the clinically important effect size, but reported that this is difficult to operationalise. Reaching a consensus on the interpretation of a study is a social process involving disparate members of the research team along with editors and reviewers, as well as patients, who likely have a role in the elicitation of minimally important differences.


Subjects
Randomized Controlled Trials as Topic, Humans, Randomized Controlled Trials as Topic/statistics & numerical data, Randomized Controlled Trials as Topic/methods, Statistical Data Interpretation, Australia, Canada, Surveys and Questionnaires, United Kingdom, Research Personnel/statistics & numerical data, Research Design/statistics & numerical data
8.
J Biopharm Stat; 1-12, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38615346

ABSTRACT

The randomization design employed to gather the data is the basis for the exact distributions of permutation tests. One design frequently used in clinical trials to force balance and remove experimental bias is the truncated binomial design. This paper examines the exact distribution of the weighted log-rank class of tests for censored clustered medical data under the truncated binomial design. A double saddlepoint approximation for p-values in this class is developed under the truncated binomial design. With right-censored clustered data, the saddlepoint approximation's speed and accuracy, compared with the normal asymptotic approximation, make it practical to invert the weighted log-rank tests and find nominal 95% confidence intervals for the treatment effect.
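
For readers unfamiliar with the design, a base-R sketch of treatment assignment under a truncated binomial design: fair-coin assignment until one arm is full, deterministic afterwards (illustrative code, not the paper's):

# Two-arm truncated binomial design for a trial of even size n.
truncated_binomial <- function(n) {
  stopifnot(n %% 2 == 0)
  arm <- integer(n)
  for (i in seq_len(n)) {
    n1 <- sum(arm[seq_len(i - 1)] == 1)   # arm 1 count so far
    n0 <- (i - 1) - n1                    # arm 0 count so far
    arm[i] <- if (n1 == n / 2) 0 else if (n0 == n / 2) 1 else rbinom(1, 1, 0.5)
  }
  arm
}
table(truncated_binomial(20))  # always 10 per arm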

9.
Entropy (Basel); 26(1), 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38275503

ABSTRACT

The paper makes a case that the current discussions on replicability and the abuse of significance testing have overlooked a more general contributor to the untrustworthiness of published empirical evidence: the uninformed and recipe-like implementation of statistical modeling and inference. It is argued that this contributes to the untrustworthiness problem in several different ways, including [a] statistical misspecification, [b] unwarranted evidential interpretations of frequentist inference results, and [c] questionable modeling strategies that rely on curve-fitting. What is more, the alternative proposals to replace or modify frequentist testing, including [i] replacing p-values with observed confidence intervals and effect sizes, and [ii] redefining statistical significance, will not address the untrustworthiness-of-evidence problem, since they are equally vulnerable to [a]-[c]. The paper calls for distinguishing unduly data-dependent 'statistical results', such as a point estimate, a p-value, or an accept/reject H0 decision, from 'evidence for or against inferential claims'. The post-data severity (SEV) evaluation of accept/reject H0 results converts them into evidence for or against germane inferential claims. These claims can be used to address and elucidate several foundational issues, including (i) statistical vs. substantive significance, (ii) the large-n problem, and (iii) the replicability of evidence. The SEV perspective also sheds light on the impertinence of the proposed alternatives [i]-[ii] and oppugns the alleged arbitrariness of framing H0 and H1, which is often exploited to undermine the credibility of frequentist testing.
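
As a concrete illustration of the SEV evaluation, following the definition in the severity literature for a one-sided normal test with known σ (the numbers below are made up): after rejecting H0: μ ≤ 0, the severity of the claim μ > μ1 is P(X̄ ≤ x̄_obs; μ = μ1):

# Post-data severity for "mu > mu1" after rejecting H0: mu <= mu0.
sev <- function(xbar_obs, mu1, sigma, n) pnorm((xbar_obs - mu1) / (sigma / sqrt(n)))
# Example: mu0 = 0, sigma = 1, n = 100, observed mean 0.25 (test rejects).
mu1_grid <- seq(0, 0.5, by = 0.1)
round(sev(xbar_obs = 0.25, mu1 = mu1_grid, sigma = 1, n = 100), 3)
# Severity is near 1 for weak claims, 0.5 at the observed mean, and low for
# claims stronger than the data warrant.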

10.
Biometrics; 79(4): 3388-3401, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37459178

ABSTRACT

Varying coefficient models have been used to explore dynamic effects in many scientific areas, such as in medicine, finance, and epidemiology. As most existing models ignore the existence of zero regions, we propose a new soft-thresholded varying coefficient model, where the coefficient functions are piecewise smooth with zero regions. Our new modeling approach enables us to perform variable selection, detect the zero regions of selected variables, obtain point estimates of the varying coefficients with zero regions, and construct a new type of sparse confidence intervals that accommodate zero regions. We prove the asymptotic properties of the estimator, based on which we draw statistical inference. Our simulation study reveals that the proposed sparse confidence intervals achieve the desired coverage probability. We apply the proposed method to analyze a large-scale preoperative opioid study.
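
The zero regions come from soft-thresholding; a one-line base-R sketch of the operator S(z, λ) = sign(z)·max(|z| − λ, 0) that gives the model its name (presumably applied to smooth latent coefficient functions; this shows the operator only):

# Soft-thresholding operator: shrinks small values exactly to zero.
soft_threshold <- function(z, lambda) sign(z) * pmax(abs(z) - lambda, 0)
z <- seq(-2, 2, by = 0.5)
soft_threshold(z, lambda = 1)  # values in [-1, 1] map to exactly zero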


Subjects
Computer Simulation, Probability, Sample Size
11.
Stat Med; 42(11): 1822-1867, 2023 May 20.
Article in English | MEDLINE | ID: mdl-36866590

ABSTRACT

There are established methods for estimating disease prevalence with associated confidence intervals for complex surveys with perfect assays, or simple random sample surveys with imperfect assays. We develop and study methods for the complicated case of complex surveys with imperfect assays. The new methods use the melding method to combine gamma intervals for directly standardized rates and established adjustments for imperfect assays by estimating sensitivity and specificity. One of the new methods appears to have at least nominal coverage in all simulated scenarios. We compare our new methods to established methods in special cases (complex surveys with perfect assays or simple surveys with imperfect assays). In some simulations, our methods appear to guarantee coverage, while competing methods have much lower than nominal coverage, especially when overall prevalence is very low. In other settings, our methods are shown to have higher than nominal coverage. We apply our method to a seroprevalence survey of SARS-CoV-2 in undiagnosed adults in the United States between May and July 2020.
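
One standard ingredient of such adjustments (not the paper's full melding method) is the Rogan-Gladen correction of an apparent prevalence for assay sensitivity and specificity; a base-R sketch:

# Rogan-Gladen corrected prevalence, given apparent prevalence p_obs,
# sensitivity se, and specificity sp.
rogan_gladen <- function(p_obs, se, sp) (p_obs + sp - 1) / (se + sp - 1)
rogan_gladen(p_obs = 0.02, se = 0.95, sp = 0.99)  # ~0.0106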


Subjects
COVID-19, SARS-CoV-2, Adult, Humans, COVID-19/epidemiology, Prevalence, Seroepidemiologic Studies, Confidence Intervals
12.
J Int Neuropsychol Soc; 29(4): 397-405, 2023 May.
Article in English | MEDLINE | ID: mdl-35481552

ABSTRACT

OBJECTIVE: The Mayo Normative Studies (MNS) represent a robust dataset that provides demographically corrected norms for the Rey Auditory Verbal Learning Test (AVLT). We report the application of the MNS to an independent cohort to evaluate whether MNS norms accurately adjust for age, sex, and education differences in subjects from a different geographic region of the country. As secondary goals, we examined item-level patterns, the recognition benefit relative to delayed free recall, and derived AVLT confidence intervals (CIs) to facilitate clinical performance characterization. METHOD: Participants from the Emory Healthy Brain Study (463 women, 200 men) who were administered the AVLT were analyzed to demonstrate expected demographic group differences. AVLT scores were transformed using MNS normative correction to characterize the success of the MNS demographic adjustment. RESULTS: Expected demographic effects were observed across all primary raw AVLT scores. Depending on sample size, MNS normative adjustment either eliminated or minimized all observed statistically significant AVLT differences. Estimated CIs were broad, exceeding the standard deviation of each measure. The recognition performance benefit across age ranged from 2.7 words (SD = 2.3) in the 50-54-year-old group to 4.7 words (SD = 2.7) in the 70-75-year-old group. CONCLUSIONS: These findings demonstrate the generalizability of MNS normative correction to an independent sample from a different geographic region, with demographically adjusted performance differences largely eliminated and overall performance levels near the expected value of T = 50. A large recognition performance benefit is commonly observed in normal aging and by itself does not necessarily suggest a pathological retrieval deficit.


Subjects
Memory and Learning Tests, Mental Recall, Male, Humans, Female, Middle Aged, Aged, Neuropsychological Tests, Confidence Intervals, Recognition (Psychology), Verbal Learning, Reference Values
13.
Eur J Epidemiol; 1035-1042, 2023 Sep 16.
Article in English | MEDLINE | ID: mdl-37715928

ABSTRACT

OBJECTIVE: To examine time trends in statistical inference, the statistical reporting style of results, and effect measures in the abstracts of randomized controlled trials (RCTs). STUDY DESIGN AND SETTINGS: We downloaded 385,867 PubMed abstracts of RCTs from 1975 to 2021. We used text mining to detect reporting of statistical inference (p-values, confidence intervals, significance terminology), the statistical reporting style of results, and effect measures for binary outcomes, including time-to-event measures. We validated the text-mining algorithms on random samples of abstracts. RESULTS: A total of 320,676 abstracts contained statistical inference. The percentage of abstracts including statistical inference increased from 65% (1975) to 87% (2006) and then decreased slightly. From 1975 to 1990, sole reporting of statistical-significance language was predominant. Since 1990, reporting of p-values without confidence intervals has been the most common reporting style. Reporting of confidence intervals increased from 0.5% (1975) to 29% (2021). The two most common effect measures for binary outcomes were hazard ratios and odds ratios. Number needed to treat and number needed to harm are reported in less than 5% of abstracts with binary endpoints. CONCLUSIONS: Reporting of statistical inference in abstracts of RCTs has increased over time. Increasingly, p-values and confidence intervals are reported rather than just mentioning the presence of "statistical significance". The reporting of odds ratios comes with the liability that the untrained reader will interpret them as risk ratios, which is often not justified, especially in RCTs.
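
A sketch of the kind of text-mining rules involved; the regular expressions below are illustrative stand-ins, not the validated algorithms from the study:

# Flag statistical-reporting features in abstract text (toy examples).
abstracts <- c("The hazard ratio was 0.80 (95% CI 0.70-0.92).",
               "Differences were not significant (p = 0.12).")
has_pvalue <- grepl("\\bp\\s*[<=>]\\s*0?\\.\\d+", abstracts, ignore.case = TRUE)
has_ci     <- grepl("\\b9[05]\\s*%\\s*(CI|confidence interval)", abstracts,
                    ignore.case = TRUE)
has_signif <- grepl("significan(t|ce)", abstracts, ignore.case = TRUE)
data.frame(has_pvalue, has_ci, has_signif)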

14.
Multivariate Behav Res; 58(6): 1183-1186, 2023.
Article in English | MEDLINE | ID: mdl-37096594

ABSTRACT

The multivariate delta method was used by Yuan and Chan to estimate standard errors and confidence intervals for standardized regression coefficients. Jones and Waller extended this work to nonnormal data using Browne's asymptotic distribution-free (ADF) theory. Dudgeon further developed standard errors and confidence intervals employing heteroskedasticity-consistent (HC) estimators, which are robust to nonnormality and perform better in smaller samples than Jones and Waller's ADF technique. Despite these advancements, empirical research has been slow to adopt these methodologies, possibly because of the dearth of user-friendly software implementing them. In this manuscript we present the betaDelta and betaSandwich packages for the R statistical software environment. The betaDelta package implements both the normal-theory approach of Yuan and Chan and the ADF approach of Jones and Waller; the betaSandwich package implements the HC approach proposed by Dudgeon. The use of the packages is demonstrated with an empirical example. We think the packages will enable applied researchers to accurately assess the sampling variability of standardized regression coefficients.
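
As a point of comparison that needs no package, a base-R nonparametric bootstrap of standardized regression coefficients (this is not the betaDelta/betaSandwich API; the data are simulated):

# Percentile bootstrap CIs for standardized regression coefficients.
set.seed(42)
std_beta <- function(d) {
  fit <- lm(scale(y) ~ scale(x1) + scale(x2), data = d)
  coef(fit)[-1]   # drop the (zero) intercept
}
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- 0.4 * d$x1 + 0.2 * d$x2 + rnorm(n)
boot <- replicate(2000, std_beta(d[sample(n, replace = TRUE), ]))
apply(boot, 1, quantile, probs = c(0.025, 0.975))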


Subjects
Software, Confidence Intervals, Statistical Data Interpretation, Sample Size
15.
Sensors (Basel); 23(12), 2023 Jun 11.
Article in English | MEDLINE | ID: mdl-37420665

ABSTRACT

Raman-based distributed temperature sensing (DTS) is a valuable tool for field testing and validating heat transfer models in borehole heat exchanger (BHE) and ground source heat pump (GSHP) applications. However, temperature uncertainty is rarely reported in the literature. In this paper, a new calibration method is proposed for single-ended DTS configurations, along with a method to remove fictitious temperature drifts due to ambient air variations. The methods were implemented for a distributed thermal response test (DTRT) case study in an 800 m deep coaxial BHE. The results show that the calibration method and temperature drift correction are robust, with a temperature uncertainty increasing non-linearly from about 0.4 K near the surface to about 1.7 K at 800 m. The temperature uncertainty is dominated by the uncertainty in the calibrated parameters for depths greater than 200 m. The paper also offers insights into thermal features observed during the DTRT, including a heat flux inversion along the borehole depth and the slow temperature homogenization under circulation.


Subjects
Hot Temperature, Thermosensing, Temperature, Calibration, Uncertainty
16.
Sensors (Basel); 23(21), 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37960554

ABSTRACT

The paper explores the application of Steiner's most-frequent-value (MFV) statistical method in sensor data analysis. The MFV is introduced as a powerful tool for identifying the most common value in a dataset, even when data points are scattered, unlike traditional mode calculations. The paper underscores the MFV method's versatility in estimating the environmental gamma background (the natural level of gamma radiation present in the environment, typically originating from sources such as rocks, soil, and cosmic rays), making it useful in scenarios where traditional statistical methods are challenging to apply. It presents the MFV approach as a reliable technique for characterizing ambient radiation levels around large-scale experiments, such as the DEAP-3600 dark matter detector. Using the MFV alongside passive sensors such as thermoluminescent detectors and employing a bootstrapping approach, this study showcases its effectiveness in evaluating background radiation and its aptness for estimating confidence intervals. In summary, the paper underscores the importance of the MFV and bootstrapping as valuable statistical tools in scientific fields that involve sensor data analysis, helping to estimate the most common values and simplifying analysis in complex situations where reasonably confident interval estimates are needed. Our calculations based on MFV statistics and bootstrapping indicate that the ambient radiation level in the Cube Hall at SNOLAB is 35.19 µGy for 1342 h of exposure, with an uncertainty range of +3.41 to -3.59 µGy, corresponding to a 68.27% confidence level. In the vicinity of the DEAP-3600 water shielding, the ambient radiation level is approximately 34.80 µGy, with an uncertainty range of +3.58 to -3.48 µGy, also at a 68.27% confidence level. These findings offer crucial guidance for experimental design at SNOLAB, especially in the context of dark matter research.

17.
Biom J; 65(7): e2200082, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37199702

ABSTRACT

We propose a method to construct simultaneous confidence intervals for a parameter vector by inverting a series of randomization tests (RTs). The randomization tests are facilitated by an efficient multivariate Robbins-Monro procedure that takes the correlation information of all components into account. The estimation method does not require any distributional assumption on the population other than the existence of second moments. The resulting simultaneous confidence intervals are not necessarily symmetric about the point estimate of the parameter vector but possess the property of equal tails in all dimensions. In particular, we present the construction for the mean vector of a single population and for the difference between the mean vectors of two populations. Extensive simulations are conducted for numerical comparison with four other methods. We illustrate the application of the proposed method to testing bioequivalence with multiple endpoints on real data.


Subjects
Therapeutic Equivalency, Confidence Intervals, Random Allocation, Computer Simulation
18.
Behav Res Methods; 55(2): 474-490, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35292932

ABSTRACT

Researchers can generate bootstrap confidence intervals for some statistics in SPSS using the BOOTSTRAP command. However, this command can only be applied to selected procedures, and only to selected statistics within those procedures. We developed an extension command and prepared sample syntax files, based on existing approaches from the Internet, to illustrate how researchers can (a) generate a large number of nonparametric bootstrap samples, (b) run the desired analysis on all of these samples, and (c) form bootstrap confidence intervals for selected statistics using the OMS commands. We developed these tools to help researchers apply nonparametric bootstrapping to any statistic for which the method is appropriate, including statistics derived from other statistics, such as standardized effect size measures computed from t-test results. We also discuss how researchers can extend the tools to other statistics and scenarios they encounter.
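
The same resample-analyze-collect workflow, sketched in base R for a statistic of the kind mentioned above (Cohen's d from a two-group comparison; the data are simulated for illustration):

# Percentile bootstrap CI for Cohen's d.
set.seed(7)
g1 <- rnorm(40, mean = 0.5); g2 <- rnorm(40)
cohens_d <- function(a, b) {
  sp <- sqrt(((length(a) - 1) * var(a) + (length(b) - 1) * var(b)) /
             (length(a) + length(b) - 2))   # pooled SD
  (mean(a) - mean(b)) / sp
}
d_star <- replicate(2000, cohens_d(sample(g1, replace = TRUE),
                                   sample(g2, replace = TRUE)))
quantile(d_star, c(0.025, 0.975))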


Subjects
Confidence Intervals, Statistics as Topic
19.
BMC Genomics; 23(1): 491, 2022 Jul 06.
Article in English | MEDLINE | ID: mdl-35794534

ABSTRACT

BACKGROUND: To detect changes in biological processes, samples are often studied at several time points. We examined expression data measured at different developmental stages or, more broadly, historical data. Hence, the main assumption of our proposed methodology was independence between the examined samples over time. In addition, however, the examinations were clustered at each time point, because littermates from relatively few mother mice were measured at each developmental stage. As each examination was lethal, we had an independent data structure over the entire history but a dependent data structure at any particular time point. Over the course of these historical data, we wanted to identify abrupt changes in the parameter of interest - change points. RESULTS: In this study, we demonstrated the application of generalized hypothesis testing using a linear mixed effects model as a method to detect change points. The coefficients from the linear mixed model were used in multiple contrast tests, and the effect estimates were visualized with their respective simultaneous confidence intervals. The latter were used to determine the change point(s). In small simulation studies, we modelled different courses with abrupt changes and compared the influence of different contrast matrices. We found two contrasts, each capable of answering a different research question in change point detection: the Sequen contrast to detect individual change points and the McDermott contrast to find change points due to overall progression. We provide R code for direct use with the provided examples. The applicability of these tests to real experimental data was shown with in-vivo data from a preclinical study. CONCLUSION: Simultaneous confidence intervals estimated by multiple contrast tests, using the model fit from a linear mixed model, were capable of determining change points in clustered expression data. The confidence intervals directly delivered interpretable effect estimates representing the strength of the potential change point. Hence, scientists can define a biologically relevant threshold of effect strength depending on their research question. We found two rarely used contrasts best suited for detecting a possible change point: the Sequen and McDermott contrasts.
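
A minimal sketch of the described workflow using lme4 and multcomp, whose contrast machinery includes the "Sequen" and "McDermott" types; the variable names and simulated data are illustrative, not the paper's:

# Linear mixed model with a random intercept per mother, followed by a
# multiple contrast test with simultaneous confidence intervals.
library(lme4)
library(multcomp)
set.seed(3)
dat <- data.frame(
  time   = factor(rep(paste0("t", 1:4), each = 12)),  # 4 stages, 12 pups each
  mother = factor(rep(1:16, each = 3))                # 4 mothers per stage
)
# Jump between t2 and t3, plus a mother-level random effect.
dat$expression <- c(0, 0, 0.8, 0.8)[as.integer(dat$time)] +
  rnorm(16, sd = 0.2)[as.integer(dat$mother)] + rnorm(48, sd = 0.3)
fit <- lmer(expression ~ time + (1 | mother), data = dat)
confint(glht(fit, linfct = mcp(time = "Sequen")))  # successive differences;
# swap "Sequen" for "McDermott" to target change points in overall progression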


Subjects
Linear Models, Animals, Computer Simulation, Mice
20.
Biostatistics; 22(1): 181-197, 2021 Jan 28.
Article in English | MEDLINE | ID: mdl-31301173

ABSTRACT

The goal of expression quantitative trait loci (eQTL) studies is to identify the genetic variants that influence the expression levels of the genes in an organism. High-throughput technology has made such studies possible: in a given tissue sample, it enables us to quantify the expression levels of approximately 20,000 genes and to record the alleles present at millions of genetic polymorphisms. While obtaining these data is relatively cheap once a specimen is at hand, obtaining human tissue remains a costly endeavor: eQTL studies continue to be based on relatively small sample sizes, a limitation that is particularly serious for tissues such as brain and liver - often the organs of most immediate medical relevance. Given the high-dimensional nature of these datasets and the large number of hypotheses tested, the scientific community adopted multiplicity adjustment procedures early on. These testing procedures primarily control the false discovery rate for the identification of genetic variants with influence on expression levels. In contrast, a problem that has not received much attention to date is that of providing estimates of the effect sizes associated with these variants in a way that accounts for the considerable amount of selection. Yet, given the difficulty of procuring additional samples, this challenge is of practical importance. We illustrate in this work how the recently developed conditional inference approach can be deployed to obtain confidence intervals for eQTL effect sizes with reliable coverage. The procedure we propose is based on a randomized hierarchical strategy with a two-fold contribution: (1) it reflects the selection steps typically adopted in state-of-the-art investigations, and (2) it introduces the use of randomness instead of data splitting to maximize the use of available data. Analysis of the GTEx Liver dataset (v6) suggests that naively obtained confidence intervals would likely not cover the true values of the effect sizes and that the number of local genetic polymorphisms influencing the expression level of genes might be underestimated.


Subjects
Genome-Wide Association Study, Quantitative Trait Loci, Alleles, Confidence Intervals, Humans, Single Nucleotide Polymorphism/genetics, Quantitative Trait Loci/genetics, Sample Size