Prior-Preconditioned Conjugate Gradient Method for Accelerated Gibbs Sampling in "Large n, Large p" Bayesian Sparse Regression.
Nishimura, Akihiko; Suchard, Marc A.
Affiliation
  • Nishimura A; Department of Biostatistics, Johns Hopkins University, Baltimore, MD.
  • Suchard MA; Departments of Biomathematics, Biostatistics, and Human Genetics, University of California-Los Angeles, Los Angeles, CA.
J Am Stat Assoc ; 118(544): 2468-2481, 2023.
Article in English | MEDLINE | ID: mdl-38550789
ABSTRACT
In a modern observational study based on healthcare databases, the numbers of observations and of predictors typically range in the order of 10⁵–10⁶ and 10⁴–10⁵, respectively. Despite the large sample size, data rarely provide sufficient information to reliably estimate such a large number of parameters. Sparse regression techniques provide potential solutions, one notable approach being the Bayesian method based on shrinkage priors. In the "large n and large p" setting, however, the required posterior computation encounters a bottleneck at repeated sampling from a high-dimensional Gaussian distribution, whose precision matrix Φ is expensive to compute and factorize. In this article, we present a novel algorithm to speed up this bottleneck based on the following observation: we can cheaply generate a random vector b such that the solution to the linear system Φβ = b has the desired Gaussian distribution. We can then solve the linear system by the conjugate gradient (CG) algorithm through matrix-vector multiplications by Φ; this involves no explicit factorization or calculation of Φ itself. Rapid convergence of CG in this context is guaranteed by the theory of prior-preconditioning we develop. We apply our algorithm to a clinically relevant large-scale observational study with n = 72,489 patients and p = 22,175 clinical covariates, designed to assess the relative risk of adverse events from two alternative blood anti-coagulants. Our algorithm demonstrates an order of magnitude speed-up in posterior inference, in our case cutting the computation time from two weeks to less than a day. Supplementary materials for this article are available online.
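The sketch below illustrates the idea described in the abstract, not the authors' implementation. It assumes a weighted-Gaussian conditional of the kind that commonly arises in such Gibbs samplers, with precision Φ = XᵀΩX + Λ⁻¹, where Λ = diag(λ²) collects the shrinkage-prior scales and Ω = diag(ω) the observation weights; the variable names (X, omega, lam, z) are illustrative. The random right-hand side b is drawn so that the CG solution of Φβ = b is the desired Gaussian draw, and the prior covariance Λ serves as a diagonal (prior) preconditioner.

```python
# Minimal sketch (hypothetical names; not the paper's code) of CG-based sampling
# from N(mu, Phi^{-1}) with Phi = X^T Omega X + Lambda^{-1}, without ever forming
# or factorizing Phi.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sample_beta_cg(X, omega, lam, z, rng, rtol=1e-8):
    n, p = X.shape
    prior_prec = 1.0 / lam**2  # diagonal of Lambda^{-1}

    # Matrix-vector product v -> Phi v, using only products with X and diagonals.
    def phi_matvec(v):
        return X.T @ (omega * (X @ v)) + prior_prec * v

    Phi = LinearOperator((p, p), matvec=phi_matvec)

    # Draw b so that the solution of Phi beta = b is N(mu, Phi^{-1}):
    #   b = X^T Omega z + X^T Omega^{1/2} eta + Lambda^{-1/2} delta,
    # with eta ~ N(0, I_n), delta ~ N(0, I_p); then E[b] = X^T Omega z and Cov[b] = Phi.
    eta = rng.standard_normal(n)
    delta = rng.standard_normal(p)
    b = X.T @ (omega * z) + X.T @ (np.sqrt(omega) * eta) + np.sqrt(prior_prec) * delta

    # Prior-preconditioning: supply Lambda (the inverse of the prior precision)
    # as an approximate inverse of Phi for CG.
    M = LinearOperator((p, p), matvec=lambda v: lam**2 * v)

    beta, info = cg(Phi, b, M=M, rtol=rtol)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return beta
```

Each call costs a handful of matrix-vector products with X and Xᵀ per CG iteration, which is what makes the draw feasible when p is too large for a dense Cholesky factorization of Φ.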
Full text: 1 Database: MEDLINE Language: English Year of publication: 2023 Document type: Article