Gradient Regularization as Approximate Variational Inference.
Unlu, Ali; Aitchison, Laurence.
Affiliation
  • Unlu A; Department of Informatics, University of Sussex, Brighton BN1 9QJ, UK.
  • Aitchison L; Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK.
Entropy (Basel); 23(12), 2021 Dec 03.
Article in En | MEDLINE | ID: mdl-34945935
ABSTRACT
We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate: it is the log-likelihood plus weight decay plus a squared-gradient regularizer. Variational Laplace gave better test performance and lower expected calibration error than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed in benchmarking standard VI, as there is a risk of stopping training before the variance parameters have converged. We show that such premature stopping can be avoided by increasing the learning rate for the variance parameters.
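To make the stated objective concrete, below is a minimal PyTorch sketch of a loss of this form: negative log-likelihood plus weight decay plus a squared-gradient penalty. The function name, the hyperparameters `prior_prec` and `post_var`, and the use of a single shared scalar posterior variance are illustrative assumptions, not details from the paper, which derives the precise per-parameter scaling from the variational posterior.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def variational_laplace_loss(model, x, y, prior_prec=1.0, post_var=0.01):
    """Sketch of a Variational-Laplace-style objective:
    NLL + weight decay + squared-gradient regularizer.
    prior_prec and post_var are illustrative hyperparameters."""
    # Negative log-likelihood of the data under the network.
    nll = F.cross_entropy(model(x), y, reduction="sum")

    # Weight decay: the log-prior term over the weights.
    weight_decay = 0.5 * prior_prec * sum((p ** 2).sum()
                                          for p in model.parameters())

    # Squared-gradient regularizer: the squared gradient of the NLL
    # w.r.t. the weights, scaled here by a single shared approximate
    # posterior variance for simplicity. create_graph=True lets the
    # penalty itself be differentiated during training.
    grads = torch.autograd.grad(nll, list(model.parameters()),
                                create_graph=True)
    sq_grad = 0.5 * post_var * sum((g ** 2).sum() for g in grads)

    return nll + weight_decay + sq_grad

# Usage on a toy classifier.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
loss = variational_laplace_loss(model, x, y)
loss.backward()
```

Note that, unlike sampling-based VI, evaluating this objective requires no Monte Carlo draws of the weights; the cost is one extra backward pass for the squared-gradient term.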
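The benchmarking remark also suggests a simple remedy: give the variational variance parameters their own, larger learning rate so they converge before validation-based stopping halts training. A hedged sketch follows; the parameter split and the 10x factor are illustrative assumptions, not values from the paper.

```python
import torch

# Hypothetical split of a variational posterior into mean and
# log-variance parameters (dummy tensors for illustration).
mean_params = [torch.zeros(10, requires_grad=True)]
log_var_params = [torch.full((10,), -5.0, requires_grad=True)]

# Larger learning rate for the variance parameters, so they converge
# before early stopping would otherwise halt training.
optimizer = torch.optim.Adam([
    {"params": mean_params, "lr": 1e-3},
    {"params": log_var_params, "lr": 1e-2},  # 10x is illustrative
])
```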
Full text: 1 Database: MEDLINE Language: En Year of publication: 2021 Document type: Article