A robust variational autoencoder using beta divergence.
Akrami, Haleh; Joshi, Anand A; Li, Jian; Aydöre, Sergül; Leahy, Richard M.
Affiliation
  • Akrami H; Signal and Image Processing Institute, University of Southern California, Los Angeles, CA, USA.
  • Joshi AA; Signal and Image Processing Institute, University of Southern California, Los Angeles, CA, USA.
  • Li J; Athinoula A. Martinos Center for Biomedical Imaging Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA.
  • Aydöre S; Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
  • Leahy RM; Amazon Web Services, New York, NY, USA.
Knowl Based Syst; 238, 2022 Feb 28.
Article in En | MEDLINE | ID: mdl-36714396
Outliers can disproportionately affect the training of deep learning methods, severely degrading learned representations and performance and leading to incorrect conclusions about the data. For example, anomaly detection using deep generative models is typically only possible when similar anomalies (or outliers) are not present in the training data. Here we focus on variational autoencoders (VAEs). While the VAE is a popular framework for anomaly detection tasks, we observe that the VAE is unable to detect outliers when the training data contains anomalies that have the same distribution as those in the test data. In this paper we focus on robustness to outliers in training data in VAE settings, using concepts from robust statistics. We propose a variational lower bound that leads to a robust VAE model that has the same computational complexity as the standard VAE and contains a single, automatically adjusted tuning parameter to control the degree of robustness. We present mathematical formulations for robust variational autoencoders (RVAEs) for Bernoulli, Gaussian, and categorical variables. The RVAE model is based on beta-divergence rather than the standard Kullback-Leibler (KL) divergence. We demonstrate the performance of our proposed β-divergence-based autoencoder on a variety of image and categorical datasets, showing improved robustness to outliers both qualitatively and quantitatively. We also illustrate the use of our robust VAE for detection of lesions in brain images, formulated as an anomaly detection task. Finally, we suggest a method for tuning the RVAE hyperparameter, which makes our model completely unsupervised.
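To illustrate the idea behind a β-divergence reconstruction term, the sketch below shows one common form of the β-cross-entropy for a Bernoulli observation model, which replaces the binary cross-entropy in a standard VAE loss. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the function name and the choice of a per-pixel Bernoulli likelihood are assumptions for this example.

```python
import numpy as np

def beta_bernoulli_loss(x, p, beta):
    """Per-pixel beta cross-entropy for a Bernoulli decoder (illustrative sketch).

    Replaces the standard binary cross-entropy -x*log(p) - (1-x)*log(1-p).
    Minimizing the beta divergence between the empirical distribution and the
    model keeps only these two model-dependent terms:
      -((beta+1)/beta) * f(x)^beta  +  sum_x f(x)^(beta+1)
    where f is the Bernoulli likelihood with parameter p.
    """
    lik = p ** x * (1.0 - p) ** (1.0 - x)            # Bernoulli likelihood f(x)
    cross = -((beta + 1.0) / beta) * lik ** beta     # data-fit term; raising f to
                                                     # the power beta bounds the
                                                     # influence of low-likelihood
                                                     # (outlier) pixels
    norm = p ** (beta + 1.0) + (1.0 - p) ** (beta + 1.0)  # sum over x in {0, 1}
    return cross + norm
```

As β → 0 this recovers the usual cross-entropy up to an additive constant of roughly 1/β, while for β > 0 the loss on any single pixel is bounded, which is the source of the robustness to outliers described above.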
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: Knowl Based Syst Year: 2022 Document type: Article Affiliation country: United States