arXiv:2105.14866 [stat.ML]

Variational Autoencoders: A Harmonic Perspective

Alexander Camuto, Matthew Willetts

Published 2021-05-31 (Version 1)

In this work we study Variational Autoencoders (VAEs) from the perspective of harmonic analysis. By viewing a VAE's latent space as a Gaussian space, a type of measure space, we derive a series of results showing that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks. In particular, we demonstrate that larger encoder variances reduce the high-frequency content of these functions. Our analysis shows that increasing this variance effectively induces a soft Lipschitz constraint on the decoder network of a VAE, which is a core contributor to the adversarial robustness of VAEs. We further demonstrate that adding Gaussian noise to the input of a VAE allows us to more finely control the frequency content and the Lipschitz constant of the VAE encoder networks. To support our theoretical analysis we run experiments on VAEs with small fully-connected neural networks and with larger convolutional networks, demonstrating empirically that our theory holds for a variety of neural network architectures.
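The central claim, that larger encoder variances damp the high-frequency content of the function the decoder represents, can be illustrated with a toy one-dimensional sketch (our illustration, not the paper's construction). Under Gaussian latent noise, the expected decoder output is a Gaussian-smoothed version of the decoder, and smoothing with standard deviation sigma attenuates a frequency-k component by exp(-k^2 sigma^2 / 2):

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(z):
    # Toy 1-D "decoder": one low-frequency and one high-frequency component.
    return np.sin(z) + 0.5 * np.sin(10 * z)

def smoothed_decoder(z, sigma, n_samples=20000):
    # Monte Carlo estimate of E[decoder(z + eps)], eps ~ N(0, sigma^2):
    # the function the decoder effectively represents when the encoder
    # places Gaussian mass with standard deviation sigma around z.
    eps = rng.normal(0.0, sigma, size=(n_samples, 1))
    return decoder(z[None, :] + eps).mean(axis=0)

def amplitude(signal, k):
    # Amplitude of the integer-frequency-k Fourier component on [0, 2*pi).
    coeffs = np.fft.rfft(signal) / len(signal)
    return 2 * np.abs(coeffs[k])

z = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)

results = {}
for sigma in (0.05, 0.5):
    out = smoothed_decoder(z, sigma)
    results[sigma] = (amplitude(out, 1), amplitude(out, 10))
    print(f"sigma={sigma}: low-freq amp={results[sigma][0]:.3f}, "
          f"high-freq amp={results[sigma][1]:.3f}")
```

With sigma = 0.5 the k = 10 component is suppressed by a factor of roughly exp(-12.5), i.e. effectively to zero, while the k = 1 component retains about 88% of its amplitude, matching the abstract's statement that encoder variance acts as a frequency filter.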

Comments: 18 pages including Appendix, 7 Figures
Categories: stat.ML, cs.LG, eess.SP