arXiv Analytics

arXiv:1912.03845 [cs.LG]

No Representation without Transformation

Giorgio Giannone, Jonathan Masci, Christian Osendorfer

Published 2019-12-09 (Version 1)

We propose to extend Latent Variable Models with a simple idea: learn to encode not only samples but also transformations of those samples. The latent space is then populated not only by embeddings but also by higher-order objects that map between these embeddings. We show how a hierarchical graphical model can enforce desirable algebraic properties of these latent mappings. The mappings in turn structure the latent space and can therefore have a substantial impact on downstream tasks solved in that space. We demonstrate this impact in a set of experiments and show that the representations of these latent mappings reflect interpretable properties.
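
To make the setup concrete, below is a minimal sketch of the core idea, not the authors' implementation: the module names (SampleEncoder, TransformationEncoder, LatentMap), the layer sizes, and the squared-error objective are all illustrative assumptions. A sample encoder produces embeddings z, a transformation encoder infers a code u from a pair (x, transformed x), and a latent map applies u so that the mapped embedding lands near the embedding of the transformed sample.

```python
# Hypothetical sketch of encoding transformations alongside samples.
# Not the paper's code; all names and objectives are assumptions.
import torch
import torch.nn as nn

class SampleEncoder(nn.Module):
    """Encodes a sample x into a latent embedding z."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
    def forward(self, x):
        return self.net(x)

class TransformationEncoder(nn.Module):
    """Encodes a pair (x, x_t) into a transformation code u."""
    def __init__(self, x_dim=784, u_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, u_dim))
    def forward(self, x, x_t):
        return self.net(torch.cat([x, x_t], dim=-1))

class LatentMap(nn.Module):
    """Applies the code u to an embedding z: a higher-order object
    that maps between embeddings, z_t ~ f(z, u)."""
    def __init__(self, z_dim=16, u_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + u_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
    def forward(self, z, u):
        return self.net(torch.cat([z, u], dim=-1))

# One illustrative training step: the loss ties the mapped embedding
# f(z, u) to the embedding of the transformed sample, so u acts as a
# mapping between points in latent space.
enc, tenc, fmap = SampleEncoder(), TransformationEncoder(), LatentMap()
x = torch.randn(32, 784)           # a batch of samples
x_t = torch.roll(x, 1, dims=-1)    # stand-in "transformation" of x
z, z_t = enc(x), enc(x_t)
u = tenc(x, x_t)
loss = ((fmap(z, u) - z_t) ** 2).mean()
loss.backward()
```

The paper's hierarchical graphical model additionally enforces algebraic properties of these latent mappings (e.g., consistent behavior under composition), which this sketch does not attempt.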

Comments: Accepted at the Perception as Generative Reasoning and Bayesian Deep Learning workshops at NeurIPS 2019
Categories: cs.LG, stat.ML
Related articles:
arXiv:1901.08479 [cs.LG] (Published 2019-01-24)
On the Transformation of Latent Space in Autoencoders
arXiv:2008.01487 [cs.LG] (Published 2020-08-04)
Faithful Autoencoder Interpolation by Shaping the Latent Space
arXiv:2106.05319 [cs.LG] (Published 2021-06-09)
Stein Latent Optimization for GANs