arXiv Analytics


arXiv:1906.11881 [cs.CV]

Explicit Disentanglement of Appearance and Perspective in Generative Models

Nicki Skafte Detlefsen, Søren Hauberg

Published 2019-06-11 (Version 1)

Disentangled representation learning finds compact, independent and easy-to-interpret factors of the data. Learning such representations has been shown to require an inductive bias, which we explicitly encode in a generative model of images. Specifically, we propose a model with two latent spaces: one that represents spatial transformations of the input data, and another that represents the transformed data. We find that the latter naturally captures the intrinsic appearance of the data. To realize the generative model, we propose a Variationally Inferred Transformational Autoencoder (VITAE) that incorporates a spatial transformer into a variational autoencoder. We show how to perform inference in the model efficiently by carefully designing the encoders and restricting the transformation class to be diffeomorphic. Empirically, our model separates the visual style from digit type on MNIST, and separates shape and pose in images of the human body.
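The abstract's core idea, splitting the latent representation into a spatial-transformation code and an appearance code that is decoded and then warped, can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the authors' VITAE implementation: the "encoders" are random linear maps, and an integer translation stands in for the diffeomorphic transformations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, dim_app=4, dim_trans=2):
    # Toy stand-in encoders: random linear projections producing the
    # two latent codes (appearance and spatial transformation).
    W_app = rng.standard_normal((dim_app, x.size))
    W_trans = rng.standard_normal((dim_trans, x.size))
    return W_app @ x.ravel(), W_trans @ x.ravel()

def decode(z_app, z_trans, shape=(8, 8)):
    # Decode the appearance code to a canonical ("transformed") image,
    # then apply a spatial transform parameterised by z_trans.
    # Here the transform is a simple integer translation; the paper
    # instead uses a class of diffeomorphic transformations.
    W_dec = rng.standard_normal((np.prod(shape), z_app.size))
    canonical = (W_dec @ z_app).reshape(shape)
    dy, dx = np.round(z_trans).astype(int)
    return np.roll(canonical, shift=(dy, dx), axis=(0, 1))

x = rng.standard_normal((8, 8))
z_app, z_trans = encode(x)
x_hat = decode(z_app, z_trans)
print(x_hat.shape)  # (8, 8)
```

Because the appearance code is decoded into a canonical frame before the transformation is applied, pose variation is absorbed by `z_trans`, which is the mechanism by which the paper's model separates style from digit type, and shape from pose.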

Comments: 8 main pages + 2 pages references + 9 pages of supplementary material
Categories: cs.CV, cs.LG, stat.ML
Related articles:
arXiv:1508.04035 [cs.CV] (Published 2015-08-17)
A Generative Model for Multi-Dialect Representation
arXiv:1712.09196 [cs.CV] (Published 2017-12-26)
The Robust Manifold Defense: Adversarial Training using Generative Models
arXiv:2207.13691 [cs.CV] (Published 2022-07-27)
ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization