arXiv Analytics

arXiv:1901.08479 [cs.LG]

On the Transformation of Latent Space in Autoencoders

Jaehoon Cha, Kyeong Soo Kim, Sanghyuk Lee

Published 2019-01-24Version 1

Noting the importance of latent variables in inference and learning, we propose a novel framework for autoencoders based on a homeomorphic transformation of the latent variables, which can reduce the distance between vectors in the transformed space while preserving the topological properties of the original space. We investigate the effect of this transformation on both learning generative models and denoising corrupted data. Our experiments show that the proposed model can serve as both a generative model and a denoising model, with performance improved by the transformation compared to conventional variational and denoising autoencoders.
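The abstract's core idea can be illustrated with a minimal sketch. This is not the authors' exact method; it only assumes a simple invertible map (here, `tanh`) as a stand-in for the homeomorphic transformation: `tanh` is continuous and bijective from R^d onto (-1, 1)^d with a continuous inverse (`arctanh`), so topological structure is preserved, and because it is 1-Lipschitz it can only shrink distances between latent vectors:

```python
import numpy as np

def transform(z):
    """Map latent vectors into (-1, 1)^d via a homeomorphism (illustrative)."""
    return np.tanh(z)

def inverse_transform(t):
    """Recover the original latent vectors via the continuous inverse."""
    return np.arctanh(t)

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=8), rng.normal(size=8)

# |tanh'(x)| <= 1, so the map is 1-Lipschitz: distances never grow.
d_before = np.linalg.norm(z1 - z2)
d_after = np.linalg.norm(transform(z1) - transform(z2))
assert d_after <= d_before

# Invertibility: round-tripping recovers the original vectors.
assert np.allclose(inverse_transform(transform(z1)), z1)
```

In the paper's setting such a transformation would be applied to the autoencoder's latent codes before decoding; the sketch above only demonstrates the two properties the abstract claims: distance reduction and topology preservation via invertibility.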

Comments: 9 pages and 9 figures. The paper has been submitted to ICML (The International Conference on Machine Learning) 2019
Categories: cs.LG, stat.ML
Related articles:
arXiv:2103.04662 [cs.LG] (Published 2021-03-08)
Anomaly Detection Based on Selection and Weighting in Latent Space
arXiv:1912.03845 [cs.LG] (Published 2019-12-09)
No Representation without Transformation
arXiv:2008.01487 [cs.LG] (Published 2020-08-04)
Faithful Autoencoder Interpolation by Shaping the Latent Space