arXiv:1812.05069 [cs.LG]

Recent Advances in Autoencoder-Based Representation Learning

Michael Tschannen, Olivier Bachem, Mario Lucic

Published 2018-12-12 (Version 1)

Learning useful representations with little or no supervision is a key challenge in artificial intelligence. We provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. To organize these results, we make use of meta-priors believed to be useful for downstream tasks, such as disentanglement and hierarchical organization of features. In particular, we uncover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encoding and decoding distributions, or (iii) introducing a structured prior distribution. While there are some promising results, implicit or explicit supervision remains a key enabler, and all current methods use strong inductive biases and modeling assumptions. Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream task and how useful the representation is for that task.
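As a minimal illustration of mechanism (i), one standard instance of regularizing the approximate posterior is the beta-VAE objective (the specific form below is a sketch chosen for illustration, not a summary of the paper's analysis), which reweights the KL term of the evidence lower bound:

L_beta(theta, phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \beta \, D_{\mathrm{KL}}\!\big(q_\phi(z \mid x) \,\|\, p(z)\big)

Setting beta = 1 recovers the usual VAE objective; in the rate-distortion view mentioned above, the reconstruction term corresponds to (negative) distortion, while the KL term upper-bounds the rate of the learned encoding.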

Comments: Presented at the third workshop on Bayesian Deep Learning (NeurIPS 2018)
Categories: cs.LG, cs.CV, stat.ML