arXiv Analytics

arXiv:1810.02334 [cs.LG]

Unsupervised Learning via Meta-Learning

Kyle Hsu, Sergey Levine, Chelsea Finn

Published 2018-10-04 (Version 1)

A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics. Instead, we develop an unsupervised learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data. To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that relatively simple mechanisms for task design, such as clustering unsupervised representations, lead to good performance on a variety of downstream tasks. Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the representation learned by four prior unsupervised learning methods.
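The abstract's core idea — automatically constructing few-shot classification tasks by clustering unlabeled data and treating cluster assignments as pseudo-labels — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the helper names (`kmeans`, `make_task`) and the toy k-means are assumptions, and a real pipeline would cluster learned embeddings rather than raw feature vectors, then feed the sampled tasks to a meta-learner such as MAML or a prototypical network.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10, seed=0):
    """Toy k-means: returns a cluster index for each point.

    Stands in for clustering the unsupervised embeddings of the
    unlabeled dataset; any clustering method would do here.
    """
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(xs) / len(members)
                                for xs in zip(*members)]
    return assign

def make_task(assign, n_way=2, k_shot=1, q_queries=1, seed=0):
    """Sample one N-way, K-shot task from cluster pseudo-labels.

    Each sampled cluster becomes one class; indices are split into a
    support set (for adaptation) and a query set (for meta-training).
    Returns two lists of (example_index, task_label) pairs.
    """
    rng = random.Random(seed)
    clusters = {}
    for idx, a in enumerate(assign):
        clusters.setdefault(a, []).append(idx)
    # Only clusters with enough members can supply a full class.
    eligible = [c for c, idxs in clusters.items()
                if len(idxs) >= k_shot + q_queries]
    chosen = rng.sample(eligible, n_way)
    support, query = [], []
    for label, c in enumerate(chosen):
        idxs = rng.sample(clusters[c], k_shot + q_queries)
        support += [(i, label) for i in idxs[:k_shot]]
        query += [(i, label) for i in idxs[k_shot:]]
    return support, query
```

In a meta-training loop one would repeatedly call `make_task` with fresh seeds to generate a stream of tasks, which is the "relatively simple mechanism for task design" the abstract refers to.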

Related articles:
arXiv:2211.03782 [cs.LG] (Published 2022-11-07)
On minimal variations for unsupervised representation learning
arXiv:2309.17002 [cs.LG] (Published 2023-09-29)
Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
Hao Chen et al.
arXiv:2007.12446 [cs.LG] (Published 2020-07-24)
Transferred Discrepancy: Quantifying the Difference Between Representations