arXiv:2003.11539 [cs.CV]

Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?

Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola

Published 2020-03-25 (Version 1)

The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test-time tasks with limited data and low computational cost. Few-shot learning is widely used as one of the standard benchmarks in meta-learning. In this work, we show that a simple baseline (learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation) outperforms state-of-the-art few-shot learning methods. An additional boost can be achieved through the use of self-distillation. This demonstrates that a good learned embedding model can be more effective than sophisticated meta-learning algorithms. We believe our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms. Code is available at: http://github.com/WangYueFt/rfs/.
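At evaluation time, the baseline described in the abstract amounts to two steps per few-shot episode: embed the support and query images with a frozen, pre-trained backbone, then fit a linear classifier on the support embeddings and apply it to the queries. The sketch below illustrates this flow under stated assumptions that are not taken from the paper: a toy convolutional backbone stands in for the authors' pre-trained embedding model, scikit-learn's LogisticRegression serves as the linear classifier, and random tensors serve as data. The authors' actual implementation is in the released code at http://github.com/WangYueFt/rfs/.

    # Minimal sketch of the baseline, NOT the authors' code.
    # Assumptions: placeholder backbone, LogisticRegression as the linear head,
    # random tensors instead of a real few-shot dataset.
    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression

    # Hypothetical frozen backbone standing in for the pre-trained embedding model.
    backbone = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    backbone.eval()

    @torch.no_grad()
    def embed(images: torch.Tensor) -> torch.Tensor:
        """Map a batch of images to L2-normalized feature vectors."""
        feats = backbone(images)
        return nn.functional.normalize(feats, dim=1)

    def evaluate_episode(support_x, support_y, query_x):
        """Fit a linear classifier on support embeddings, predict query labels."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(embed(support_x).numpy(), support_y.numpy())
        return clf.predict(embed(query_x).numpy())

    # Toy 5-way 1-shot episode, just to show the interface.
    support_x = torch.randn(5, 3, 32, 32)   # one support image per class
    support_y = torch.arange(5)             # class labels 0..4
    query_x = torch.randn(25, 3, 32, 32)    # query images to classify
    print(evaluate_episode(support_x, support_y, query_x))

The self-distillation step mentioned in the abstract would, as I read it, be applied when training the embedding model on the meta-training set and is not part of this per-episode sketch.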

Comments: First two authors contributed equally. Code: http://github.com/WangYueFt/rfs/
Categories: cs.CV, cs.LG