arXiv Analytics

arXiv:2002.07522 [cs.CV]

Few-Shot Few-Shot Learning and the role of Spatial Attention

Yann Lifchitz, Yannis Avrithis, Sylvaine Picard

Published 2020-02-18 (Version 1)

Few-shot learning is often motivated by the ability of humans to learn new tasks from few examples. However, standard few-shot classification benchmarks assume that the representation is learned on a limited amount of base class data, ignoring the amount of prior knowledge that a human may have accumulated before learning new tasks. At the same time, even when a powerful representation is available, base class data may be limited or non-existent in some domains. This motivates us to study a problem where the representation is obtained from a classifier pre-trained on a large-scale dataset of a different domain, assuming no access to its training process. The base class data are limited to few examples per class, and their role is to adapt the representation to the domain at hand rather than to learn it from scratch. We adapt the representation in two stages: first on the few base class data, if available, and then on the even fewer data of each new task. In doing so, we obtain from the pre-trained classifier a spatial attention map that focuses on objects and suppresses background clutter. This is important in the new problem because, when base class data are few, the network cannot implicitly learn where to focus. We also show that a pre-trained network can be easily adapted to novel classes without meta-learning.
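The spatial attention idea lends itself to a short illustration. Below is a minimal sketch in PyTorch, assuming an ImageNet-pre-trained ResNet-50 from torchvision stands in for the pre-trained classifier; the max-over-classes attention and the softmax spatial pooling are illustrative assumptions on my part, not the paper's exact formulation.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Assumed setup: an ImageNet-pre-trained ResNet-50 plays the role of the
# classifier pre-trained on a large-scale dataset of a different domain.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.eval()

# Convolutional feature extractor: everything up to global average pooling.
features = torch.nn.Sequential(*list(backbone.children())[:-2])
classifier = backbone.fc  # pre-trained linear classifier (2048 -> 1000)

@torch.no_grad()
def attended_embedding(images):
    """Return attention-weighted embeddings for a batch of images."""
    f = features(images)                      # (B, C, H, W) feature maps
    B, C, H, W = f.shape
    # Apply the pre-trained classifier at every spatial location,
    # giving class-activation logits per position (bias omitted).
    logits = torch.einsum('bchw,kc->bkhw', f, classifier.weight)
    # Illustrative choice: attention is a softmax over positions of the
    # strongest class activation at each location.
    a = logits.max(dim=1).values.view(B, -1)  # (B, H*W)
    a = F.softmax(a, dim=1).view(B, 1, H, W)
    # Attention-weighted spatial pooling suppresses background clutter.
    return (f * a).flatten(2).sum(dim=2)      # (B, C)

# Usage: embeddings for few-shot classification, e.g. nearest-prototype.
x = torch.randn(4, 3, 224, 224)              # dummy batch
emb = attended_embedding(x)
print(emb.shape)                              # torch.Size([4, 2048])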
