arXiv Analytics

arXiv:1910.09217 [cs.CV]

Decoupling Representation and Classifier for Long-Tailed Recognition

Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis

Published 2019-10-21 (Version 1)

The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.
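The decoupled recipe the abstract describes can be pictured as two stages: first train the backbone and classifier jointly under plain instance-balanced sampling, then freeze the representation and adjust only the classifier, for example by re-training it with class-balanced sampling. The following is a minimal PyTorch-style sketch of that idea, not the authors' released code; the toy dataset, model sizes, epoch counts, and helper names are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy long-tailed data (placeholder, not the paper's benchmarks).
num_classes = 10
class_probs = torch.tensor([0.5 ** i for i in range(num_classes)])
labels = torch.multinomial(class_probs, 2000, replacement=True)
features = torch.randn(2000, 3 * 32 * 32)
train_set = TensorDataset(features, labels)

def train(model, loader, params, epochs):
    opt = torch.optim.SGD(params, lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Stage 1: joint training with instance-balanced (natural) sampling.
backbone = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, num_classes)
model = nn.Sequential(backbone, classifier)
train(model, DataLoader(train_set, batch_size=128, shuffle=True),
      model.parameters(), epochs=5)

# Stage 2: freeze the representation, re-initialize the classifier, and
# re-train only the classifier under class-balanced sampling.
for p in backbone.parameters():
    p.requires_grad = False
classifier.reset_parameters()
class_counts = torch.bincount(labels, minlength=num_classes).clamp(min=1)
sample_weights = (1.0 / class_counts.float())[labels]
balanced_loader = DataLoader(
    train_set, batch_size=128,
    sampler=WeightedRandomSampler(sample_weights, num_samples=len(train_set)))
train(model, balanced_loader, classifier.parameters(), epochs=2)
```

The key design point reflected here is that class-balancing is applied only in the second, classifier-only stage, while the representation is learned from the natural data distribution.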

Related articles: Most relevant | Search more
arXiv:1803.01356 [cs.CV] (Published 2018-03-04)
Classification based Grasp Detection using Spatial Transformer Network
arXiv:1804.10167 [cs.CV] (Published 2018-04-26)
fMRI: preprocessing, classification and pattern recognition
arXiv:1709.02245 [cs.CV] (Published 2017-09-02)
Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks