arXiv Analytics

arXiv:2108.11096 [cs.CV]

Learning From Long-Tailed Data With Noisy Labels

Shyamgopal Karthik, Jérome Revaud, Chidlovskii Boris

Published 2021-08-25, Version 1

Class imbalance and noisy labels are the norm rather than the exception in many large-scale classification datasets. Nevertheless, most works in machine learning typically assume balanced and clean data. There have been some recent attempts to tackle, on one side, the problem of learning from noisy labels and, on the other side, learning from long-tailed data. Each group of methods makes simplifying assumptions about the other, and due to this separation the proposed solutions often underperform when both assumptions are violated. In this work, we present a simple two-stage approach based on recent advances in self-supervised learning to treat both challenges simultaneously. It consists of, first, task-agnostic self-supervised pre-training, followed by task-specific fine-tuning using an appropriate loss. Most significantly, we find that self-supervised learning approaches can effectively cope with severe class imbalance. In addition, the resulting learned representations are remarkably robust to label noise when fine-tuned with an imbalance- and noise-resistant loss function. We validate our claims with experiments on CIFAR-10 and CIFAR-100 augmented with synthetic imbalance and noise, as well as on the large-scale, inherently noisy Clothing-1M dataset.
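The fine-tuning stage hinges on a loss that resists class imbalance. The abstract does not name the exact loss used, but one common imbalance-resistant choice is the logit-adjusted cross-entropy, which shifts each logit by the log class prior so the model must earn larger margins on tail classes. A minimal NumPy sketch (the function name and the `tau` temperature are illustrative, not from the paper):

```python
import numpy as np

def logit_adjusted_ce(logits, labels, class_counts, tau=1.0):
    """Logit-adjusted cross-entropy: add tau * log(class prior) to each
    logit before the softmax, so head classes are 'easier' during training
    and the model is pushed toward larger margins on tail classes."""
    priors = class_counts / class_counts.sum()          # (C,)
    adjusted = logits + tau * np.log(priors)            # (N, C), broadcast
    # Numerically stable log-softmax over the adjusted logits.
    z = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood of the (possibly noisy) labels.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With `tau=0` this reduces to standard cross-entropy; with `tau>0`, a tail-class example incurs a larger loss for the same logits, counteracting the head-class bias induced by a long-tailed label distribution.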

Related articles: Most relevant | Search more
arXiv:2104.09563 [cs.CV] (Published 2021-04-19)
A Framework using Contrastive Learning for Classification with Noisy Labels
arXiv:1806.02612 [cs.CV] (Published 2018-06-07)
Dimensionality-Driven Learning with Noisy Labels
Xingjun Ma et al.
arXiv:2202.02200 [cs.CV] (Published 2022-02-04)
Learning with Neighbor Consistency for Noisy Labels