arXiv Analytics

arXiv:1806.02612 [cs.CV]

Dimensionality-Driven Learning with Noisy Labels

Xingjun Ma, Yisen Wang, Michael E. Houle, Shuo Zhou, Sarah M. Erfani, Shu-Tao Xia, Sudanthi Wijewickrema, James Bailey

Published 2018-06-07 (Version 1)

Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs). We propose a new perspective for understanding DNN generalization for such datasets, by investigating the dimensionality of the deep representation subspace of training samples. We show that from a dimensionality perspective, DNNs exhibit quite distinctive learning styles when trained with clean labels versus when trained with a proportion of noisy labels. Based on this finding, we develop a new dimensionality-driven learning strategy, which monitors the dimensionality of subspaces during training and adapts the loss function accordingly. We empirically demonstrate that our approach is highly tolerant to significant proportions of noisy labels, and can effectively learn low-dimensional local subspaces that capture the data distribution.
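The abstract describes the mechanism only at a high level. As a rough illustration (not the authors' released code), the sketch below shows the two ingredients it alludes to: a maximum-likelihood estimate of local intrinsic dimensionality (LID) computed from nearest-neighbour distances between deep representations, and an adaptive bootstrapped training target that trusts the given (possibly noisy) labels less once the estimated dimensionality starts to rise. The neighbourhood size k, the exponential weighting schedule, and all function names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def lid_mle(features, k=20):
    """Mean LID estimate for a batch of representation vectors of shape (n, d)."""
    n = features.shape[0]
    # Pairwise Euclidean distances; a large diagonal entry excludes self-distances.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1)) + np.eye(n) * 1e12
    knn = np.sort(dists, axis=1)[:, :k]          # k nearest-neighbour distances per point
    # Maximum-likelihood (Levina-Bickel style) estimator: -1 / mean(log(r_i / r_k))
    lids = -1.0 / np.mean(np.log(knn[:, :-1] / knn[:, -1:]), axis=1)
    return float(np.mean(lids))

def adaptive_bootstrap_targets(y_onehot, y_pred, lid_now, lid_min):
    """Blend given (possibly noisy) labels with current model predictions.

    The mixing weight shrinks as the current LID estimate rises above its
    running minimum, i.e. the labels are trusted less once the representation
    subspace starts expanding (a sign of fitting label noise). The schedule
    below is an assumed example, not the paper's formula.
    """
    alpha = np.clip(np.exp(-(lid_now / lid_min - 1.0)), 0.0, 1.0)
    return alpha * y_onehot + (1.0 - alpha) * y_pred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(256, 64))   # stand-in for penultimate-layer features
    print("estimated LID:", lid_mle(feats))
```

In such a scheme the LID estimate would be recomputed periodically on mini-batches of penultimate-layer features, and the blended targets fed to a standard cross-entropy loss, so that training reverts toward the model's own predictions once overfitting to noisy labels sets in.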

Related articles:
arXiv:2007.05836 [cs.CV] (Published 2020-07-11)
Meta Soft Label Generation for Noisy Labels
arXiv:1912.02911 [cs.CV] (Published 2019-12-05)
Deep learning with noisy labels: exploring techniques and remedies in medical image analysis
arXiv:2108.11096 [cs.CV] (Published 2021-08-25)
Learning From Long-Tailed Data With Noisy Labels