arXiv Analytics


arXiv:2305.10293 [cs.CV]

Infinite Class Mixup

Thomas Mensink, Pascal Mettes

Published 2023-05-17 (Version 1)

Mixup is a widely adopted strategy for training deep networks, where additional samples are augmented by interpolating the inputs and labels of training pairs. Mixup has been shown to improve classification performance, network calibration, and out-of-distribution generalisation. While effective, a cornerstone of Mixup, namely that networks learn linear behaviour patterns between classes, is only indirectly enforced, since the output interpolation is performed at the probability level. This paper seeks to address this limitation by mixing the classifiers directly instead of mixing the labels for each mixed pair. We propose to define the target of each augmented sample as a uniquely new classifier, whose parameters are a linear interpolation of the classifier vectors of the input pair. The space of all possible classifiers is continuous and spans all interpolations between classifier pairs. To make optimisation tractable, we propose a dual-contrastive Infinite Class Mixup loss, where we contrast the classifier of a mixed pair to both the classifiers and the predicted outputs of other mixed pairs in a batch. Infinite Class Mixup is generic in nature and applies to many variants of Mixup. Empirically, we show that it outperforms standard Mixup and variants such as RegMixup and Remix on balanced, long-tailed, and data-constrained benchmarks, highlighting its broad applicability.
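The core idea from the abstract (mixing classifier vectors rather than labels, then contrasting each mixed pair against the rest of the batch) can be sketched as follows. This is a minimal illustration with numpy, not the authors' implementation: all sizes, variable names, and the exact form of the dual-contrastive loss here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a batch of 4 mixed pairs,
# 8-dimensional features, 10 classes.
B, D, C = 4, 8, 10
X1 = rng.normal(size=(B, D))         # features of first items in each pair
X2 = rng.normal(size=(B, D))         # features of second items
y1 = rng.integers(0, C, size=B)      # labels of first items
y2 = rng.integers(0, C, size=B)      # labels of second items
W = rng.normal(size=(C, D))          # per-class classifier vectors
lam = rng.beta(1.0, 1.0, size=(B, 1))  # mixing coefficients

# Mix the inputs as in standard Mixup, but instead of interpolating the
# one-hot labels, interpolate the classifier vectors of the two classes:
# each mixed sample gets its own, uniquely new, classifier.
X_mix = lam * X1 + (1 - lam) * X2          # (B, D)
W_mix = lam * W[y1] + (1 - lam) * W[y2]    # (B, D)

# Score every mixed input against every mixed classifier in the batch.
logits = X_mix @ W_mix.T                   # (B, B)

def log_softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

# Dual-contrastive sketch: the i-th mixed input should match the i-th
# mixed classifier, contrasted along both axes of the logit matrix
# (over classifiers for a fixed input, and over inputs for a fixed classifier).
loss_over_classifiers = -np.mean(np.diag(log_softmax(logits, axis=1)))
loss_over_inputs = -np.mean(np.diag(log_softmax(logits, axis=0)))
loss = 0.5 * (loss_over_classifiers + loss_over_inputs)
print(float(loss))
```

Note the contrast with standard Mixup, which would keep the fixed classifier matrix `W` and supervise against the interpolated label distribution; here the target itself is a point in the continuous space of interpolated classifiers.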

Related articles:
arXiv:1906.09453 [cs.CV] (Published 2019-06-06)
Computer Vision with a Single (Robust) Classifier
arXiv:2011.08145 [cs.CV] (Published 2020-11-16)
Decoupling Representation and Classifier for Noisy Label Learning
arXiv:2307.12560 [cs.CV] (Published 2023-07-24)
Interpolating between Images with Diffusion Models