arXiv Analytics


arXiv:2007.01293 [cs.LG]

Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning

Zhongzheng Ren, Raymond A. Yeh, Alexander G. Schwing

Published 2020-07-02 (Version 1)

Existing semi-supervised learning (SSL) algorithms use a single weight to balance the loss of labeled and unlabeled examples, i.e., all unlabeled examples are equally weighted. But not all unlabeled data are equal. In this paper we study how to use a different weight for every unlabeled example. Manual tuning of all those weights -- as done in prior work -- is no longer possible. Instead, we adjust those weights via an algorithm based on the influence function, a measure of a model's dependency on one training example. To make the approach efficient, we propose a fast and effective approximation of the influence function. We demonstrate that this technique outperforms state-of-the-art methods on semi-supervised image and language classification tasks.
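The idea of adjusting per-example weights by influence can be sketched with a first-order heuristic: an unlabeled example whose loss gradient aligns with the labeled-set loss gradient should receive more weight, since a training step on it also reduces the labeled loss. The sketch below (in plain NumPy, for a toy logistic-regression model) is an illustration of this gradient-alignment approximation, not the authors' exact algorithm; the function names (`influence_scores`, `update_weights`) and the pseudo-labels `y_pseudo` are assumptions for the example.

```python
# Hedged sketch: influence-style per-example weighting of unlabeled data.
# A first-order stand-in for the influence function: score each unlabeled
# example by the inner product of its loss gradient with the mean labeled
# loss gradient, then raise/lower its weight accordingly.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    # Gradient of the logistic loss for a single example (x, y).
    return (sigmoid(x @ w) - y) * x

def influence_scores(w, X_lab, y_lab, X_unlab, y_pseudo):
    # Mean gradient of the labeled loss at the current parameters w.
    g_lab = np.mean([grad_logloss(w, x, y) for x, y in zip(X_lab, y_lab)], axis=0)
    # Alignment of each unlabeled example's gradient with g_lab:
    # positive score => a step on this example also reduces labeled loss.
    return np.array([grad_logloss(w, x, y) @ g_lab
                     for x, y in zip(X_unlab, y_pseudo)])

def update_weights(weights, scores, lr=1.0):
    # Upweight aligned examples, downweight conflicting ones;
    # clip at zero so no example gets a negative weight.
    return np.clip(weights + lr * scores, 0.0, None)

# Toy usage: two unlabeled copies of the same point with opposite
# pseudo-labels; only the one matching the labeled data is upweighted.
w = np.zeros(2)
X_lab, y_lab = np.array([[1.0, 0.0]]), np.array([1.0])
X_unlab, y_pseudo = np.array([[1.0, 0.0], [1.0, 0.0]]), np.array([1.0, 0.0])
scores = influence_scores(w, X_lab, y_lab, X_unlab, y_pseudo)
new_weights = update_weights(np.ones(2), scores)
```

The paper replaces this crude first-order proxy with a fast and effective approximation of the full influence function, which also accounts for how the model's parameters respond to a weight change rather than only the instantaneous gradient alignment.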

Related articles: Most relevant | Search more
arXiv:1906.10343 [cs.LG] (Published 2019-06-25)
Semi-Supervised Learning with Self-Supervised Networks
arXiv:1905.02249 [cs.LG] (Published 2019-05-06)
MixMatch: A Holistic Approach to Semi-Supervised Learning
arXiv:1908.09574 [cs.LG] (Published 2019-08-26)
Improvability Through Semi-Supervised Learning: A Survey of Theoretical Results