arXiv Analytics

arXiv:1802.09841 [cs.LG]

Adversarial Active Learning for Deep Networks: a Margin Based Approach

Melanie Ducoffe, Frederic Precioso

Published 2018-02-27, Version 1

We propose a new active learning strategy designed for deep neural networks. The goal is to minimize the number of data annotations queried from an oracle during training. Previous active learning strategies that scale to deep networks were mostly based on uncertainty-driven sample selection. In this work, we focus on examples lying close to the decision boundary. From theoretical work on margin theory for active learning, we know that such examples can considerably decrease the number of annotations required. Since measuring the exact distance to the decision boundaries is intractable, we propose to rely on adversarial examples. We no longer consider them a threat; instead, we exploit the information they provide about the distribution of the input space to approximate the distance to decision boundaries. We demonstrate empirically that adversarial active queries yield faster convergence of CNNs trained on MNIST, the Shoe-Bag, and the Quick-Draw datasets.

Related articles:
arXiv:1805.05532 [cs.LG] (Published 2018-05-15)
Improving Knowledge Distillation with Supporting Adversarial Samples
arXiv:1709.08524 [cs.LG] (Published 2017-09-25)
Generative learning for deep networks
arXiv:2009.13853 [cs.LG] (Published 2020-09-29)
Efficient SVDD Sampling with Approximation Guarantees for the Decision Boundary