arXiv Analytics

arXiv:1905.00180 [cs.LG]

Dropping Pixels for Adversarial Robustness

Hossein Hosseini, Sreeram Kannan, Radha Poovendran

Published 2019-05-01, Version 1

Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test networks on randomly subsampled images with high pixel drop rates. We show that this approach significantly improves robustness against adversarial examples under bounded L_0, L_2, and L_inf perturbations, while incurring only a small reduction in standard accuracy. We argue that subsampling pixels can be viewed as providing a set of robust features for the input image and thus improves robustness without requiring adversarial training.
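The abstract does not specify how the subsampling is implemented, so the sketch below is only one plausible reading: a hypothetical `drop_pixels` helper that zeroes out a large random fraction of pixels (here 90%), applied independently to every image at both training and test time. Zero-filling the dropped pixels and the specific drop rate are assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): randomly subsample image pixels
# by zeroing out a large fraction of them, as one plausible reading of
# "training and testing with randomly subsampled images with high drop rates".
import numpy as np

def drop_pixels(image, drop_rate=0.9, rng=None):
    """Return a copy of `image` with roughly `drop_rate` of its pixels set to
    zero. The same mask is shared across colour channels so that whole pixels,
    not individual channel values, are dropped."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Keep each pixel independently with probability (1 - drop_rate).
    keep_mask = rng.random((h, w)) >= drop_rate
    subsampled = image.copy()
    subsampled[~keep_mask] = 0
    return subsampled

if __name__ == "__main__":
    # Stand-in for a 32x32 RGB image; a fresh random mask would be drawn for
    # every image, during both training and inference.
    img = np.random.rand(32, 32, 3).astype(np.float32)
    out = drop_pixels(img, drop_rate=0.9)
    print("approx. fraction of pixels kept:", np.mean(np.any(out != 0, axis=-1)))
```

Because the mask is re-drawn for every forward pass, the network only ever sees sparse inputs, which is the intuition behind treating the surviving pixels as a set of robust features.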

Related articles:
arXiv:1711.09404 [cs.LG] (Published 2017-11-26)
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
arXiv:1901.10513 [cs.LG] (Published 2019-01-29)
Adversarial Examples Are a Natural Consequence of Test Error in Noise
arXiv:1903.08778 [cs.LG] (Published 2019-03-20)
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes