arXiv Analytics

arXiv:2102.08868 [cs.LG]

Bridging the Gap Between Adversarial Robustness and Optimization Bias

Fartash Faghri, Cristina Vasconcelos, David J. Fleet, Fabian Pedregosa, Nicolas Le Roux

Published 2021-02-17 (Version 1)

Adversarial robustness is an open challenge in deep learning, most often tackled using adversarial training. Adversarial training is computationally costly, involves alternating optimization, and suffers from a trade-off between standard generalization and adversarial robustness. We explore training robust models without adversarial training by revisiting a known result linking maximally robust classifiers to minimum-norm solutions, and combining it with recent results on the implicit bias of optimizers. First, we show that, under certain conditions, it is possible to achieve both perfect standard accuracy and a certain degree of robustness without a trade-off, simply by training an overparameterized model and relying on the implicit bias of the optimization. In that regime, there is a direct relationship between the type of optimizer and the attack to which the model is robust. Second, we investigate the role of the architecture in designing robust models. In particular, we characterize the robustness of linear convolutional models, showing that they resist attacks subject to a constraint on the Fourier-$\ell_\infty$ norm. This result explains the observed tendency of $\ell_p$-bounded adversarial perturbations to be concentrated in the Fourier domain, and it leads us to a novel Fourier-domain attack inspired by the well-known frequency-dependent sensitivity of human perception. Finally, we evaluate the Fourier-$\ell_\infty$ robustness of recent robustly trained CIFAR-10 models and visualize their adversarial perturbations.
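
The abstract does not spell out the Fourier-$\ell_\infty$ norm; below is a minimal NumPy sketch, assuming it denotes the $\ell_\infty$ norm of the 2D discrete Fourier transform of the perturbation. The helper name and the frequency indices are illustrative, not taken from the paper.

```python
import numpy as np

def fourier_linf_norm(delta: np.ndarray) -> float:
    """Assumed Fourier-l_inf norm: max magnitude of the 2D DFT coefficients."""
    # FFT over the spatial axes; delta has shape (H, W) or (H, W, C).
    spectrum = np.fft.fft2(delta, axes=(0, 1))
    return float(np.abs(spectrum).max())

# A single-frequency perturbation concentrates all its Fourier mass in two
# DFT coefficients, so it can be large under Fourier-l_inf even though its
# pixel-space l_inf norm stays small.
h, w = 32, 32                      # CIFAR-10 image size
u, v = 4, 7                        # hypothetical frequency indices
yy, xx = np.mgrid[0:h, 0:w]
delta = (8 / 255) * np.cos(2 * np.pi * (u * yy / h + v * xx / w))
print(fourier_linf_norm(delta))    # ~ (8/255) * h * w / 2, about 16.06
print(np.abs(delta).max())         # ~ 8/255, about 0.031
```

The contrast in the two printed values is the point of the norm: a perturbation that is imperceptibly small pixel-wise can still be heavily concentrated at a single frequency, which a Fourier-$\ell_\infty$ constraint penalizes directly.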

Related articles:
arXiv:2102.01356 [cs.LG] (Published 2021-02-02)
Recent Advances in Adversarial Training for Adversarial Robustness
arXiv:2006.16427 [cs.LG] (Published 2020-06-29)
Biologically Inspired Mechanisms for Adversarial Robustness
arXiv:2003.09461 [cs.LG] (Published 2020-03-20)
Adversarial Robustness on In- and Out-Distribution Improves Explainability