arXiv:1901.10513 [cs.LG]

Adversarial Examples Are a Natural Consequence of Test Error in Noise

Nic Ford, Justin Gilmer, Nicolas Carlini, Dogus Cubuk

Published 2019-01-29 (Version 1)

Over the last few years, the phenomenon of adversarial examples (maliciously constructed inputs that fool trained machine learning models) has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as ImageNet-C.
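The link the abstract describes, between error under additive Gaussian noise and vulnerability to small perturbations, is easiest to see in the linear (half-space) case, where the error rate in noise and the distance to the nearest misclassified point determine each other exactly through the Gaussian CDF. The snippet below is only an illustrative sketch of that relation, not the authors' code; the classifier (w, b), the input x, and the noise scale sigma are all hypothetical.

```python
# Illustrative sketch: for a linear classifier, the error rate under additive
# Gaussian noise and the L2 distance to the decision boundary are linked by
# err_rate = Phi(-dist / sigma), equivalently dist = -sigma * Phi^{-1}(err_rate).
# All quantities here are synthetic, not taken from the paper's experiments.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical linear classifier: predict the "correct" class when w.x + b > 0.
d = 100
w = rng.normal(size=d)
w /= np.linalg.norm(w)            # unit-norm weight vector
b = 0.0

# A correctly classified input, placed at signed distance 1.0 from the boundary.
x = rng.normal(size=d)
x += (1.0 - (w @ x + b)) * w
dist = (w @ x + b) / np.linalg.norm(w)   # L2 distance to the nearest error

# Empirical error rate under additive Gaussian noise N(0, sigma^2 I).
sigma = 0.5
noise = rng.normal(scale=sigma, size=(100_000, d))
err_rate = np.mean((x + noise) @ w + b <= 0)

print(f"distance to nearest error:    {dist:.3f}")
print(f"empirical error rate in noise: {err_rate:.4f}")
print(f"Phi(-dist/sigma) prediction:   {norm.cdf(-dist / sigma):.4f}")
print(f"distance implied by noise:     {-sigma * norm.ppf(err_rate):.3f}")
```

In this toy setting the two measurements are interchangeable: a nontrivial error rate in Gaussian noise implies a nearby misclassified point, which is the intuition behind connecting corruption robustness to adversarial robustness.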

Related articles:
arXiv:1905.00180 [cs.LG] (Published 2019-05-01)
Dropping Pixels for Adversarial Robustness
arXiv:2002.08859 [cs.LG] (Published 2020-02-20)
A Bayes-Optimal View on Adversarial Examples
arXiv:2002.04599 [cs.LG] (Published 2020-02-11)
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations