arXiv Analytics

arXiv:1901.10861 [cs.LG]

A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance

Adi Shamir, Itay Safran, Eyal Ronen, Orr Dunkelman

Published 2019-01-30 (Version 1)

The existence of adversarial examples, in which an imperceptible change in the input can fool well-trained neural networks, was experimentally discovered by Szegedy et al. in 2013, who called them "Intriguing properties of neural networks". Since then, this topic has become one of the hottest research areas within machine learning, but the ease with which we can switch between any two decisions in targeted attacks is still far from being understood, and in particular it is not clear which parameters determine the number of input coordinates we have to change in order to mislead the network. In this paper we develop a simple mathematical framework which enables us to think about this baffling phenomenon from a fresh perspective, turning it into a natural consequence of the geometry of $\mathbb{R}^n$ with the $L_0$ (Hamming) metric, which can be quantitatively analyzed. In particular, we explain why we should expect to find targeted adversarial examples with Hamming distance of roughly $m$ in arbitrarily deep neural networks which are designed to distinguish between $m$ input classes.
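To make the notion of a small-Hamming-distance targeted attack concrete, the following is a minimal illustrative sketch, not the authors' construction or analysis: a greedy $L_0$ attack on a small, randomly initialized two-layer ReLU network that changes one input coordinate at a time until a chosen target class wins. The network sizes, the `step` magnitude, and the greedy margin heuristic in `greedy_l0_attack` are all assumptions made purely for illustration.

```python
# Illustrative sketch only: greedy coordinate-by-coordinate (L_0) targeted attack
# on a small random ReLU network. Network shape, step size, and the greedy
# heuristic are assumptions for demonstration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 10                              # input dimension, number of classes
W1 = rng.normal(size=(64, n)) / np.sqrt(n)  # first-layer weights
W2 = rng.normal(size=(m, 64)) / np.sqrt(64) # output-layer weights

def logits(x):
    # Two-layer ReLU network: x -> W2 * relu(W1 * x)
    return W2 @ np.maximum(W1 @ x, 0.0)

def greedy_l0_attack(x, target, step=3.0, max_changes=None):
    """Change one input coordinate at a time (to +/- step), each time picking
    the change that most increases the target-class margin, until the network
    outputs the target class. Returns the adversarial input and the Hamming
    distance (number of coordinates changed)."""
    x = x.copy()
    changed = set()
    max_changes = max_changes if max_changes is not None else len(x)
    while len(changed) < max_changes:
        if logits(x).argmax() == target:
            return x, len(changed)
        best = None
        for i in range(len(x)):
            if i in changed:
                continue
            for v in (step, -step):
                x_try = x.copy()
                x_try[i] = v
                z = logits(x_try)
                margin = z[target] - np.max(np.delete(z, target))
                if best is None or margin > best[0]:
                    best = (margin, i, v)
        _, i, v = best
        x[i] = v
        changed.add(i)
    return x, len(changed)

x0 = rng.normal(size=n)
source = int(logits(x0).argmax())
target = (source + 1) % m
x_adv, hamming = greedy_l0_attack(x0, target)
print(f"class {source} -> class {target} after changing {hamming} of {n} coordinates")
```

In this toy setting the Hamming distance reported is whatever the greedy search happens to need; the paper's contribution is the geometric argument for why, in general, a distance of roughly $m$ coordinates should suffice for $m$-class networks.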

Related articles:
arXiv:1903.08778 [cs.LG] (Published 2019-03-20)
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
arXiv:1906.07982 [cs.LG] (Published 2019-06-19)
A unified view on differential privacy and robustness to adversarial examples
arXiv:2002.02196 [cs.LG] (Published 2020-02-06)
AI-GAN: Attack-Inspired Generation of Adversarial Examples