arXiv Analytics

arXiv:2402.02095 [cs.LG]

Seeing is not always believing: The Space of Harmless Perturbations

Lu Chen, Shaofeng Li, Benhao Huang, Fan Yang, Zheng Li, Jie Li, Yuan Luo

Published 2024-02-03, updated 2024-05-23 (version 2)

Existing works have extensively studied adversarial examples, which are minimal perturbations that can mislead the output of deep neural networks (DNNs) while remaining imperceptible to humans. In this work, however, we reveal the existence of a harmless perturbation space: perturbations drawn from this space, regardless of their magnitude, leave the network output unchanged when applied to inputs. Essentially, the harmless perturbation space emerges from the use of non-injective functions (linear or non-linear layers) within DNNs, which allows multiple distinct inputs to be mapped to the same output. For linear layers whose input dimension exceeds their output dimension, any linear combination of the orthogonal bases of the null space of the parameter matrix yields no change in their output. For non-linear layers, the harmless perturbation space may expand further, depending on the properties of the layers and the input samples. Inspired by this property of DNNs, we solve for a family of general perturbation spaces that are redundant for the DNN's decision and can be used to hide sensitive data or serve as a means of model identification. Our work highlights a distinctive form of robustness in DNNs (i.e., consistency under large-magnitude perturbations), in contrast to adversarial examples (vulnerability to small, imperceptible noise).
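The linear-layer case described in the abstract can be illustrated directly: for a weight matrix W with more input than output dimensions, any vector in the null space of W is a harmless perturbation, no matter how large. The sketch below is not the authors' code; it is a minimal NumPy/SciPy illustration under the assumption of a single linear layer y = Wx with randomly chosen W and x.

```python
# Minimal sketch (not the authors' implementation): harmless perturbations
# for one linear layer y = W x, where the input dimension exceeds the
# output dimension so that W has a non-trivial null space.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

in_dim, out_dim = 8, 3                        # input dimension exceeds output dimension
W = rng.standard_normal((out_dim, in_dim))    # weight matrix of the linear layer
x = rng.standard_normal(in_dim)               # an arbitrary input sample

# Orthonormal basis of the null space of W (generically of dimension in_dim - out_dim).
N = null_space(W)                             # shape: (in_dim, in_dim - rank(W))

# Any linear combination of the null-space basis, regardless of magnitude,
# leaves the layer output unchanged when added to the input.
coeffs = 1e3 * rng.standard_normal(N.shape[1])    # deliberately large coefficients
delta = N @ coeffs

y_clean = W @ x
y_perturbed = W @ (x + delta)

print(np.linalg.norm(delta))                  # large perturbation magnitude
print(np.allclose(y_clean, y_perturbed))      # True: the output is unchanged
```

For non-linear layers the abstract notes that the harmless space may be larger than this null space, depending on the layer and the input, so the construction above is only the linear building block of the general result.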

Related articles:
arXiv:1910.07517 [cs.LG] (Published 2019-10-15)
Adversarial Examples for Models of Code
arXiv:1902.06044 [cs.LG] (Published 2019-02-16)
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness
arXiv:2004.04479 [cs.LG] (Published 2020-04-09)
On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems