arXiv:1911.04636 [cs.LG]

Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory

Arash Rahnama, Andre T. Nguyen, Edward Raff

Published 2019-11-12 (Version 1)

Deep neural networks (DNNs) are vulnerable to subtle adversarial perturbations applied to the input. These perturbations, though imperceptible, can easily mislead the DNN. In this work, we take a control-theoretic approach to the problem of robustness in DNNs. We treat each individual layer of the DNN as a nonlinear dynamical system and use Lyapunov theory to prove stability and robustness locally; we then prove stability and robustness globally for the entire DNN. We develop empirically tight bounds on the response of the output layer, or of any hidden layer, to adversarial perturbations added to the input or to the input of a hidden layer. Recent works have proposed spectral norm regularization as a way to improve robustness against l2 adversarial attacks. Our results give new insight into how spectral norm regularization mitigates adversarial effects. Finally, we evaluate the power of our approach on a variety of data sets and network architectures and against some well-known adversarial attacks.
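The abstract refers to spectral norm regularization as a known defense against l2 attacks. As a point of reference only, the sketch below shows one common way such a penalty is computed: a few steps of power iteration estimate the largest singular value of each linear layer's weight matrix, and the sum is added to the training loss. This is not the authors' implementation; the layer selection, the single-iteration estimate, and the coefficient `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def spectral_penalty(model: nn.Module, n_iter: int = 1) -> torch.Tensor:
    """Approximate the sum of the largest singular values of all Linear weights."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight                        # shape: (out_features, in_features)
            u = torch.randn(w.size(0), device=w.device)
            for _ in range(n_iter):                  # power iteration on w
                v = F.normalize(w.t() @ u, dim=0)
                u = F.normalize(w @ v, dim=0)
            penalty = penalty + torch.dot(u, w @ v)  # ~ sigma_max(w)
    return penalty


# Hypothetical use inside a training step:
#   loss = task_loss + lam * spectral_penalty(model)
```

A smaller estimated spectral norm per layer caps how much an l2-bounded input perturbation can be amplified as it propagates through the network, which is the quantity the paper's layer-wise bounds address.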

Related articles:
arXiv:1909.08072 [cs.LG] (Published 2019-09-17)
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
arXiv:2002.10252 [cs.LG] (Published 2020-02-18)
TensorShield: Tensor-based Defense Against Adversarial Attacks on Images
arXiv:1811.01443 [cs.LG] (Published 2018-11-04)
SSCNets: A Selective Sobel Convolution-based Technique to Enhance the Robustness of Deep Neural Networks against Security Attacks