arXiv:1810.09619 [cs.LG]

Sparse DNNs with Improved Adversarial Robustness

Yiwen Guo, Chao Zhang, Changshui Zhang, Yurong Chen

Published 2018-10-23 (Version 1)

Deep neural networks (DNNs) are computationally and memory intensive and vulnerable to adversarial attacks, which makes them prohibitive in some real-world applications. By converting dense models into sparse ones, pruning appears to be a promising solution for reducing the computation and memory cost. This paper studies classification models, especially DNN-based ones, and demonstrates that there exist intrinsic relationships between their sparsity and adversarial robustness. Our analyses reveal, both theoretically and empirically, that nonlinear DNN-based classifiers behave differently under $l_2$ attacks than some linear ones do. We further demonstrate that an appropriately higher model sparsity implies better robustness for nonlinear DNNs, whereas over-sparsified models are less able to resist adversarial examples.
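The two ingredients the abstract combines, dense-to-sparse pruning and $l_2$ attacks, can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' method: it applies simple magnitude-based weight pruning to a toy PyTorch model and takes a single $l_2$-normalized gradient step as a basic attack. The model architecture, the 90% sparsity level, and the step size `eps` are all hypothetical choices.

```python
# A minimal, hypothetical sketch of the two ideas in the abstract:
# magnitude-based pruning (dense -> sparse) and a basic l2 gradient attack.
# Not the paper's method; model and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_prune(model: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights in each Linear/Conv layer."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight
                k = int(sparsity * w.numel())
                if k < 1:
                    continue
                # The k-th smallest absolute weight is the pruning threshold.
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).to(w.dtype))

def l2_attack_step(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                   eps: float = 1.0) -> torch.Tensor:
    """One l2-normalized ascent step on the loss (a single-step l2 attack)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Normalize per example so the perturbation has l2 norm eps.
    norms = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1)
    return (x + eps * grad / norms).detach()

# Toy usage: prune 90% of the weights, then craft adversarial inputs.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
magnitude_prune(model, sparsity=0.9)
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
x_adv = l2_attack_step(model, x, y, eps=1.0)
```

Under this setup, comparing accuracy on `x_adv` across sparsity levels is the kind of experiment the abstract's claim about "appropriately higher" versus "over-sparsified" models suggests.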

Related articles:
arXiv:1910.10679 [cs.LG] (Published 2019-10-23)
A Useful Taxonomy for Adversarial Robustness of Neural Networks
arXiv:2102.08868 [cs.LG] (Published 2021-02-17)
Bridging the Gap Between Adversarial Robustness and Optimization Bias
arXiv:2006.16427 [cs.LG] (Published 2020-06-29)
Biologically Inspired Mechanisms for Adversarial Robustness