arXiv:1910.10679 [cs.LG]

A Useful Taxonomy for Adversarial Robustness of Neural Networks

Leslie N. Smith

Published 2019-10-23 (Version 1)

Adversarial attacks and defenses are currently active areas of research for the deep learning community. A recent review paper divided defense approaches into three categories: gradient masking, robust optimization, and adversarial example detection. We divide gradient masking and robust optimization differently: (1) increasing intra-class compactness and inter-class separation of the feature vectors improves adversarial robustness, and (2) marginalization or removal of non-robust image features also improves adversarial robustness. This reframing provides a fresh perspective that offers insight into the underlying factors that enable training more robust networks and can help inspire novel solutions. In addition, several papers in the adversarial defense literature claim that there is a cost to adversarial robustness, or a trade-off between robustness and accuracy; under the proposed taxonomy, we hypothesize that this trade-off is not universal. We follow up on our taxonomy with several challenges to the deep learning research community that build on the connections and insights in this paper.
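To make taxonomy point (1) concrete, below is a minimal sketch, not taken from the paper, of one common way to encourage intra-class compactness: a center-loss-style regularizer in PyTorch that pulls each feature vector toward a learnable center for its class, while the usual cross-entropy term maintains inter-class separation. The class names, the total_loss helper, and the 0.01 weighting factor are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Pulls each feature vector toward a learnable center for its class
    (a sketch of intra-class compactness, not the paper's method)."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class in feature space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean squared distance between each feature and its class center.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

def total_loss(logits, features, labels, center_loss, weight=0.01):
    # Cross-entropy separates classes in logit space (inter-class separation);
    # the center-loss term compacts each class in feature space.
    return F.cross_entropy(logits, labels) + weight * center_loss(features, labels)

In use, a model would be assumed to expose both its logits and its penultimate-layer features, and the regularizer's centers would be trained jointly with the network's parameters.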

Related articles:
arXiv:1810.09619 [cs.LG] (Published 2018-10-23)
Sparse DNNs with Improved Adversarial Robustness
arXiv:2003.09461 [cs.LG] (Published 2020-03-20)
Adversarial Robustness on In- and Out-Distribution Improves Explainability
arXiv:2102.08868 [cs.LG] (Published 2021-02-17)
Bridging the Gap Between Adversarial Robustness and Optimization Bias