arXiv:2006.00387 [cs.LG]

Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training

Zheng Xu, Ali Shafahi, Tom Goldstein

Published 2020-05-30 (Version 1)

Adversarial training has proven effective at hardening networks against adversarial examples. However, the robustness gained is limited by network capacity and the number of training samples; consequently, it is common practice to train widened networks with more parameters to build more robust models. To boost robustness, we instead propose a conditional normalization module that adapts the network to each input sample. Our adaptive networks, once adversarially trained, can outperform their non-adaptive counterparts on both clean validation accuracy and robustness. Our method is objective-agnostic and consistently improves both the conventional adversarial training objective and the TRADES objective. Our adaptive networks also outperform larger widened non-adaptive architectures with 1.5 times more parameters. We further introduce several practical "tricks" for adversarial training that improve robustness, and we empirically verify their effectiveness.
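
The abstract does not specify the module's architecture. As a rough sketch only, the following PyTorch snippet shows one plausible form of input-conditioned normalization, in which the per-channel scale and shift of a normalization layer are predicted from a per-sample embedding rather than learned as fixed constants. The class name ConditionalNorm2d, the cond argument, and the initialization scheme are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class ConditionalNorm2d(nn.Module):
    # Hypothetical sketch (not the authors' code): batch normalization whose
    # per-channel affine parameters (gamma, beta) are predicted from a
    # conditioning vector derived from the input sample.
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        # Normalize without fixed affine parameters; the affine part is
        # produced per sample by the conditioning branch below.
        self.norm = nn.BatchNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(cond_dim, num_channels)
        self.to_beta = nn.Linear(cond_dim, num_channels)
        # Initialize so the layer starts as plain batch norm (gamma=1, beta=0).
        nn.init.zeros_(self.to_gamma.weight)
        nn.init.ones_(self.to_gamma.bias)
        nn.init.zeros_(self.to_beta.weight)
        nn.init.zeros_(self.to_beta.bias)

    def forward(self, x, cond):
        # x: (N, C, H, W) feature map; cond: (N, cond_dim) per-sample
        # embedding, e.g. a globally pooled feature of the input image.
        h = self.norm(x)
        gamma = self.to_gamma(cond)[:, :, None, None]  # (N, C, 1, 1)
        beta = self.to_beta(cond)[:, :, None, None]
        return gamma * h + beta

Layers of this kind could replace a backbone's standard normalization layers and then be trained under any robust objective (e.g., PGD adversarial training or the TRADES loss), consistent with the abstract's claim that the method is objective-agnostic.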

Related articles:
arXiv:2006.08403 [cs.LG] (Published 2020-06-15)
On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
arXiv:1611.03383 [cs.LG] (Published 2016-11-10)
Disentangling factors of variation in deep representations using adversarial training
arXiv:2007.04472 [cs.LG] (Published 2020-07-08)
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs