arXiv:2408.12670 [cs.LG]

Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks

Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Xinyi Wang, Yiyun Huang, Huaming Chen

Published 2024-08-22 (Version 1)

Adversarial examples are a key means of exploiting deep neural networks. Using gradient information, such examples can be generated efficiently without altering the victim model. Recent frequency-domain transformations, such as the spectrum simulation attack, have further enhanced the transferability of these adversarial examples. In this work, we investigate the effectiveness of frequency domain-based attacks and find that it aligns with similar findings in the spatial domain. Moreover, this consistency between the frequency and spatial domains offers insight into how gradient-based adversarial attacks induce perturbations across different domains, a question that remains largely unexplored. Hence, we propose a simple, effective, and scalable gradient-based adversarial attack algorithm that leverages the information consistency in both the frequency and spatial domains. We evaluate the algorithm's effectiveness against different models. Extensive experiments demonstrate that our algorithm achieves state-of-the-art results compared to other gradient-based algorithms. Our code is available at: https://github.com/LMBTough/FSA.
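As a rough illustration of the idea described above (not the authors' FSA implementation, which is in the linked repository), the sketch below shows an iterative sign-gradient attack that combines the gradient taken on the spatial-domain input with the gradient taken on a copy whose spectrum has been randomly perturbed, in the spirit of spectrum-simulation attacks. The model interface, loss, and hyper-parameters (`eps`, `steps`, `rho`) are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch classifier that accepts inputs in [0, 1].
# This is NOT the authors' FSA algorithm; it only illustrates combining
# spatial-domain and frequency-domain gradient signals in one attack loop.
import torch


def frequency_spatial_attack(model, x, y, eps=16 / 255, steps=10, rho=0.5):
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)

        # Spatial-domain loss on the current adversarial example.
        loss_spatial = loss_fn(model(x_adv), y)

        # Frequency-domain view: randomly rescale the spectrum and
        # transform back (spectrum-simulation style), then compute the
        # loss on that copy; gradients still flow back to x_adv.
        spectrum = torch.fft.fft2(x_adv)
        noise = 1 + rho * (torch.rand_like(x_adv) * 2 - 1)
        x_freq = torch.fft.ifft2(spectrum * noise).real.clamp(0, 1)
        loss_freq = loss_fn(model(x_freq), y)

        # Combine both views; the consistency of these two gradient
        # signals is the intuition the abstract refers to.
        grad = torch.autograd.grad(loss_spatial + loss_freq, x_adv)[0]

        # Sign-gradient step, projected back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```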

Related articles:
arXiv:1910.01589 [cs.LG] (Published 2019-10-03)
Graph Analysis and Graph Pooling in the Spatial Domain
arXiv:2003.11702 [cs.LG] (Published 2020-03-26)
Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks
arXiv:2401.09071 [cs.LG] (Published 2024-01-17)
Rethinking Spectral Graph Neural Networks with Spatially Adaptive Filtering