arXiv:2002.10252 [cs.LG]

TensorShield: Tensor-based Defense Against Adversarial Attacks on Images

Negin Entezari, Evangelos E. Papalexakis

Published 2020-02-18, Version 1

Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input can change the output of a deep neural network. Relying on vulnerable machine learning methods raises many concerns, especially in domains where security is an important factor, so it is crucial to design defense mechanisms against adversarial attacks. For the task of image classification, these unnoticeable perturbations mostly occur in the high-frequency spectrum of the image. In this paper, we use tensor decomposition techniques as a preprocessing step to find a low-rank approximation of images, which effectively discards high-frequency perturbations. Recently, a defense framework called Shield was shown to "vaccinate" Convolutional Neural Networks (CNNs) against adversarial examples by performing random-quality JPEG compressions on local patches of images from the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from Shield by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
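
The underlying idea of the abstract, reconstructing an image from a truncated tensor decomposition so that high-frequency (and thus largely adversarial) content is discarded before classification, can be sketched as follows. The snippet below is a minimal NumPy illustration using a truncated higher-order SVD (a Tucker-style decomposition) of an RGB image tensor; the function names, the chosen ranks, and the toy image are assumptions for demonstration only and do not reproduce TensorShield's exact decomposition or hyperparameters.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def truncated_hosvd(tensor, ranks):
    """Truncated higher-order SVD (a Tucker-style low-rank approximation).

    Keeps the leading `ranks[n]` left singular vectors of each mode-n
    unfolding and returns the low-rank reconstruction of `tensor`.
    """
    # Factor matrices: leading singular vectors of each mode unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: project every mode onto its factor subspace.
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    # Map the core back to image space to obtain the low-rank approximation.
    recon = core
    for mode, U in enumerate(factors):
        recon = np.moveaxis(np.tensordot(U, recon, axes=(1, mode)), 0, mode)
    return recon

# Example: preprocess a (possibly adversarially perturbed) RGB image.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))                       # stand-in for a real image
perturbed = image + 0.01 * rng.standard_normal(image.shape)
defended = truncated_hosvd(perturbed, ranks=(40, 40, 3))  # illustrative ranks
defended = np.clip(defended, 0.0, 1.0)                  # keep valid pixel range
```

Lower mode ranks discard more high-frequency detail (stronger filtering but more distortion of the clean image); the trade-off the paper targets is choosing decompositions and ranks that suppress adversarial perturbations while keeping classification accuracy and runtime comparable to Shield's SLQ preprocessing.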

Related articles:
arXiv:1909.08072 [cs.LG] (Published 2019-09-17)
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
arXiv:1911.04636 [cs.LG] (Published 2019-11-12)
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory
arXiv:1803.08680 [cs.LG] (Published 2018-03-23, updated 2018-07-09)
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization