arXiv:2401.02633 [cs.CR]

A Random Ensemble of Encrypted models for Enhancing Robustness against Adversarial Examples

Ryota Iijima, Sayaka Shiota, Hitoshi Kiya

Published 2024-01-05, Version 1

Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability: AEs generated for a source model can fool another black-box model (the target model) with a non-trivial probability. Previous studies confirmed that the vision transformer (ViT) is more robust against adversarial transferability than convolutional neural network (CNN) models such as ConvMixer, and that an encrypted ViT is more robust than a ViT without any encryption. In this article, we propose a random ensemble of encrypted ViT models to achieve even more robust models. In experiments, the proposed scheme is verified to be more robust than conventional methods against not only black-box attacks but also white-box ones.
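To make the idea concrete, here is a minimal sketch of a random ensemble of encrypted models. It assumes a pool of models, each trained on inputs transformed with its own secret key, and uses block scrambling as a stand-in encryption transform; the class and function names, the scrambling cipher, and the subset size are illustrative assumptions, not the paper's actual implementation.

```python
import random
import torch
import torch.nn as nn

def scramble_blocks(x, key, block=16):
    """Permute non-overlapping pixel blocks with a secret key.
    (One common learnable-encryption transform; the paper's exact
    cipher may differ.)"""
    b, c, h, w = x.shape
    gh, gw = h // block, w // block
    # Split the image into a grid of (block x block) patches.
    blocks = x.reshape(b, c, gh, block, gw, block).permute(0, 2, 4, 1, 3, 5)
    blocks = blocks.reshape(b, gh * gw, c, block, block)
    # Key-dependent, reproducible permutation of the patches.
    perm = torch.randperm(gh * gw,
                          generator=torch.Generator().manual_seed(key))
    blocks = blocks[:, perm]
    # Reassemble the scrambled image.
    blocks = blocks.reshape(b, gh, gw, c, block, block).permute(0, 3, 1, 4, 2, 5)
    return blocks.reshape(b, c, h, w)

class RandomEncryptedEnsemble(nn.Module):
    """At each inference, draw a random subset of key-specific models and
    average their logits, so an attacker cannot know which models were used."""
    def __init__(self, models, keys, subset_size=3):
        super().__init__()
        assert len(models) == len(keys)
        self.models = nn.ModuleList(models)
        self.keys = keys
        self.subset_size = subset_size

    def forward(self, x):
        idx = random.sample(range(len(self.models)), self.subset_size)
        logits = [self.models[i](scramble_blocks(x, self.keys[i])) for i in idx]
        return torch.stack(logits).mean(dim=0)
```

Because a fresh subset is drawn for every query, a gradient-based white-box attack effectively faces a different model each time, which is the intuition behind the ensemble's added robustness.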

Related articles:
arXiv:2307.13985 [cs.CR] (Published 2023-07-26)
Enhanced Security against Adversarial Examples Using a Random Ensemble of Encrypted Vision Transformer Models
arXiv:2011.05976 [cs.CR] (Published 2020-11-02)
Vulnerability of the Neural Networks Against Adversarial Examples: A Survey
arXiv:2010.16204 [cs.CR] (Published 2020-10-30)
Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks