arXiv:2001.03460 [cs.CV]

Cloud-based Image Classification Service Is Not Robust To Adversarial Examples: A Forgotten Battlefield

Dou Goodman

Published: 2020-01-08 (Version 1)

In recent years, Deep Learning (DL) techniques have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms have been reported to match or even surpass human performance. At the same time, many recent works have demonstrated that DL models are vulnerable to adversarial examples. Fortunately for defenders, generating adversarial examples usually requires white-box access to the victim model, and real-world cloud-based image classification services are more opaque than a white-box classifier: the architecture and parameters of DL models on cloud platforms cannot be obtained by the attacker, who can only access the APIs exposed by the platform. Thus, keeping models in the cloud can give a (false) sense of security. In this paper, we focus on the security of real-world cloud-based image classification services. Specifically, (1) we propose two novel attack methods, the Image Fusion (IF) attack and the Fast Featuremap Loss PGD (FFL-PGD) attack, both based on a substitution model, which achieve a high bypass rate with a very limited number of queries: instead of the millions of queries used in previous studies, our methods find adversarial examples using only two queries per image; (2) we make the first attempt to conduct an extensive empirical study of black-box attacks against real-world cloud-based classification services, and through evaluations on four popular cloud platforms (Amazon, Google, Microsoft, and Clarifai) we demonstrate that the Spatial Transformation (ST) attack has a success rate of approximately 100% (approximately 50% on Amazon), while the IF and FFL-PGD attacks have success rates above 90% across the different classification services; and (3) we discuss possible defenses against these security challenges in cloud-based classification services. Our defenses fall into two stages: model training and image preprocessing.
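The abstract does not spell out the Image Fusion attack's internals, but its two-query budget suggests a shape like the following minimal Python sketch. Everything here is an assumption for illustration: the attack is modeled as alpha-blending the original image with an unrelated "carrier" image, and query_cloud_api is a hypothetical placeholder for a real cloud classification endpoint, not an API from the paper.

    # Minimal sketch of an image-fusion-style black-box attack (assumed
    # mechanism, not the paper's exact method). Two API queries per image,
    # matching the query budget reported in the abstract.
    import numpy as np
    from PIL import Image

    def query_cloud_api(img: np.ndarray) -> str:
        """Hypothetical black-box API: returns the service's top-1 label."""
        raise NotImplementedError("replace with a real cloud API call")

    def image_fusion_attack(original: np.ndarray,
                            carrier: np.ndarray,
                            alpha: float = 0.3):
        """Blend `carrier` into `original`; succeed if the label changes."""
        clean_label = query_cloud_api(original)            # query 1
        fused = ((1 - alpha) * original + alpha * carrier).astype(np.uint8)
        if query_cloud_api(fused) != clean_label:          # query 2
            return fused                                   # bypass succeeded
        return None

    # Usage (requires a real API behind query_cloud_api; file names
    # are placeholders):
    # original = np.asarray(Image.open("cat.jpg").resize((224, 224)))
    # carrier = np.asarray(Image.open("carrier.jpg").resize((224, 224)))
    # adv = image_fusion_attack(original, carrier)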
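The image preprocessing stage of the defense is likewise not detailed in the abstract. As a sketch of what such a stage commonly looks like, the snippet below applies median filtering and lossy JPEG re-encoding, two standard input transformations for blunting pixel-level perturbations; the specific transforms and parameters are assumptions, not the paper's method.

    # Minimal sketch of an image-preprocessing defense stage (assumed
    # transforms: median filter + JPEG re-compression).
    import io
    from PIL import Image, ImageFilter

    def preprocess_defense(img: Image.Image, quality: int = 75) -> Image.Image:
        """Denoise and re-encode an input before classification."""
        buf = io.BytesIO()
        img.filter(ImageFilter.MedianFilter(size=3)).save(
            buf, format="JPEG", quality=quality)   # lossy re-encode
        buf.seek(0)
        return Image.open(buf).convert("RGB")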

Comments: Accepted by Defcon China 2019. arXiv admin note: substantial text overlap with arXiv:1906.07997; text overlap with arXiv:1901.01223, arXiv:1704.05051 by other authors
Categories: cs.CV, cs.CR, cs.LG