arXiv Analytics

arXiv:1911.05268 [cs.LG]

Adversarial Examples in Modern Machine Learning: A Review

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker

Published 2019-11-13Version 1

Recent research has found that many families of machine learning models are vulnerable to adversarial examples: inputs that are specifically designed to cause the target model to produce erroneous outputs. In this survey, we focus on machine learning models in the visual domain, where methods for generating and detecting such examples have been most extensively studied. We explore a variety of adversarial attack methods that apply to image-space content, real-world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples. We also discuss the strengths and weaknesses of various methods of adversarial attack and defense. Our aim is to provide extensive coverage of the field, furnishing the reader with an intuitive understanding of the mechanics of adversarial attacks and defense mechanisms and enlarging the community of researchers studying this fundamental set of problems.
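To make the core idea concrete: a canonical gradient-based attack of the kind this survey covers is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that increases the model's loss, bounded in the L-infinity norm. Below is a minimal illustrative sketch on a toy logistic-regression model (the model, weights, and epsilon value are hypothetical choices for this example, not taken from the paper, which surveys attacks against deep image classifiers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, b, x, y, eps):
    # Gradient of the loss with respect to the *input* x; for logistic
    # regression this has the closed form (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Step in the direction that increases the loss, bounded in L-inf by eps.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # toy model weights
b = 0.1
x = rng.normal(size=4)   # clean input
y = 1.0                  # true label

x_adv = fgsm(w, b, x, y, eps=0.25)
```

The perturbation stays within an epsilon-ball of the original input (here 0.25 per coordinate) while raising the model's loss, which is the defining trade-off of adversarial examples: a small, often imperceptible change in the input yields a large change in the output.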

Related articles:
arXiv:1812.01804 [cs.LG] (Published 2018-12-05)
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
arXiv:1901.10861 [cs.LG] (Published 2019-01-30)
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance
arXiv:2002.02196 [cs.LG] (Published 2020-02-06)
AI-GAN: Attack-Inspired Generation of Adversarial Examples