arXiv Analytics


arXiv:2006.06861 [cs.LG]

Robustness to Adversarial Attacks in Learning-Enabled Controllers

Zikang Xiong, Joe Eappen, He Zhu, Suresh Jagannathan

Published 2020-06-11 (Version 1)

Learning-enabled controllers used in cyber-physical systems (CPS) are known to be susceptible to adversarial attacks. Such attacks manifest as perturbations to the states generated by the controller's environment in response to its actions. We consider state perturbations that encompass a wide variety of adversarial attacks and describe an attack scheme for discovering adversarial states. To be useful, these attacks need to be natural, yielding states in which the controller can be reasonably expected to generate a meaningful response. We consider shield-based defenses as a means to improve controller robustness in the face of such perturbations. Our defense strategy allows us to treat the controller and environment as black-boxes with unknown dynamics. We provide a two-stage approach to construct this defense and show its effectiveness through a range of experiments on realistic continuous control domains such as the navigation control-loop of an F16 aircraft and the motion control system of humanoid robots.
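The abstract describes a shield-based defense that wraps a black-box controller. As a rough illustration of that idea only (the `Shield` class, its policies, and the safety predicate below are hypothetical, not the authors' two-stage construction), a minimal sketch might look like:

```python
# Hypothetical sketch of a shield-based defense: a monitor checks each action
# proposed by a black-box learned controller and, if the action could be
# unsafe (e.g. under an adversarially perturbed state), substitutes a
# conservative backup action. Names and logic here are illustrative
# assumptions, not the paper's actual method.

class Shield:
    def __init__(self, controller, backup_policy, is_safe):
        self.controller = controller  # black-box learned policy: state -> action
        self.backup = backup_policy   # conservative fallback policy: state -> action
        self.is_safe = is_safe        # safety predicate on (state, action)

    def act(self, state):
        action = self.controller(state)
        if self.is_safe(state, action):
            return action
        # Proposed action fails the safety check; fall back to the backup.
        return self.backup(state)

# Toy 1-D example: keep position + action within [-1, 1].
learned = lambda x: 2.0 * x            # aggressive learned controller
backup = lambda x: -0.5 * x            # conservative fallback
safe = lambda x, a: abs(x + a) <= 1.0  # next state must stay in bounds

shield = Shield(learned, backup, safe)
print(shield.act(0.3))  # learned action 0.6 passes the check -> 0.6
print(shield.act(0.8))  # learned action 1.6 is unsafe -> backup gives -0.4
```

The point of the wrapper is that neither the controller nor the environment dynamics need to be known: the shield only inspects the proposed state-action pair, which matches the black-box treatment described above.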

Related articles: Most relevant | Search more
arXiv:2003.03778 [cs.LG] (Published 2020-03-08)
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
arXiv:2007.06381 [cs.LG] (Published 2020-07-13)
A simple defense against adversarial attacks on heatmap explanations
arXiv:1811.06492 [cs.LG] (Published 2018-11-15)
Mathematical Analysis of Adversarial Attacks