arXiv Analytics

arXiv:1907.05418 [cs.CR]

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li

Published 2019-07-11 (Version 1)

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples: carefully crafted inputs with small-magnitude perturbations that induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can threaten real-world security-critical applications: a "physical adversarial Stop Sign" can be synthesized such that autonomous driving cars misrecognize it as a different sign (e.g., a speed limit sign). However, these image-space adversarial examples cannot easily alter the 3D scans produced by the LiDAR and radar sensors widely equipped on autonomous vehicles. In this paper, we reveal potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing an optimization-based approach, LiDAR-Adv, that generates adversarial objects capable of evading LiDAR-based detection under various conditions. We first demonstrate the vulnerabilities using a black-box evolution-based algorithm, and then explore how much a stronger adversary can achieve with our gradient-based approach, LiDAR-Adv. We evaluate the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks. We also 3D-print our adversarial objects and perform physical experiments to demonstrate that this vulnerability exists in the real world. More visualizations and results are available on the anonymous website: https://sites.google.com/view/lidar-adv.
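To make the gradient-based idea concrete, the sketch below shows the general shape of such an attack: optimize a small displacement of an object's vertices so that a differentiable rendering of the object lowers a detector's confidence, while keeping the displacement bounded. This is a minimal illustrative sketch, not the paper's LiDAR-Adv implementation; the `lidar_render` function, the detector network, and the displacement bound `eps` are toy stand-ins introduced here for illustration only (the real pipeline renders LiDAR point clouds and attacks the Apollo perception stack).

```python
# Minimal sketch (assumed, not the paper's code): gradient-based optimization of a
# vertex perturbation that suppresses a detector's output under a displacement bound.
import torch

torch.manual_seed(0)

# Toy object: 200 mesh vertices in 3D.
vertices = torch.randn(200, 3)

# Toy stand-in for a differentiable LiDAR renderer: maps vertices to a fixed-size
# feature vector (a real renderer would ray-cast the object into a point cloud).
proj = torch.randn(3, 64)
def lidar_render(v):
    return torch.tanh(v @ proj).mean(dim=0)

# Toy stand-in for the detection head: a single "object present" logit.
detector = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)

# Adversarial displacement of the vertices, kept small so the object stays plausible.
delta = torch.zeros_like(vertices, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 0.05  # per-coordinate displacement bound (assumed constraint)

for step in range(300):
    opt.zero_grad()
    scan = lidar_render(vertices + delta)
    logit = detector(scan)
    # Objective: push the detection logit down while penalizing large displacements.
    loss = logit.squeeze() + 0.1 * delta.norm()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # project back into the allowed perturbation set

print("final detection logit:", detector(lidar_render(vertices + delta)).item())
```

The black-box evolution-based variant mentioned in the abstract would follow the same outer structure but replace the gradient step with mutation and selection over candidate perturbations, since it assumes no access to detector gradients.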

Related articles:
arXiv:1909.08526 [cs.CR] (Published 2019-09-17)
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
arXiv:2007.06993 [cs.CR] (Published 2020-07-14)
Adversarial Examples and Metrics
arXiv:2401.02633 [cs.CR] (Published 2024-01-05)
A Random Ensemble of Encrypted models for Enhancing Robustness against Adversarial Examples