arXiv:2307.12204 [cs.LG]

Adversarial Agents For Attacking Inaudible Voice Activated Devices

Forrest McKee, David Noever

Published 2023-07-23 (Version 1)

Our analysis of inaudible attacks on voice-activated devices confirms an alarming risk score of 7.6 out of 10, underlining significant security vulnerabilities rated independently by the NIST National Vulnerability Database (NVD). Our baseline network model showcases a scenario in which an attacker uses inaudible voice commands to gain unauthorized access to confidential information on a secured laptop. We simulated many attack scenarios on this baseline network model, revealing the potential for mass exploitation of interconnected devices: with physical access alone, an attacker can discover and own privileged information without adding new hardware or amplifying device skills. Using Microsoft's CyberBattleSim framework, we evaluated six reinforcement learning algorithms and found that Deep-Q learning with exploitation proved optimal, leading to rapid ownership of all nodes in fewer steps than the alternatives. Our findings underscore the critical need to understand non-conventional networks and to develop new cybersecurity measures for an ever-expanding digital landscape, particularly one characterized by mobile devices, voice activation, and non-linear microphones susceptible to malicious actors mounting stealth attacks in the near-ultrasound or inaudible ranges. By 2024, this new attack surface might encompass more digital voice assistants than people on the planet, yet it offers fewer remedies than conventional patching or firmware fixes, since the inaudible attacks arise inherently from microphone design and digital signal processing.
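
The inaudible channel the abstract describes exploits microphone non-linearity: a voice command amplitude-modulated onto a near-ultrasound carrier is silent to humans, but the microphone's second-order response demodulates the envelope back into the audible band. The sketch below illustrates only that signal-processing principle; the carrier frequency, modulation depth, nonlinearity coefficients, and the stand-in tone are illustrative assumptions, not parameters from the paper.

```python
# Principle behind inaudible ("near-ultrasound") voice commands: amplitude-
# modulate a baseband command onto an ultrasonic carrier, then let the
# microphone's nonlinear response demodulate it. Values are illustrative.
import numpy as np

fs = 192_000                     # sample rate high enough for a ~25 kHz carrier
fc = 25_000                      # carrier above typical human hearing
t = np.arange(0, 1.0, 1 / fs)    # one second of signal

# Stand-in for a recorded voice command: a 440 Hz tone in [-1, 1].
command = np.sin(2 * np.pi * 440 * t)

# Classic AM: the transmitted signal has no audible-band energy of its own.
carrier = np.cos(2 * np.pi * fc * t)
transmitted = (1 + 0.8 * command) * carrier      # modulation depth 0.8

# Model a nonlinear microphone, y = a1*x + a2*x**2. Squaring the AM signal
# produces a baseband copy of the command, which a low-pass filter recovers
# (here: a crude 64-tap moving average).
a1, a2 = 1.0, 0.1
mic_output = a1 * transmitted + a2 * transmitted**2
kernel = np.ones(64) / 64
recovered = np.convolve(mic_output, kernel, mode="same")
```

Because the only audible-band content appears after the squaring term, the attack cannot be patched out in software alone, which is the abstract's point about remedies being scarcer than firmware fixes.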
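The paper's simulations run in Microsoft's CyberBattleSim with Deep-Q learning. As a self-contained stand-in, the sketch below swaps in plain tabular Q-learning with an epsilon-greedy (exploitation-heavy) policy on a hypothetical six-node lateral-movement graph, where the objective mirrors the abstract's: own all nodes in as few steps as possible. The graph, rewards, and hyperparameters are invented for illustration and do not reflect the paper's environment or the CyberBattleSim API.

```python
# Toy lateral-movement formulation: owning node i unlocks attacks on its
# neighbors; the episode ends when every node is owned. Tabular Q-learning
# stands in for the Deep-Q agent used in the paper.
import random
import numpy as np

edges = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
n_nodes = 6

Q = np.zeros((2 ** n_nodes, n_nodes))    # state = bitmask of owned nodes
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration

def actions(state):
    """Unowned nodes reachable from any currently owned node."""
    owned = [i for i in range(n_nodes) if state >> i & 1]
    return sorted({j for i in owned for j in edges[i] if not state >> j & 1})

for episode in range(2000):
    state = 1                             # attacker starts owning node 0
    while actions(state):
        acts = actions(state)
        # Epsilon-greedy: mostly exploit the best known action, rarely explore.
        if random.random() < epsilon:
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda j: Q[state, j])
        next_state = state | (1 << a)
        # Small reward per compromise, large bonus for owning every node.
        reward = 100.0 if next_state == 2 ** n_nodes - 1 else 1.0
        future = max((Q[next_state, j] for j in actions(next_state)), default=0.0)
        Q[state, a] += alpha * (reward + gamma * future - Q[state, a])
        state = next_state
```

Replacing the Q table with a neural network approximator over a richer observation space is what distinguishes the Deep-Q agent the authors found optimal; the exploitation-weighted action selection is the same idea as the epsilon-greedy rule above.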

Related articles:
arXiv:2103.06473 [cs.LG] (Published 2021-03-11)
Multi-Task Federated Reinforcement Learning with Adversaries
arXiv:2206.02834 [cs.LG] (Published 2022-06-06)
Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds
arXiv:2403.09940 [cs.LG] (Published 2024-03-15)
Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries