arXiv Analytics

arXiv:2103.06473 [cs.LG]

Multi-Task Federated Reinforcement Learning with Adversaries

Aqeel Anwar, Arijit Raychowdhury

Published 2021-03-11 (Version 1)

Reinforcement learning algorithms, like other machine learning algorithms, are vulnerable to adversaries: an adversary can manipulate the learning process so that it converges to a non-optimal policy. In this paper, we analyze multi-task federated reinforcement learning, in which multiple collaborating agents, each in its own environment, jointly maximize the sum of discounted returns in the presence of adversarial agents. We argue that common attack methods are not guaranteed to succeed against multi-task federated reinforcement learning, and we propose an adaptive attack method with better attack performance. Furthermore, we modify the conventional federated reinforcement learning algorithm to address adversaries in a way that works equally well with and without them. Experiments on several small to mid-size reinforcement learning problems show that the proposed attack outperforms general attack methods, and that the proposed modification achieves near-optimal policies in the presence of adversarial agents.
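The federated setting the abstract describes can be sketched in a toy round of parameter averaging. This is an illustrative assumption, not the paper's algorithm: the function names (`local_update`, `fed_avg`), the gradient-flipping adversary, and the toy gradients are all hypothetical, and stand in for one of the "common attack methods" rather than the paper's adaptive attack or its modified aggregation.

```python
# Hypothetical sketch: federated averaging of policy parameters with one
# adversarial agent. All names and values are illustrative assumptions.

def local_update(params, grad, lr=0.1):
    # Honest agent: one gradient-ascent step on its local
    # discounted-return objective.
    return [p + lr * g for p, g in zip(params, grad)]

def adversarial_update(params, grad, lr=0.1):
    # Naive attack (gradient flipping): step in the opposite direction.
    return [p - lr * g for p, g in zip(params, grad)]

def fed_avg(updates):
    # Server step: element-wise average of the submitted parameter vectors.
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

# One toy round: 3 honest agents and 1 adversary share global parameters.
global_params = [0.0, 0.0]
honest_grads = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]
adv_grad = [1.0, 2.0]

updates = [local_update(global_params, g) for g in honest_grads]
updates.append(adversarial_update(global_params, adv_grad))
global_params = fed_avg(updates)
print(global_params)  # the flipped update partially cancels the honest ones
```

With plain averaging, a single adversary biases the global parameters every round, which is why the abstract argues for both stronger (adaptive) attacks and an aggregation rule robust to them.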

Related articles:
arXiv:2403.09940 [cs.LG] (Published 2024-03-15)
Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries
arXiv:1810.00069 [cs.LG] (Published 2018-09-28)
Adversarial Attacks and Defences: A Survey
arXiv:1904.02841 [cs.LG] (Published 2019-04-05)
Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks