arXiv Analytics

arXiv:2204.09801 [cs.LG]

Exact Formulas for Finite-Time Estimation Errors of Decentralized Temporal Difference Learning with Linear Function Approximation

Xingang Guo, Bin Hu

Published 2022-04-20 (Version 1)

In this paper, we consider the policy evaluation problem in multi-agent reinforcement learning (MARL) and derive exact closed-form formulas for the finite-time mean-squared estimation errors of decentralized temporal difference (TD) learning with linear function approximation. Our analysis hinges on the fact that the decentralized TD learning method can be viewed as a Markov jump linear system (MJLS); standard MJLS theory can then be applied to quantify the mean and covariance matrix of the estimation error of the decentralized TD method at every time step. We also discuss various implications of our exact formulas for algorithm performance. An interesting finding is that, under a necessary and sufficient stability condition, the mean-squared TD estimation error converges to an exact limit at a specific exponential rate.
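The MJLS viewpoint can be illustrated with a toy example. Below is a minimal sketch (not the paper's actual TD error dynamics): a hypothetical scalar two-mode jump linear system x_{k+1} = a[i_k]·x_k + b[i_k], where the mode i_k follows a Markov chain with transition matrix P. The standard coupled moment recursions from MJLS theory propagate the mode-conditioned first and second moments exactly at every time step; all numerical values here are made up for illustration.

```python
import numpy as np

# Hypothetical 2-mode scalar MJLS (illustrative values, not from the paper):
#   x_{k+1} = a[i_k] * x_k + b[i_k],  i_k a Markov chain with kernel P.
a = np.array([0.9, 0.5])       # per-mode dynamics coefficients
b = np.array([0.1, -0.2])      # per-mode affine drift
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])     # P[i, j] = Pr(i_{k+1} = j | i_k = i)
pi0 = np.array([1.0, 0.0])     # initial mode distribution
x0 = 1.0                       # deterministic initial state

def mjls_moments(k_max):
    """Exact E[x_k] and E[x_k^2] for k = 0..k_max via MJLS moment recursions.

    Tracks the mode-conditioned moments
        q[i] = E[x_k * 1{i_k = i}],   Q[i] = E[x_k^2 * 1{i_k = i}],
    which evolve linearly; summing over modes gives the full moments.
    """
    n = len(a)
    q = pi0 * x0           # E[x_0 * 1{i_0 = i}]
    Q = pi0 * x0 ** 2      # E[x_0^2 * 1{i_0 = i}]
    pi = pi0.copy()        # mode marginal Pr(i_k = i)
    means, seconds = [q.sum()], [Q.sum()]
    for _ in range(k_max):
        q_new = np.zeros(n)
        Q_new = np.zeros(n)
        for j in range(n):
            for i in range(n):
                # Condition on i_k = i, then transition i -> j.
                q_new[j] += P[i, j] * (a[i] * q[i] + b[i] * pi[i])
                Q_new[j] += P[i, j] * (a[i] ** 2 * Q[i]
                                       + 2 * a[i] * b[i] * q[i]
                                       + b[i] ** 2 * pi[i])
        pi = pi @ P
        q, Q = q_new, Q_new
        means.append(q.sum())
        seconds.append(Q.sum())
    return means, seconds

means, seconds = mjls_moments(20)
```

In the scalar case E[x_k^2] is the exact mean-squared value at step k, with no Monte Carlo error; the paper's exact finite-time error formulas arise from the matrix-valued analogue of such recursions applied to the decentralized TD error dynamics.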

Related articles:
arXiv:1911.00934 [cs.LG] (Published 2019-11-03)
Finite-Sample Analysis of Decentralized Temporal-Difference Learning with Linear Function Approximation
arXiv:2102.08940 [cs.LG] (Published 2021-02-17)
Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function Approximation
arXiv:2106.11960 [cs.LG] (Published 2021-06-22)
Variance-Aware Off-Policy Evaluation with Linear Function Approximation