arXiv:1909.09621 [stat.ML]

On the Convergence of Approximate and Regularized Policy Iteration Schemes

Elena Smirnova, Elvis Dohmatob

Published 2019-09-20 (Version 1)

Algorithms based on the entropy-regularized framework, such as Soft Q-learning and Soft Actor-Critic, have recently shown state-of-the-art performance on a number of challenging reinforcement learning (RL) tasks. The regularized formulation modifies the standard RL objective and thus, in general, converges to a policy different from the optimal greedy policy of the original RL problem. In practice, it is important to control the suboptimality of the regularized optimal policy. In this paper, we propose an optimality-preserving regularized modified policy iteration (MPI) scheme that simultaneously (a) provides desirable properties to intermediate policies, such as targeted exploration, and (b) guarantees convergence to the optimal policy, with explicit rates depending on the decrease rate of the regularization parameter. This result rests on two more general results. First, we show that the approximate MPI scheme converges as fast as exact MPI if the error sequence decreases sufficiently fast; otherwise, its convergence rate slows down to the decrease rate of the errors. Second, we show that regularized MPI is an instance of approximate MPI, where the regularization plays the role of the errors. In the special case of a negative-entropy regularizer (which leads to the popular Soft Q-learning algorithm), our result explicitly links the convergence rate of the policy/value iterates to exploration.
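
As a rough illustration of the idea described in the abstract (not the paper's exact scheme or rates), the sketch below runs a Soft Q-learning-style soft value iteration on a toy tabular MDP while geometrically decaying the regularization parameter (temperature) tau. As tau shrinks toward zero, the soft-max backup approaches the standard hard-max Bellman backup, so the iterates approach the unregularized optimum. The function name `soft_value_iteration`, the random MDP, and the geometric decay schedule are illustrative assumptions, not part of the paper.

```python
import numpy as np

def soft_value_iteration(P, R, gamma=0.9, tau0=1.0, decay=0.95, n_iters=300):
    """Toy entropy-regularized ("soft") value iteration with a decaying
    temperature, sketched to illustrate driving the regularization
    parameter to zero so the iterates approach the unregularized optimum.

    P: array (A, S, S), P[a, s, s'] = transition probability.
    R: array (S, A), immediate rewards.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    tau = tau0
    for _ in range(n_iters):
        # Q(s, a) = r(s, a) + gamma * sum_{s'} P[a, s, s'] * V(s')
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        # Soft Bellman backup: V(s) = tau * log sum_a exp(Q(s, a) / tau),
        # computed with the max-subtraction trick for numerical stability.
        m = Q.max(axis=1)
        V = m + tau * np.log(np.exp((Q - m[:, None]) / tau).sum(axis=1))
        tau *= decay  # drive the regularization parameter to zero
    # Greedy policy with respect to the final Q-values
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    return V, Q.argmax(axis=1)

# Usage on a random MDP with 5 states and 3 actions (illustrative only)
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)  # normalize rows into distributions
R = rng.random((S, A))
V_soft, pi_soft = soft_value_iteration(P, R)
print("greedy policy from soft iterates:", pi_soft)
```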
