arXiv Analytics

arXiv:2406.02295 [cs.LG]

How to Explore with Belief: State Entropy Maximization in POMDPs

Riccardo Zamboni, Duilio Cirino, Marcello Restelli, Mirco Mutti

Published 2024-06-04 (Version 1)

Recent works have studied *state entropy maximization* in reinforcement learning, in which the agent's objective is to learn a policy inducing a high-entropy distribution over state visitations (Hazan et al., 2019). They typically assume full observability of the state of the system, so that the entropy of the observations is maximized. In practice, the agent may only get *partial* observations, e.g., a robot perceiving the state of a physical space through proximity sensors and cameras. In those settings, a significant mismatch can arise between the entropy over observations and the entropy over the true states of the system. In this paper, we address the problem of entropy maximization over the *true states* with a decision policy conditioned on partial observations *only*. The latter setting generalizes POMDPs and is intractable in general. We develop a memory- and computation-efficient *policy gradient* method to address a first-order relaxation of the objective defined on *belief* states, providing various formal characterizations of approximation gaps, the optimization landscape, and the *hallucination* problem. This paper aims to generalize state entropy maximization to more realistic domains that meet the challenges of applications.
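The mismatch the abstract describes can be seen in a toy example. The sketch below (a minimal illustration, not the authors' algorithm; the 3-state, 2-observation POMDP and its emission matrix are hypothetical) shows how observation aliasing makes observation entropy underestimate true state entropy, and how an expectation over Bayesian *belief* states can recover the state-visitation distribution:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in nats of a discrete distribution, ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Hypothetical emission matrix O[s, o]: two of the three states emit the
# same observation most of the time, so observations alias the true state.
O = np.array([[0.9, 0.1],
              [0.9, 0.1],
              [0.1, 0.9]])

# Suppose some policy induces the uniform visitation over true states.
d_states = np.array([1/3, 1/3, 1/3])

# The induced observation distribution collapses the aliased states ...
d_obs = d_states @ O

H_states = shannon_entropy(d_states)  # maximal: log 3
H_obs = shannon_entropy(d_obs)        # strictly smaller: the mismatch

# A belief-based surrogate: the Bayesian posterior P(s | o) for each
# observation, averaged under P(o), recovers the state-visitation law.
posteriors = (d_states[:, None] * O) / d_obs[None, :]  # column o = P(. | o)
d_belief = (posteriors * d_obs[None, :]).sum(axis=1)
H_belief = shannon_entropy(d_belief)  # matches H_states
```

Maximizing the entropy of the *expected belief* rather than of raw observations is the kind of belief-state objective the abstract's first-order relaxation operates on; the paper's actual estimator and its approximation guarantees are developed in the full text.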
