{ "id": "2212.07946", "version": "v1", "published": "2022-12-15T16:28:06.000Z", "updated": "2022-12-15T16:28:06.000Z", "title": "Combining information-seeking exploration and reward maximization: Unified inference on continuous state and action spaces under partial observability", "authors": [ "Parvin Malekzadeh", "Konstantinos N. Plataniotis" ], "comment": "34 pages, 7 figures", "categories": [ "cs.LG", "cs.AI" ], "abstract": "Reinforcement learning (RL) has gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, meaning agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies have applied RL to POMDPs by recalling previous decisions and observations or by inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical in environments with high-dimensional continuous state and action spaces. Moreover, these so-called inference-based RL approaches require a large number of samples to perform well, since agents eschew uncertainty in the inferred state during decision-making. Active inference is a framework naturally formulated in POMDPs that directs agents to select decisions by minimizing expected free energy (EFE). This supplements the reward-maximizing (exploitative) behavior of RL with information-seeking (exploratory) behavior. Despite this exploratory behavior of active inference, its use has been limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies the two frameworks, and overcomes their aforementioned limitations.
Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partially observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.", "revisions": [ { "version": "v1", "updated": "2022-12-15T16:28:06.000Z" } ], "analyses": { "keywords": [ "action spaces", "combining information-seeking exploration", "reward maximization", "partial observability", "observable markov decision processes" ], "note": { "typesetting": "TeX", "pages": 34, "language": "en", "license": "arXiv", "status": "editable" } } }