{ "id": "2306.17693", "version": "v1", "published": "2023-06-30T14:19:44.000Z", "updated": "2023-06-30T14:19:44.000Z", "title": "Thompson sampling for improved exploration in GFlowNets", "authors": [ "Jarrid Rector-Brooks", "Kanika Madan", "Moksh Jain", "Maksym Korablyov", "Cheng-Hao Liu", "Sarath Chandar", "Nikolay Malkin", "Yoshua Bengio" ], "comment": "Structured Probabilistic Inference and Generative Modeling (SPIGM) workshop @ ICML 2023", "categories": [ "cs.LG" ], "abstract": "Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy. Unlike other algorithms for hierarchical sampling that optimize a variational bound, GFlowNet algorithms can stably run off-policy, which can be advantageous for discovering modes of the target distribution. Despite this flexibility in the choice of behaviour policy, the optimal way of efficiently selecting trajectories for training has not yet been systematically explored. In this paper, we view the choice of trajectories for training as an active learning problem and approach it using Bayesian techniques inspired by methods for multi-armed bandits. The proposed algorithm, Thompson sampling GFlowNets (TS-GFN), maintains an approximate posterior distribution over policies and samples trajectories from this posterior for training. We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.", "revisions": [ { "version": "v1", "updated": "2023-06-30T14:19:44.000Z" } ], "analyses": { "keywords": [ "thompson sampling", "target distribution", "amortized variational inference algorithms", "off-policy exploration strategies", "approximate posterior distribution" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }