arXiv:2302.01324 [cs.LG]

Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback

Fares Fourati, Vaneet Aggarwal, Christopher John Quinn, Mohamed-Slim Alouini

Published 2023-02-02 (Version 1)

We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Previous works investigated the same problem assuming the reward function to be submodular and monotone. In this work, we study a more general problem where the reward function is not necessarily monotone and submodularity is assumed only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and theoretically prove that it achieves a $\frac{1}{2}$-regret upper bound of $\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$ for horizon $T$ and number of arms $n$. We also show empirically that RGL outperforms other full-bandit variants in both submodular and non-submodular settings.
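
Here the $\frac{1}{2}$-regret over horizon $T$ is typically defined as $\mathcal{R}_T = \frac{1}{2}\, T\, f(S^*) - \mathbb{E}\big[\sum_{t=1}^{T} f(S_t)\big]$, where $f$ is the expected reward, $S^*$ an optimal subset, and $S_t$ the subset played at round $t$. For illustration, the following is a minimal Python sketch of a randomized double-greedy pass driven by sampled value estimates, the general template such full-bandit algorithms follow; the oracle name `f_noisy`, the helper `noisy_mean`, and the per-estimate sample budget `m` are hypothetical, and this is a sketch of the approach rather than the authors' exact procedure.

import random

def noisy_mean(f_noisy, S, m):
    # Estimate E[f(S)] by averaging m stochastic reward samples
    # of playing set S (full-bandit feedback: only f(S) is observed).
    return sum(f_noisy(S) for _ in range(m)) / m

def randomized_greedy_bandit(f_noisy, ground_set, m):
    # One randomized double-greedy pass using sampled value estimates.
    # f_noisy: stochastic reward oracle, assumed submodular in expectation.
    # ground_set: the n arms; m: samples per estimate (hypothetical budget).
    X, Y = set(), set(ground_set)
    for i in ground_set:
        # Estimated marginal gain of adding i to X, and of dropping i from Y.
        a = noisy_mean(f_noisy, X | {i}, m) - noisy_mean(f_noisy, X, m)
        b = noisy_mean(f_noisy, Y - {i}, m) - noisy_mean(f_noisy, Y, m)
        a_pos, b_pos = max(a, 0.0), max(b, 0.0)
        # Randomize between adding and removing i, in proportion
        # to the positive estimated gains (ties resolved toward adding).
        p = 1.0 if a_pos + b_pos == 0.0 else a_pos / (a_pos + b_pos)
        if random.random() < p:
            X = X | {i}
        else:
            Y = Y - {i}
    return X  # X == Y after every element has been processed

In explore-then-commit analyses of this kind, letting the total sampling budget grow like $T^{\frac{2}{3}}$ is the standard way to balance estimation error against exploration cost, which is how $\tilde{\mathcal{O}}(T^{\frac{2}{3}})$-type bounds typically arise.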

Related articles:
arXiv:2106.03498 [cs.LG] (Published 2021-06-07)
Identifiability in inverse reinforcement learning
arXiv:1902.10582 [cs.LG] (Published 2019-02-27)
Polynomial-time Algorithms for Combinatorial Pure Exploration with Full-bandit Feedback
arXiv:1801.09624 [cs.LG] (Published 2018-01-29)
Learning the Reward Function for a Misspecified Model