
arXiv:2406.15648 [cs.LG]

Testing the Feasibility of Linear Programs with Bandit Feedback

Aditya Gangrade, Aditya Gopalan, Venkatesh Saligrama, Clayton Scott

Published 2024-06-21 (Version 1)

While the recent literature has seen a surge in the study of constrained bandit problems, all existing methods begin by assuming the feasibility of the underlying problem. We initiate the study of testing such feasibility assumptions, and in particular address the problem in the linear bandit setting, thus characterising the costs of feasibility testing for an unknown linear program using bandit feedback. Concretely, we test whether $\exists x: Ax \ge 0$ for an unknown $A \in \mathbb{R}^{m \times d}$, by playing a sequence of actions $x_t\in \mathbb{R}^d$ and observing $Ax_t + \mathrm{noise}$ in response. By identifying the hypothesis as determining the sign of the value of a minimax game, we construct a novel test based on low-regret algorithms and a nonasymptotic law of the iterated logarithm. We prove that this test is reliable, and adapts to the `signal level,' $\Gamma,$ of any instance, with mean sample costs scaling as $\widetilde{O}(d^2/\Gamma^2)$. We complement this with a minimax lower bound of $\Omega(d/\Gamma^2)$ on the sample costs of reliable tests, dominating prior asymptotic lower bounds by capturing the dependence on $d$, and thus elucidating a basic insight missing in the extant literature on such problems.
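As a minimal illustration of the stopping idea behind such adaptive tests (a sketch, not the paper's algorithm): in the simplest instance $d = m = 1$, playing $x_t = 1$ reduces the problem to deciding the sign of an unknown mean from noisy samples, and an anytime, LIL-style confidence radius lets the test stop after roughly $\sigma^2/\Gamma^2$ samples (up to log factors), mirroring the $\Gamma$-adaptive costs in the abstract. The function names and the exact form of the radius below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lil_radius(t, sigma=1.0, delta=0.05):
    # An anytime (LIL-style) confidence radius of order
    # sigma * sqrt(log log t / t); illustrative constants,
    # not the paper's exact bound.
    return sigma * np.sqrt(2.0 * np.log(np.log(max(t, 3)) / delta) / t)

def sequential_sign_test(oracle, sigma=1.0, delta=0.05, max_t=1_000_000):
    """Decide the sign of an unknown mean mu from samples oracle() = mu + noise.

    Stops as soon as the running mean clears the anytime radius, so the
    sample cost adapts to the signal level |mu|: large signals are decided
    quickly, small ones take ~ sigma^2/mu^2 samples (up to log factors).
    """
    total = 0.0
    for t in range(1, max_t + 1):
        total += oracle()
        mean = total / t
        if abs(mean) > lil_radius(t, sigma, delta):
            return ("feasible" if mean > 0 else "infeasible"), t
    return "undecided", max_t  # signal too weak to resolve within the budget

# Example: a 1-d instance a = 0.5 observed through Gaussian noise.
rng = np.random.default_rng(0)
decision, cost = sequential_sign_test(lambda: 0.5 + 0.1 * rng.normal(),
                                      sigma=0.1)
```

In the general setting of the abstract, the scalar running mean is replaced by the payoff of a minimax game between the action $x$ and a mixture $\lambda$ over the $m$ constraints, with both players run by low-regret algorithms; the LIL radius then certifies the sign of the game's value rather than of a single mean.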

Related articles:
arXiv:2203.16810 [cs.LG] (Published 2022-03-31)
Adaptive Estimation of Random Vectors with Bandit Feedback
arXiv:1202.3079 [cs.LG] (Published 2012-02-14)
Towards minimax policies for online linear optimization with bandit feedback
arXiv:2004.13106 [cs.LG] (Published 2020-04-27)
Learning to Rank in the Position Based Model with Bandit Feedback