arXiv:1903.08600 [cs.LG]

Contextual Bandits with Random Projection

Xiaotian Yu

Published 2019-03-20 (Version 1)

Contextual bandits with linear payoffs, also known as linear bandits, provide a powerful framework for practical sequential decision problems, e.g., online advertising. In the era of big data, contextual data usually tend to be high-dimensional, which creates new challenges for traditional linear bandits, most of which are designed for low-dimensional contexts. Due to the curse of dimensionality, most current bandit algorithms face two challenges with high-dimensional data: high time complexity and extremely large upper regret bounds. In this paper, to address both challenges, we develop an algorithm for Contextual Bandits via RAndom Projection (\texttt{CBRAP}) in the setting of linear payoffs, designed especially for high-dimensional contextual data. The proposed \texttt{CBRAP} algorithm is time-efficient and flexible, because it enables players to choose an arm in a low-dimensional space, and it relaxes the sparsity assumption of a constant number of non-zero components made in previous work. Besides, we provide a linear upper regret bound for the proposed algorithm, which depends on the reduced dimension.
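To make the idea concrete, below is a minimal sketch of a linear-payoff bandit run on randomly projected contexts, in the spirit of the abstract. It is not the authors' CBRAP implementation: the class name, the Gaussian projection matrix, the LinUCB-style confidence bonus, and all parameter names are assumptions introduced for illustration.

```python
import numpy as np


class CBRAPSketch:
    """Illustrative sketch (not the paper's algorithm): a LinUCB-style
    linear bandit whose ridge-regression statistics are maintained in a
    low-dimensional space obtained by random projection of the contexts."""

    def __init__(self, high_dim, low_dim, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Gaussian random projection from R^high_dim to R^low_dim (assumption).
        self.proj = rng.normal(0.0, 1.0 / np.sqrt(low_dim),
                               size=(low_dim, high_dim))
        self.alpha = alpha                # exploration parameter
        self.A = np.eye(low_dim)          # regularized Gram matrix in low dim
        self.b = np.zeros(low_dim)        # reward-weighted feature sum

    def select_arm(self, contexts):
        """contexts: (num_arms, high_dim) array; returns the chosen arm index."""
        z = contexts @ self.proj.T        # project each arm's context
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b            # ridge estimate in the low-dim space
        # Upper confidence bound: estimated payoff plus exploration bonus.
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', z, A_inv, z))
        return int(np.argmax(z @ theta + self.alpha * bonus))

    def update(self, context, reward):
        """Update statistics with the observed reward of the played arm."""
        z = self.proj @ context
        self.A += np.outer(z, z)
        self.b += reward * z
```

The point of the sketch is that all per-round work (matrix inversion and the confidence bonus) scales with the reduced dimension rather than the ambient dimension of the contexts; choosing the projected dimension trades computation and regret against the distortion introduced by the projection.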

Related articles:
arXiv:2106.06483 [cs.LG] (Published 2021-06-11)
Optimal Model Selection in Contextual Bandits with Many Classes via Offline Oracles
arXiv:2307.15154 [cs.LG] (Published 2023-07-27)
A/B Testing and Best-arm Identification for Linear Bandits with Robustness to Non-stationarity
arXiv:2110.03177 [cs.LG] (Published 2021-10-07, updated 2022-02-11)
EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits