arXiv:1603.05359 [cs.LG]

Cascading Bandits for Large-Scale Recommendation Problems

Shi Zong, Hao Ni, Kenny Sung, Nan Rosemary Ke, Zheng Wen, Branislav Kveton

Published 2016-03-17 (Version 1)

Most recommender systems recommend a list of items. The user examines the list from the first item to the last, often clicks on the first attractive item, and does not examine the rest. This type of user behavior can be modeled by the cascade model. In this work, we study cascading bandits, an online learning variant of the cascade model where the goal is to recommend the $K$ most attractive items from a large set of $L$ candidate items. We propose two algorithms for solving this problem, which are based on the idea of linear generalization. The key idea in our solutions is that we learn a predictor of the attraction probabilities of items from their features, as opposed to learning the attraction probability of each item independently as in the existing work. This results in practical learning algorithms whose regret does not depend on the number of items $L$. We bound the regret of one algorithm and comprehensively evaluate the other on a range of recommendation problems. The algorithm performs well and outperforms all baselines.
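To make the linear-generalization idea concrete, below is a minimal sketch of a LinUCB-style cascading bandit round: attraction probabilities are estimated as a linear function of item features, the $K$ items with the highest upper confidence bounds are recommended, and cascade feedback (everything above the click is treated as unattractive, everything below as unobserved) updates a shared ridge-regression estimate. The function names, the parameter `alpha`, and the update rule are illustrative assumptions for exposition, not the authors' exact algorithms.

```python
import numpy as np

def recommend(theta_hat, M, features, K, alpha=1.0):
    """Pick the K items with the highest linear UCBs on their attraction probability.

    theta_hat : (d,) current weight estimate
    M         : (d, d) ridge-regression Gram matrix, lambda*I + sum of x x^T
    features  : (L, d) item feature matrix
    """
    M_inv = np.linalg.inv(M)
    # UCB_i = x_i^T theta_hat + alpha * sqrt(x_i^T M^{-1} x_i)
    widths = np.sqrt(np.einsum('ij,jk,ik->i', features, M_inv, features))
    ucb = features @ theta_hat + alpha * widths
    return np.argsort(-ucb)[:K]

def update(M, B, features, recommended, clicked_pos):
    """Incorporate cascade feedback: positions before the click give reward 0,
    the clicked position gives reward 1, positions after the click are unobserved."""
    last = clicked_pos if clicked_pos is not None else len(recommended) - 1
    for pos, item in enumerate(recommended[: last + 1]):
        x = features[item]
        r = 1.0 if pos == clicked_pos else 0.0
        M += np.outer(x, x)
        B += r * x
    theta_hat = np.linalg.solve(M, B)
    return M, B, theta_hat

# Usage sketch: L items with d-dimensional features, recommend K per round.
L, d, K = 1000, 10, 4
features = np.random.randn(L, d)
M, B = np.eye(d), np.zeros(d)
theta_hat = np.zeros(d)
recommended = recommend(theta_hat, M, features, K)
M, B, theta_hat = update(M, B, features, recommended, clicked_pos=None)  # no click this round
```

Because all items share the single weight vector `theta_hat`, the amount of exploration needed scales with the feature dimension $d$ rather than the number of items $L$, which is what allows the regret to be independent of $L$.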

Related articles:
arXiv:1502.02763 [cs.LG] (Published 2015-02-10)
Cascading Bandits
arXiv:2001.08655 [cs.LG] (Published 2020-01-23)
Best Arm Identification for Cascading Bandits in the Fixed Confidence Setting
arXiv:2202.05664 [cs.LG] (Published 2022-02-11)
Predictive modeling of microbiological seawater quality classification in karst region using cascade model
Ivana Lučin et al.