arXiv Analytics

arXiv:2205.03699 [cs.LG]

Rate-Optimal Contextual Online Matching Bandit

Yuantong Li, Chi-hua Wang, Guang Cheng, Will Wei Sun

Published 2022-05-07Version 1

Two-sided online matching platforms have been employed in various markets. However, agents' preferences in such markets are usually implicit and unknown and must be learned from data. With the growing availability of side information involved in the decision process, modern online matching methodology demands the capability to track preference dynamics for agents based on their contextual information. This motivates us to consider a novel Contextual Online Matching Bandit prOblem (COMBO), which allows dynamic preferences in matching decisions. Existing works focus on multi-armed bandits with static preferences, but this is insufficient: the two-sided preferences change as soon as one side's contextual information updates, resulting in non-static matchings. In this paper, we propose a Centralized Contextual-Explore Then Commit (CC-ETC) algorithm to solve COMBO. CC-ETC handles online matching with dynamic preferences. In theory, we show that CC-ETC achieves a sublinear regret upper bound O(log(T)) and is a rate-optimal algorithm by proving a matching lower bound. In experiments, we demonstrate that CC-ETC is robust to varying preference schemes, dimensions of contexts, reward noise levels, and context variation levels.
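The abstract describes an explore-then-commit scheme in which a central platform first explores to learn context-dependent preferences and then commits to reward-maximizing matchings. The paper's actual CC-ETC algorithm is not reproduced here; the following is a minimal illustrative sketch of the explore-then-commit idea under a hypothetical linear-reward model. All problem sizes, the round-robin exploration schedule, and the least-squares estimator are assumptions made for the sketch, not details taken from the paper:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Toy problem sizes (illustrative, not from the paper): two agents on one
# side of the market, two arms on the other, 3-dimensional contexts, and a
# reward that is linear in the agent's context.
n_agents, n_arms, d = 2, 2, 3
theta = rng.normal(size=(n_arms, d))   # unknown arm parameters
noise = 0.1                            # reward noise level
h = 400                                # length of the exploration phase

# --- Explore phase: round-robin pulls so every (agent, arm) pair is sampled
# and no two agents request the same arm in the same round.
X = [[] for _ in range(n_arms)]        # contexts under which each arm was pulled
Y = [[] for _ in range(n_arms)]        # rewards observed for each arm
for t in range(h):
    ctx = rng.normal(size=(n_agents, d))   # contexts change every round
    for i in range(n_agents):
        k = (t + i) % n_arms               # round-robin assignment
        r = ctx[i] @ theta[k] + noise * rng.normal()
        X[k].append(ctx[i])
        Y[k].append(r)

# --- Estimate each arm's parameter vector by ordinary least squares.
theta_hat = np.zeros_like(theta)
for k in range(n_arms):
    A = np.vstack(X[k])
    theta_hat[k] = np.linalg.lstsq(A, np.array(Y[k]), rcond=None)[0]

# --- Commit phase: for each fresh context, the platform matches agents to
# arms by maximizing the total *estimated* reward over one-to-one matchings
# (brute force over permutations is fine at this toy scale).
def commit_match(ctx):
    est = ctx @ theta_hat.T            # (n_agents, n_arms) estimated rewards
    return max(permutations(range(n_arms)),
               key=lambda p: sum(est[i, p[i]] for i in range(n_agents)))
```

Because contexts are redrawn every round, the committed matching is recomputed per round rather than fixed once, which is the "dynamic preference" aspect the abstract emphasizes.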

Related articles:
arXiv:2403.11782 [cs.LG] (Published 2024-03-18, updated 2024-03-24)
A tutorial on learning from preferences and choices with Gaussian Processes
arXiv:1805.04686 [cs.LG] (Published 2018-05-12)
Adversarial Task Transfer from Preference
arXiv:2210.11692 [cs.LG] (Published 2022-10-21)
Competing Bandits in Time Varying Matching Markets