{ "id": "2210.11692", "version": "v1", "published": "2022-10-21T02:36:57.000Z", "updated": "2022-10-21T02:36:57.000Z", "title": "Competing Bandits in Time Varying Matching Markets", "authors": [ "Deepan Muthirayan", "Chinmay Maheshwari", "Pramod P. Khargonekar", "Shankar Sastry" ], "categories": [ "cs.LG", "cs.GT", "cs.MA" ], "abstract": "We study the problem of online learning in two-sided non-stationary matching markets, where the objective is to converge to a stable match. In particular, we consider the setting where one side of the market, the arms, has a fixed and known set of preferences over the other side, the players. While this problem has been studied when the players have fixed but unknown preferences, in this work we study how to learn when the players' preferences are time varying. We propose the {\\it Restart Competing Bandits (RCB)} algorithm, which combines a simple {\\it restart strategy} to handle the non-stationarity with the {\\it competing bandits} algorithm \\citep{liu2020competing} designed for the stationary case. We show that, with the proposed algorithm, each player incurs a uniform sub-linear regret of {$\\widetilde{\\mathcal{O}}(L_T^{1/2}T^{1/2})$}, where $L_T$ is the number of changes in the underlying preferences of the agents. We also discuss extensions of this algorithm to the case where the number of changes need not be known a priori.", "revisions": [ { "version": "v1", "updated": "2022-10-21T02:36:57.000Z" } ], "analyses": { "keywords": [ "time varying matching markets", "competing bandits", "preference", "uniform sub-linear regret" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }