arXiv Analytics

arXiv:1508.00842 [cs.LG]

Perceptron like Algorithms for Online Learning to Rank

Sougata Chaudhuri, Ambuj Tewari

Published: 2015-08-04 (Version 1)

The perceptron is a classic online algorithm for learning a classification function. We provide the first extension of the perceptron algorithm to the learning to rank problem under popular performance measures such as NDCG and MAP. A modern perspective on the perceptron in classification is to interpret it as online gradient descent (OGD), during mistake rounds, on the hinge loss function. Motivated by this interpretation, we propose a novel family of listwise large margin ranking surrogates. Members of this family can be thought of as analogues of the hinge loss. Using the proposed family, we provide a guaranteed upper bound on the cumulative NDCG (or MAP) induced loss incurred by our perceptron-like algorithm. In a guarantee reminiscent of Novikoff's convergence theorem for the classic perceptron, we show that if there exists an oracle ranker which can correctly rank each instance in an online sequence of ranking data, our perceptron algorithm makes no more than a finite number of mistakes on that sequence. We provide empirical results on large commercial datasets to corroborate our theoretical results.
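For context, the OGD interpretation the abstract builds on can be made concrete with a minimal sketch of the classic binary perceptron (this is illustrative background, not the paper's ranking algorithm; the variable names and toy stream are assumptions). On a mistake round, where y * <w, x> <= 0, the hinge loss max(0, 1 - y * <w, x>) is positive with subgradient -y * x, so an OGD step with unit learning rate recovers exactly the perceptron update w <- w + y * x:

    import numpy as np

    def perceptron_ogd(stream, dim, lr=1.0):
        """Classic perceptron viewed as online gradient descent on the
        hinge loss, with updates taken only on mistake rounds."""
        w = np.zeros(dim)
        mistakes = 0
        for x, y in stream:  # y in {-1, +1}
            if y * np.dot(w, x) <= 0:   # mistake round: hinge loss is active
                w += lr * y * x         # OGD step on subgradient -y * x
                mistakes += 1
        return w, mistakes

    # Toy usage: a linearly separable stream, so mistakes stay finite,
    # as in Novikoff's theorem for the classic perceptron.
    rng = np.random.default_rng(0)
    xs = rng.normal(size=(200, 5))
    ys = np.sign(xs @ np.ones(5))
    w, m = perceptron_ogd(zip(xs, ys), dim=5)

The paper's contribution, per the abstract, is to replace the hinge loss in this picture with listwise large margin ranking surrogates so that the same mistake-driven update scheme applies to ranking under NDCG and MAP.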

Comments: arXiv admin note: text overlap with arXiv:1405.0591
Categories: cs.LG, stat.ML
Related articles:
arXiv:1905.12721 [cs.LG] (Published 2019-05-29)
Matrix-Free Preconditioning in Online Learning
arXiv:2001.06105 [cs.LG] (Published 2020-01-16)
Better Boosting with Bandits for Online Learning
arXiv:1711.03343 [cs.LG] (Published 2017-11-09)
Analysis of Dropout in Online Learning