arXiv:1508.00842 [cs.LG]
Perceptron like Algorithms for Online Learning to Rank
Sougata Chaudhuri, Ambuj Tewari
Published 2015-08-04 (Version 1)
The perceptron is a classic online algorithm for learning a classification function. We provide the first extension of the perceptron algorithm to the learning-to-rank problem under popular performance measures such as NDCG and MAP. A modern perspective on the classification perceptron interprets it as online gradient descent (OGD), during mistake rounds, on the hinge loss function. Motivated by this interpretation, we propose a novel family of listwise large-margin ranking surrogates, whose members can be thought of as analogues of the hinge loss. Using the proposed family, we provide a guaranteed upper bound on the cumulative NDCG (or MAP) induced loss incurred by our perceptron-like algorithm. In a guarantee reminiscent of Novikoff's convergence theorem for the classic perceptron, we show that if there exists an oracle ranker that can correctly rank every instance in an online sequence of ranking data, our perceptron algorithm makes no more than a finite number of mistakes on that sequence. We provide empirical results on large commercial datasets to corroborate our theoretical results.
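To make the OGD interpretation concrete, here is a minimal sketch of the classic classification perceptron written as online gradient descent on a hinge-type loss, where the update fires only on mistake rounds. This illustrates the perspective the abstract builds on; it is not the paper's ranking algorithm, and the function name `perceptron_ogd` is purely illustrative.

```python
import numpy as np

def perceptron_ogd(stream, dim, lr=1.0):
    """Classic perceptron viewed as OGD on the loss max(0, -y * <w, x>).

    The subgradient of this loss is -y * x on mistake rounds
    (y * <w, x> <= 0) and 0 otherwise, so an OGD step with unit
    learning rate recovers the perceptron update w += y * x.
    The paper extends this viewpoint to listwise ranking surrogates
    (not shown here).
    """
    w = np.zeros(dim)
    mistakes = 0
    for x, y in stream:          # x: feature vector, y in {-1, +1}
        if y * np.dot(w, x) <= 0:   # mistake round
            mistakes += 1
            w += lr * y * x         # OGD step on the hinge subgradient
    return w, mistakes

# Usage on a synthetic linearly separable stream:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
ys = np.sign(X @ np.ones(5))
w, m = perceptron_ogd(zip(X, ys), dim=5)
print(f"mistakes: {m}")
```

Because the data above are linearly separable, Novikoff's theorem guarantees the mistake count is finite; the paper's oracle-ranker result is the analogous statement for the ranking setting.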