arXiv:1707.04025 [cs.LG]

On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL

Marco Loog, Jesse H. Krijthe, Are C. Jensen

Published 2017-07-13 (Version 1)

In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies. In this chapter, we argue that if such classifiers, in their respective training phases, optimize a so-called surrogate loss, then it may also be valuable to compare the behavior of this loss on the test set, next to the regular classification error rate. It can provide us with an additional view on the classifiers' relative performances that error rates cannot capture. As an example, limited but convincing empirical results demonstrate that we may be able to find semi-supervised learning strategies that can guarantee performance improvements, in terms of log-likelihood, with increasing amounts of unlabeled data. In contrast, such improvements may be impossible to guarantee for the classification error rate.
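The abstract's core point can be made concrete with a small sketch (not from the chapter itself; the data below are hypothetical): two classifiers can have identical test-set error rates while their test-set surrogate losses, here the average log-likelihood of a binary classifier's predicted probabilities, differ substantially.

```python
import math

def error_rate(probs, labels):
    """0/1 error rate, predicting class 1 when p(y=1) >= 0.5."""
    preds = [1 if p >= 0.5 else 0 for p in probs]
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def avg_log_likelihood(probs, labels):
    """Mean log-likelihood of the true labels under the predicted
    probabilities (the negated log loss); higher is better."""
    eps = 1e-12  # guard against log(0)
    return sum(
        math.log(max(eps, p if y == 1 else 1.0 - p))
        for p, y in zip(probs, labels)
    ) / len(labels)

# Hypothetical test set and two classifiers' predicted p(y=1).
# Both misclassify exactly one example, but A is confidently wrong
# on it while B is only mildly wrong.
labels  = [1, 1, 0, 0]
probs_a = [0.6, 0.6, 0.4, 0.9]
probs_b = [0.9, 0.9, 0.1, 0.6]

print(error_rate(probs_a, labels), error_rate(probs_b, labels))  # 0.25 0.25
print(avg_log_likelihood(probs_a, labels))
print(avg_log_likelihood(probs_b, labels))  # higher than A's
```

Both classifiers score an error rate of 0.25, yet B's average log-likelihood is clearly better: exactly the kind of difference that monitoring only the error rate would miss.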

Journal: Handbook of Pattern Recognition and Computer Vision (pp. 53-68), 2016
Categories: cs.LG, cs.CV, stat.ML
Related articles:
arXiv:2106.11905 [cs.LG] (Published 2021-06-22)
Dangers of Bayesian Model Averaging under Covariate Shift
arXiv:1608.00250 [cs.LG] (Published 2016-07-31)
On Regularization Parameter Estimation under Covariate Shift
arXiv:2003.00343 [cs.LG] (Published 2020-02-29)
Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation