arXiv:1505.00401 [cs.LG]

Visualization of Tradeoff in Evaluation: from Precision-Recall & PN to LIFT, ROC & BIRD

David M. W. Powers

Published 2015-05-03 (Version 1)

Evaluation often aims to reduce the correctness or error characteristics of a system to a single number, but doing so always involves trade-offs. An alternative is to quote two numbers, such as Recall and Precision, or Sensitivity and Specificity. It can also be useful to see more than this: a graphical approach can explore sensitivity to cost, prevalence, bias, noise, parameters and hyper-parameters. Moreover, most techniques implicitly assume two balanced classes, and our ability to visualize graphically is intrinsically two-dimensional, yet we often want to visualize a multiclass context. We review the dichotomous approaches relating to Precision, Recall and ROC, as well as the related LIFT chart, exploring how they handle unbalanced and multiclass data, and deriving new probabilistic and information-theoretic variants of LIFT that help deal with multiple and unbalanced classes.
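
As a minimal sketch of the dichotomous measures mentioned above, the following Python snippet computes Recall (Sensitivity), Precision and Specificity from a 2x2 confusion matrix, together with the corresponding point in ROC space and one common form of LIFT (Precision relative to prevalence). The counts and variable names are illustrative assumptions, not the paper's notation or code.

    # Minimal sketch: dichotomous evaluation measures from a 2x2 confusion
    # matrix. The counts below are illustrative assumptions only.
    tp, fn = 70, 30   # real positives: predicted positive vs. negative
    fp, tn = 20, 80   # real negatives: predicted positive vs. negative

    recall      = tp / (tp + fn)   # Sensitivity, true positive rate
    precision   = tp / (tp + fp)   # fraction of predicted positives that are real
    specificity = tn / (tn + fp)   # true negative rate
    fallout     = 1 - specificity  # false positive rate, the x-axis of ROC
    prevalence  = (tp + fn) / (tp + fn + fp + tn)

    roc_point = (fallout, recall)    # a single classifier is one point in ROC space
    lift = precision / prevalence    # one common LIFT measure: precision vs. prevalence

    print(f"Recall={recall:.2f} Precision={precision:.2f} "
          f"Specificity={specificity:.2f} ROC={roc_point} LIFT={lift:.2f}")
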

Comments: 23 pages, 12 equations, 2 figures, 2 tables, 1 sidebar
Categories: cs.LG, cs.AI, cs.IR, stat.ME, stat.ML