arXiv Analytics

arXiv:1301.0579 [cs.LG]

Almost-everywhere algorithmic stability and generalization error

Samuel Kutin, Partha Niyogi

Published 2012-12-12 (Version 1)

We explore in some detail the notion of algorithmic stability as a viable framework for analyzing the generalization error of learning algorithms. We introduce the new notion of training stability of a learning algorithm and show that, in a general setting, it is sufficient for good bounds on generalization error. In the PAC setting, training stability is both necessary and sufficient for learnability.

The approach based on training stability makes no reference to VC dimension or VC entropy. There is no need to prove uniform convergence, and generalization error is bounded directly via an extended McDiarmid inequality. As a result, it potentially allows us to deal with a broader class of learning algorithms than Empirical Risk Minimization.

We also explore the relationships among VC dimension, generalization error, and various notions of stability. Several examples of learning algorithms are considered.
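As background not stated in the abstract itself, the standard McDiarmid (bounded differences) inequality, which the paper's extended version builds on, can be sketched as follows: if $X_1, \dots, X_m$ are independent random variables and $f$ satisfies $|f(x_1, \dots, x_i, \dots, x_m) - f(x_1, \dots, x_i', \dots, x_m)| \le c_i$ for every coordinate $i$ and every substitution $x_i \mapsto x_i'$, then

\[
\Pr\bigl[\,|f(X_1,\dots,X_m) - \mathbb{E}[f(X_1,\dots,X_m)]| \ge \epsilon\,\bigr] \;\le\; 2\exp\!\left(\frac{-2\epsilon^2}{\sum_{i=1}^m c_i^2}\right).
\]

The "almost-everywhere" extension suggested by the title presumably relaxes the bounded-difference condition so that it need only hold outside a set of small probability; the precise statement is given in the paper.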

Comments: Appears in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI2002)
Categories: cs.LG, stat.ML
Related articles:
arXiv:1901.04609 [cs.LG] (Published 2019-01-15)
Tightening Mutual Information Based Bounds on Generalization Error
arXiv:1312.3970 [cs.LG] (Published 2013-12-13)
An Extensive Evaluation of Filtering Misclassified Instances in Supervised Classification Tasks
arXiv:1711.05482 [cs.LG] (Published 2017-11-15)
Efficient Estimation of Generalization Error and Bias-Variance Components of Ensembles