arXiv Analytics


arXiv:1901.04609 [cs.LG]

Tightening Mutual Information Based Bounds on Generalization Error

Yuheng Bu, Shaofeng Zou, Venugopal V. Veeravalli

Published 2019-01-15 (Version 1)

A mutual-information-based upper bound on the generalization error of a supervised learning algorithm is derived in this paper. The bound is constructed in terms of the mutual information between each individual training sample and the output of the learning algorithm; it requires weaker conditions on the loss function and provides a tighter characterization of the generalization error than existing bounds. Examples are provided to demonstrate that the bound derived in this paper is tighter and has a broader range of applicability. The application to noisy and iterative algorithms, e.g., stochastic gradient Langevin dynamics (SGLD), is also studied, where the constructed bound again provides a tighter characterization of the generalization error than existing results.
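To make the comparison concrete, the following is a hedged sketch of the form such an individual-sample bound typically takes. The notation is introduced here for illustration only (training samples Z_1, ..., Z_n, algorithm output W, sub-Gaussian parameter σ); the precise conditions and constants are those stated in the paper, not in this abstract. Assuming the loss is σ-sub-Gaussian in the appropriate sense, the expected generalization error is controlled by the mutual information between the output and each individual sample:

\[
\bigl| \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \bigr|
\;\le\; \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^{2}\, I(W; Z_i)},
\]

where L_\mu(W) and L_S(W) denote the population and empirical risks and I(W; Z_i) is the mutual information between the output hypothesis and the i-th training sample. This can be contrasted with earlier full-dataset bounds of the form \(\sqrt{2\sigma^{2}\, I(W; S)/n}\) with S = (Z_1, \dots, Z_n); bounding each I(W; Z_i) separately is what underlies the weaker loss-function conditions and the tighter characterization described in the abstract.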

Related articles: Most relevant | Search more
arXiv:1301.0579 [cs.LG] (Published 2012-12-12)
Almost-everywhere algorithmic stability and generalization error
arXiv:1312.3970 [cs.LG] (Published 2013-12-13)
An Extensive Evaluation of Filtering Misclassified Instances in Supervised Classification Tasks
arXiv:1711.05482 [cs.LG] (Published 2017-11-15)
Efficient Estimation of Generalization Error and Bias-Variance Components of Ensembles