arXiv Analytics

arXiv:1711.05482 [cs.LG]

Efficient Estimation of Generalization Error and Bias-Variance Components of Ensembles

Dhruv Mahajan, Vivek Gupta, S Sathiya Keerthi, Sellamanickam Sundararajan, Shravan Narayanamurthy, Rahul Kidambi

Published 2017-11-15Version 1

For many applications, an ensemble of base classifiers is an effective solution. Tuning its parameters (the number of classes, the amount of data on which each classifier is trained, etc.) requires G, the generalization error of a given ensemble. The efficient estimation of G is the focus of this paper. The key idea is to approximate the variance of the class scores/probabilities of the base classifiers, over the randomness imposed by the training subset, by a normal/beta distribution at each point x in the input feature space. We estimate the parameters of the distribution using a small set of randomly chosen base classifiers and use those parameters to derive efficient estimation schemes for G. We give empirical evidence for the quality of the various estimators. We also demonstrate their usefulness in making design choices, such as the number of classifiers in the ensemble and the size of the training subset needed to achieve a target generalization error. Our approach also has great potential for designing distributed ensemble classifiers.
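The core idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes binary classification with a 0.5 decision threshold, uses a normal approximation (the paper also considers a beta distribution), and the function name and arguments are hypothetical. At a single point x, the class-1 scores of a small sample of base classifiers are used to fit a normal distribution, and a Monte Carlo draw then estimates how often an ensemble of a given size would misclassify x.

```python
import numpy as np

def estimate_point_error(scores_small_sample, true_label, n_ensemble,
                         n_mc=10000, seed=0):
    """Estimate, at one input point x, the probability that a majority-score
    ensemble of n_ensemble base classifiers misclassifies x.

    scores_small_sample: class-1 scores of a few randomly chosen base
    classifiers at x (the small sample used to fit the distribution).
    """
    rng = np.random.default_rng(seed)
    # Fit the assumed normal approximation to the base-classifier scores.
    mu = np.mean(scores_small_sample)
    sigma = np.std(scores_small_sample, ddof=1)
    # Simulate n_mc ensembles: each averages n_ensemble base scores drawn
    # from the fitted distribution, mimicking training-subset randomness.
    ensemble_scores = rng.normal(mu, sigma, size=(n_mc, n_ensemble)).mean(axis=1)
    preds = (ensemble_scores >= 0.5).astype(int)
    return float(np.mean(preds != true_label))

# Example: five base classifiers all score x well above 0.5, so a
# 25-classifier ensemble almost never misclassifies a point with label 1.
scores = np.array([0.70, 0.75, 0.80, 0.72, 0.78])
err = estimate_point_error(scores, true_label=1, n_ensemble=25)
```

Averaging such per-point estimates over a test set gives an estimate of G, and sweeping `n_ensemble` shows how the error shrinks as the ensemble grows, which is exactly the kind of design question the abstract mentions.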

Comments: 12 Pages, 4 Figures, Under Review in SDM 2018
Categories: cs.LG, stat.ML
Related articles:
arXiv:1206.3274 [cs.LG] (Published 2012-06-13)
Small Sample Inference for Generalization Error in Classification Using the CUD Bound
arXiv:2107.03633 [cs.LG] (Published 2021-07-08)
Generalization Error of GAN from the Discriminator's Perspective
arXiv:1301.0579 [cs.LG] (Published 2012-12-12)
Almost-everywhere algorithmic stability and generalization error