arXiv Analytics

arXiv:2210.06143 [cs.LG]

On the Importance of Gradient Norm in PAC-Bayesian Bounds

Itai Gat, Yossi Adi, Alexander Schwing, Tamir Hazan

Published 2022-10-12 (Version 1)

Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively. However, to obtain such bounds, current techniques rely on strict assumptions such as a uniformly bounded or a Lipschitz loss function. To avoid these assumptions, in this paper we follow an alternative approach: we relax the uniform-boundedness assumptions to on-average bounded loss and on-average bounded gradient-norm assumptions. Following this relaxation, we propose a new generalization bound that exploits the contractivity of log-Sobolev inequalities. These inequalities add a loss-gradient-norm term to the generalization bound, which intuitively serves as a surrogate for model complexity. We apply the proposed bound to Bayesian deep nets and empirically analyze the effect of this new loss-gradient-norm term on different neural architectures.
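To make the abstract's "on-average bounded gradient norm" quantity concrete, here is a minimal illustrative sketch (not the paper's actual bound computation): for a toy 1-D linear model with squared-error loss, it computes the per-example loss-gradient norm and averages it over a dataset, i.e. an empirical estimate of E_{(x,y)}[ ||∇_w ℓ(w; x, y)|| ]. The model, loss, and data below are hypothetical choices for illustration only.

```python
def loss_grad_norm(w, x, y):
    # Squared-error loss for a 1-D linear model: l(w; x, y) = 0.5 * (w*x - y)^2.
    # Its gradient w.r.t. w is (w*x - y) * x; the norm is its absolute value.
    return abs((w * x - y) * x)

def avg_grad_norm(w, data):
    # Empirical on-average gradient norm: mean of ||grad l(w; x, y)|| over the data.
    # This is the kind of quantity the relaxed assumption bounds on average,
    # rather than requiring a uniform (worst-case) Lipschitz bound.
    return sum(loss_grad_norm(w, x, y) for x, y in data) / len(data)

# Toy dataset of (x, y) pairs; w is a fixed parameter value.
data = [(1.0, 2.0), (2.0, 3.0), (0.5, 1.0)]
print(avg_grad_norm(0.5, data))  # average of |grad| over the three examples
```

In the paper's setting this average would be taken over the data distribution (and the posterior over weights) for a deep network, acting as a surrogate for model complexity inside the PAC-Bayesian bound.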

Comments: NeurIPS 2022. arXiv admin note: text overlap with arXiv:2002.09866
Categories: cs.LG, stat.ML
Related articles:
arXiv:1810.09746 [cs.LG] (Published 2018-10-23)
On PAC-Bayesian Bounds for Random Forests
arXiv:1810.02180 [cs.LG] (Published 2018-10-04)
Improved generalization bounds for robust learning
arXiv:2410.08026 [cs.LG] (Published 2024-10-10)
Generalization Bounds and Model Complexity for Kolmogorov-Arnold Networks