arXiv Analytics


arXiv:1806.05159 [cs.LG]

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond

Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao

Published 2018-06-13, Version 1

Our paper proposes a generalization error bound for a general family of deep neural networks based on the spectral norms of their weight matrices. By introducing a novel characterization of the Lipschitz properties of this family, we achieve a tighter generalization error bound for ultra-deep neural networks, whose depth is much larger than the square root of their width. Beyond general deep neural networks, our results yield new bounds for several popular architectures, including convolutional neural networks (CNNs), residual networks (ResNets), and hyperspherical networks (SphereNets). In the regime where the depth of these architectures dominates, our bounds permit much larger parameter spaces for the weight matrices, inducing potentially stronger expressive power.
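As a minimal illustration of the quantity such spectral-norm-based bounds are built on (a sketch for intuition, not the paper's code or its refined bound): for a ReLU network, the product of the spectral norms, i.e. the largest singular values, of the weight matrices is a standard upper bound on the network's Lipschitz constant, and generalization bounds of this family scale with it.

```python
import numpy as np

def spectral_norm_product(weights):
    """Product of the spectral norms (largest singular values) of the
    weight matrices -- a standard upper bound on the Lipschitz constant
    of a ReLU network, and the quantity spectral-norm generalization
    bounds scale with."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Toy 3-layer network with random weights (illustration only).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 16)) / 4.0 for _ in range(3)]
print(spectral_norm_product(weights))
```

For diagonal matrices the bound is exact and easy to check by hand: with weights 2I and 3I the product of spectral norms is 6.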
