arXiv Analytics

arXiv:1908.06395 [stat.ML]

Towards Better Generalization: BP-SVRG in Training Deep Neural Networks

Hao Jin, Dachao Lin, Zhihua Zhang

Published 2019-08-18 (Version 1)

Stochastic variance-reduced gradient (SVRG) is a classical optimization method. Although it has been theoretically shown to converge faster than stochastic gradient descent (SGD), the generalization performance of SVRG remains an open question. In this paper we investigate the effects of two common training techniques, mini-batching and learning-rate decay, on the generalization performance of SVRG, and verify the generalization performance of Batch-SVRG (B-SVRG). Regarding the relationship between optimization and generalization, we argue that the average norm of the per-sample gradients, together with the norm of the average gradient, indicates how flat the loss landscape is and how well the model generalizes. Based on empirical observations of these metrics, we apply a sign switch to B-SVRG and derive a practical algorithm, BatchPlus-SVRG (BP-SVRG), which is shown numerically to generalize better than B-SVRG, and even better than SGD in some deep neural network scenarios.
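The abstract does not spell out the update rule, so the following is only a minimal NumPy sketch of the classical mini-batch SVRG inner loop (the B-SVRG setting), with a `sign` flag marking one place where a sign switch could be applied. Where BP-SVRG actually flips the sign is an assumption here, and `batch_svrg`, `grad`, and the toy least-squares objective are illustrative names, not the authors' code.

    # Minimal sketch of mini-batch SVRG (B-SVRG) on a toy least-squares problem.
    # The `sign` flag is an illustrative stand-in for the "sign switch" mentioned
    # in the abstract; the exact BP-SVRG rule is not given there.
    import numpy as np

    def grad(w, X, y):
        """Gradient of the mean squared error 0.5 * ||X w - y||^2 / n."""
        return X.T @ (X @ w - y) / len(y)

    def batch_svrg(X, y, lr=0.1, epochs=20, batch_size=8, sign=-1.0, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            w_snap = w.copy()            # snapshot weights for this outer epoch
            mu = grad(w_snap, X, y)      # full gradient at the snapshot
            for _ in range(n // batch_size):
                idx = rng.choice(n, batch_size, replace=False)
                g_cur = grad(w, X[idx], y[idx])         # mini-batch gradient at w
                g_snap = grad(w_snap, X[idx], y[idx])   # same batch at the snapshot
                # Classical SVRG uses sign = -1, i.e. g_cur - g_snap + mu.
                # Setting sign = +1 is one way to read a "sign switch" on the
                # variance-reduction term, purely as an assumption.
                w = w - lr * (g_cur + sign * g_snap - sign * mu)
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(128, 5))
        y = X @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=128)
        w_hat = batch_svrg(X, y)
        print(np.round(w_hat, 2))   # should be close to [1, 2, 3, 4, 5]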

Related articles:
arXiv:1812.00542 [stat.ML] (Published 2018-12-03)
Towards Theoretical Understanding of Large Batch Training in Stochastic Gradient Descent
arXiv:2207.04922 [stat.ML] (Published 2022-07-11)
On uniform-in-time diffusion approximation for stochastic gradient descent
arXiv:2204.01365 [stat.ML] (Published 2022-04-04)
Deep learning, stochastic gradient descent and diffusion maps