arXiv Analytics

arXiv:1801.06159 [stat.ML]

When Does Stochastic Gradient Algorithm Work Well?

Lam M. Nguyen, Nam H. Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Katya Scheinberg

Published 2018-01-18, Version 1

In this paper, we consider a general stochastic optimization problem that is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function, under which this method achieves improved convergence rates (to a neighborhood of the optimal solutions). We then empirically demonstrate that this assumption holds for logistic regression and standard deep neural networks on classical data sets. Thus our analysis helps to explain when efficient behavior can be expected from the SGD method when training classification models and deep neural networks.
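To make the setting concrete, the following is a minimal sketch of plain SGD with a fixed step size applied to logistic regression, one of the models the abstract mentions. The synthetic data, step size, and epoch count are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary classification data (not from the paper).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)  # labels in {0, 1}

def logistic_loss(w, X, y):
    # Average logistic loss log(1 + exp(-s * <w, x>)) with s in {-1, +1}.
    s = 2 * y - 1
    return np.mean(np.log1p(np.exp(-s * (X @ w))))

def sgd_fixed_step(X, y, step=0.5, epochs=20):
    """Plain SGD: one sample per update, constant (large) step size.

    With a fixed step size, the iterates converge only to a
    neighborhood of the optimum, which is the regime the paper analyzes.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            s = 2 * y[i] - 1
            # Gradient of log(1 + exp(-s * <w, x_i>)) with respect to w.
            g = -s * X[i] / (1 + np.exp(s * (X[i] @ w)))
            w -= step * g
    return w

w = sgd_fixed_step(X, y)
```

On this linearly separable toy data, the fixed-step iterates drive the loss well below its value at initialization (log 2 at w = 0), illustrating the fast initial progress the paper's assumption is meant to capture.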
