arXiv:2010.10502 [cs.LG]

Dual Averaging is Surprisingly Effective for Deep Learning Optimization

Samy Jelassi, Aaron Defazio

Published 2020-10-20 (Version 1)

First-order stochastic optimization methods are currently the most widely used class of methods for training deep neural networks. However, the choice of optimizer has become an ad-hoc rule of thumb that can significantly affect performance. For instance, SGD with momentum (SGD+M) is typically used in computer vision (CV), while Adam is used for training transformer models in Natural Language Processing (NLP). Using the wrong method can lead to significant performance degradation. Inspired by the dual averaging algorithm, we propose Modernized Dual Averaging (MDA), an optimizer that performs as well as SGD+M in CV and as well as Adam in NLP. Our method is not adaptive and is significantly simpler than Adam. We show that MDA induces a decaying uncentered $L_2$-regularization compared to vanilla SGD+M, and hypothesize that this may explain why it works on NLP problems where SGD+M fails.
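The MDA algorithm itself is not spelled out in this abstract; as a rough orientation to the dual averaging template it modernizes, the sketch below implements classical (Nesterov-style) dual averaging in NumPy on a toy problem. The function name, the `lr` parameter, and the `sqrt(k)` schedule are illustrative assumptions, not the authors' method, and the momentum and scheduling modifications that make MDA competitive with SGD+M and Adam are omitted.

```python
import numpy as np

def dual_averaging(grad_fn, x0, n_steps=1000, lr=1.0):
    """Minimal sketch of classical dual averaging (not the paper's MDA).

    At step k the iterate is rebuilt from the initial point and the running
    gradient sum: x_k = x0 - (lr / sqrt(k)) * sum_{i<=k} g_i.
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    grad_sum = np.zeros_like(x0)
    for k in range(1, n_steps + 1):
        g = grad_fn(x)                            # stochastic gradient at the current iterate
        grad_sum += g                             # accumulate the full gradient history
        x = x0 - (lr / np.sqrt(k)) * grad_sum     # step taken from x0, not from the previous iterate
    return x

# Toy usage: minimize f(x) = 0.5 * ||x - 1||^2 with noisy gradients.
rng = np.random.default_rng(0)
x_final = dual_averaging(lambda x: (x - 1.0) + 0.01 * rng.standard_normal(x.shape),
                         x0=np.zeros(3))
print(x_final)   # entries drift toward 1.0 at dual averaging's slow O(1/sqrt(k)) rate
```

The defining difference from SGD is visible in the last line of the loop: every update is applied from the starting point using the entire gradient history with a growing damping factor, rather than from the previous iterate using only the latest gradient. The abstract's claim about a decaying uncentered $L_2$-regularization refers to the authors' analysis of MDA itself, which this simplified sketch does not reproduce.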

Related articles:
arXiv:2006.08877 [cs.LG] (Published 2020-06-16)
Practical Quasi-Newton Methods for Training Deep Neural Networks
arXiv:1808.03408 [cs.LG] (Published 2018-08-10)
On the Convergence of AdaGrad with Momentum for Training Deep Neural Networks
arXiv:1802.04626 [cs.LG] (Published 2018-02-13)
Barista - a Graphical Tool for Designing and Training Deep Neural Networks