arXiv Analytics

arXiv:1605.09593 [cs.LG]

Controlling Exploration Improves Training for Deep Neural Networks

Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura

Published 2016-05-31Version 1

Stochastic optimization methods are widely used for training deep neural networks. However, achieving effective training with these methods remains a challenging research problem because it is difficult to find good parameters on a loss function that has many saddle points. In this paper, we propose a stochastic optimization method called STDProp for effective training of deep neural networks. Its key idea is to effectively explore parameters on the complex surface of a loss function. We additionally develop a momentum version of STDProp. While our approaches are easy to implement and memory efficient, they are more effective than other practical stochastic optimization methods for deep neural networks.
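The abstract does not give STDProp's update rule, so as background only, here is a minimal sketch of the classical momentum SGD baseline that the proposed momentum variant builds upon; the function name and the toy quadratic objective are illustrative choices, not part of the paper.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.1, beta=0.9):
    """One step of classical momentum SGD (baseline context only;
    this is NOT the STDProp update, which the abstract does not specify)."""
    velocity = beta * velocity - lr * grad  # accumulate an exponentially decayed gradient history
    return w + velocity, velocity           # move along the accumulated velocity

# Toy example: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_momentum_step(w, w.copy(), v)
```

The velocity term lets the iterate retain direction through flat regions and saddle points, which is the general motivation for momentum-style exploration that the abstract appeals to.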

Related articles:
arXiv:1708.01911 [cs.LG] (Published 2017-08-06)
Training of Deep Neural Networks based on Distance Measures using RMSProp
arXiv:1706.05098 [cs.LG] (Published 2017-06-15)
An Overview of Multi-Task Learning in Deep Neural Networks
arXiv:1711.02114 [cs.LG] (Published 2017-11-06)
Bounding and Counting Linear Regions of Deep Neural Networks