arXiv:1605.09593 [cs.LG]
Controlling Exploration Improves Training for Deep Neural Networks
Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura
Published 2016-05-31, Version 1
Stochastic optimization methods are widely used for training deep neural networks. However, achieving effective training with such methods remains a challenging research problem, because it is difficult to find good parameters on a loss function that has many saddle points. In this paper, we propose a stochastic optimization method called STDProp for effective training of deep neural networks. Its key idea is to effectively explore parameters on the complex surface of the loss function. We additionally develop a momentum version of STDProp. Our approaches are easy to implement and memory efficient, yet more effective than other practical stochastic optimization methods for deep neural networks.
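The abstract does not spell out STDProp's update rule. As background only, here is a minimal sketch of the classical momentum update that a "momentum version" of a stochastic optimizer typically builds on; this is plain SGD with momentum, not STDProp itself, and the learning-rate and momentum values are illustrative assumptions:

```python
def sgd_momentum_step(w, grad, v, lr=0.01, beta=0.9):
    """One stochastic-gradient step with classical (heavy-ball) momentum.

    w    : current parameter value
    grad : gradient of the loss at w (in practice, a minibatch estimate)
    v    : accumulated velocity from previous steps
    """
    v = beta * v - lr * grad  # blend past direction with the new gradient
    return w + v, v

# Toy 1-D quadratic loss f(w) = 0.5 * w**2, so grad(w) = w.
w, v = 2.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, w, v)
# w approaches the minimizer at 0 as the iterations proceed
```

The velocity term lets the iterate keep moving in a consistent direction even when individual stochastic gradients are noisy, which is one simple way to traverse flat regions and saddle points on a loss surface.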