arXiv Analytics

arXiv:1708.01911 [cs.LG]

Training of Deep Neural Networks based on Distance Measures using RMSProp

Thomas Kurbiel, Shahrzad Khaleghian

Published 2017-08-06, Version 1

The vanishing gradient problem was a major obstacle to the success of deep learning. In recent years it has been gradually alleviated through a variety of techniques. However, the problem has not been overcome in a fundamental way, since it is inherent to neural networks whose activations are based on dot products. In a series of papers, we analyze alternative neural network structures that are not based on dot products. In this first paper, we revisit neural networks built from layers based on distance measures and Gaussian activation functions. Such networks were used only sparsely in the past, since they are hard to train with plain stochastic gradient descent. We show that, by using Root Mean Square Propagation (RMSProp), it is possible to efficiently train multi-layer networks of this kind. Furthermore, we show that, when appropriately initialized, these networks suffer far less from the vanishing and exploding gradient problems than traditional neural networks, even at large depths.
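
To make the idea concrete, below is a minimal NumPy sketch (not code from the paper) of a single distance-based layer with a Gaussian activation, trained with RMSProp on a toy regression task. The class name DistanceLayer, the toy data, and all hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

class DistanceLayer:
    """Layer whose units respond to the squared Euclidean distance between
    the input and a per-unit center, passed through a Gaussian (no dot product)."""

    def __init__(self, n_in, n_out, sigma=1.0):
        self.centers = rng.normal(size=(n_out, n_in))   # one learned center per unit
        self.sigma = sigma
        self.cache = np.zeros_like(self.centers)        # RMSProp running average of squared gradients

    def forward(self, x):
        # x: (batch, n_in); diff: (batch, n_out, n_in)
        self.diff = x[:, None, :] - self.centers[None, :, :]
        sq_dist = np.sum(self.diff ** 2, axis=2)                      # (batch, n_out)
        self.out = np.exp(-sq_dist / (2.0 * self.sigma ** 2))         # Gaussian activation
        return self.out

    def backward(self, grad_out, lr=1e-2, decay=0.9, eps=1e-8):
        # d out_j / d c_j = out_j * (x - c_j) / sigma^2
        g_act = grad_out * self.out / self.sigma ** 2                 # (batch, n_out)
        grad_centers = np.einsum('bj,bji->ji', g_act, self.diff)
        # RMSProp update: per-parameter scaling by the root-mean-square gradient
        self.cache = decay * self.cache + (1.0 - decay) * grad_centers ** 2
        self.centers -= lr * grad_centers / (np.sqrt(self.cache) + eps)
        # gradient w.r.t. the input, needed when stacking layers
        return -np.einsum('bj,bji->bi', g_act, self.diff)

# Toy usage: fit the layer so its two units match the Gaussian responses of two fixed centers.
layer = DistanceLayer(n_in=2, n_out=2)
true_centers = np.array([[1.5, -0.5], [-1.0, 2.0]])
for step in range(500):
    x = rng.normal(size=(32, 2))
    target = np.exp(-np.sum((x[:, None, :] - true_centers) ** 2, axis=2) / 2.0)
    pred = layer.forward(x)
    grad_out = 2.0 * (pred - target) / x.shape[0]   # gradient of the mean squared error
    layer.backward(grad_out)

The essential difference from a dot-product layer is that each unit's response depends on the Euclidean distance to a learned center; the per-parameter scaling of RMSProp is what keeps the updates usable when the Gaussian gradients become very small or very large.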

Related articles:
arXiv:1605.09593 [cs.LG] (Published 2016-05-31)
Controlling Exploration Improves Training for Deep Neural Networks
arXiv:1706.05098 [cs.LG] (Published 2017-06-15)
An Overview of Multi-Task Learning in Deep Neural Networks
arXiv:1711.02114 [cs.LG] (Published 2017-11-06)
Bounding and Counting Linear Regions of Deep Neural Networks