arXiv Analytics

arXiv:1709.05804 [cs.LG]

Minimal Effort Back Propagation for Convolutional Neural Networks

Bingzhen Wei, Xu Sun, Xuancheng Ren, Jingjing Xu

Published 2017-09-18 (Version 1)

As traditional neural networks consume a significant amount of computing resources during back propagation, Sun et al. (2017) propose a simple yet effective technique to alleviate this problem: only a small subset of the full gradient is computed to update the model parameters. In this paper we extend this technique to Convolutional Neural Networks (CNNs) to reduce the computation in back propagation, and the results verify its validity for CNNs: only 5% of the gradients are passed back, yet the model achieves the same accuracy as the traditional CNN, or even better. We also show that the top-$k$ selection of gradients leads to sparse computation in back propagation, which may bring significant computational benefits given the high computational complexity of the convolution operation in CNNs.
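
To illustrate the top-$k$ gradient selection described above, here is a minimal sketch written as a custom PyTorch autograd function. PyTorch, the name TopKGrad, and the per-example selection granularity are assumptions made for illustration only; this is not the authors' implementation.

    import torch

    class TopKGrad(torch.autograd.Function):
        """Identity in the forward pass; in the backward pass, only the k
        largest-magnitude gradient entries per example are kept and the
        rest are zeroed (a sketch of top-k gradient sparsification)."""

        @staticmethod
        def forward(ctx, x, k):
            ctx.k = k
            return x.clone()

        @staticmethod
        def backward(ctx, grad_output):
            flat = grad_output.reshape(grad_output.size(0), -1)   # (batch, features)
            _, idx = flat.abs().topk(ctx.k, dim=1)                # top-k positions per example
            mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)   # 1 at kept positions, 0 elsewhere
            return (flat * mask).reshape_as(grad_output), None    # no gradient w.r.t. k

    # Example: keep roughly 5% of the gradient entries of a feature map.
    x = torch.randn(8, 32, 16, 16, requires_grad=True)
    k = int(0.05 * 32 * 16 * 16)
    y = TopKGrad.apply(x, k)
    y.sum().backward()
    print((x.grad != 0).float().mean())  # fraction of nonzero gradient entries, about 0.05

Note that this sketch only zeroes the discarded gradient entries for clarity; an implementation aiming at actual speedups would exploit the resulting sparsity to skip work in the convolution's weight-gradient computation.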

Related articles: Most relevant | Search more
arXiv:1912.03789 [cs.LG] (Published 2019-12-08)
Feature Engineering Combined with 1 D Convolutional Neural Network for Improved Mortality Prediction
arXiv:1912.03760 [cs.LG] (Published 2019-12-08)
A Convolutional Neural Network for User Identification based on Motion Sensors
arXiv:1905.11669 [cs.LG] (Published 2019-05-28)
CompactNet: Platform-Aware Automatic Optimization for Convolutional Neural Networks