arXiv:1901.01939 [cs.CV]

GASL: Guided Attention for Sparsity Learning in Deep Neural Networks

Amirsina Torfi, Rouzbeh A. Shirvani, Sobhan Soleymani, Naser M. Nasrabadi

Published 2019-01-07 (Version 1)

The main goal of network pruning is to impose sparsity on a neural network by increasing the number of zero-valued parameters, thereby reducing the architecture size and achieving computational speedup. In most previous work, sparsity is imposed stochastically, without considering any prior knowledge of the weight distribution or other internal characteristics of the network. Enforcing too much sparsity may cause an accuracy drop, because many important elements may be eliminated. In this paper, we propose Guided Attention for Sparsity Learning (GASL) to (1) achieve model compression and speedup by reducing the number of elements; (2) prevent the accuracy drop by supervising the sparsity operation via a guided attention mechanism; and (3) introduce a generic mechanism that can be adapted to any type of architecture. Our work aims to provide a framework based on interpretable attention mechanisms for imposing structured and non-structured sparsity in deep neural networks. In CIFAR-100 experiments, we achieve the state-of-the-art sparsity level and a 2.91x speedup, with accuracy competitive with the best existing method. For MNIST and the LeNet architecture, we also achieve the highest sparsity and speedup levels.
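As a rough illustration of the sparsity notion used in the abstract (zero-valued parameters as a fraction of all parameters), the sketch below zeroes small-magnitude weights in a toy LeNet-style model and then measures the resulting sparsity level. This is plain magnitude thresholding, not the guided-attention mechanism of GASL; the model layout and the threshold value are assumptions chosen only for illustration.

    import torch
    import torch.nn as nn

    # Toy LeNet-style network; layer sizes are illustrative only.
    model = nn.Sequential(
        nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, 10),
    )

    def prune_by_magnitude(model, threshold=1e-2):
        # Zero out weights whose magnitude falls below a fixed threshold.
        # Simple magnitude pruning, used here only to produce a sparse
        # model for measurement; not the GASL attention-guided scheme.
        with torch.no_grad():
            for p in model.parameters():
                p.mul_((p.abs() >= threshold).float())

    def sparsity_level(model):
        # Fraction of parameters that are exactly zero.
        zeros, total = 0, 0
        for p in model.parameters():
            zeros += (p == 0).sum().item()
            total += p.numel()
        return zeros / total

    prune_by_magnitude(model, threshold=1e-2)
    print(f"sparsity: {sparsity_level(model):.2%}")

Higher sparsity from a more aggressive threshold generally trades off against accuracy, which is the tension the paper addresses by supervising the sparsity operation with a guided attention mechanism.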

Related articles:
arXiv:1505.03540 [cs.CV] (Published 2015-05-13)
Brain Tumor Segmentation with Deep Neural Networks
arXiv:2206.10041 [cs.CV] (Published 2022-06-20)
MPA: MultiPath++ Based Architecture for Motion Prediction
arXiv:1907.00274 [cs.CV] (Published 2019-06-29)
NetTailor: Tuning the Architecture, Not Just the Weights