arXiv:1705.08052 [cs.LG]

Compressing Recurrent Neural Network with Tensor Train

Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Published 2017-05-23 (Version 1)

Recurrent Neural Networks (RNNs) are a popular choice for modeling temporal and sequential tasks, and they achieve state-of-the-art performance on many complex problems. However, most state-of-the-art RNNs have millions of parameters and require substantial computational resources for training and for making predictions on new data. This paper proposes an alternative RNN model that significantly reduces the number of parameters by representing the weight parameters in the Tensor Train (TT) format. We implement the TT-format representation for several RNN architectures, such as the simple RNN and the Gated Recurrent Unit (GRU). We compare and evaluate our proposed RNN models against uncompressed RNN models on sequence classification and sequence prediction tasks. Our proposed RNNs with TT-format preserve performance while reducing the number of RNN parameters by up to 40 times.
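
As a rough illustration of the underlying idea (a minimal NumPy sketch, not the authors' implementation), the snippet below reconstructs a dense weight matrix from a list of Tensor Train cores and compares parameter counts. The mode sizes, TT-ranks, and the tt_to_matrix helper are assumptions chosen for this example, not settings taken from the paper.

import numpy as np

def tt_to_matrix(cores, in_modes, out_modes):
    # Reconstruct the dense weight matrix encoded by a list of TT-cores.
    # cores[k] has shape (r_{k-1}, in_modes[k], out_modes[k], r_k), with r_0 = r_d = 1.
    full = cores[0].reshape(in_modes[0], out_modes[0], -1)      # (m1, n1, r1)
    for core in cores[1:]:                                      # core: (r, m, n, r')
        full = np.tensordot(full, core, axes=([-1], [0]))       # (M, N, m, n, r')
        rows = full.shape[0] * full.shape[2]
        cols = full.shape[1] * full.shape[3]
        # Merge the new row/column modes into the accumulated matrix dimensions.
        full = full.transpose(0, 2, 1, 3, 4).reshape(rows, cols, -1)
    return full.reshape(int(np.prod(in_modes)), int(np.prod(out_modes)))

# Illustrative (assumed) shapes: a 256 x 1024 weight with input modes (4, 8, 8),
# output modes (8, 8, 16), and TT-ranks (1, 4, 4, 1).
in_modes, out_modes, ranks = [4, 8, 8], [8, 8, 16], [1, 4, 4, 1]
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(len(in_modes))]
W = tt_to_matrix(cores, in_modes, out_modes)
print(W.shape)                        # (256, 1024)
print(256 * 1024)                     # 262144 parameters in the dense matrix
print(sum(c.size for c in cores))     # 1664 parameters in the TT-cores

The compression ratio depends entirely on the chosen modes and TT-ranks; the 40x figure quoted in the abstract refers to the authors' reported results, not to this toy configuration.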

Comments: Accepted at IJCNN 2017
Categories: cs.LG
Related articles:
arXiv:1802.10410 [cs.LG] (Published 2018-02-28)
Tensor Decomposition for Compressing Recurrent Neural Network
arXiv:1806.01248 [cs.LG] (Published 2018-06-04)
Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices
arXiv:2306.13264 [cs.LG] (Published 2023-06-23)
FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning