{ "id": "1802.10410", "version": "v1", "published": "2018-02-28T13:52:22.000Z", "updated": "2018-02-28T13:52:22.000Z", "title": "Tensor Decomposition for Compressing Recurrent Neural Network", "authors": [ "Andros Tjandra", "Sakriani Sakti", "Satoshi Nakamura" ], "categories": [ "cs.LG" ], "abstract": "In the machine learning fields, Recurrent Neural Network (RNN) has become a popular algorithm for sequential data modeling. However, behind the impressive performance, RNNs require a large number of parameters for both training and inference. In this paper, we are trying to reduce the number of parameters and maintain the expressive power from RNN simultaneously. We utilize several tensor decompositions method including CANDECOMP/PARAFAC (CP), Tucker decomposition and Tensor Train(TT) to re-parameterize the Gated Recurrent Unit (GRU) RNN. We evaluate all tensor-based RNNs performance on sequence modeling tasks with a various number of parameters. Based on our experiment results, TT-GRU achieved the best results in a various number of parameters compared to other decomposition methods.", "revisions": [ { "version": "v1", "updated": "2018-02-28T13:52:22.000Z" } ], "analyses": { "keywords": [ "compressing recurrent neural network", "parameters", "tensor decompositions method", "experiment results", "sequence modeling tasks" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }