arXiv:1801.09866 [cs.CL]

Accelerating recurrent neural network language model based online speech recognition system

Kyungmin Lee, Chiyoun Park, Namhoon Kim, Jaewon Lee

Published 2018-01-30, Version 1

This paper presents methods to accelerate recurrent neural network-based language models (RNNLMs) for online speech recognition systems. First, lossy compression of the past hidden layer outputs (the history vector), combined with caching, is introduced to reduce the number of LM queries. Next, the RNNLM computations are distributed between the CPU and GPU so that each layer of the model runs on the more advantageous platform. The overhead added by data exchange between the CPU and GPU is compensated for by a frame-wise batching strategy. Evaluation of the proposed methods on the LibriSpeech test sets shows that reducing the history-vector precision improves the average recognition speed by a factor of 1.23 with minimal degradation in accuracy, while the CPU-GPU hybrid parallelization enables RNNLM-based real-time recognition with a fourfold improvement in speed.
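The history-vector idea can be sketched briefly. The snippet below is a hypothetical Python illustration of the general mechanism described in the abstract, not the authors' implementation: the RNNLM hidden state is quantized to a reduced precision and used as a cache key, so that near-identical LM queries are answered from the cache instead of re-running the network. The class name `QuantizedHistoryCache`, the `rnnlm_step` callable, and the `num_bits` and `scale` parameters are all assumed for illustration.

```python
import numpy as np

class QuantizedHistoryCache:
    """Hypothetical sketch: cache RNNLM outputs keyed by a lossily compressed history vector."""

    def __init__(self, rnnlm_step, num_bits=8, scale=4.0):
        self.rnnlm_step = rnnlm_step  # callable: (word_id, history) -> (log_probs, new_history)
        self.num_bits = num_bits      # precision kept per history element
        self.scale = scale            # values clipped to [-scale, scale] before quantization
        self.cache = {}

    def _quantize(self, history):
        # Map each float to an integer code; fewer bits give coarser keys,
        # hence more cache hits at the cost of larger approximation error.
        levels = (1 << self.num_bits) - 1
        clipped = np.clip(history, -self.scale, self.scale)
        codes = np.round((clipped + self.scale) / (2 * self.scale) * levels)
        return codes.astype(np.int32).tobytes()

    def query(self, word_id, history):
        # Reuse the stored result when the quantized history has been seen before.
        key = (word_id, self._quantize(history))
        if key not in self.cache:
            self.cache[key] = self.rnnlm_step(word_id, history)
        return self.cache[key]

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def dummy_rnnlm_step(word_id, history):
        # Stand-in for a real RNNLM forward step.
        new_history = np.tanh(history + 0.01 * word_id)
        return rng.standard_normal(10), new_history

    cache = QuantizedHistoryCache(dummy_rnnlm_step, num_bits=8)
    h = np.zeros(16, dtype=np.float32)
    out1, _ = cache.query(3, h)
    out2, _ = cache.query(3, h + 1e-4)  # tiny perturbation maps to the same key
    print(np.allclose(out1, out2))      # True: second query is served from the cache
```

Lowering `num_bits` coarsens the cache keys, which raises the hit rate (and speed) at the cost of approximation error; this is the trade-off behind the reported 1.23-times speedup with minimal accuracy loss.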

Comments: 4 pages, 4 figures, 3 tables, ICASSP 2018 (accepted)
Categories: cs.CL, cs.LG