arXiv:2007.11794 [cs.CL]

Applying GPGPU to Recurrent Neural Network Language Model based Fast Network Search in the Real-Time LVCSR

Kyungmin Lee, Chiyoun Park, Ilhwan Kim, Namhoon Kim, Jaewon Lee

Published 2020-07-23Version 1

Recurrent Neural Network Language Models (RNNLMs) have started to be used in various fields of speech recognition due to their outstanding performance. However, the high computational complexity of RNNLMs has been a hurdle to applying them in real-time Large Vocabulary Continuous Speech Recognition (LVCSR). In order to accelerate RNNLM-based network searches during decoding, we apply General-Purpose Graphics Processing Units (GPGPUs). This paper proposes a novel method of applying GPGPUs to RNNLM-based graph traversals. We achieve this by reducing redundant computations on CPUs and the amount of data transferred between GPGPUs and CPUs. The proposed approach was evaluated on both the WSJ corpus and in-house data. Experiments show that it achieves real-time speed in various circumstances while keeping the Word Error Rate (WER) relatively 10% lower than that of n-gram models.
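The two ideas named in the abstract — avoiding redundant RNNLM computations and minimizing CPU–GPU traffic — can be illustrated with a minimal sketch. This is not the authors' implementation; the class, its caching scheme, and the toy RNN parameters are all assumptions made for illustration. Score requests for word histories are queued during decoding and evaluated together, with results cached per history so a repeated hypothesis is never re-scored or re-transferred:

```python
import numpy as np

class BatchedRNNLMScorer:
    """Hypothetical sketch: cache RNNLM log-probabilities per word history
    and defer uncached histories to a single batched scoring pass, mimicking
    how a decoder would group work before one GPGPU call."""

    def __init__(self, vocab_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # Random toy parameters standing in for a trained RNNLM.
        self.W_h = rng.standard_normal((hidden_size, hidden_size)) * 0.1
        self.W_x = rng.standard_normal((vocab_size, hidden_size)) * 0.1
        self.W_o = rng.standard_normal((hidden_size, vocab_size)) * 0.1
        self.hidden_size = hidden_size
        self.cache = {}    # history tuple -> log-probability vector
        self.pending = []  # histories queued for the next batched pass

    def request(self, history):
        """Queue a history for scoring unless it is already cached or queued."""
        key = tuple(history)
        if key not in self.cache and key not in (tuple(h) for h in self.pending):
            self.pending.append(list(history))

    def flush(self):
        """Score all pending histories, then clear the queue.

        A real GPGPU implementation would stack these recurrences into
        batched matrix multiplies on the device; here we loop on the CPU
        for clarity."""
        for hist in self.pending:
            h = np.zeros(self.hidden_size)
            for w in hist:  # unroll the toy recurrent network
                h = np.tanh(h @ self.W_h + self.W_x[w])
            logits = h @ self.W_o
            # Numerically stable log-softmax.
            m = logits.max()
            logp = logits - m - np.log(np.exp(logits - m).sum())
            self.cache[tuple(hist)] = logp
        self.pending.clear()

    def score(self, history, word):
        """Return the cached log P(word | history)."""
        return self.cache[tuple(history)][word]
```

In a decoder loop, many hypotheses would call `request()` as the search graph is expanded, followed by one `flush()` per frame, so that only one batch of results crosses the CPU–GPU boundary instead of one transfer per hypothesis.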

Comments: 4 pages, 2 figures, Interspeech 2015 (Accepted)
Categories: cs.CL, cs.LG
Related articles:
arXiv:1801.09866 [cs.CL] (Published 2018-01-30)
Accelerating recurrent neural network language model based online speech recognition system
arXiv:1506.01192 [cs.CL] (Published 2015-06-03)
Personalizing a Universal Recurrent Neural Network Language Model with User Characteristic Features by Crowdsourcing over Social Networks
arXiv:1611.00196 [cs.CL] (Published 2016-11-01)
Recurrent Neural Network Language Model Adaptation Derived Document Vector