arXiv Analytics

arXiv:1705.00861 [cs.CL]

Deep Neural Machine Translation with Linear Associative Unit

Mingxuan Wang, Zhengdong Lu, Jie Zhou, Qun Liu

Published 2017-05-02 (Version 1)

Deep Neural Networks (DNNs) have substantially advanced the state of the art in Neural Machine Translation (NMT) through their capacity to model complex functions and capture complex linguistic structures. However, NMT systems with deep encoder or decoder RNNs often suffer from severe gradient diffusion caused by the non-linear recurrent activations, which makes optimization considerably harder. To address this problem we propose a novel Linear Associative Unit (LAU) that reduces the gradient propagation length inside the recurrent unit. Unlike conventional units (LSTM and GRU), the LAU uses linear associative connections between the input and output of the recurrent unit, allowing unimpeded information flow in both the spatial and temporal directions. The model is quite simple, yet surprisingly effective. Our empirical study on Chinese-English translation shows that, with a proper configuration, our model improves by 11.7 BLEU over Groundhog and the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable to the state of the art.
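To make the core idea concrete, the sketch below shows one step of a simplified LAU-style recurrent cell in NumPy. This is a minimal illustration of the mechanism described in the abstract, not the paper's exact formulation: the function name `lau_step`, the parameter names, and the single-gate interpolation are assumptions introduced here for clarity. The point is that a gated *linear* transform of the input is mixed into the output, giving gradients a path that bypasses the `tanh` non-linearity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lau_step(x, h_prev, params):
    """One step of a simplified LAU-style recurrent cell (illustrative only).

    Alongside the usual non-linear recurrent update, a linear associative
    connection carries a linearly transformed copy of the input to the
    output, gated so the unit can pass information through unimpeded.
    """
    Wx, Wh, Wl, Wg, Ug, b, bg = params
    # standard non-linear candidate state, as in a vanilla RNN
    candidate = np.tanh(Wx @ x + Wh @ h_prev + b)
    # linear transformation of the input: the associative path
    linear = Wl @ x
    # gate deciding how much of the linear path to let through
    g = sigmoid(Wg @ x + Ug @ h_prev + bg)
    # interpolate; gradients flow undamped through the linear term
    return g * linear + (1.0 - g) * candidate
```

Because `linear` never passes through a saturating activation, backpropagated gradients along that path are not squashed at every time step, which is the "reduced gradient propagation length" the abstract refers to.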

Related articles: Most relevant | Search more
arXiv:1805.04185 [cs.CL] (Published 2018-05-10)
Deep Neural Machine Translation with Weakly-Recurrent Units
arXiv:2010.04924 [cs.CL] (Published 2020-10-10)
On Long-Tailed Phenomena in Neural Machine Translation
arXiv:2001.08140 [cs.CL] (Published 2020-01-22)
Unsupervised Domain Adaptation for Neural Machine Translation with Iterative Back Translation