arXiv:1809.00129 [cs.CL]

Contextual Encoding for Translation Quality Estimation

Junjie Hu, Wei-Cheng Chang, Yuexin Wu, Graham Neubig

Published 2018-09-01Version 1

The task of word-level quality estimation (QE) consists of taking a source sentence and its machine-generated translation, and predicting which words in the output are correct and which are wrong. In this paper, we propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach. The first part uses an embedding layer to represent words and their part-of-speech tags in both languages. The second part leverages a one-dimensional convolution layer to integrate local context information for each target word. The third part applies a stack of feed-forward and recurrent neural networks to further encode the global context in the sentence before making the predictions. This model was submitted as the CMU entry to the WMT2018 shared task on QE, and achieves strong results, ranking first in three of the six tracks.
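The three-part pipeline described in the abstract can be sketched in plain numpy. This is an illustrative toy (all dimensions, initializations, and the single-direction Elman recurrence are assumptions, and the source-language side is omitted for brevity), not the paper's actual architecture or hyperparameters: part 1 looks up and concatenates word and POS embeddings, part 2 applies a 1-D convolution over a local window of target tokens, and part 3 runs a feed-forward layer and a recurrent pass before a per-token OK/BAD prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary sizes and dimensions (illustrative, not the paper's settings).
VOCAB, POS_TAGS, D_EMB, D_POS = 100, 12, 16, 4
D_IN = D_EMB + D_POS          # per-token feature size after part 1
K, D_CONV = 3, 20             # conv window and output channels (part 2)
D_HID = 24                    # recurrent hidden size (part 3)

# Part 1: embedding tables for target-side words and POS tags.
W_word = rng.normal(0, 0.1, (VOCAB, D_EMB))
W_pos = rng.normal(0, 0.1, (POS_TAGS, D_POS))

# Part 2: 1-D convolution weights (window of K tokens, flattened).
W_conv = rng.normal(0, 0.1, (K * D_IN, D_CONV))

# Part 3: feed-forward layer, simple recurrent layer, per-token classifier.
W_ff = rng.normal(0, 0.1, (D_CONV, D_HID))
W_xh = rng.normal(0, 0.1, (D_HID, D_HID))
W_hh = rng.normal(0, 0.1, (D_HID, D_HID))
w_out = rng.normal(0, 0.1, D_HID)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_ok_bad(word_ids, pos_ids):
    """Return one OK-probability per target token."""
    # Part 1: look up and concatenate word and POS embeddings.
    x = np.concatenate([W_word[word_ids], W_pos[pos_ids]], axis=1)  # (T, D_IN)

    # Part 2: 1-D convolution -- each token sees a K-token local window.
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    windows = np.stack([xp[t:t + K].ravel() for t in range(len(word_ids))])
    c = np.tanh(windows @ W_conv)                                   # (T, D_CONV)

    # Part 3: feed-forward layer, then a recurrent pass for global context.
    f = np.tanh(c @ W_ff)
    h, out = np.zeros(D_HID), []
    for t in range(len(word_ids)):
        h = np.tanh(f[t] @ W_xh + h @ W_hh)
        out.append(sigmoid(h @ w_out))  # probability the token is "OK"
    return np.array(out)

probs = predict_ok_bad(word_ids=np.array([5, 17, 42, 8]),
                       pos_ids=np.array([1, 3, 3, 7]))
print(probs.shape)  # one prediction per target word
```

In the actual submission the recurrent stage would be trained end-to-end with the convolution and embeddings; the sketch only shows how local context (the K-token window) and global context (the recurrent state carried across the sentence) enter the per-word prediction.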

Comments: 6 pages, 2018 Third Conference on Machine Translation (WMT18)
Categories: cs.CL
Related articles:
arXiv:2109.03914 [cs.CL] (Published 2021-09-08)
Ensemble Fine-tuned mBERT for Translation Quality Estimation
arXiv:2011.01536 [cs.CL] (Published 2020-11-01)
TransQuest: Translation Quality Estimation with Cross-lingual Transformers
arXiv:1610.04841 [cs.CL] (Published 2016-10-16)
Translation Quality Estimation using Recurrent Neural Network