arXiv Analytics

arXiv:2005.03642 [cs.CL]

On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation

Chaojun Wang, Rico Sennrich

Published 2020-05-07 (Version 1)

The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this. However, the practical impact of exposure bias is under debate. In this paper, we link exposure bias to another well-known problem in NMT, namely the tendency to generate hallucinations under domain shift. In experiments on three datasets with multiple test domains, we show that exposure bias is partially to blame for hallucinations, and that training with Minimum Risk Training, which avoids exposure bias, can mitigate this. Our analysis explains why exposure bias is more problematic under domain shift, and also links exposure bias to the beam search problem, i.e. performance deterioration with increasing beam size. Our results provide a new justification for methods that reduce exposure bias: even if they do not increase performance on in-domain test sets, they can increase model robustness to domain shift.
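The abstract credits Minimum Risk Training (MRT) with mitigating hallucinations because it scores complete sampled translations rather than conditioning on gold prefixes. A minimal sketch of the MRT objective, assuming the usual formulation (expected risk over a sampled candidate set, with a sharpness hyperparameter `alpha` and per-candidate costs such as 1 minus sentence-level BLEU; the function name and toy values below are illustrative, not from the paper):

```python
import math

def mrt_loss(log_probs, costs, alpha=0.005):
    """Expected risk over sampled candidate translations.

    log_probs: model log-probabilities of each sampled candidate.
    costs: per-candidate cost w.r.t. the reference (e.g. 1 - sentence BLEU).
    alpha: sharpness factor applied before renormalizing over the sample.
    """
    # Sharpen and renormalize the model distribution over the sampled set
    scaled = [alpha * lp for lp in log_probs]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    q = [e / z for e in exps]
    # Expected risk: sum over candidates of Q(y|x) * cost(y, y_ref)
    return sum(qi * ci for qi, ci in zip(q, costs))

# Toy example: two sampled candidates with different costs.
loss = mrt_loss(log_probs=[-2.0, -5.0], costs=[0.1, 0.9])
```

Because the loss weights each candidate's cost by its (renormalized) model probability, lowering the loss pushes probability mass toward low-cost candidates; unlike teacher forcing, the model is always evaluated on its own complete outputs, which is why MRT sidesteps exposure bias.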

Related articles:
arXiv:1610.10099 [cs.CL] (Published 2016-10-31)
Neural Machine Translation in Linear Time
arXiv:1610.00388 [cs.CL] (Published 2016-10-03)
Learning to Translate in Real-time with Neural Machine Translation
arXiv:1709.03980 [cs.CL] (Published 2017-09-12)
Refining Source Representations with Relation Networks for Neural Machine Translation