arXiv Analytics

arXiv:2004.14589 [cs.CL]

Improved Natural Language Generation via Loss Truncation

Daniel Kang, Tatsunori Hashimoto

Published 2020-04-30 (Version 1)

Neural language models are usually trained to match the distributional properties of a large-scale corpus by minimizing the log loss. While straightforward to optimize, this approach forces the model to reproduce all variations in the dataset, including noisy and invalid references (e.g., misannotations and hallucinated facts). Worse, the commonly used log loss is overly sensitive to such phenomena, and even a small fraction of noisy data can degrade performance. In this work, we show that the distinguishability of the model and the reference serves as a principled and robust alternative for handling invalid references. To optimize distinguishability, we propose loss truncation, which adaptively removes high-loss examples during training. We show that this is as easy to optimize as log loss and tightly bounds distinguishability under noise. Empirically, we demonstrate that loss truncation outperforms existing baselines in distinguishability on a summarization task, and show that samples generated by the loss truncation model have factual accuracy ratings that exceed those of baselines and match human references.
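As described above, loss truncation drops a fraction of the highest-loss examples from each training update so that noisy or hallucinated references do not dominate the gradient. The sketch below is a minimal, illustrative PyTorch version of that idea; the function name, the drop_frac hyperparameter, and the per-batch quantile threshold are assumptions for illustration, and the paper's actual procedure differs in details such as how the loss threshold is estimated over the course of training.

```python
import torch

def truncated_loss(per_example_loss: torch.Tensor, drop_frac: float = 0.1) -> torch.Tensor:
    """Average the batch loss after dropping the highest-loss examples.

    per_example_loss: 1-D tensor of per-sequence losses, shape (batch_size,).
    drop_frac: fraction of examples to drop (illustrative value, not from the paper).
    """
    # Threshold at the (1 - drop_frac) quantile of the current batch's losses.
    threshold = torch.quantile(per_example_loss.detach(), 1.0 - drop_frac)
    # Keep only examples at or below the threshold; the mask is detached from the graph.
    keep_mask = (per_example_loss.detach() <= threshold).float()
    # Average over the kept examples only (clamp avoids division by zero).
    return (per_example_loss * keep_mask).sum() / keep_mask.sum().clamp(min=1.0)
```

In practice, per_example_loss would be the sequence-level (e.g., token-averaged) negative log-likelihood from the language model, so that the examples most likely to be misannotated or hallucinated, which tend to receive the highest loss, are excluded from the update.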

Related articles:
arXiv:2206.11871 [cs.CL] (Published 2022-06-05)
Offline RL for Natural Language Generation with Implicit Language Q Learning
arXiv:2110.06273 [cs.CL] (Published 2021-10-12, updated 2022-02-13)
Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning
arXiv:2010.00910 [cs.CL] (Published 2020-10-02)
Continual Learning for Natural Language Generation in Task-oriented Dialog Systems