
arXiv:1911.02541 [cs.CL]

Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports

Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D. Manning, Curtis P. Langlotz

Published 2019-11-06 (Version 1)

Neural abstractive summarization models are able to generate summaries that have high overlap with human references. However, existing models are not optimized for factual correctness, a critical metric in real-world applications. In this work, we propose to evaluate the factual correctness of a generated summary by fact-checking it against its reference using an information extraction module. We further propose a training strategy that optimizes a neural summarization model with a factual correctness reward via reinforcement learning. We apply the proposed method to the summarization of radiology reports, where factual correctness is a key requirement. On two separate datasets collected from real hospitals, we show via both automatic and human evaluation that the proposed approach substantially improves the factual correctness and overall quality of outputs from a competitive neural summarization system.
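To make the two ingredients of the abstract concrete, here is a minimal sketch of a fact-based reward. The `extract_findings` function is a toy keyword-based stand-in for the paper's information extraction module (whose actual design is not described here), and `factual_f1` computes a fact-level F1 score of the kind that could serve as the reinforcement-learning reward; the names, finding vocabulary, and fact schema are illustrative assumptions, not the authors' implementation.

```python
from typing import Set, Tuple

Fact = Tuple[str, str]  # (finding, status), e.g. ("pleural effusion", "absent")

# Toy vocabulary; a real IE module would cover the full clinical label space.
FINDINGS = ("pleural effusion", "pneumothorax", "consolidation", "edema")


def extract_findings(summary: str) -> Set[Fact]:
    """Toy stand-in for an information extraction module: detects which
    findings a summary mentions and whether they are negated."""
    text = summary.lower()
    facts = set()
    for finding in FINDINGS:
        if finding in text:
            status = "absent" if f"no {finding}" in text else "present"
            facts.add((finding, status))
    return facts


def factual_f1(generated: str, reference: str) -> float:
    """Fact-level F1 between a generated summary and its reference,
    usable as a scalar reward during RL fine-tuning."""
    gen_facts = extract_findings(generated)
    ref_facts = extract_findings(reference)
    if not gen_facts or not ref_facts:
        return 0.0
    overlap = len(gen_facts & ref_facts)
    precision = overlap / len(gen_facts)
    recall = overlap / len(ref_facts)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    reference = "No pleural effusion. Mild pulmonary edema is present."
    generated = "Mild edema. No pleural effusion or pneumothorax."
    print(factual_f1(generated, reference))  # reward in [0, 1]
```

In an RL setup along these lines, the scalar returned by `factual_f1` would be combined with a standard overlap-based reward and used to update the summarizer's policy, so that summaries are penalized for asserting findings that contradict the reference.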

Related articles:
arXiv:2205.12416 [cs.CL] (Published 2022-05-25)
Counterfactual Data Augmentation improves Factuality of Abstractive Summarization
arXiv:2210.12186 [cs.CL] (Published 2022-10-21)
Improving the Factual Correctness of Radiology Report Generation with Semantic Rewards
arXiv:2212.01956 [cs.CL] (Published 2022-12-04)
Grounded Keys-to-Text Generation: Towards Factual Open-Ended Generation