arXiv:2204.04869 [cs.CL]

Evaluation of Automatic Text Summarization using Synthetic Facts

Jay Ahn, Foaad Khosmood

Published 2022-04-11 (Version 1)

Despite some recent advances, automatic text summarization remains unreliable, elusive, and of limited practical use in applications. Two main problems with current summarization methods are well known: evaluation and factual consistency. To address these issues, we propose a new reference-less evaluation system for automatic text summarization that measures the quality of any summarization model against a set of synthetically generated facts, scoring factual consistency, comprehensiveness, and compression rate. As far as we know, ours is the first evaluation system that measures the overarching quality of text summarization models in terms of factuality, information coverage, and compression rate.
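
As a rough illustration of how a reference-less evaluation along these three dimensions could be wired up, the sketch below scores a summary for compression rate, comprehensiveness, and factual consistency against a list of synthetic facts. The function names, the substring-based fact matching, and the toy data are assumptions for illustration only; the paper's actual fact generation and checking pipeline is not reproduced here.

```python
# Illustrative sketch only: the metric names mirror the abstract, but the
# substring-based fact checking below is an assumption, not the paper's method.

def compression_rate(source: str, summary: str) -> float:
    """Fraction by which the summary shortens the source (word counts)."""
    src_len = len(source.split())
    sum_len = len(summary.split())
    return 1.0 - (sum_len / src_len) if src_len else 0.0

def comprehensiveness(summary: str, synthetic_facts: list[str]) -> float:
    """Share of the synthetic source facts that appear in the summary."""
    if not synthetic_facts:
        return 0.0
    covered = sum(1 for fact in synthetic_facts if fact.lower() in summary.lower())
    return covered / len(synthetic_facts)

def factual_consistency(summary_facts: list[str], synthetic_facts: list[str]) -> float:
    """Share of facts asserted by the summary that are supported by the source facts."""
    if not summary_facts:
        return 1.0
    known = {f.lower() for f in synthetic_facts}
    supported = sum(1 for fact in summary_facts if fact.lower() in known)
    return supported / len(summary_facts)

if __name__ == "__main__":
    facts = ["alice founded acme in 2010", "acme is based in berlin"]
    source = " ".join(facts) + " the company makes widgets."
    summary = "alice founded acme in 2010."
    print(compression_rate(source, summary))                  # ~0.64: summary is much shorter
    print(comprehensiveness(summary, facts))                  # 0.5: one of two facts covered
    print(factual_consistency([summary.rstrip(".")], facts))  # 1.0: no unsupported claims
```

A real system would extract facts automatically and check entailment rather than exact substrings; the point here is only that the three scores can be computed without any human-written reference summary.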

Related articles:
arXiv:1908.09119 [cs.CL] (Published 2019-08-24)
Automatic Text Summarization of Legal Cases: A Hybrid Approach
arXiv:1505.06228 [cs.CL] (Published 2015-05-22)
Keyphrase Based Evaluation of Automatic Text Summarization
arXiv:2006.01997 [cs.CL] (Published 2020-06-03)
Automatic Text Summarization of COVID-19 Medical Research Articles using BERT and GPT-2