arXiv Analytics

arXiv:1705.02012 [cs.CL]

Machine Comprehension by Text-to-Text Neural Question Generation

Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, Adam Trischler

Published 2017-05-04 (Version 1)

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After teacher forcing for standard maximum-likelihood training, we fine-tune the model using policy-gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question-answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD.
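
For readers who want a concrete picture of the training recipe the abstract describes, below is a minimal PyTorch sketch of the two stages: teacher-forced maximum-likelihood training followed by REINFORCE-style policy-gradient fine-tuning against a question-quality reward. The `QuestionGenerator` architecture, the `qa_reward` stand-in, and all names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# A runnable toy sketch, not the paper's model: a GRU encoder-decoder is
# trained first by teacher forcing (maximum likelihood), then fine-tuned
# with REINFORCE to maximize a question-quality reward. All names, sizes,
# and the reward function are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGenerator(nn.Module):
    """Encodes (document, answer) tokens and decodes a question."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        # Teacher forcing: gold question tokens are the decoder inputs.
        _, h = self.encoder(self.embed(src))
        h = h.squeeze(0)
        logits = []
        for t in range(tgt.size(1)):
            h = self.decoder(self.embed(tgt[:, t]), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)          # (batch, len, vocab)

    def sample(self, src, max_len=20):
        # Free-running sampling; log-probs are kept for the policy gradient.
        _, h = self.encoder(self.embed(src))
        h = h.squeeze(0)
        tok = torch.zeros(src.size(0), dtype=torch.long)  # assume id 0 = <bos>
        tokens, log_probs = [], []
        for _ in range(max_len):
            h = self.decoder(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            tokens.append(tok)
            log_probs.append(dist.log_prob(tok))
        return torch.stack(tokens, dim=1), torch.stack(log_probs, dim=1)

def qa_reward(questions, answers):
    # Stand-in for the QA-system reward (e.g., F1 of a trained QA model's
    # answer to the generated question). Random values for illustration.
    return torch.rand(questions.size(0))

vocab = 1000
model = QuestionGenerator(vocab)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
src = torch.randint(1, vocab, (4, 30))   # fake (document, answer) token ids
tgt = torch.randint(1, vocab, (4, 12))   # fake gold question token ids

# Stage 1: supervised maximum likelihood with teacher forcing.
logits = model(src, tgt[:, :-1])
mle_loss = F.cross_entropy(logits.reshape(-1, vocab), tgt[:, 1:].reshape(-1))
opt.zero_grad()
mle_loss.backward()
opt.step()

# Stage 2: policy-gradient (REINFORCE) fine-tuning to maximize the reward.
tokens, log_probs = model.sample(src)
reward = qa_reward(tokens, tgt)
baseline = reward.mean()                 # simple variance-reduction baseline
pg_loss = -((reward - baseline).unsqueeze(1) * log_probs).mean()
opt.zero_grad()
pg_loss.backward()
opt.step()
```

In practice, `qa_reward` would run a trained question-answering model on the generated question against the source document and score its answer; the mean-reward baseline is one simple variance-reduction choice, and the paper combines several such quality rewards rather than a single one.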

Related articles:
arXiv:1710.10504 [cs.CL] (Published 2017-10-28)
Phase Conductor on Multi-layered Attentions for Machine Comprehension
arXiv:1706.03815 [cs.CL] (Published 2017-06-12)
Encoding of phonology in a recurrent neural model of grounded speech
arXiv:1606.05250 [cs.CL] (Published 2016-06-16)
SQuAD: 100,000+ Questions for Machine Comprehension of Text