arXiv Analytics

arXiv:1703.00993 [cs.CL]

A Comparative Study of Word Embeddings for Reading Comprehension

Bhuwan Dhingra, Hanxiao Liu, Ruslan Salakhutdinov, William W. Cohen

Published 2017-03-02 (Version 1)

The focus of past machine learning research for Reading Comprehension tasks has been primarily on the design of novel deep learning architectures. Here we show that seemingly minor choices made on (1) the use of pre-trained word embeddings, and (2) the representation of out-of-vocabulary tokens at test time, can turn out to have a larger impact than architectural choices on the final performance. We systematically explore several options for these choices, and provide recommendations to researchers working in this area.
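The abstract's second point, the representation of out-of-vocabulary (OOV) tokens at test time, can be illustrated with a minimal sketch. The code below is not the paper's method; it assumes a toy hand-built embedding table and shows two commonly used OOV strategies (zero vectors vs. per-token random vectors) that work of this kind typically compares:

```python
import numpy as np

# Toy pre-trained embedding table (assumed for illustration;
# in practice these would be loaded from GloVe/word2vec files).
EMB_DIM = 4
pretrained = {
    "the": np.array([0.1, 0.2, 0.3, 0.4]),
    "cat": np.array([0.5, 0.1, 0.0, 0.2]),
}

rng = np.random.default_rng(0)
oov_cache = {}

def embed(token, strategy="random"):
    """Look up a token's vector, handling out-of-vocabulary tokens.

    Two common test-time OOV strategies:
      - "zero":   map every unseen token to the zero vector
      - "random": assign each unseen token its own random vector,
                  cached so repeated occurrences get the same vector
    """
    if token in pretrained:
        return pretrained[token]
    if strategy == "zero":
        return np.zeros(EMB_DIM)
    if token not in oov_cache:
        oov_cache[token] = rng.normal(scale=0.1, size=EMB_DIM)
    return oov_cache[token]

# An unseen token like "dog" gets a stable, distinct vector under
# the random strategy, and an all-zero vector under the zero strategy.
v1 = embed("dog")
v2 = embed("dog")
```

The caching detail matters: if each occurrence of an unseen token drew a fresh random vector, the model could not match repeated mentions of the same entity in a passage, which is exactly the kind of seemingly minor choice the paper argues can outweigh architectural differences.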

Related articles:
arXiv:2005.11313 [cs.CL] (Published 2020-05-22)
Comparative Study of Machine Learning Models and BERT on SQuAD
arXiv:2007.05976 [cs.CL] (Published 2020-07-12)
Stance Detection in Web and Social Media: A Comparative Study
arXiv:2410.20315 [cs.CL] (Published 2024-10-27)
Deep Learning Based Dense Retrieval: A Comparative Study