arXiv Analytics

arXiv:1801.03911 [cs.CL]

Stochastic Learning of Nonstationary Kernels for Natural Language Modeling

Sahil Garg, Greg Ver Steeg, Aram Galstyan

Published 2018-01-11 (Version 1)

Natural language processing often involves computations over semantic or syntactic graphs to enable sophisticated reasoning about structural relationships. While convolution kernels provide a powerful tool for comparing graph structures based on node-level (word-level) relationships, they are difficult to customize and can be computationally expensive. We propose a generalization of convolution kernels with a nonstationary model, for greater expressiveness in modeling natural language in supervised settings. To learn the parameters introduced by our model at scale, we propose a novel algorithm that leverages stochastic sampling on k-nearest-neighbor graphs, along with approximations based on locality-sensitive hashing. We demonstrate the advantages of our approach on a challenging real-world structured-inference problem: automatically extracting biological models from the text of scientific papers.
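
The paper's exact kernel parameterization and learning algorithm are not reproduced here, but the following minimal Python sketch illustrates the general ingredients named in the abstract: a convolution kernel that sums node-level similarities between two graphs of word embeddings, a nonstationary variant in which a hypothetical learned per-input scale function modulates a stationary RBF kernel, and a random-hyperplane locality-sensitive-hashing approximation that prunes node pairs before comparison. All function names, the scale function, and the RBF choice are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def stationary_rbf(x, y, lengthscale=1.0):
        # Stationary RBF: similarity depends only on the displacement x - y.
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * lengthscale ** 2))

    def nonstationary_rbf(x, y, scale_fn, lengthscale=1.0):
        # Nonstationary variant: per-input scales sigma(x) * sigma(y) modulate
        # the kernel, so similarity can vary across the embedding space.
        return scale_fn(x) * scale_fn(y) * stationary_rbf(x, y, lengthscale)

    def convolution_kernel(G1, G2, node_kernel):
        # Convolution kernel between two graphs, each given as an array of
        # node (word) embeddings: sum node-level similarities over all pairs.
        return sum(node_kernel(x, y) for x in G1 for y in G2)

    def lsh_buckets(nodes, planes):
        # Random-hyperplane LSH: the sign pattern of the projections is a
        # hash code; nearby embeddings tend to share a code (bucket).
        codes = (nodes @ planes.T) > 0
        buckets = {}
        for i, code in enumerate(map(tuple, codes)):
            buckets.setdefault(code, []).append(i)
        return buckets

    def approx_convolution_kernel(G1, G2, node_kernel, n_planes=4, seed=0):
        # LSH-pruned approximation: only node pairs whose hash codes
        # collide are compared, skipping pairs that are likely dissimilar.
        planes = np.random.default_rng(seed).standard_normal((n_planes, G1.shape[1]))
        b1, b2 = lsh_buckets(G1, planes), lsh_buckets(G2, planes)
        total = 0.0
        for code, idx1 in b1.items():
            for i in idx1:
                for j in b2.get(code, []):
                    total += node_kernel(G1[i], G2[j])
        return total

    # Toy usage with random "embeddings"; scale_fn stands in for a learned
    # nonstationarity function (hypothetical, not from the paper).
    rng = np.random.default_rng(1)
    G1, G2 = rng.standard_normal((5, 16)), rng.standard_normal((7, 16))
    scale_fn = lambda x: 1.0 + 0.5 / (1.0 + np.exp(-x[0]))
    k = lambda x, y: nonstationary_rbf(x, y, scale_fn)
    print(convolution_kernel(G1, G2, k))
    print(approx_convolution_kernel(G1, G2, k, n_planes=2))

The exact convolution kernel compares every node pair, costing O(|V1| * |V2|) kernel evaluations per graph pair; the LSH-pruned version only evaluates pairs whose hash codes collide, which is the kind of saving the abstract's hashing-based approximation targets.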
