
arXiv:1812.03825 [cs.LG]

Asynchronous Training of Word Embeddings for Large Text Corpora

Avishek Anand, Megha Khosla, Jaspreet Singh, Jan-Hendrik Zab, Zijian Zhang

Published 2018-12-07 (Version 1)

Word embeddings are a powerful approach for analyzing language and have become widely used in numerous information retrieval and text mining tasks. Training embeddings over huge corpora is computationally expensive because the input is typically processed sequentially and parameters are updated synchronously. Existing distributed architectures for asynchronous training either focus on scaling vocabulary sizes and dimensionality or suffer from expensive synchronization latencies. In this paper, we propose a scalable approach that instead partitions the input space, allowing training to scale to massive text corpora without sacrificing embedding quality. Our training procedure involves no parameter synchronization except a final sub-model merge phase that typically executes in a few minutes. Our distributed training scales seamlessly with corpus size: models trained by our procedure achieve comparable, and sometimes up to 45% better, performance on a variety of NLP benchmarks while requiring $1/10$ of the time taken by the baseline approach. Finally, we show that our approach is robust to words missing from individual sub-models and can effectively reconstruct their representations.
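The high-level procedure described in the abstract (partition the corpus, train independent sub-models with no synchronization, merge once at the end) can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the authors' exact algorithm: `train_submodel` is a stand-in for any embedding trainer (e.g. word2vec over one shard), and the merge simply averages vectors for words that occur in several shards.

```python
# Hypothetical sketch of asynchronous embedding training via input
# partitioning. All names and the averaging merge are assumptions for
# illustration, not the paper's actual method.
import random

DIM = 4  # toy embedding dimensionality


def train_submodel(shard, dim=DIM, seed=0):
    """Stand-in trainer: returns a {word: vector} sub-model for one shard.

    In a real system this would be a full embedding training run
    (e.g. skip-gram with negative sampling) over the shard.
    """
    rng = random.Random(seed)
    vocab = {w for sentence in shard for w in sentence}
    return {w: [rng.uniform(-1, 1) for _ in range(dim)] for w in sorted(vocab)}


def merge(submodels):
    """Merge sub-models by averaging vectors of words seen in >= 1 shard."""
    sums, counts = {}, {}
    for model in submodels:
        for w, vec in model.items():
            acc = sums.setdefault(w, [0.0] * len(vec))
            for i, x in enumerate(vec):
                acc[i] += x
            counts[w] = counts.get(w, 0) + 1
    return {w: [x / counts[w] for x in vec] for w, vec in sums.items()}


corpus = [["distributed", "training"], ["word", "embeddings"],
          ["distributed", "embeddings"]]
shards = [corpus[0::2], corpus[1::2]]   # partition the input space
# Each sub-model trains independently -- no parameter synchronization.
submodels = [train_submodel(s, seed=i) for i, s in enumerate(shards)]
model = merge(submodels)                # cheap final merge phase
```

Because the shards never exchange parameters during training, the workers can run fully asynchronously; only the final merge touches all sub-models, which is why it dominates neither wall-clock time nor communication cost.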

Comments: 9 pages; accepted at WSDM 2019
Categories: cs.LG, cs.DC, stat.ML
Related articles:
arXiv:1909.12340 [cs.LG] (Published 2019-09-26)
At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?
arXiv:2205.11048 [cs.LG] (Published 2022-05-23)
GBA: A Tuning-free Approach to Switch between Synchronous and Asynchronous Training for Recommendation Model
Wenbo Su et al.
arXiv:1906.08858 [cs.LG] (Published 2019-06-20)
One-vs-All Models for Asynchronous Training: An Empirical Analysis