arXiv:1107.0922 [cs.LG]

GraphLab: A Distributed Framework for Machine Learning in the Cloud

Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin

Published 2011-07-05, Version 1

Machine Learning (ML) techniques are indispensable in a wide range of fields. Unfortunately, the exponential increase in dataset sizes is rapidly extending the runtime of sequential algorithms and threatening to slow future progress in ML. With the promise of affordable large-scale parallel computing, cloud systems offer a viable platform for resolving the computational challenges in ML. However, designing and implementing efficient, provably correct distributed ML algorithms is often prohibitively challenging. To enable ML researchers to use parallel systems easily and efficiently, we introduced the GraphLab abstraction, which is designed to represent the computational patterns in ML algorithms while permitting efficient parallel and distributed implementations. In this paper we provide a formal description of the GraphLab parallel abstraction and present an efficient distributed implementation. We conduct a comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms using real large-scale data and a 64-node EC2 cluster with 512 processors. We find that GraphLab achieves orders-of-magnitude performance gains over Hadoop while performing comparably to, or better than, hand-tuned MPI implementations.
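The core idea behind the abstraction described above is vertex-centric computation: a user-supplied update function reads and writes a vertex's neighborhood scope, and a scheduler decides which vertices to update next. The following is a minimal single-machine sketch of that pattern using dynamic PageRank as the example; all names here (`Graph`, `pagerank_update`, `run`) are illustrative and are not the actual GraphLab API.

```python
from collections import deque

class Graph:
    """Toy directed graph holding per-vertex data (illustrative, not the GraphLab API)."""
    def __init__(self):
        self.data = {}      # vertex id -> vertex data (here, its rank)
        self.in_nbrs = {}   # vertex id -> list of in-neighbors
        self.out_deg = {}   # vertex id -> out-degree

    def add_edge(self, src, dst):
        for v in (src, dst):
            self.data.setdefault(v, 1.0)
            self.in_nbrs.setdefault(v, [])
            self.out_deg.setdefault(v, 0)
        self.in_nbrs[dst].append(src)
        self.out_deg[src] += 1

def pagerank_update(graph, v, damping=0.85, tol=1e-4):
    """Update function: recompute v's rank from its in-neighbor scope.

    Returns the vertices that should be rescheduled, mimicking the
    dynamic-scheduling aspect of the vertex-centric model.
    """
    new_rank = (1 - damping) + damping * sum(
        graph.data[u] / graph.out_deg[u] for u in graph.in_nbrs[v]
    )
    changed = abs(new_rank - graph.data[v]) > tol
    graph.data[v] = new_rank
    if changed:
        # v's rank moved, so its out-neighbors may need updating.
        return [w for w in graph.data if v in graph.in_nbrs[w]]
    return []

def run(graph):
    """Sequential stand-in for the scheduler: process vertices until quiescence."""
    pending = deque(graph.data)        # start with every vertex scheduled
    scheduled = set(pending)
    while pending:
        v = pending.popleft()
        scheduled.discard(v)
        for w in pagerank_update(graph, v):
            if w not in scheduled:     # avoid duplicate scheduling
                scheduled.add(w)
                pending.append(w)
    return graph.data
```

In the real system the scheduler runs update functions in parallel across machines while enforcing the abstraction's consistency guarantees; this sketch only shows the programming model a user writes against.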

Comments: CMU Tech Report, GraphLab project webpage: http://graphlab.org
Categories: cs.LG