{ "id": "1811.03600", "version": "v1", "published": "2018-11-08T18:33:41.000Z", "updated": "2018-11-08T18:33:41.000Z", "title": "Measuring the Effects of Data Parallelism on Neural Network Training", "authors": [ "Christopher J. Shallue", "Jaehoon Lee", "Joe Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "George E. Dahl" ], "categories": [ "cs.LG", "stat.ML" ], "abstract": "Recent hardware developments have made unprecedented amounts of data parallelism available for accelerating neural network training. Among the simplest ways to harness next-generation accelerators is to increase the batch size in standard mini-batch neural network training algorithms. In this work, we aim to experimentally characterize the effects of increasing the batch size on training time, as measured in the number of steps necessary to reach a goal out-of-sample error. Eventually, increasing the batch size will no longer reduce the number of training steps required, but the exact relationship between the batch size and how many training steps are necessary is of critical importance to practitioners, researchers, and hardware designers alike. We study how this relationship varies with the training algorithm, model, and dataset and find extremely large variation between workloads. Along the way, we reconcile disagreements in the literature on whether batch size affects model quality. Finally, we discuss the implications of our results for efforts to train neural networks much faster in the future.", "revisions": [ { "version": "v1", "updated": "2018-11-08T18:33:41.000Z" } ], "analyses": { "keywords": [ "data parallelism", "mini-batch neural network training algorithms", "standard mini-batch neural network", "batch size affects model quality" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }