{ "id": "2006.08517", "version": "v1", "published": "2020-06-15T16:18:05.000Z", "updated": "2020-06-15T16:18:05.000Z", "title": "The Limit of the Batch Size", "authors": [ "Yang You", "Yuhui Wang", "Huan Zhang", "Zhao Zhang", "James Demmel", "Cho-Jui Hsieh" ], "categories": [ "cs.LG", "cs.CV", "cs.DC", "stat.ML" ], "abstract": "Large-batch training is an efficient approach for current distributed deep learning systems. It has enabled researchers to reduce the ImageNet/ResNet-50 training from 29 hours to around 1 minute. In this paper, we focus on studying the limit of the batch size. We think it may provide a guidance to AI supercomputer and algorithm designers. We provide detailed numerical optimization instructions for step-by-step comparison. Moreover, it is important to understand the generalization and optimization performance of huge batch training. Hoffer et al. introduced \"ultra-slow diffusion\" theory to large-batch training. However, our experiments show contradictory results with the conclusion of Hoffer et al. We provide comprehensive experimental results and detailed analysis to study the limitations of batch size scaling and \"ultra-slow diffusion\" theory. For the first time we scale the batch size on ImageNet to at least a magnitude larger than all previous work, and provide detailed studies on the performance of many state-of-the-art optimization schemes under this setting. We propose an optimization recipe that is able to improve the top-1 test accuracy by 18% compared to the baseline.", "revisions": [ { "version": "v1", "updated": "2020-06-15T16:18:05.000Z" } ], "analyses": { "keywords": [ "batch size", "ultra-slow diffusion", "state-of-the-art optimization schemes", "current distributed deep learning systems", "efficient approach" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }