{ "id": "1612.05086", "version": "v1", "published": "2016-12-15T14:42:45.000Z", "updated": "2016-12-15T14:42:45.000Z", "title": "Coupling Adaptive Batch Sizes with Learning Rates", "authors": [ "Lukas Balles", "Javier Romero", "Philipp Hennig" ], "categories": [ "cs.LG", "cs.CV", "stat.ML" ], "abstract": "Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence is thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On three image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available.", "revisions": [ { "version": "v1", "updated": "2016-12-15T14:42:45.000Z" } ], "analyses": { "keywords": [ "learning rate", "coupling adaptive batch sizes", "size adaptation yields faster", "adaptation yields faster optimization convergence", "batch size adaptation" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }