{ "id": "2302.09376", "version": "v1", "published": "2023-02-18T16:29:06.000Z", "updated": "2023-02-18T16:29:06.000Z", "title": "Parameter Averaging for SGD Stabilizes the Implicit Bias towards Flat Regions", "authors": [ "Atsushi Nitanda", "Ryuhei Kikuchi", "Shugo Maeda" ], "categories": [ "stat.ML", "cs.LG" ], "abstract": "Stochastic gradient descent is a workhorse for training deep neural networks due to its excellent generalization performance. Several studies demonstrated this success is attributed to the implicit bias of the method that prefers a flat minimum and developed new methods based on this perspective. Recently, Izmailov et al. (2018) empirically observed that an averaged stochastic gradient descent with a large step size can bring out the implicit bias more effectively and can converge more stably to a flat minimum than the vanilla stochastic gradient descent. In our work, we theoretically justify this observation by showing that the averaging scheme improves the bias-optimization tradeoff coming from the stochastic gradient noise: a large step size amplifies the bias but makes convergence unstable, and vice versa. Specifically, we show that the averaged stochastic gradient descent can get closer to a solution of a penalized objective on the sharpness than the vanilla stochastic gradient descent using the same step size under certain conditions. In experiments, we verify our theory and show this learning scheme significantly improves performance.", "revisions": [ { "version": "v1", "updated": "2023-02-18T16:29:06.000Z" } ], "analyses": { "keywords": [ "implicit bias", "sgd stabilizes", "vanilla stochastic gradient descent", "flat regions", "averaged stochastic gradient descent" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }