arXiv:1707.04199 [cs.LG]

Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting

Anders Oland, Aayush Bansal, Roger B. Dannenberg, Bhiksha Raj

Published 2017-07-13 (Version 1)

In this work, we show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks. Moreover, we present results showing that the utility of softmax does not stem from the normalization, as some have speculated. In fact, the normalization makes things worse. Rather, the advantage is in the exponentiation of error gradients. This exponential gradient boosting is shown to speed up convergence and improve generalization. To this end, we demonstrate faster convergence and better performance on diverse classification tasks: image classification using CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the latter case, using the state-of-the-art neural network architecture, the model converged 33% faster with our method (roughly two days of training less) than with the standard softmax activation, and with a slightly better performance to boot.
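The abstract attributes the benefit of softmax not to its normalization but to the exponentiation of error gradients at the output layer. The NumPy sketch below is not the authors' formulation; it is only a toy illustration, on a single misclassified 3-class example, of how the output-layer error signal differs between softmax with cross-entropy, a linear output with squared error, and a hypothetical exponentiated ("boosted") version of the linear-output error, whose exact form here is an assumption made for illustration.

import numpy as np

# Toy sketch (not the paper's exact method): compare output-layer error
# signals for one badly misclassified 3-class example.
#   1. softmax + cross-entropy       -> gradient (p - y), each entry bounded in (-1, 1)
#   2. linear output + squared error -> gradient (z - y), proportional to the raw error
#   3. linear output + a hypothetical exponentiated error gradient,
#      illustrating the idea of amplifying large errors

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

y = np.array([1.0, 0.0, 0.0])   # one-hot target
z = np.array([-2.0, 1.5, 0.5])  # raw output activations (wrong class favored)

# 1. Softmax + cross-entropy: dL/dz = p - y
p = softmax(z)
grad_softmax_ce = p - y

# 2. Linear output + squared error: dL/dz = z - y
grad_linear_mse = z - y

# 3. Hypothetical exponentiation of the error magnitude, keeping the sign,
#    so large errors produce disproportionately large gradients.
err = z - y
grad_exp_boost = np.sign(err) * (np.exp(np.abs(err)) - 1.0)

print("softmax + CE gradient :", grad_softmax_ce)
print("linear + MSE gradient :", grad_linear_mse)
print("exp-boosted gradient  :", grad_exp_boost)

Running this shows the softmax/cross-entropy gradient saturating near +/-1 while the exponentiated variant grows rapidly with the error, which is the qualitative effect the abstract describes as speeding up convergence.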

Related articles:
arXiv:1909.12098 [cs.LG] (Published 2019-09-26)
Sequential Training of Neural Networks with Gradient Boosting
arXiv:2209.12309 [cs.LG] (Published 2022-09-25)
Feature Encodings for Gradient Boosting with Automunge
arXiv:2204.06895 [cs.LG] (Published 2022-04-14)
Gradient boosting for convex cone predict and optimize problems