arXiv Analytics

arXiv:1808.04293 [cs.LG]

Fast, Better Training Trick --- Random Gradient

Jiakai Wei

Published 2018-08-13, Version 1

In this paper, we present a method to accelerate training and improve performance, called random gradient (RG). The method can be applied to the training of any model without extra computational cost. We use image classification, semantic segmentation, and GANs to confirm that it speeds up model training in computer vision. The central idea is to multiply the loss by a random number, which randomly reduces the back-propagated gradient. Using this method, we obtain better results on the Pascal VOC, CIFAR, and Cityscapes datasets.
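A minimal sketch of the RG idea as stated in the abstract, assuming a standard PyTorch training loop; the model, optimizer, and data below are illustrative placeholders, not from the paper:

import torch

# Toy model, optimizer, and batch; names are hypothetical stand-ins.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
inputs = torch.randn(8, 10)          # dummy inputs
targets = torch.randint(0, 2, (8,))  # dummy labels

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    # RG: scale the loss by a uniform random number in [0, 1);
    # by linearity of backprop, every gradient shrinks by the
    # same random factor before the optimizer step.
    scaled_loss = loss * torch.rand(1).item()
    scaled_loss.backward()
    optimizer.step()

Because the scaling factor multiplies the whole loss, this adds no extra forward or backward computation, consistent with the abstract's claim of zero additional cost.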

Related articles:
arXiv:1807.05597 [cs.LG] (Published 2018-07-15)
Deep Learning for Semantic Segmentation on Minimal Hardware
arXiv:1309.4061 [cs.LG] (Published 2013-09-16)
Learning a Loopy Model For Semantic Segmentation Exactly
arXiv:2402.10665 [cs.LG] (Published 2024-02-16)
Selective Prediction for Semantic Segmentation using Post-Hoc Confidence Estimation and Its Performance under Distribution Shift