arXiv Analytics


arXiv:1612.01589 [cs.LG]

Improving the Performance of Neural Networks in Regression Tasks Using Drawering

Konrad Zolna

Published 2016-12-05, Version 1

The method presented extends a given regression neural network so that its performance improves. The modification affects the learning procedure only, so the extension can simply be omitted during evaluation without any change in prediction. As a result, the modified model is evaluated as quickly as the original one but tends to perform better. The improvement is possible because the modification increases expressive power, provides better-behaved gradients, and acts as a regularizer. The knowledge gained by the temporarily extended neural network is contained in the parameters shared with the original neural network. The only cost is an increase in training time.
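The abstract does not spell out how the network is extended, so the following is only a minimal PyTorch sketch of the general pattern it describes: a training-only auxiliary head attached to a trunk shared with the original regression model, discarded at evaluation so prediction cost is unchanged. The specific choice of an auxiliary classification head over discretized target bins ("drawers"), and all names here (DraweringRegressor, training_step, n_bins, bin_edges, alpha), are assumptions made for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class DraweringRegressor(nn.Module):
    """Regression network with a training-only auxiliary head (illustrative sketch)."""

    def __init__(self, in_dim, hidden_dim=64, n_bins=10):
        super().__init__()
        # Shared trunk: its parameters carry the knowledge gained from the
        # auxiliary task back into the original regression model.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.reg_head = nn.Linear(hidden_dim, 1)       # original regression output
        self.aux_head = nn.Linear(hidden_dim, n_bins)  # training-only extension

    def forward(self, x, use_aux=False):
        h = self.trunk(x)
        y_hat = self.reg_head(h).squeeze(-1)
        if use_aux:
            return y_hat, self.aux_head(h)  # extra logits used only during training
        return y_hat                        # evaluation: same cost as the original model


def training_step(model, x, y, bin_edges, alpha=0.5):
    """One combined step: regression loss plus an assumed auxiliary classification loss."""
    y_hat, logits = model(x, use_aux=True)
    bin_idx = torch.bucketize(y, bin_edges)  # which "drawer" each target falls into
    loss = nn.functional.mse_loss(y_hat, y)
    loss = loss + alpha * nn.functional.cross_entropy(logits, bin_idx)
    return loss


# Example usage (hypothetical data):
# model = DraweringRegressor(in_dim=8)
# bin_edges = torch.linspace(0.1, 0.9, steps=9)  # 9 interior edges -> 10 drawers
# x, y = torch.randn(32, 8), torch.rand(32)
# loss = training_step(model, x, y, bin_edges)
# loss.backward()
# y_pred = model(x)  # evaluation path: auxiliary head never runs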

Related articles:
arXiv:1804.07152 [cs.LG] (Published 2018-04-17)
Scalable attribute-aware network embedding with locality
arXiv:1906.09669 [cs.LG] (Published 2019-06-23)
Nested Cavity Classifier: performance and remedy
arXiv:1906.03193 [cs.LG] (Published 2019-06-07)
Fighting Quantization Bias With Bias