arXiv:1708.04483 [cs.CV]

Learning with Rethinking: Recurrently Improving Convolutional Neural Networks through Feedback

Xin Li, Zequn Jie, Jiashi Feng, Changsong Liu, Shuicheng Yan

Published 2017-08-15 (Version 1)

Recent years have witnessed the great success of convolutional neural network (CNN) based models in computer vision. CNNs learn hierarchically abstracted features from images in an end-to-end manner. However, most existing CNN models learn features only through a feedforward structure; no feedback information from top layers to bottom layers is exploited to let the network refine itself. In this paper, we propose a "Learning with Rethinking" algorithm. By adding a feedback layer and producing an emphasis vector, the model recurrently improves its performance based on its previous predictions. In particular, the algorithm can be used to boost any pre-trained model. It is evaluated on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image, and ILSVRC-2012. The results demonstrate the advantage of training CNN models with the proposed feedback mechanism.
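The abstract's feedback loop can be sketched in a few lines: a feedforward pass produces a class posterior, a feedback layer maps that posterior to an emphasis vector, and the bottom features are re-weighted before the next pass. This is a minimal NumPy sketch under assumptions; the layer sizes, the single linear classifier standing in for a full CNN, the feedback matrix `W_fb`, and the `1 + tanh` emphasis form are all illustrative inventions, not the paper's actual architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_feat, n_cls = 8, 4

# Stand-ins for learned weights (hypothetical, randomly initialized here):
W_out = rng.normal(size=(n_cls, n_feat))  # classifier on top of CNN features
W_fb = rng.normal(size=(n_feat, n_cls))   # hypothetical feedback layer

x = rng.normal(size=n_feat)  # features from the feedforward pass of a CNN

feat = x
for t in range(3):  # "rethinking" iterations
    probs = softmax(W_out @ feat)           # prediction at this pass
    emphasis = 1.0 + np.tanh(W_fb @ probs)  # emphasis vector from top-down feedback
    feat = x * emphasis                     # re-weight bottom features, then re-predict

print(probs)  # posterior after the final rethinking pass
```

In the paper's setting, the feedback layer is trained jointly with (or on top of) a pre-trained network, so each iteration refines the prediction rather than merely re-running the same computation; the loop above only illustrates the data flow.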

Related articles:
arXiv:1512.07155 [cs.CV] (Published 2015-12-22)
Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web
arXiv:1707.07103 [cs.CV] (Published 2017-07-22)
PatchShuffle Regularization
arXiv:1804.07573 [cs.CV] (Published 2018-04-20)
MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices