arXiv:1604.03640 [cs.LG]
Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
Published 2016-04-13 (Version 1)
We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such an RNN, despite having orders of magnitude fewer parameters, achieves performance similar to that of the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset.
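The stated equivalence can be sketched numerically: a ResNet whose layers all share one set of weights computes the same function as a shallow RNN unrolled for the same number of steps, since both iterate the update h ← h + f(h). The following is a minimal NumPy illustration; the specific residual branch f (a linear map plus ReLU) and the dimensions are hypothetical choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1  # shared weights (illustrative)

def f(h):
    # One residual branch: linear map + ReLU (a hypothetical choice of f)
    return np.maximum(W @ h, 0.0)

def resnet_tied(x, depth):
    # Very deep ResNet whose `depth` layers all share the weights W
    h = x
    for _ in range(depth):
        h = h + f(h)  # residual update of each block
    return h

def shallow_rnn(x, steps):
    # Shallow RNN unrolled for `steps` time steps; its recurrence
    # h_t = h_{t-1} + f(h_{t-1}) is exactly one tied ResNet block
    h = x
    for _ in range(steps):
        h = h + f(h)
    return h

x = rng.standard_normal(8)
print(np.allclose(resnet_tied(x, 20), shallow_rnn(x, 20)))  # True
```

Note that the RNN stores only one copy of W regardless of depth, which is the source of the parameter savings mentioned above.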