arXiv Analytics

arXiv:1811.12273 [cs.LG]

On the Transferability of Representations in Neural Networks Between Datasets and Tasks

Haytham M. Fayek, Lawrence Cavedon, Hong Ren Wu

Published 2018-11-29 (Version 1)

Deep networks, composed of multiple layers of hierarchical distributed representations, tend to learn low-level features in their initial layers and transition to high-level features towards their final layers. Paradigms such as transfer learning, multi-task learning, and continual learning leverage this notion of generic hierarchical distributed representations to share knowledge across datasets and tasks. Herein, we study the layer-wise transferability of representations in deep networks across several datasets and tasks and report a number of empirical observations.
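To make the notion of layer-wise transfer concrete, the following is a minimal sketch, assuming PyTorch and a simple fully connected architecture chosen for illustration: the first k layers of a network trained on a source task are copied into a new network and frozen, while the remaining layers are retrained on a target task. This is not the authors' exact experimental protocol; the network shape, layer count, and optimizer settings are assumptions.

```python
# Minimal sketch of layer-wise transfer (illustrative, not the paper's protocol):
# copy the first k Linear layers from a source-task network into a target-task
# network, freeze them, and train only the remaining layers.
import torch
import torch.nn as nn

def make_net(num_classes: int) -> nn.Sequential:
    # Simple stack of fully connected layers; earlier layers correspond to
    # lower-level representations, later layers to higher-level ones.
    return nn.Sequential(
        nn.Linear(784, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, num_classes),
    )

def transfer_first_k_layers(source: nn.Sequential,
                            target: nn.Sequential,
                            k: int) -> nn.Sequential:
    # Copy the parameters of the first k Linear layers from source to target
    # and freeze them; all remaining layers stay trainable.
    linear_idx = 0
    for src_layer, tgt_layer in zip(source, target):
        if isinstance(src_layer, nn.Linear):
            if linear_idx < k:
                tgt_layer.load_state_dict(src_layer.state_dict())
                for p in tgt_layer.parameters():
                    p.requires_grad = False
            linear_idx += 1
    return target

# Hypothetical usage: source_net would be pretrained on the source dataset.
source_net = make_net(num_classes=10)
target_net = transfer_first_k_layers(source_net, make_net(num_classes=5), k=2)
optimizer = torch.optim.Adam(
    [p for p in target_net.parameters() if p.requires_grad], lr=1e-3)
```

Varying k in such a setup is one way to probe how transferability changes with layer depth for a given pair of datasets or tasks.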

Comments: Accepted paper at the Continual Learning Workshop, NeurIPS 2018
Journal: Continual Learning Workshop, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada
Categories: cs.LG, stat.ML
Related articles:
arXiv:2007.10099 [cs.LG] (Published 2020-07-20)
Early Stopping in Deep Networks: Double Descent and How to Eliminate it
arXiv:1908.09375 [cs.LG] (Published 2019-08-25)
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
arXiv:1807.09011 [cs.LG] (Published 2018-07-24)
Uncertainty Modelling in Deep Networks: Forecasting Short and Noisy Series