arXiv:2107.12657 [cs.LG]

Continual Learning with Neuron Activation Importance

Sohee Kim, Seungkyu Lee

Published 2021-07-27 (Version 1)

Continual learning is online learning over a sequence of tasks. A critical barrier in continual learning is that a network must learn each new task while retaining the knowledge of old tasks, without access to any data from those tasks. In this paper, we propose a neuron activation importance-based regularization method for stable continual learning regardless of the order of tasks. We conduct comprehensive experiments on existing benchmark data sets to evaluate not only the stability and plasticity of our method, with improved classification accuracy, but also the robustness of its performance to changes in task order.
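The abstract does not spell out how activation importance enters the regularizer, so the following is only a minimal sketch of the general idea: score each hidden neuron by its mean absolute activation on the previous task, then penalize drift in the weights feeding highly scored neurons while training the next task. All names here (MLP, neuron_importance, importance_penalty, lam) and the importance definition are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Tiny classifier that also exposes its hidden activations."""
    def __init__(self, d_in=784, d_hidden=256, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = F.relu(self.fc1(x))  # hidden activations scored for importance
        return self.fc2(h), h

@torch.no_grad()
def neuron_importance(model, loader, device="cpu"):
    """Mean |activation| per hidden neuron over the old task's data
    (assumed importance measure), normalized to [0, 1]."""
    total, n = 0.0, 0
    for x, _ in loader:
        _, h = model(x.to(device))
        total = total + h.abs().sum(dim=0)
        n += x.size(0)
    imp = total / n
    return imp / (imp.max() + 1e-12)

def importance_penalty(model, old_params, imp, lam=100.0):
    """Quadratic penalty on drift of the weights feeding each hidden
    neuron, scaled by that neuron's importance on the previous task."""
    dw = (model.fc1.weight - old_params["fc1.weight"]) ** 2
    db = (model.fc1.bias - old_params["fc1.bias"]) ** 2
    return lam * ((imp.unsqueeze(1) * dw).sum() + (imp * db).sum())

# Usage sketch: after finishing task A, snapshot parameters and score
# neurons; while training task B, add the penalty to the task loss.
# old = {k: v.detach().clone() for k, v in model.named_parameters()}
# imp = neuron_importance(model, loader_A)
# loss = F.cross_entropy(logits, y) + importance_penalty(model, old, imp)
```

Because the penalty anchors only the parameters behind important neurons, the remaining capacity stays free to adapt to the new task, which is the stability-plasticity trade-off the abstract describes.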

Related articles:
arXiv:1811.11682 [cs.LG] (Published 2018-11-28)
Experience Replay for Continual Learning
arXiv:2105.01946 [cs.LG] (Published 2021-05-05)
Continual Learning on the Edge with TensorFlow Lite
arXiv:2006.13772 [cs.LG] (Published 2020-06-24)
OvA-INN: Continual Learning with Invertible Neural Networks