arXiv:2006.15078 [cs.LG]

Continual Learning from the Perspective of Compression

Xu He, Min Lin

Published 2020-06-26 (Version 1)

Connectionist models such as neural networks suffer from catastrophic forgetting. In this work, we study this problem from the perspective of information theory and define forgetting as the increase in the description length of previous data when it is compressed with a sequentially learned model. In addition, we show that continual learning approaches based on variational posterior approximation and generative replay can be considered approximations to two prequential coding methods in compression, namely the Bayesian mixture code and the maximum likelihood (ML) plug-in code. We compare these approaches in terms of both compression and forgetting, and empirically study the reasons that limit the performance of continual learning methods based on variational posterior approximation. To address these limitations, we propose a new continual learning method that combines the ML plug-in and Bayesian mixture codes.
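
To illustrate the abstract's definition of forgetting, the minimal sketch below (not the authors' code) measures forgetting as the extra bits needed to describe the first task's data once the model has been updated on a second task. The Bernoulli model, the toy task data, and the Laplace-smoothed ML "plug-in" estimate are assumptions made purely for illustration.

```python
# Minimal sketch: forgetting as the increase in description length (bits)
# of earlier data under a sequentially updated model.
import math

def codelength_bits(data, p):
    """Codelength of binary data under a Bernoulli(p) model, in bits."""
    return sum(-math.log2(p if x == 1 else 1.0 - p) for x in data)

def ml_plugin_estimate(data):
    """Laplace-smoothed maximum-likelihood estimate of p (plug-in code)."""
    return (sum(data) + 1) / (len(data) + 2)

# Two toy "tasks" with different statistics (illustrative assumption).
task1 = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # mostly ones
task2 = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]   # mostly zeros

# Model after learning task 1, then after refitting on task 2 only
# (a naive sequential learner, so it forgets task 1).
p_after_task1 = ml_plugin_estimate(task1)
p_after_task2 = ml_plugin_estimate(task2)

len_before = codelength_bits(task1, p_after_task1)
len_after = codelength_bits(task1, p_after_task2)

# Forgetting of task 1 = increase in its description length, in bits.
print(f"L(task1 | model after task1) = {len_before:.2f} bits")
print(f"L(task1 | model after task2) = {len_after:.2f} bits")
print(f"forgetting = {len_after - len_before:.2f} bits")
```

Under these assumptions, the description length of task 1 roughly doubles after the model is refit on task 2, and that increase is exactly what the paper counts as forgetting.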

Comments: 4th Lifelong Learning Workshop at ICML 2020
Categories: cs.LG, cs.NE, stat.ML
Related articles:
arXiv:1802.06944 [cs.LG] (Published 2018-02-20)
DeepThin: A Self-Compressing Library for Deep Neural Networks
arXiv:2012.05152 [cs.LG] (Published 2020-12-09)
Binding and Perspective Taking as Inference in a Generative Neural Network Model
arXiv:1912.08335 [cs.LG] (Published 2019-12-18)
Learning from i.i.d. data under model miss-specification