arXiv Analytics

arXiv:2108.12641 [cs.LG]

Prototypes-Guided Memory Replay for Continual Learning

Stella Ho, Ming Liu, Lan Du, Longxiang Gao, Yong Xiang

Published 2021-08-28 (Version 1)

Continual learning (CL) refers to a machine learning paradigm that uses only a small number of training samples together with previously learned knowledge to enhance learning performance. CL models learn tasks from various domains in a sequential manner. The major difficulty in CL is catastrophic forgetting of previously learned tasks, caused by shifts in data distributions. Existing CL models often employ a replay-based approach to diminish catastrophic forgetting. Most CL models stochastically select previously seen samples to retain learned knowledge. However, the occupied memory size keeps growing as learned tasks accumulate. To address this, we propose a memory-efficient CL method. We devise a dynamic prototypes-guided memory replay module and incorporate it into an online meta-learning model. We conduct extensive experiments on text classification and additionally investigate the effect of training set orders on CL model performance. The experimental results demonstrate the superiority of our method in alleviating catastrophic forgetting and enabling efficient knowledge transfer.
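
The abstract does not spell out the replay mechanism, but the general idea of prototype-guided replay (as opposed to stochastic sample selection) can be illustrated with a minimal sketch: keep one running prototype (mean embedding) per seen class and retain only the few samples closest to it, so the replay buffer stays bounded as tasks accumulate. The Python sketch below is an assumption-laden illustration, not the authors' implementation; names such as PrototypeMemory and k_per_class are hypothetical.

    import numpy as np

    # Illustrative sketch only (not the paper's algorithm): a bounded replay
    # memory that keeps, per class, a running prototype (mean embedding) and
    # the k stored samples whose embeddings lie closest to that prototype.
    class PrototypeMemory:
        def __init__(self, k_per_class=5):
            self.k_per_class = k_per_class   # fixed memory budget per class
            self.prototypes = {}             # label -> (mean embedding, sample count)
            self.buffer = {}                 # label -> list of (distance, sample)

        def add(self, label, embedding, sample):
            # Update the running mean prototype for this class.
            mean, n = self.prototypes.get(label, (np.zeros_like(embedding), 0))
            mean = (mean * n + embedding) / (n + 1)
            self.prototypes[label] = (mean, n + 1)

            # Keep only the k samples nearest to the prototype at insertion
            # time, so occupied memory stays constant as tasks accumulate.
            entries = self.buffer.setdefault(label, [])
            entries.append((float(np.linalg.norm(embedding - mean)), sample))
            entries.sort(key=lambda e: e[0])
            del entries[self.k_per_class:]

        def replay_batch(self):
            # All retained samples, to be interleaved with the current task's batch.
            return [(label, s) for label, entries in self.buffer.items()
                    for _, s in entries]

    # Usage: push (label, embedding, raw sample) triples during training, then
    # mix replay_batch() into later tasks' mini-batches to counter forgetting.
    memory = PrototypeMemory(k_per_class=2)
    rng = np.random.default_rng(0)
    for label in ("sports", "politics"):
        for i in range(5):
            memory.add(label, rng.normal(size=8), f"{label}-doc-{i}")
    print(len(memory.replay_batch()))   # at most 2 samples per class -> prints 4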
