arXiv Analytics


arXiv:2403.13249 [cs.LG]

A Unified and General Framework for Continual Learning

Zhenyi Wang, Yan Li, Li Shen, Heng Huang

Published 2024-03-20 (Version 1)

Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge. Various methods have been developed to address catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques. However, these methods lack a unified framework and common terminology for describing their approaches. This work bridges that gap by introducing a comprehensive framework that encompasses and reconciles the existing methodologies: established CL approaches emerge as special instances of a single, general optimization objective. An intriguing finding is that, despite their diverse origins, these methods share common mathematical structures, revealing their interconnectedness through a shared underlying optimization objective. Moreover, the proposed framework introduces an innovative concept called refresh learning, designed to enhance CL performance. This approach draws inspiration from neuroscience, where the human brain often sheds outdated information to improve the retention of crucial knowledge and to facilitate the acquisition of new information. In essence, refresh learning operates by first unlearning the current data and then relearning it. It serves as a versatile plug-in that integrates seamlessly with existing CL methods, offering an adaptable and effective enhancement to the learning process. Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning. Code is available at \url{}.
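The unlearn-then-relearn idea behind refresh learning can be illustrated with a toy gradient sketch. This is a minimal illustration under assumptions, not the paper's implementation: the function names (`refresh_step`, `grad`), the step sizes, and the quadratic toy loss are all hypothetical, and the "unlearn" phase is rendered here simply as a small gradient-ascent step on the current loss before the ordinary descent step.

```python
def grad(loss_fn, w, eps=1e-6):
    # Central-difference numerical gradient of a scalar loss at parameter w.
    return (loss_fn(w + eps) - loss_fn(w - eps)) / (2 * eps)

def refresh_step(w, loss_fn, lr=0.1, unlearn_lr=0.02):
    # Unlearn: a small gradient-ascent step sheds part of what the model
    # currently encodes about this data (hypothetical rendering of the idea).
    w = w + unlearn_lr * grad(loss_fn, w)
    # Relearn: a standard gradient-descent step on the same data.
    w = w - lr * grad(loss_fn, w)
    return w

# Toy example: a quadratic loss with its minimum at w = 3.
loss = lambda w: (w - 3.0) ** 2

w = 0.0
for _ in range(100):
    w = refresh_step(w, loss)
```

Because the ascent step is smaller than the descent step, the combined update still converges to the loss minimum; in an actual CL training loop, the same two-phase step would wrap the inner update of whichever base method it is plugged into.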

Related articles: Most relevant | Search more
arXiv:1904.07734 [cs.LG] (Published 2019-04-15)
Three scenarios for continual learning
arXiv:2205.08013 [cs.LG] (Published 2022-05-16)
Continual learning on 3D point clouds with random compressed rehearsal
arXiv:2109.14035 [cs.LG] (Published 2021-09-28, updated 2022-02-24)
Formalizing the Generalization-Forgetting Trade-off in Continual Learning