arXiv Analytics

arXiv:2206.11849 [cs.LG]

Sample Condensation in Online Continual Learning

Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu

Published 2022-06-23 (Version 1)

Online continual learning is a challenging learning scenario in which the model must learn from a non-stationary stream of data where each sample is seen only once. The main challenge is to learn incrementally while avoiding catastrophic forgetting, namely the problem of losing previously acquired knowledge while learning from new data. A popular solution in this scenario is to use a small memory to retain old data and rehearse it over time. Unfortunately, due to the limited memory size, the quality of the memory deteriorates over time. In this paper, we propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and make better use of its limited size. The sample condensation step compresses old samples instead of removing them, as other replay strategies do. As a result, the experiments show that, whenever the memory budget is limited compared to the complexity of the data, OLCGM improves final accuracy compared to state-of-the-art replay strategies.
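The core idea of the abstract, a fixed-size replay memory that condenses old samples rather than evicting them when it fills up, can be sketched as follows. This is an illustrative toy, not the actual OLCGM algorithm: the `CondensingReplayBuffer` class and its naive feature-averaging merge are hypothetical stand-ins for the paper's knowledge-condensation step.

```python
import random


class CondensingReplayBuffer:
    """Illustrative sketch (not the actual OLCGM method): a fixed-size
    replay memory that, when full, merges two stored samples of the same
    class into one condensed sample (here, a naive feature average)
    instead of simply evicting a sample."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []  # list of (features: list[float], label)

    def add(self, features, label):
        # Condense first so the new sample always fits.
        if len(self.memory) >= self.capacity:
            self._condense()
        self.memory.append((list(features), label))

    def _condense(self):
        # Prefer merging two samples that share a class; fall back to
        # random eviction when no same-class pair exists.
        by_label = {}
        for idx, (_, lbl) in enumerate(self.memory):
            by_label.setdefault(lbl, []).append(idx)
        for lbl, idxs in by_label.items():
            if len(idxs) >= 2:
                i, j = idxs[0], idxs[1]
                fi, _ = self.memory[i]
                fj, _ = self.memory[j]
                merged = [(a + b) / 2 for a, b in zip(fi, fj)]
                # Replace one slot with the condensed sample, drop the other.
                self.memory[i] = (merged, lbl)
                del self.memory[j]
                return
        del self.memory[random.randrange(len(self.memory))]

    def sample(self, k):
        # Minibatch of stored samples for rehearsal during training.
        return random.sample(self.memory, min(k, len(self.memory)))
```

In a real online continual learning loop, each incoming minibatch would be interleaved with a `sample()` of rehearsal data, and the condensation step would optimize the merged sample (e.g., by gradient matching) rather than averaging features.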

Comments: Accepted as a conference paper at 2022 International Joint Conference on Neural Networks (IJCNN 2022). Part of 2022 IEEE World Congress on Computational Intelligence (IEEE WCCI 2022)
Categories: cs.LG, cs.AI, cs.CV
Related articles:
arXiv:2403.10853 [cs.LG] (Published 2024-03-16)
Just Say the Name: Online Continual Learning with Category Names Only via Data Generation
arXiv:2302.01047 [cs.LG] (Published 2023-02-02)
Real-Time Evaluation in Online Continual Learning: A New Paradigm
arXiv:2305.09275 [cs.LG] (Published 2023-05-16)
Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?