arXiv Analytics

arXiv:2407.00176 [cs.LG]

The impact of model size on catastrophic forgetting in Online Continual Learning

Eunhae Lee

Published 2024-06-28 (Version 1)

This study investigates the impact of model size on Online Continual Learning performance, with a focus on catastrophic forgetting. Employing ResNet architectures of varying sizes, the research examines how network depth and width affect model performance in class-incremental learning on the SplitCIFAR-10 dataset. Key findings reveal that larger models do not guarantee better Continual Learning performance; in fact, they often struggle more to adapt to new tasks, particularly in online settings. These results challenge the notion that larger models inherently mitigate catastrophic forgetting, highlighting the nuanced relationship between model size and Continual Learning efficacy. This study contributes to a deeper understanding of model scalability and its practical implications in Continual Learning scenarios.
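
The experimental setup can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch, not the paper's code: it splits CIFAR-10 into five two-class tasks, trains torchvision ResNets of different depths in a single pass over the task stream (the online setting), and estimates average forgetting as the drop from each old task's best observed accuracy. The task split, optimizer, batch sizes, and forgetting metric are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): online class-incremental learning
# on SplitCIFAR-10 with ResNets of different depths, measuring forgetting.
# Task split (5 tasks x 2 classes), SGD hyperparameters, and the forgetting
# metric are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms, models

def split_cifar10_tasks(train: bool, n_tasks: int = 5):
    """Partition CIFAR-10 into disjoint class-incremental tasks (2 classes per task)."""
    tfm = transforms.Compose([transforms.ToTensor(),
                              transforms.Normalize((0.49, 0.48, 0.45), (0.25, 0.24, 0.26))])
    ds = datasets.CIFAR10("./data", train=train, download=True, transform=tfm)
    targets = torch.tensor(ds.targets)
    classes_per_task = 10 // n_tasks
    tasks = []
    for t in range(n_tasks):
        cls = list(range(t * classes_per_task, (t + 1) * classes_per_task))
        idx = torch.nonzero(sum(targets == c for c in cls)).squeeze(1)
        tasks.append(Subset(ds, idx.tolist()))
    return tasks

@torch.no_grad()
def accuracy(model, loader, device):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

def run_online_cl(resnet_fn, device="cpu", n_tasks=5):
    """Single-pass (online) training over the task stream; return final accuracy and forgetting."""
    model = resnet_fn(num_classes=10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # assumed hyperparameters
    loss_fn = nn.CrossEntropyLoss()
    train_tasks = split_cifar10_tasks(train=True, n_tasks=n_tasks)
    test_tasks = split_cifar10_tasks(train=False, n_tasks=n_tasks)
    acc_after = []  # acc_after[t][k]: accuracy on task k after finishing task t
    for t, task_ds in enumerate(train_tasks):
        model.train()
        # Online setting: each mini-batch from the stream is seen exactly once.
        for x, y in DataLoader(task_ds, batch_size=10, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
        acc_after.append([accuracy(model, DataLoader(ts, batch_size=256), device)
                          for ts in test_tasks[: t + 1]])
    # Average forgetting: drop from the best accuracy ever reached on each old task.
    forgetting = [max(acc_after[j][k] for j in range(k, n_tasks)) - acc_after[-1][k]
                  for k in range(n_tasks - 1)]
    return acc_after[-1], sum(forgetting) / len(forgetting)

if __name__ == "__main__":
    for name, fn in [("ResNet-18", models.resnet18), ("ResNet-34", models.resnet34)]:
        final_acc, avg_forgetting = run_online_cl(fn)
        print(f"{name}: final per-task acc={final_acc}, avg forgetting={avg_forgetting:.3f}")
```

Comparing the printed forgetting values across the two (or more) ResNet variants reproduces, in miniature, the kind of size-versus-forgetting comparison the abstract describes.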

Related articles:
arXiv:2101.10423 [cs.LG] (Published 2021-01-25)
Online Continual Learning in Image Classification: An Empirical Survey
arXiv:2306.03364 [cs.LG] (Published 2023-06-06)
Learning Representations on the Unit Sphere: Application to Online Continual Learning
arXiv:2003.09114 [cs.LG] (Published 2020-03-20)
Online Continual Learning on Sequences