arXiv:2112.08654 [cs.LG]

Learning to Prompt for Continual Learning

Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister

Published 2021-12-16, updated 2022-03-21 (version 2)

The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge. Typical methods rely on a rehearsal buffer or known task identity at test time to retrieve learned knowledge and address forgetting. In contrast, this work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time. Our method, Learning to Prompt (L2P), learns to dynamically prompt a pre-trained model to learn tasks sequentially under different task transitions. In the proposed framework, prompts are small learnable parameters maintained in a memory space. The objective is to optimize prompts to instruct the model's predictions and to explicitly manage task-invariant and task-specific knowledge while maintaining model plasticity. We conduct comprehensive experiments on popular image classification benchmarks under different challenging continual learning settings, where L2P consistently outperforms prior state-of-the-art methods. Surprisingly, L2P achieves competitive results against rehearsal-based methods even without a rehearsal buffer, and it is directly applicable to challenging task-agnostic continual learning. Source code is available at https://github.com/google-research/l2p.
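To make the "prompts maintained in a memory space" idea concrete, the following PyTorch-style sketch shows one plausible way to realize such a prompt pool: learnable (key, prompt) pairs queried by a feature from a frozen pre-trained backbone, with the best-matching prompts prepended to the input tokens. The pool size, prompt length, top-k selection, cosine-similarity matching loss, loss weight, and helper names (backbone.class_feature, backbone.embed, backbone.encode, head) are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """Illustrative prompt pool: learnable (key, prompt) pairs selected per input.
    All hyper-parameters below are placeholder values, not the paper's settings."""
    def __init__(self, pool_size=10, prompt_len=5, embed_dim=768, top_k=5):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, embed_dim) * 0.02)
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, embed_dim) * 0.02)
        self.top_k = top_k

    def forward(self, query):
        # query: [B, D] feature produced by the frozen pre-trained backbone.
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)  # [B, P]
        topk = sim.topk(self.top_k, dim=-1)                # indices of best-matching keys
        selected = self.prompts[topk.indices]              # [B, k, L, D]
        prompt_tokens = selected.reshape(query.size(0), -1, selected.size(-1))  # [B, k*L, D]
        match_loss = (1.0 - topk.values).mean()            # pull selected keys toward the query
        return prompt_tokens, match_loss

# Sketch of one training step: only the prompt pool (and a classifier head) is trained,
# the pre-trained transformer stays frozen, and task identity is never used.
# backbone.class_feature / backbone.embed / backbone.encode / head are hypothetical helpers.
#
# pool = PromptPool()
# query = backbone.class_feature(x)                         # frozen forward pass for the query
# prompt_tokens, match_loss = pool(query)
# tokens = torch.cat([prompt_tokens, backbone.embed(x)], dim=1)
# logits = head(backbone.encode(tokens))
# loss = F.cross_entropy(logits, y) + 0.5 * match_loss      # 0.5 is an illustrative weight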

Comments: Published at CVPR 2022 as a conference paper
Categories: cs.LG, cs.CV
Related articles:
arXiv:2006.13772 [cs.LG] (Published 2020-06-24)
OvA-INN: Continual Learning with Invertible Neural Networks
arXiv:2205.08013 [cs.LG] (Published 2022-05-16)
Continual learning on 3D point clouds with random compressed rehearsal
arXiv:2107.12657 [cs.LG] (Published 2021-07-27)
Continual Learning with Neuron Activation Importance