arXiv:2402.09780 [cs.LG]

TinyCL: An Efficient Hardware Architecture for Continual Learning on Autonomous Systems

Eugenio Ressa, Alberto Marchisio, Maurizio Martina, Guido Masera, Muhammad Shafique

Published 2024-02-15, updated 2024-08-05 (Version 2)

The Continual Learning (CL) paradigm consists in continuously updating the parameters of a Deep Neural Network (DNN) model to progressively learn new tasks without degrading performance on previous tasks, i.e., avoiding so-called catastrophic forgetting. However, the DNN parameter update in CL-based autonomous systems is extremely resource-hungry. Existing DNN accelerators cannot be directly employed for CL because they only support forward propagation. The few prior architectures that execute backpropagation and weight update lack the control and management logic required for CL. Towards this, we design TinyCL, a hardware architecture that performs CL on resource-constrained autonomous systems. It consists of a processing unit that executes both forward and backward propagation, and a control unit that manages the memory-based CL workload. To minimize memory accesses, the sliding window of the convolutional layer moves in a snake-like fashion. Moreover, the Multiply-and-Accumulate (MAC) units can be reconfigured at runtime to execute different operations. To the best of our knowledge, TinyCL is the first hardware accelerator that executes CL on autonomous systems. We synthesize the complete TinyCL architecture in a 65 nm CMOS technology node using a conventional ASIC design flow. It executes one training epoch of a Conv + ReLU + Dense model on the CIFAR10 dataset in 1.76 s, whereas one training epoch of the same model on an Nvidia Tesla P100 GPU takes 103 s, thus achieving a 58x speedup while consuming 86 mW on a 4.74 mm² die.
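The snake-like sliding-window traversal mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is only an illustration of the access pattern, not the paper's design: the actual architecture is RTL hardware, and the function name conv2d_snake, the single-channel shapes, and the unit stride are assumptions made here for clarity. The idea is to visit convolution output positions in boustrophedon order, so consecutive windows always overlap and buffered input data can be reused instead of refetched from memory.

```python
import numpy as np

def conv2d_snake(x, w, stride=1):
    """Naive 2D convolution whose output positions are visited in a
    snake-like (boustrophedon) order: left-to-right on even output rows,
    right-to-left on odd rows. Shapes assumed for this sketch:
    x is (H, W), w is (K, K). Illustrative only, not the paper's RTL."""
    H, W = x.shape
    K, _ = w.shape
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1
    y = np.zeros((out_h, out_w))

    for i in range(out_h):
        # Reverse the column sweep on odd rows: the last window of one row
        # and the first window of the next occupy the same input columns,
        # shifted down by one stride, so most of the buffered data can be
        # reused rather than refetched (unlike raster order, which jumps
        # back to the left edge and discards the buffer).
        cols = range(out_w) if i % 2 == 0 else range(out_w - 1, -1, -1)
        for j in cols:
            window = x[i * stride : i * stride + K, j * stride : j * stride + K]
            y[i, j] = np.sum(window * w)  # MAC reduction over the window
    return y

# Tiny usage example: the result is identical to raster-order convolution,
# only the visitation order (and hence the memory-access pattern) changes.
x = np.arange(36, dtype=float).reshape(6, 6)
w = np.ones((3, 3))
print(conv2d_snake(x, w))
```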

Related articles:
arXiv:2007.07617 [cs.LG] (Published 2020-07-15)
SpaceNet: Make Free Space For Continual Learning
arXiv:2204.10830 [cs.LG] (Published 2022-04-22)
Memory Bounds for Continual Learning
arXiv:2112.08654 [cs.LG] (Published 2021-12-16, updated 2022-03-21)
Learning to Prompt for Continual Learning
Zifeng Wang et al.