arXiv:2306.11426 [cs.LG]

Exploring the Performance and Efficiency of Transformer Models for NLP on Mobile Devices

Ioannis Panopoulos, Sokratis Nikolaidis, Stylianos I. Venieris, Iakovos S. Venieris

Published 2023-06-20 (Version 1)

Deep learning (DL) is characterised by its dynamic nature, with new deep neural network (DNN) architectures and approaches emerging every few years, driving the field's advancement. At the same time, the ever-increasing use of mobile devices (MDs) has resulted in a surge of DNN-based mobile applications. Although traditional architectures, such as CNNs and RNNs, have been successfully integrated into MDs, this is not the case for Transformers, a relatively new model family that has achieved new levels of accuracy across AI tasks but poses significant computational challenges. In this work, we take a step towards bridging this gap by examining the current state of on-device Transformer execution. To this end, we construct a benchmark of representative models and thoroughly evaluate their performance across MDs with different computational capabilities. Our experimental results show that Transformers are not accelerator-friendly and indicate the need for software and hardware optimisations to achieve efficient deployment.
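To illustrate the kind of measurement such an evaluation involves, the sketch below times the forward pass of a small Transformer encoder in PyTorch. It is not the paper's benchmark suite; the model size, sequence length, and warm-up/run counts are placeholder assumptions, and a real on-device study would additionally target mobile runtimes and accelerators (e.g. NNAPI, GPU, or DSP delegates) rather than a desktop CPU.

# Illustrative latency micro-benchmark for a Transformer encoder.
# Not the paper's benchmark: model size, sequence length, and timing
# methodology here are placeholder assumptions for demonstration only.
import time
import torch

def benchmark_latency(model, example_input, warmup=10, runs=50):
    """Return the mean forward-pass latency in milliseconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):           # warm-up iterations to stabilise timing
            model(example_input)
        start = time.perf_counter()
        for _ in range(runs):
            model(example_input)
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / runs

if __name__ == "__main__":
    # A small encoder standing in for a "representative" Transformer model.
    layer = torch.nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
    encoder = torch.nn.TransformerEncoder(layer, num_layers=4)
    tokens = torch.randn(1, 128, 256)     # (batch, sequence length, embedding dim)
    print(f"mean latency: {benchmark_latency(encoder, tokens):.2f} ms")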

Comments: Accepted at the 3rd IEEE International Workshop on Distributed Intelligent Systems (DistInSys), 2023
Categories: cs.LG, cs.CL
Related articles:
arXiv:2309.13643 [cs.LG] (Published 2023-09-24)
REWAFL: Residual Energy and Wireless Aware Participant Selection for Efficient Federated Learning over Mobile Devices
Y. Li et al.
arXiv:2101.04866 [cs.LG] (Published 2021-01-13)
Towards Energy Efficient Federated Learning over 5G+ Mobile Devices
arXiv:2102.06336 [cs.LG] (Published 2021-02-12)
Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices
Yuhong Song et al.