arXiv:2407.04803 [cs.LG]

The Impact of Quantization and Pruning on Deep Reinforcement Learning Models

Heng Lu, Mehdi Alemi, Reza Rawassizadeh

Published 2024-07-05Version 1

Deep reinforcement learning (DRL) has achieved remarkable success across various domains, such as video games, robotics, and, recently, large language models. However, the computational costs and memory requirements of DRL models often limit their deployment in resource-constrained environments. This challenge underscores the urgent need to explore neural network compression methods that make DRL models more practical and broadly applicable. Our study investigates the impact of two prominent compression methods, quantization and pruning, on DRL models. We examine how these techniques influence four performance factors: average return, memory, inference time, and battery utilization, across various DRL algorithms and environments. We find that, despite reducing model size, these compression techniques generally do not improve the energy efficiency of DRL models. We provide insights into the trade-offs between model compression and DRL performance, offering guidelines for deploying efficient DRL models in resource-constrained settings.
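To illustrate the two compression methods the abstract studies, here is a minimal NumPy sketch of symmetric post-training int8 quantization and magnitude-based pruning applied to a weight matrix (e.g., one layer of a DRL policy network). This is an illustrative example, not the paper's implementation; the function names and the 50% sparsity level are choices made here for demonstration.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map float weights to int8
    using a single per-tensor scale factor."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def magnitude_prune(w, sparsity=0.5):
    """Magnitude pruning: zero out the given fraction of weights
    with the smallest absolute values."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return w * (np.abs(w) > threshold)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_dequantized = q.astype(np.float32) * scale  # low-precision approximation of w
w_pruned = magnitude_prune(w, sparsity=0.5)   # half the weights set to zero
```

Quantization shrinks storage by a factor of four (int8 vs. float32) at the cost of a small reconstruction error, while pruning introduces sparsity that only reduces compute if the runtime exploits it, which is one reason, as the paper observes, that smaller models do not automatically translate into lower energy use.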
