arXiv Analytics

arXiv:2309.15848 [cs.CV]

SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations

Sharath Girish, Abhinav Shrivastava, Kamal Gupta

Published 2023-09-27 (Version 1)

Implicit Neural Representations (INRs), or neural fields, have emerged as a popular framework for encoding multimedia signals such as images and radiance fields while retaining high quality. Recently, the learnable feature grids proposed by Instant-NGP have enabled significant speed-ups in both the training and the sampling of INRs by replacing a large neural network with a multi-resolution look-up table of feature vectors and a much smaller neural network. However, these feature grids come at the expense of large memory consumption, which can be a bottleneck for storage and streaming applications. In this work, we propose SHACIRA, a simple yet effective task-agnostic framework for compressing such feature grids with no additional post-hoc pruning or quantization stages. We reparameterize the feature grids with quantized latent weights and apply entropy regularization in the latent space to achieve high levels of compression across various domains. Quantitative and qualitative results on diverse datasets consisting of images, videos, and radiance fields show that our approach outperforms existing INR approaches without the need for any large datasets or domain-specific heuristics. Our project page is available at http://shacira.github.io .
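The abstract describes two ingredients: a multi-resolution hash-grid look-up (as in Instant-NGP) and a reparameterization of its entries as quantized latents whose empirical entropy is regularized to shrink the stored bitstream. The following is a minimal, hypothetical sketch of those ideas in NumPy; the table sizes, hash function, and helper names are illustrative assumptions, not the authors' implementation, which additionally trains the latents with gradient-based optimization and a learned entropy model.

```python
import numpy as np

rng = np.random.default_rng(0)

TABLE_SIZE = 1 << 10   # entries per resolution level (tiny, for illustration)
FEAT_DIM = 2           # feature channels per table entry
LEVELS = 4             # number of resolution levels

# Continuous latent weights; rounding stands in for quantization
# (training would use a straight-through estimator for gradients).
latents = rng.normal(0.0, 0.5, size=(LEVELS, TABLE_SIZE, FEAT_DIM))

def quantize(w):
    """Round latents to integers; only the integers need to be stored."""
    return np.round(w)

def hash_index(coords, level):
    """Toy spatial hash mapping integer 2-D coords to a table slot."""
    primes = np.array([1, 2654435761], dtype=np.uint64)
    h = (coords.astype(np.uint64) * primes).sum(axis=-1)
    return (h ^ np.uint64(level)) % TABLE_SIZE

def lookup(xy):
    """Concatenate quantized features across levels for integer coords."""
    q = quantize(latents)
    feats = [q[lvl, hash_index(xy, lvl)] for lvl in range(LEVELS)]
    return np.concatenate(feats, axis=-1)

def entropy_bits_per_entry(q_level):
    """Empirical entropy (bits per table entry) of one level's quantized
    features -- the quantity an entropy regularizer drives down so the
    quantized grid compresses to a small bitstream."""
    vals, counts = np.unique(q_level, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum() * q_level.shape[-1]

# Query two points; the concatenated features would feed a small MLP.
xy = np.array([[3, 7], [10, 2]])
features = lookup(xy)                                    # shape (2, LEVELS * FEAT_DIM)
bits = entropy_bits_per_entry(quantize(latents[0]))      # nonnegative
```

In this sketch, lowering `bits` (e.g. by penalizing it in the training loss) concentrates the quantized latents on few integer values, which an entropy coder can then store compactly; the task-agnostic aspect is that nothing here depends on whether the signal is an image, a video, or a radiance field.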

Related articles:
arXiv:2207.10395 [cs.CV] (Published 2022-07-21)
Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives
arXiv:1806.06519 [cs.CV] (Published 2018-06-18)
HitNet: a neural network with capsules embedded in a Hit-or-Miss layer, extended with hybrid data augmentation and ghost capsules
arXiv:1711.10157 [cs.CV] (Published 2017-11-28)
Deformation estimation of an elastic object by partial observation using a neural network