arXiv Analytics

arXiv:2403.09344 [cs.CV]

SketchINR: A First Look into Sketches as Implicit Neural Representations

Hmrishav Bandyopadhyay, Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Aneeshan Sain, Tao Xiang, Timothy Hospedales, Yi-Zhe Song

Published 2024-03-14, Version 1

We propose SketchINR to advance the representation of vector sketches with implicit neural models. A variable-length vector sketch is compressed into a latent space of fixed dimension that implicitly encodes the underlying shape as a function of time and strokes. The learned function predicts the $xy$ point coordinates of a sketch at each time and stroke. Despite its simplicity, SketchINR outperforms existing representations at multiple tasks: (i) Encoding an entire sketch dataset into fixed-size latent vectors, SketchINR gives $60\times$ and $10\times$ data compression over raster and vector sketches, respectively. (ii) SketchINR's auto-decoder provides a much higher-fidelity representation than other learned vector sketch representations, and is uniquely able to scale to complex vector sketches such as FS-COCO. (iii) SketchINR supports parallelisation and can decode/render $\sim$$100\times$ faster than other learned vector representations such as SketchRNN. (iv) SketchINR, for the first time, emulates the human ability to reproduce a sketch with varying abstraction in terms of the number and complexity of strokes. As a first look at implicit sketches, SketchINR's compact, high-fidelity representation will support future work in modelling long and complex sketches.
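The core idea, a fixed-size latent vector decoded into point coordinates as a function of time and stroke index, can be sketched as a small coordinate MLP. The layer sizes, tanh activation, and conditioning by concatenation below are illustrative assumptions for a minimal implicit decoder, not the paper's exact architecture:

```python
import numpy as np

def decode_point(z, t, stroke, weights):
    """Evaluate an implicit sketch decoder at (t, stroke).

    z       : fixed-size latent vector encoding the whole sketch
    t       : normalised time along the stroke, in [0, 1]
    stroke  : stroke index, normalised to [0, 1]
    weights : list of (W, b) pairs for a small MLP
    Returns the predicted (x, y) coordinate.
    """
    h = np.concatenate([z, [t, stroke]])  # condition on latent by concatenation (assumed)
    for W, b in weights[:-1]:
        h = np.tanh(W @ h + b)            # smooth activation gives smooth curves
    W, b = weights[-1]
    return W @ h + b                      # linear head -> (x, y)

def init_weights(latent_dim, hidden=64, seed=0):
    """Random weights for a 2-hidden-layer MLP (sizes are illustrative)."""
    rng = np.random.default_rng(seed)
    sizes = [latent_dim + 2, hidden, hidden, 2]
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

# Decoding is independent per (t, stroke) query, which is what makes this
# kind of representation trivially parallel, unlike autoregressive models
# such as SketchRNN that must emit points sequentially.
z = np.zeros(16)                      # 16-dim latent (illustrative size)
weights = init_weights(latent_dim=16)
points = np.stack([decode_point(z, t, 0.0, weights)
                   for t in np.linspace(0.0, 1.0, 100)])
print(points.shape)  # (100, 2): 100 xy samples along stroke 0
```

In practice the latent for each sketch would be fit by optimisation through an auto-decoder rather than set by hand, but the evaluation path above is the part that makes per-point decoding parallelisable.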

Related articles:
arXiv:2309.15848 [cs.CV] (Published 2023-09-27)
SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations
arXiv:2304.08960 [cs.CV] (Published 2023-04-18)
Generative modeling of living cells with SO(3)-equivariant implicit neural representations
arXiv:2409.09566 [cs.CV] (Published 2024-09-15)
Learning Transferable Features for Implicit Neural Representations