arXiv:2410.12725 [cs.CV]

Optimizing 3D Geometry Reconstruction from Implicit Neural Representations

Shen Fan, Przemyslaw Musialski

Published 2024-10-16 (Version 1)

Implicit neural representations (INRs) have emerged as a powerful tool for learning 3D geometry, offering clear advantages over conventional representations such as mesh-based methods. A common type of INR implicitly encodes a shape's boundary as the zero-level set of a learned continuous function and learns a mapping from a low-dimensional latent space to the space of possible shapes, each represented by its signed distance function (SDF). However, most INRs struggle to retain high-frequency details, which are crucial for accurate geometric depiction, and they are computationally expensive. To address these limitations, we present a novel approach that both reduces computational cost and enhances the capture of fine details. Our method integrates periodic activation functions, positional encodings, and normals into the neural network architecture. This integration significantly enhances the model's ability to learn the entire space of 3D shapes while preserving intricate details and sharp features, areas where conventional representations often fall short.
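To make the described architecture concrete, the sketch below combines the three ingredients named in the abstract: a NeRF-style positional encoding of the input point, sine (SIREN-style) activations, and surface normals recovered as the gradient of the predicted SDF (here via finite differences). This is a minimal NumPy illustration under assumed hyperparameters (`omega0`, `num_freqs`, layer widths), not the authors' implementation.

```python
import numpy as np


def positional_encoding(x, num_freqs=4):
    """Lift 3D points to sin/cos features at geometrically increasing
    frequencies, which helps an MLP represent high-frequency detail."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * x))
        feats.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(feats, axis=-1)


class SirenSDF:
    """Tiny MLP with periodic (sine) activations mapping a latent code plus
    an encoded 3D point to a signed distance value.  Hyperparameters are
    illustrative, not taken from the paper."""

    def __init__(self, latent_dim=8, hidden=32, num_freqs=4, omega0=30.0, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = latent_dim + 3 * (1 + 2 * num_freqs)
        self.num_freqs = num_freqs
        self.omega0 = omega0
        # SIREN-style init: first layer uniform in +-1/in_dim,
        # later layers scaled down by omega0 to keep activations stable.
        self.W1 = rng.uniform(-1.0 / in_dim, 1.0 / in_dim, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        bound = np.sqrt(6.0 / hidden) / omega0
        self.W2 = rng.uniform(-bound, bound, (hidden, hidden))
        self.b2 = np.zeros(hidden)
        self.W3 = rng.uniform(-bound, bound, (hidden, 1))
        self.b3 = np.zeros(1)

    def __call__(self, latent, points):
        # Condition on the latent shape code by concatenating it to each point.
        z = np.broadcast_to(latent, (points.shape[0], latent.shape[-1]))
        h = np.concatenate([z, positional_encoding(points, self.num_freqs)], axis=-1)
        h = np.sin(self.omega0 * (h @ self.W1 + self.b1))
        h = np.sin(self.omega0 * (h @ self.W2 + self.b2))
        return (h @ self.W3 + self.b3).squeeze(-1)  # predicted signed distances


def sdf_normal(model, latent, p, eps=1e-3):
    """Surface normal at point p (shape (3,)) as the normalized
    finite-difference gradient of the SDF."""
    grad = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        grad[i] = (model(latent, (p + e)[None]) - model(latent, (p - e)[None]))[0] / (2 * eps)
    return grad / (np.linalg.norm(grad) + 1e-12)
```

In a full pipeline, the zero-level set of the trained SDF would be extracted (e.g. with marching cubes), and the normal term would enter the training loss to encourage the SDF gradient to match ground-truth surface normals; both stages are omitted here for brevity.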
