

arXiv:1812.02716 [cs.CV]

Cross-Domain 3D Equivariant Image Embeddings

Carlos Esteves, Avneesh Sud, Zhengyi Luo, Kostas Daniilidis, Ameesh Makadia

Published 2018-12-06 (Version 1)

Spherical convolutional networks have recently been introduced as tools to learn powerful feature representations of 3D shapes. Spherical CNNs are equivariant to 3D rotations, making them ideally suited for applications where 3D data may be observed in arbitrary orientations. In this paper we learn 2D image embeddings with a similar equivariant structure: embedding the image of a 3D object should commute with rotations of the object. We introduce a cross-domain embedding from 2D images into a spherical CNN latent space. Our model is supervised only by target embeddings obtained from a spherical CNN pretrained for 3D shape classification. The trained model learns to encode images with 3D shape properties and is equivariant to 3D rotations of the observed object. We show that learning a rich embedding for images with appropriate geometric structure is in itself sufficient for tackling numerous applications. Evidence from two different applications, relative pose estimation and novel view synthesis, demonstrates that equivariant embeddings suffice for both without any task-specific supervised training.
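The central property described above, that embedding an object's rotated view should equal rotating the embedding of the original view, can be illustrated with a toy sketch (my own, not the paper's code). For simplicity it uses the circle group (cyclic shifts) in place of SO(3): a circular cross-correlation is shift-equivariant, mirroring how spherical CNNs are rotation-equivariant.

```python
import numpy as np

def embed(x, kernel):
    """Shift-equivariant 'embedding': circular cross-correlation with a fixed kernel."""
    n = len(x)
    return np.array([np.dot(np.roll(kernel, i), x) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # stand-in for an object's observed signal
k = rng.normal(size=16)   # fixed embedding kernel
shift = 5                 # stand-in for a rotation of the object

lhs = embed(np.roll(x, shift), k)   # embed the "rotated" object
rhs = np.roll(embed(x, k), shift)   # "rotate" the embedding instead
print(np.allclose(lhs, rhs))       # equivariance: both paths agree
```

A nonlinear network with the analogous property on SO(3) is what lets the paper reuse one embedding for relative pose estimation and novel view synthesis without task-specific training.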
