arXiv Analytics

arXiv:1605.03557 [cs.CV]

View Synthesis by Appearance Flow

Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, Alexei A. Efros

Published 2016-05-11 (Version 1)

Given one or more images of an object (or a scene), is it possible to synthesize a new image of the same instance observed from an arbitrary viewpoint? In this paper, we tackle this problem, known as novel view synthesis, by reformulating it as a pixel copying task that avoids the notorious difficulties of generating pixels from scratch. Our approach is built on the observation that the visual appearance of different views of the same instance is highly correlated. This correlation can be learned explicitly by training a convolutional neural network (CNN) to predict appearance flows -- 2-D coordinate vectors specifying which pixels in the input view should be used to reconstruct the target view. We show that for both objects and scenes, our approach generates higher-quality synthesized views, with crisper textures and boundaries, than previous CNN-based techniques.
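To make the pixel-copying formulation concrete, here is a minimal, hedged sketch of the idea: a CNN predicts a dense field of 2-D sampling coordinates (the appearance flow) over the input view, and the target view is reconstructed by differentiable bilinear sampling at those coordinates. The sketch below uses PyTorch and `grid_sample` as one possible implementation; the toy encoder-decoder, the 6-D viewpoint code, and all variable names are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of view synthesis via appearance flow (assumptions noted above).
# A CNN predicts, for every target-view pixel, the (x, y) location in the input
# view whose color should be copied; bilinear sampling makes this differentiable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceFlowSketch(nn.Module):
    """Illustrative stand-in for the paper's network; not the actual architecture."""
    def __init__(self, viewpoint_dim=6):
        super().__init__()
        # Toy encoder/decoder; the real model is much deeper.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.view_fc = nn.Linear(viewpoint_dim, 64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),  # 2-channel flow in [-1, 1]
        )

    def forward(self, src_img, viewpoint):
        feat = self.encoder(src_img)                      # (B, 64, H/4, W/4)
        v = self.view_fc(viewpoint)                       # (B, 64)
        v = v[:, :, None, None].expand(-1, -1, feat.shape[2], feat.shape[3])
        flow = self.decoder(torch.cat([feat, v], dim=1))  # (B, 2, H, W), normalized coords
        # grid_sample expects a (B, H, W, 2) grid of x, y coordinates in [-1, 1].
        grid = flow.permute(0, 2, 3, 1)
        # Copy pixels from the input view at the predicted coordinates.
        return F.grid_sample(src_img, grid, mode='bilinear', align_corners=True)

# Minimal usage: one 3x64x64 input view and a 6-D relative viewpoint code.
model = AppearanceFlowSketch()
src = torch.rand(1, 3, 64, 64)
view = torch.rand(1, 6)
target_pred = model(src, view)   # trained against the ground-truth target view
print(target_pred.shape)         # torch.Size([1, 3, 64, 64])
```

Because bilinear sampling is differentiable, such a pipeline can be trained end-to-end with only a reconstruction loss against the ground-truth target view; since every output pixel is copied from the input rather than generated from scratch, the synthesized views tend to retain the crisp texture of the source image.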

Related articles:
arXiv:1601.07532 [cs.CV] (Published 2016-01-27)
Learning to Extract Motion from Videos in Convolutional Neural Networks
arXiv:1409.4326 [cs.CV] (Published 2014-09-15)
Computing the Stereo Matching Cost with a Convolutional Neural Network
arXiv:1504.02351 [cs.CV] (Published 2015-04-09)
When Face Recognition Meets with Deep Learning: an Evaluation of Convolutional Neural Networks for Face Recognition