arXiv Analytics

arXiv:1906.04728 [cs.CV]

Shapes and Context: In-the-Wild Image Synthesis & Manipulation

Aayush Bansal, Yaser Sheikh, Deva Ramanan

Published 2019-06-11, Version 1

We introduce a data-driven approach for interactively synthesizing in-the-wild images from semantic label maps. Our approach differs dramatically from recent work in this space in that it uses no learning. Instead, it relies on simple but classic tools for matching scene context, shapes, and parts to a stored library of exemplars. Though simple, this approach has several notable advantages over recent work: (1) because nothing is learned, it is not limited to specific training data distributions (such as cityscapes, facades, or faces); (2) it can synthesize arbitrarily high-resolution images, limited only by the resolution of the exemplar library; (3) by appropriately composing shapes and parts, it can generate an exponentially large set of viable candidate output images (that can, say, be interactively searched by a user). We present results on the diverse COCO dataset, significantly outperforming learning-based approaches on standard image synthesis metrics. Finally, we explore user interaction and user controllability, demonstrating that our system can be used as a platform for user-driven content creation.
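To make the retrieval idea concrete, here is a minimal sketch of the kind of non-learned exemplar matching the abstract describes: given a query shape from a semantic label map, pick the library exemplar whose mask best overlaps it. The IoU criterion, the `retrieve_exemplar` helper, and the toy library are illustrative assumptions, not the paper's actual matching pipeline (which also matches scene context and parts).

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union between two binary shape masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def retrieve_exemplar(query_mask, library):
    """Return the exemplar whose mask best matches the query shape.

    library: list of (mask, exemplar) pairs, masks same size as the query.
    Hypothetical helper for illustration only.
    """
    best_mask, best_exemplar = max(library, key=lambda e: mask_iou(query_mask, e[0]))
    return best_exemplar

# Toy example: two 4x4 exemplar masks; the "wide" one overlaps the query more.
query = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
lib = [
    (np.array([[1, 0, 0, 0]] * 4, dtype=bool), "narrow"),  # IoU = 0.5
    (np.array([[1, 1, 1, 0]] * 4, dtype=bool), "wide"),    # IoU = 2/3
]
print(retrieve_exemplar(query, lib))  # → wide
```

Because matching is pure retrieval, the output resolution is bounded only by the stored exemplars, and returning the top-k matches instead of the single best one yields the large candidate set the abstract mentions.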

Comments: Project Page:
Journal: CVPR 2019
Categories: cs.CV