arXiv Analytics

arXiv:1902.06923 [cs.CV]

Using Conditional Generative Adversarial Networks to Generate Ground-Level Views From Overhead Imagery

Xueqing Deng, Yi Zhu, Shawn Newsam

Published 2019-02-19Version 1

This paper develops a deep-learning framework that synthesizes a ground-level view of a location given an overhead image. We propose a novel conditional generative adversarial network (cGAN) whose trained generator produces realistic-looking and representative ground-level images using overhead imagery as auxiliary information. The generator is an encoder-decoder network, which allows us to compare low-level features, high-level features, and their concatenation for encoding the overhead imagery. We also demonstrate how our framework can be used to perform land cover classification by modifying the trained cGAN to extract features from overhead imagery. This is interesting because, although the modified cGAN serves as a feature extractor for overhead imagery, it incorporates knowledge of how locations look from the ground.
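The generator described above can be sketched as an encoder-decoder network that maps an overhead image to a ground-level view. The following is a minimal, hypothetical sketch in PyTorch; the layer counts, channel sizes, and activations are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch of a cGAN generator conditioned on overhead imagery.
# Layer sizes and names are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the overhead image into a feature code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Decoder: expand the feature code into a ground-level image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, overhead):
        return self.decoder(self.encoder(overhead))

g = Generator()
overhead = torch.randn(1, 3, 64, 64)  # dummy overhead image patch
ground = g(overhead)                  # synthesized ground-level view
```

In this encoder-decoder arrangement, the intermediate feature code is what the paper later reuses: freezing the trained encoder and attaching a classifier head yields the land cover feature extractor described in the abstract.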

Related articles:
arXiv:2001.05853 [cs.CV] (Published 2020-01-13)
Identifying Table Structure in Documents using Conditional Generative Adversarial Networks
arXiv:1701.05957 [cs.CV] (Published 2017-01-21)
Image De-raining Using a Conditional Generative Adversarial Network
arXiv:1712.01833 [cs.CV] (Published 2017-12-06)
Towards Recovery of Conditional Vectors from Conditional Generative Adversarial Networks