arXiv Analytics

arXiv:2111.15264 [cs.CV]

EdiBERT, a generative model for image editing

Thibaut Issenhuth, Ugo Tanielian, Jérémie Mary, David Picard

Published 2021-11-30, updated 2022-02-04 (version 2)

Advances in computer vision are pushing the limits of image manipulation, with generative models sampling detailed images across a variety of tasks. However, a specialized model is often developed and trained for each specific task, even though many image editing tasks share similarities. In denoising, inpainting, or image compositing, the goal is always to generate a realistic image from a low-quality one. In this paper, we take a step towards a unified approach for image editing. To do so, we propose EdiBERT, a bidirectional transformer trained in the discrete latent space built by a vector-quantized auto-encoder. We argue that such a bidirectional model is well suited to image manipulation, since any patch can be re-sampled conditioned on the whole image. Using this single, straightforward training objective, we show that the resulting model matches state-of-the-art performance on a wide variety of tasks: image denoising, image completion, and image compositing.
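
For intuition, below is a minimal, illustrative sketch in PyTorch of the mechanism the abstract describes: a bidirectional (BERT-like) transformer over the discrete tokens produced by a vector-quantized auto-encoder, where the tokens covering an edited region are re-sampled conditioned on the entire token sequence. All names, dimensions, and the toy two-layer encoder are assumptions chosen for illustration, not the paper's actual architecture or settings.

    import torch
    import torch.nn as nn

    VOCAB = 512    # assumed codebook size of the VQ auto-encoder
    SEQ_LEN = 256  # assumed 16x16 grid of latent tokens
    DIM = 128      # assumed transformer width

    class BidirectionalTokenModel(nn.Module):
        """Toy BERT-like model over discrete latent tokens."""
        def __init__(self):
            super().__init__()
            self.tok_emb = nn.Embedding(VOCAB, DIM)
            self.pos_emb = nn.Parameter(torch.zeros(1, SEQ_LEN, DIM))
            layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(DIM, VOCAB)

        def forward(self, tokens):                   # tokens: (B, SEQ_LEN) long
            h = self.tok_emb(tokens) + self.pos_emb  # no causal mask: every token
            h = self.encoder(h)                      # attends to the whole image
            return self.head(h)                      # (B, SEQ_LEN, VOCAB) logits

    @torch.no_grad()
    def resample_region(model, tokens, region):
        """Re-sample the tokens in `region`, conditioned on the full sequence."""
        probs = model(tokens).softmax(-1)            # (B, SEQ_LEN, VOCAB)
        for i in region:
            tokens[:, i] = torch.multinomial(probs[:, i], 1).squeeze(-1)
        return tokens                                # decode with the VQ decoder

    model = BidirectionalTokenModel()
    codes = torch.randint(0, VOCAB, (1, SEQ_LEN))    # latent codes of a source image
    edited = resample_region(model, codes, region=range(100, 120))
    print(edited.shape)                              # torch.Size([1, 256])

Because attention is bidirectional rather than causal, the logits for each re-sampled position already condition on both the kept and the edited parts of the image, which is what makes a single masked-token objective serve denoising, completion, and compositing alike.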

Related articles:
arXiv:1508.04035 [cs.CV] (Published 2015-08-17)
A Generative Model for Multi-Dialect Representation
arXiv:1910.07169 [cs.CV] (Published 2019-10-16)
Generative Modeling for Small-Data Object Detection
arXiv:1906.11881 [cs.CV] (Published 2019-06-11)
Explicit Disentanglement of Appearance and Perspective in Generative Models