arXiv:2306.00986 [cs.CV]

Diffusion Self-Guidance for Controllable Image Generation

Dave Epstein, Allan Jabri, Ben Poole, Alexei A. Efros, Aleksander Holynski

Published 2023-06-01 (Version 1)

Large-scale generative models are capable of producing high-quality images from detailed text descriptions. However, many aspects of an image are difficult or impossible to convey through text. We introduce self-guidance, a method that provides greater control over generated images by guiding the internal representations of diffusion models. We demonstrate that properties such as the shape, location, and appearance of objects can be extracted from these representations and used to steer sampling. Self-guidance works similarly to classifier guidance, but uses signals present in the pretrained model itself, requiring no additional models or training. We show how a simple set of properties can be composed to perform challenging image manipulations, such as modifying the position or size of objects, merging the appearance of objects in one image with the layout of another, composing objects from many images into one, and more. We also show that self-guidance can be used to edit real images. For results and an interactive demo, see our project page at https://dave.ml/selfguidance/.
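The core idea, as the abstract describes it, is to perturb sampling with the gradient of an energy computed on the model's internal representations, analogous to classifier guidance. The sketch below is a toy illustration of that recipe based only on the abstract, not the authors' released code: toy_denoiser, centroid, and self_guided_step are hypothetical stand-ins, with a fake attention map and a deliberately crude update in place of a real pretrained U-Net and DDIM/DDPM sampler.

```python
import torch

def toy_denoiser(x, t):
    # Stand-in for a pretrained diffusion U-Net's noise prediction.
    # The softmax map plays the role of the model's internal
    # (e.g. cross-attention) representations.
    eps = 0.1 * x  # placeholder noise estimate
    attn = torch.softmax(x.flatten(1), dim=-1).view_as(x)
    return eps, attn

def centroid(attn):
    # A property read off the internal representation: the attention
    # map's center of mass, a proxy for an object's location.
    h, w = attn.shape[-2:]
    ys = torch.linspace(0, 1, h).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w).view(1, 1, 1, w)
    mass = attn.sum(dim=(-2, -1))
    cy = (attn * ys).sum(dim=(-2, -1)) / mass
    cx = (attn * xs).sum(dim=(-2, -1)) / mass
    return torch.cat([cy, cx], dim=-1)  # shape (batch, 2)

def self_guided_step(x, t, target_xy, scale=1.0, step_size=0.1):
    x = x.detach().requires_grad_(True)
    eps, attn = toy_denoiser(x, t)
    # Guidance energy: squared distance between the object's current
    # location (per the attention map) and the desired location.
    g = ((centroid(attn) - target_xy) ** 2).sum()
    grad = torch.autograd.grad(g, x)[0]
    # Steer sampling by adding the energy gradient to the noise
    # prediction, as in classifier guidance.
    eps_hat = eps + scale * grad
    # One crude denoising update (a real sampler would use DDIM/DDPM math).
    return (x - step_size * eps_hat).detach()

# Usage: nudge the "object" toward the upper right of the canvas.
x = torch.randn(1, 1, 8, 8)
target = torch.tensor([[0.25, 0.75]])  # (y, x) in [0, 1]
for t in reversed(range(10)):
    x = self_guided_step(x, t, target)
```

Because the guidance term is just a differentiable function of the latent, several such properties (shape, location, appearance) can be summed into one energy and steered jointly, which is what makes the compositional edits described above possible.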

Related articles:
arXiv:2211.14305 [cs.CV] (Published 2022-11-25)
SpaText: Spatio-Textual Representation for Controllable Image Generation
arXiv:2304.13722 [cs.CV] (Published 2023-04-26)
Controllable Image Generation via Collage Representations
arXiv:2006.10569 [cs.CV] (Published 2020-06-18)
Neural Graphics Pipeline for Controllable Image Generation