arXiv Analytics


arXiv:2312.15964 [cs.CV]

Semantic Guidance Tuning for Text-To-Image Diffusion Models

Hyun Kang, Dohae Lee, Myungjin Shin, In-Kwon Lee

Published 2023-12-26 (Version 1)

Recent advancements in Text-to-Image (T2I) diffusion models have demonstrated impressive success in generating high-quality images with zero-shot generalization capabilities. Yet, current models struggle to closely adhere to prompt semantics, often misrepresenting or overlooking specific attributes. To address this, we propose a simple, training-free approach that modulates the guidance direction of diffusion models during inference. We first decompose the prompt semantics into a set of concepts, and monitor the guidance trajectory in relation to each concept. Our key observation is that deviations in the model's adherence to prompt semantics are highly correlated with divergence of the guidance from one or more of these concepts. Based on this observation, we devise a technique to steer the guidance direction towards any concept from which the model diverges. Extensive experimentation validates that our method improves the semantic alignment of images generated by diffusion models in response to prompts. The project page is available at: https://korguy.github.io/
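The core idea — monitoring how the guidance vector aligns with each concept embedding and nudging it toward concepts it has drifted away from — can be illustrated with a minimal sketch. This is not the authors' exact formulation: the `steer_guidance` function, its `threshold` and `strength` parameters, and the use of cosine similarity as the divergence measure are illustrative assumptions; in the actual method the guidance and concept representations live in the diffusion model's latent/embedding space.

```python
import numpy as np

def steer_guidance(guidance, concept_embs, threshold=0.0, strength=0.5):
    """Hypothetical sketch of concept-aware guidance steering.

    guidance: current guidance direction (1-D array).
    concept_embs: list of concept embedding vectors (same dimension).
    threshold: minimum cosine similarity before a concept counts as "diverged".
    strength: how strongly to pull the guidance back toward a diverged concept.
    """
    g_hat = guidance / (np.linalg.norm(guidance) + 1e-8)
    steered = guidance.astype(float).copy()
    for c in concept_embs:
        c_hat = c / (np.linalg.norm(c) + 1e-8)
        sim = float(g_hat @ c_hat)        # cosine alignment with this concept
        if sim < threshold:               # guidance has drifted from the concept
            # add a correction proportional to how far below threshold we are
            steered += strength * (threshold - sim) * c_hat
    return steered
```

At inference time such a correction would be applied at each denoising step, so small per-step nudges accumulate into improved semantic alignment without any retraining.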

Related articles: Most relevant | Search more
arXiv:2404.11589 [cs.CV] (Published 2024-04-17)
Prompt Optimizer of Text-to-Image Diffusion Models for Abstract Concept Understanding
arXiv:2302.08453 [cs.CV] (Published 2023-02-16)
T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
arXiv:2303.15233 [cs.CV] (Published 2023-03-27)
Text-to-Image Diffusion Models are Zero-Shot Classifiers