arXiv:2311.13833 [cs.CV]

Lego: Learning to Disentangle and Invert Concepts Beyond Object Appearance in Text-to-Image Diffusion Models

Saman Motamed, Danda Pani Paudel, Luc Van Gool

Published 2023-11-23 (Version 1)

Diffusion models have revolutionized generative content creation, and text-to-image (T2I) diffusion models in particular have increased the creative freedom of users by allowing scene synthesis using natural language. T2I models excel at synthesizing concepts such as nouns, appearances, and styles. To enable customized content creation based on a few example images of a concept, methods such as Textual Inversion and DreamBooth invert the desired concept and enable synthesizing it in new scenes. However, inverting more general concepts that go beyond object appearance and style (adjectives and verbs) through natural language remains a challenge. Two key characteristics of these concepts limit current inversion methods: (1) adjectives and verbs are entangled with nouns (the subject), which hinders appearance-based inversion methods because the subject's appearance leaks into the concept embedding; and (2) describing such concepts often requires more than a single word embedding ("being frozen in ice", "walking on a tightrope", etc.), which current methods do not handle. In this study, we introduce Lego, a textual inversion method designed to invert subject-entangled concepts from a few example images. Lego disentangles concepts from their associated subjects using a simple yet effective Subject Separation step and employs a Context Loss that guides the inversion of single- and multi-embedding concepts. In a thorough user study, Lego-generated concepts were preferred over the baseline more than 70% of the time. Additionally, visual question answering using a large language model suggested that Lego-generated concepts are better aligned with the textual descriptions of the concepts.
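To make the optimization structure the abstract describes concrete, below is a minimal, self-contained PyTorch sketch. It is not the authors' released code: the tiny MLP stands in for the frozen T2I denoiser, the token indices, the extra "subject" embedding, the cosine form of the context loss, and the 0.1 loss weight are all illustrative assumptions about how a Subject Separation step and a Context Loss could be wired into a textual-inversion loop.

```python
# Toy sketch of Lego-style inversion (illustrative assumptions throughout).
# A small MLP stands in for the frozen diffusion denoiser; real use would
# plug the learned embeddings into a T2I model's text conditioning.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_concept_tokens = 64, 2          # multi-embedding concept, e.g. "frozen in ice"
vocab = nn.Embedding(1000, dim)        # stand-in for the frozen text-embedding table
vocab.requires_grad_(False)

# Learnable embeddings: tokens for the concept itself, plus a separate
# "subject" placeholder intended to absorb subject appearance so it does
# not leak into the concept tokens (a hypothetical realization of the
# paper's Subject Separation step).
concept = nn.Parameter(torch.randn(n_concept_tokens, dim) * 0.02)
subject = nn.Parameter(torch.randn(1, dim) * 0.02)

denoiser = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
for p in denoiser.parameters():        # the generative model stays frozen
    p.requires_grad_(False)

# Frozen embedding of a natural-language description of the concept,
# used by the (assumed) context loss; token ids 7 and 42 are arbitrary.
with torch.no_grad():
    context_target = vocab(torch.tensor([7, 42])).mean(0, keepdim=True)

opt = torch.optim.AdamW([concept, subject], lr=1e-3)
for step in range(200):
    # Condition on subject + concept tokens, as a prompt embedding would.
    cond = torch.cat([subject, concept], dim=0).mean(0, keepdim=True)
    noise = torch.randn(1, dim)
    # Standard inversion objective: the frozen denoiser should predict the
    # noise for the example images (a random target stands in here).
    diffusion_loss = F.mse_loss(denoiser(cond + noise), noise)
    # Assumed context loss: keep the concept embeddings near the
    # description embedding so they stay semantically on-topic.
    context_loss = 1.0 - F.cosine_similarity(
        concept.mean(0, keepdim=True), context_target).mean()
    loss = diffusion_loss + 0.1 * context_loss   # 0.1 is an arbitrary weight
    opt.zero_grad(); loss.backward(); opt.step()
```

The design intent of the sketch is that the dedicated subject embedding gives subject appearance somewhere to go during optimization, so the concept tokens are free to capture only the adjective or verb; in a real pipeline the denoiser would be the frozen T2I UNet and the reconstruction targets would come from the few example images.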

Related articles:
arXiv:2301.13826 [cs.CV] (Published 2023-01-31)
Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models
arXiv:2306.09869 [cs.CV] (Published 2023-06-16)
Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
arXiv:2406.12042 [cs.CV] (Published 2024-06-17)
Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models