arXiv Analytics

arXiv:2306.00974 [cs.CV]

Intriguing Properties of Text-guided Diffusion Models

Qihao Liu, Adam Kortylewski, Yutong Bai, Song Bai, Alan Yuille

Published 2023-06-01 (Version 1)

Text-guided diffusion models (TDMs) are widely applied but can fail unexpectedly. Common failures include: (i) natural-looking text prompts generating images with the wrong content, or (ii) different random samples of the latent variables generating vastly different, and even unrelated, outputs despite being conditioned on the same text prompt. In this work, we aim to study and understand the failure modes of TDMs in more detail. To achieve this, we propose SAGE, an adversarial attack on TDMs that uses image classifiers as surrogate loss functions, to search over the discrete prompt space and the high-dimensional latent space of TDMs to automatically discover unexpected behaviors and failure cases in image generation. We make several technical contributions to ensure that SAGE finds failure cases of the diffusion model, rather than the classifier, and verify this in a human study. Our study reveals four intriguing properties of TDMs that have not been systematically studied before: (1) We find a variety of natural text prompts producing images that fail to capture the semantics of the input texts. We categorize these failures into ten distinct types based on the underlying causes. (2) We find samples in the latent space (which are not outliers) that lead to distorted images independent of the text prompt, suggesting that parts of the latent space are not well-structured. (3) We also find latent samples that lead to natural-looking images which are unrelated to the text prompt, implying a potential misalignment between the latent and prompt spaces. (4) By appending a single adversarial token embedding to an input prompt we can generate a variety of specified target objects, while only minimally affecting the CLIP score. This demonstrates the fragility of language representations and raises potential safety concerns.
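To make the latent-space search concrete, the following is a minimal, hypothetical sketch of the core idea: use an image classifier as a surrogate loss and optimize a perturbation of the latent so the generated image no longer matches the prompt's class. This is not the authors' SAGE implementation (which adds further techniques to ensure the discovered failures come from the diffusion model rather than the classifier, and also searches the discrete prompt space); the `generate` and `classifier` callables below are assumed stand-ins for a differentiable text-conditioned sampler and a pretrained classifier.

```python
# Minimal sketch (not the paper's code): gradient-based search over the latent
# space of a text-guided diffusion model, using an image classifier as a
# surrogate loss. `generate` and `classifier` are hypothetical callables.
import torch

def search_failure_latent(generate, classifier, prompt_label: int,
                          latent_shape=(1, 4, 64, 64), steps: int = 50,
                          lr: float = 0.05):
    """Perturb an in-distribution latent sample so the generated image stops
    matching the prompt's class, while keeping the perturbation small."""
    z0 = torch.randn(latent_shape)                 # initial latent (not an outlier)
    delta = torch.zeros_like(z0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        z = z0 + delta
        image = generate(z)                        # differentiable image from the TDM
        logits = classifier(image)                 # surrogate: does it show the prompt?
        # Minimizing the negative cross-entropy maximizes the classifier's loss
        # on the prompt class, i.e. pushes toward a likely failure case.
        loss = -torch.nn.functional.cross_entropy(
            logits, torch.tensor([prompt_label]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbed latent close to its starting point.
        with torch.no_grad():
            delta.clamp_(-0.5, 0.5)

    return (z0 + delta).detach()
```

In use, `generate` would wrap a diffusion sampler that admits gradients with respect to its latent input, and `classifier` a model matching the prompt's object category; both are assumptions of this sketch rather than components specified in the abstract.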

Comments: Code will be available at: https://github.com/qihao067/SAGE/
Categories: cs.CV
Related articles:
arXiv:1709.03439 [cs.CV] (Published 2017-09-11)
Why Do Deep Neural Networks Still Not Recognize These Images?: A Qualitative Analysis on Failure Cases of ImageNet Classification
arXiv:2011.00954 [cs.CV] (Published 2020-11-02)
Learning a Deep Reinforcement Learning Policy Over the Latent Space of a Pre-trained GAN for Semantic Age Manipulation
arXiv:2007.06600 [cs.CV] (Published 2020-07-13)
Closed-Form Factorization of Latent Semantics in GANs