arXiv Analytics

arXiv:2303.15233 [cs.CV]

Text-to-Image Diffusion Models are Zero-Shot Classifiers

Kevin Clark, Priyank Jaini

Published 2023-03-27, Version 1

The excellent generative capabilities of text-to-image diffusion models suggest they learn informative representations of image-text data. However, what knowledge their representations capture is not fully understood, and they have not been thoroughly explored on downstream tasks. We investigate diffusion models by proposing a method for evaluating them as zero-shot classifiers. The key idea is using a diffusion model's ability to denoise a noised image given a text description of a label as a proxy for that label's likelihood. We apply our method to Imagen, using it to probe fine-grained aspects of Imagen's knowledge and comparing it with CLIP's zero-shot abilities. Imagen performs competitively with CLIP on a wide range of zero-shot image classification datasets. Additionally, it achieves state-of-the-art results on shape/texture bias tests and can successfully perform attribute binding while CLIP cannot. Although generative pre-training is prevalent in NLP, visual foundation models often use other methods such as contrastive learning. Based on our findings, we argue that generative pre-training should be explored as a compelling alternative for vision and vision-language problems.
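The classification procedure the abstract describes can be sketched as follows. This is a minimal, hedged illustration of the general idea, not the paper's implementation: the real method conditions Imagen's denoiser on a text prompt for each candidate label and scores labels by how well the model removes added noise. Here `denoise` is a hypothetical toy stand-in (a prototype-matching denoiser), and all names are assumptions for illustration.

```python
# Sketch: zero-shot classification via denoising loss (toy stand-in, not Imagen).
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": each label has a prototype image vector. A real diffusion
# model would instead condition its noise prediction on an encoded prompt.
PROTOTYPES = {
    "cat": np.array([1.0, 0.0, 0.0, 1.0]),
    "dog": np.array([0.0, 1.0, 1.0, 0.0]),
}

def denoise(noised_image, label):
    # Hypothetical text-conditioned denoiser: predicts the added noise as
    # everything that deviates from the label's prototype. It is exact
    # (zero error) only when the clean image matches the prototype.
    return noised_image - PROTOTYPES[label]

def classify(image, labels, n_samples=8):
    # Score each label by the expected denoising (noise-prediction) error,
    # averaged over noise draws; the lowest loss is the predicted label.
    losses = {}
    for label in labels:
        total = 0.0
        for _ in range(n_samples):
            eps = rng.normal(size=image.shape)   # noise added to the image
            noised = image + eps
            eps_hat = denoise(noised, label)     # model's noise estimate
            total += np.mean((eps - eps_hat) ** 2)
        losses[label] = total / n_samples
    return min(losses, key=losses.get)

print(classify(PROTOTYPES["cat"], ["cat", "dog"]))  # → cat
```

In the actual method, the mean-squared noise-prediction error under each text prompt serves as a proxy for the (negative log-)likelihood of that label, so the argmin over prompts acts as a zero-shot classifier.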

Related articles:
arXiv:2302.08453 [cs.CV] (Published 2023-02-16)
T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
arXiv:2303.17591 [cs.CV] (Published 2023-03-30)
Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models
arXiv:2404.11589 [cs.CV] (Published 2024-04-17)
Prompt Optimizer of Text-to-Image Diffusion Models for Abstract Concept Understanding