arXiv:2307.12560 [cs.CV]

Interpolating between Images with Diffusion Models

Clinton J. Wang, Polina Golland

Published 2023-07-24 (Version 1)

One little-explored frontier of image generation and editing is the task of interpolating between two input images, a feature missing from all currently deployed image generation pipelines. We argue that such a feature can expand the creative applications of such models, and propose a method for zero-shot interpolation using latent diffusion models. We apply interpolation in the latent space at a sequence of decreasing noise levels, then perform denoising conditioned on interpolated text embeddings derived from textual inversion and (optionally) subject poses. For greater consistency, or to specify additional criteria, we can generate several candidates and use CLIP to select the highest quality image. We obtain convincing interpolations across diverse subject poses, image styles, and image content, and show that standard quantitative metrics such as FID are insufficient to measure the quality of an interpolation. Code and data are available at https://clintonjwang.github.io/interpolation.
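The abstract outlines the core recipe: noise the latents of both input images, interpolate the noisy latents, and denoise while conditioning on interpolated text embeddings, ranking several candidates with CLIP. Below is a minimal sketch of that idea, not the authors' released code: the callables `encode`, `add_noise`, `denoise`, and `clip_score` are hypothetical placeholders standing in for a pretrained latent diffusion model's components.

```python
# Minimal sketch (assumptions, not the authors' implementation) of
# interpolation with a latent diffusion model, given placeholder callables:
#   encode     : image -> latent
#   add_noise  : (latent, noise_level) -> noisy latent
#   denoise    : (noisy latent, text embedding, noise_level) -> image
#   clip_score : image -> scalar quality score
import torch

def slerp(z0: torch.Tensor, z1: torch.Tensor, alpha: float,
          eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two latent tensors."""
    a, b = z0.flatten(), z1.flatten()
    cos_omega = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps),
                            -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - alpha) * z0 + alpha * z1
    return (torch.sin((1 - alpha) * omega) * z0
            + torch.sin(alpha * omega) * z1) / torch.sin(omega)

def interpolate_pair(img_a, img_b, emb_a, emb_b, alpha, noise_level,
                     encode, add_noise, denoise, clip_score,
                     num_candidates: int = 4):
    """Produce one interpolated frame at mixing weight `alpha`.

    Latents are noised to `noise_level`, spherically interpolated, and
    denoised while conditioning on linearly interpolated text embeddings
    (e.g. obtained via textual inversion). Several candidates are sampled
    and the one with the highest CLIP score is returned.
    """
    z_a, z_b = encode(img_a), encode(img_b)
    emb_mix = (1 - alpha) * emb_a + alpha * emb_b
    candidates = []
    for _ in range(num_candidates):
        # Fresh noise per candidate gives CLIP distinct outputs to rank.
        z_mix = slerp(add_noise(z_a, noise_level),
                      add_noise(z_b, noise_level), alpha)
        candidates.append(denoise(z_mix, emb_mix, noise_level))
    return max(candidates, key=clip_score)
```

A full interpolation sequence would call a routine like `interpolate_pair` over a schedule of alpha values and decreasing noise levels; the authors' method, including the optional pose conditioning, is detailed in the paper and code linked above.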

Comments: Presented at ICML 2023 Workshop on Challenges of Deploying Generative AI
Categories: cs.CV
Related articles:
arXiv:2211.17084 [cs.CV] (Published 2022-11-30)
High-Fidelity Guided Image Synthesis with Latent Diffusion Models
arXiv:2308.12453 [cs.CV] (Published 2023-08-23)
Augmenting medical image classifiers with synthetic data from latent diffusion models
arXiv:2406.08337 [cs.CV] (Published 2024-06-12)
WMAdapter: Adding WaterMark Control to Latent Diffusion Models