arXiv Analytics

arXiv:2405.13540 [cs.CV]

Directly Denoising Diffusion Model

Dan Zhang, Jingjing Wang, Feng Luo

Published 2024-05-22Version 1

In this paper, we present the Directly Denoising Diffusion Model (DDDM): a simple and generic approach for generating realistic images with few-step sampling, while multistep sampling is still preserved for better performance. DDDMs require neither delicately designed samplers nor distillation from pre-trained models. DDDMs train the diffusion model conditioned on an estimated target generated from its own previous training iterations. To generate images, samples from the previous time step are also taken into account, guiding the generation process iteratively. We further propose Pseudo-LPIPS, a novel metric loss that is more robust to varying hyperparameter values. Despite its simplicity, the proposed approach achieves strong performance on benchmark datasets. Our model achieves FID scores of 2.57 and 2.33 on CIFAR-10 with one-step and two-step sampling respectively, surpassing those obtained from GANs and distillation-based models. By extending sampling to 1000 steps, we further reduce the FID score to 1.79, matching state-of-the-art methods in the literature. On ImageNet 64x64, our approach is a competitive contender against leading models.
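The abstract describes sampling in which each step is conditioned on the model's own estimate from the previous step. The sketch below illustrates that self-conditioned refinement loop in minimal pure Python; the function names, the flat-list representation, and the `model(x_noisy, x_est, t)` interface are illustrative assumptions, not the paper's actual implementation.

```python
import random

def dddm_sample(model, dim, num_steps, seed=0):
    """Hypothetical sketch of DDDM-style iterative sampling.

    The model repeatedly refines its own previous estimate of the clean
    sample, so one step already yields an image and more steps refine it.
    Operates on a flat list of floats for simplicity.
    """
    rng = random.Random(seed)
    x_noisy = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # start from pure noise
    x_est = [0.0] * dim  # running estimate of the clean sample
    for t in reversed(range(num_steps)):
        # Condition on the noisy input AND the previous estimate
        # (the "samples generated from the previous time step").
        x_est = model(x_noisy, x_est, t)
    return x_est
```

With `num_steps=1` this reduces to the one-step sampling regime highlighted in the abstract; larger values correspond to the multistep refinement that further improves FID.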

Related articles:
arXiv:1912.11370 [cs.CV] (Published 2019-12-24)
Large Scale Learning of General Visual Representations for Transfer
arXiv:2208.13946 [cs.CV] (Published 2022-08-30)
PercentMatch: Percentile-based Dynamic Thresholding for Multi-Label Semi-Supervised Classification
arXiv:2107.00649 [cs.CV] (Published 2021-07-01)
On the Practicality of Deterministic Epistemic Uncertainty