{ "id": "2405.13540", "version": "v1", "published": "2024-05-22T11:20:32.000Z", "updated": "2024-05-22T11:20:32.000Z", "title": "Directly Denoising Diffusion Model", "authors": [ "Dan Zhang", "Jingjing Wang", "Feng Luo" ], "categories": [ "cs.CV" ], "abstract": "In this paper, we present the Directly Denoising Diffusion Model (DDDM): a simple and generic approach for generating realistic images with few-step sampling, while multistep sampling remains available for better performance. DDDMs require neither delicately designed samplers nor distillation from pre-trained diffusion models. DDDMs train the diffusion model conditioned on an estimated target generated during the model's own previous training iterations. To generate images, samples from the previous time step are also taken into account, iteratively guiding the generation process. We further propose Pseudo-LPIPS, a novel metric-based loss that is more robust across hyperparameter values. Despite its simplicity, the proposed approach achieves strong performance on benchmark datasets. Our model achieves FID scores of 2.57 and 2.33 on CIFAR-10 in one-step and two-step sampling respectively, surpassing those obtained from GANs and distillation-based models. By extending sampling to 1000 steps, we further reduce the FID score to 1.79, matching state-of-the-art methods in the literature. On ImageNet 64x64, our approach is competitive with leading models.", "revisions": [ { "version": "v1", "updated": "2024-05-22T11:20:32.000Z" } ], "analyses": { "keywords": [ "directly denoising diffusion model", "model achieves fid scores", "reduce fid score", "achieve strong performance", "novel metric loss" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }