arXiv Analytics

arXiv:2209.14828 [cs.CV]

Denoising Diffusion Probabilistic Models for Styled Walking Synthesis

Edmund J. C. Findlay, Haozheng Zhang, Ziyi Chang, Hubert P. H. Shum

Published 2022-09-29 (Version 1)

Generating realistic motions for digital humans is time-consuming in many graphics applications. Data-driven motion synthesis approaches have made solid progress in recent years through deep generative models. These methods produce high-quality motions but typically offer limited diversity in motion style. For the first time, we propose a framework using the denoising diffusion probabilistic model (DDPM) to synthesize styled human motions, integrating two tasks into one pipeline with greater style diversity than traditional motion synthesis methods. Experimental results show that our system can generate high-quality and diverse walking motions.
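The abstract gives no implementation details, so as background, here is a minimal, generic sketch of the standard DDPM forward and reverse processes it builds on, with a style label passed to the denoiser as conditioning. Everything here is an illustrative assumption (the function names, the linear noise schedule, the style_label argument, and treating a motion clip as a (frames, features) tensor), not the authors' code.

```python
# Generic DDPM sketch (PyTorch); hypothetical names, not the paper's pipeline.
import torch

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products, alpha-bar_t

def q_sample(x0, t, noise):
    """Forward process: noise a clean motion clip x0 to step t."""
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

def ddpm_loss(model, x0, style_label):
    """Training objective: the denoiser predicts the added noise (L2 loss)."""
    t = torch.randint(0, T, (1,)).item()
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    pred = model(x_t, t, style_label)      # style-conditioned denoiser (assumed)
    return torch.nn.functional.mse_loss(pred, noise)

@torch.no_grad()
def sample(model, shape, style_label):
    """Reverse process: start from Gaussian noise, denoise step by step."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        noise_pred = model(x, t, style_label)
        a, ab = alphas[t], alpha_bars[t]
        # Posterior mean from the standard DDPM parameterization.
        mean = (x - (1 - a) / (1 - ab).sqrt() * noise_pred) / a.sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x
```

Conditioning the denoiser on a style label is one plausible way to obtain styled output from a single pipeline; the paper's actual conditioning mechanism is not described in this listing.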

Related articles:
arXiv:2212.08526 [cs.CV] (Published 2022-12-16)
Unifying Human Motion Synthesis and Style Transfer with Denoising Diffusion Probabilistic Models
arXiv:2307.14648 [cs.CV] (Published 2023-07-27)
Spatial-Frequency U-Net for Denoising Diffusion Probabilistic Models
arXiv:2307.15988 [cs.CV] (Published 2023-07-29)
RGB-D-Fusion: Image Conditioned Depth Diffusion of Humanoid Subjects