

Poster

O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions

Gen Li · Yuling Yan

Hall 3 + Hall 2B #542
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Score-based diffusion models, which generate new data by learning to reverse a diffusion process that perturbs data from the target distribution into noise, have achieved remarkable success across various generative tasks. Despite their superior empirical performance, existing theoretical guarantees are often constrained by stringent assumptions or suboptimal convergence rates. In this paper, we establish a fast convergence theory for the denoising diffusion probabilistic model (DDPM), a widely used SDE-based sampler, under minimal assumptions. Our analysis shows that, provided ℓ2-accurate estimates of the score functions, the total variation distance between the target and generated distributions is upper bounded by O(d/T) (ignoring logarithmic factors), where d is the data dimensionality and T is the number of steps. This result holds for any target distribution with finite first-order moment. To our knowledge, this improves upon existing convergence theory for the DDPM sampler, while imposing minimal assumptions on the target data distribution and score estimates. This is achieved through a novel set of analytical tools that provides a fine-grained characterization of how the error propagates at each step of the reverse process.
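To make the setting concrete, the following is a minimal sketch (not the paper's method or analysis) of the DDPM reverse-process sampler the abstract refers to: starting from pure noise, it takes T discretized steps, each using a score estimate of the noised marginal. The `score` interface, the constant noise schedule `beta`, and the toy Gaussian target are illustrative assumptions, chosen so that the score is available in closed form.

```python
import numpy as np

def ddpm_sample(score, d, T, beta=0.02, n=1000, rng=None):
    """Run a DDPM-style reverse process for T steps.

    `score(x, alpha_bar)` is a hypothetical interface returning an
    estimate of the score (gradient of the log-density) of the noised
    marginal at cumulative noise level alpha_bar.
    """
    rng = np.random.default_rng(rng)
    alphas = np.full(T, 1.0 - beta)        # constant schedule (an assumption)
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal((n, d))        # initialize from pure noise
    for t in range(T - 1, -1, -1):
        s = score(x, alpha_bars[t])
        # Score-based DDPM update: denoise, then re-inject noise
        x = (x + beta * s) / np.sqrt(alphas[t])
        if t > 0:                          # no noise added on the final step
            x += np.sqrt(beta) * rng.standard_normal((n, d))
    return x

# Toy target N(0, I): the noised marginal stays N(0, I), so the exact
# score is -x at every step (the "accurate score estimate" is exact here).
def gaussian_score(x, alpha_bar):
    var_t = alpha_bar * 1.0 + (1.0 - alpha_bar)
    return -x / var_t

samples = ddpm_sample(gaussian_score, d=2, T=200, n=5000, rng=0)
```

With the exact score, the generated samples should match the unit-Gaussian target closely; the paper's result bounds how the total variation error scales with d and T when the score is only approximately known.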
