

Poster

Improving Probabilistic Diffusion Models With Optimal Diagonal Covariance Matching

Zijing Ou · Mingtian Zhang · Andi Zhang · Tim Xiao · Yingzhen Li · David Barber

Hall 3 + Hall 2B #172
Thu 24 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 1C
Wed 23 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Probabilistic diffusion models have become highly effective across various domains. Typically, sampling from a diffusion model uses a denoising distribution characterized by a Gaussian with a learned mean and either a fixed or a learned covariance. In this paper, we leverage the recently proposed covariance moment matching technique and introduce a novel method for learning diagonal covariances. Unlike traditional data-driven covariance approximation approaches, our method directly regresses the optimal analytic covariance using a new, unbiased objective named Optimal Covariance Matching (OCM). This approach can significantly reduce the approximation error in covariance prediction. We demonstrate how our method can substantially enhance the sampling efficiency, recall rate, and likelihood of both diffusion models and latent diffusion models.
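The central step described in the abstract, regressing a network directly onto the optimal analytic covariance with an unbiased objective, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration rather than the authors' released implementation: it assumes a variance-exploding forward process so that the optimal denoising covariance follows a second-order Tweedie-style expression, and the `score_fn` and `cov_head` interfaces are placeholders.

```python
import torch

def hutchinson_diag_jacobian(score_fn, x_t, t, n_probes=1):
    """Unbiased Monte Carlo estimate of diag(d score_fn / d x) at (x_t, t)."""
    est = torch.zeros_like(x_t)
    for _ in range(n_probes):
        # Rademacher probe vector v with entries in {-1, +1}
        v = (torch.randint(0, 2, x_t.shape, device=x_t.device) * 2 - 1).to(x_t.dtype)
        x = x_t.detach().requires_grad_(True)
        s = score_fn(x, t)
        # vector-Jacobian product J^T v via a single backward pass
        (vjp,) = torch.autograd.grad((s * v).sum(), x)
        # E_v[v * (J^T v)] = diag(J), so this accumulator is unbiased
        est = est + v * vjp
    return est / n_probes

def ocm_diag_loss(cov_head, score_fn, x_t, t, sigma_t):
    """Regress the predicted per-dimension variance onto an unbiased estimate of
    the optimal diagonal covariance (second-order Tweedie target, assumed
    variance-exploding parameterization)."""
    diag_hess = hutchinson_diag_jacobian(score_fn, x_t, t)
    target = sigma_t ** 2 * (1.0 + sigma_t ** 2 * diag_hess)
    pred = cov_head(x_t, t)  # hypothetical head predicting diagonal variances
    return ((pred - target.detach()) ** 2).mean()
```

Because the minimizer of a squared-error loss is the conditional mean, regressing onto this noisy but unbiased Hutchinson-style target recovers the optimal diagonal covariance in expectation, which is the sense in which such an objective avoids the bias of purely data-driven covariance approximations.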
