

Poster

Denoising Likelihood Score Matching for Conditional Score-based Data Generation

Chen-Hao Chao · Wei-Fang Sun · Bo-Wun Cheng · Yi-Chen Lo · Chia-Che Chang · Yu-Lun Liu · Yu-Lin Chang · Chia-Ping Chen · Chun-Yi Lee

Keywords: [ Conditional Sampling ]


Abstract:

Many existing conditional score-based data generation methods utilize Bayes' theorem to decompose the gradients of a log posterior density into a mixture of scores. These methods facilitate the training procedure of conditional score models, as the mixture of scores can be estimated separately using a score model and a classifier. However, our analysis indicates that the training objectives for the classifier in these methods may lead to a serious score mismatch issue, in which the estimated scores deviate from the true ones. This issue causes the samples to be misled by the deviated scores during the diffusion process, resulting in degraded sampling quality. To resolve it, we theoretically formulate a novel training objective, called the Denoising Likelihood Score Matching (DLSM) loss, which trains the classifier to match the gradients of the true log likelihood density. Our experimental evidence shows that the proposed method noticeably outperforms previous methods on both the CIFAR-10 and CIFAR-100 benchmarks in terms of several key evaluation metrics. We thus conclude that, by adopting DLSM, the conditional scores can be accurately modeled, and the effect of the score mismatch issue is alleviated.
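The Bayes decomposition referenced in the abstract can be written explicitly as follows; the second display is only a hedged sketch of how a DLSM-style classifier objective could look, where the symbols $s_{\theta}$ (a score model), $p_{\phi}$ (the classifier), and $q(\tilde{x} \mid x)$ (the Gaussian perturbation kernel of denoising score matching) are illustrative assumptions rather than notation taken from the paper:

\[
\nabla_{\tilde{x}} \log p(\tilde{x} \mid y) \;=\; \nabla_{\tilde{x}} \log p(\tilde{x}) \;+\; \nabla_{\tilde{x}} \log p(y \mid \tilde{x}),
\]

so the posterior score splits into an unconditional score term and a likelihood (classifier) score term. A plausible instantiation of a loss that encourages the classifier gradient, combined with the score model, to match the denoising target would be

\[
\mathcal{L}_{\mathrm{DLSM}} \;\approx\; \mathbb{E}_{x,\, y,\, \tilde{x} \sim q(\tilde{x} \mid x)} \left\| \nabla_{\tilde{x}} \log p_{\phi}(y \mid \tilde{x}) \;+\; s_{\theta}(\tilde{x}) \;-\; \nabla_{\tilde{x}} \log q(\tilde{x} \mid x) \right\|_2^2 ,
\]

which reduces to standard denoising score matching when the classifier term is removed. The exact objective and its derivation are given in the paper itself.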
