Poster
Anti-Exposure Bias in Diffusion Models
Junyu Zhang · Daochang Liu · Eunbyung Park · Shichao Zhang · Chang Xu
Hall 3 + Hall 2B #588
Diffusion models (DMs) have achieved record-breaking performance in image generation tasks. Nevertheless, in practice, the training-sampling discrepancy, caused by score estimation error and discretization error, limits the modeling ability of DMs, a phenomenon known as exposure bias. To alleviate such exposure bias and further improve the generative performance, we put forward a prompt learning framework built upon a lightweight prompt prediction model. Concretely, our model learns an anti-bias prompt for the generated sample at each sampling step, aiming to compensate for the exposure bias that arises. Following this design philosophy, our framework rectifies the sampling trajectory to match the training trajectory, thereby reducing the divergence between the target data distribution and the modeling distribution. To train the prompt prediction model, we simulate exposure bias by constructing training data and introduce a time-dependent weighting function for optimization. Empirical results on various DMs demonstrate the superiority of our prompt learning framework across three benchmark datasets. Importantly, the optimized prompt prediction model effectively improves image quality with only a 5% increase in sampling overhead, which remains negligible.
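The abstract describes three ingredients: a lightweight prompt prediction model applied at each sampling step, training data built by simulating exposure bias, and a time-dependent weighting function for the loss. The sketch below illustrates how these pieces could fit together in PyTorch. It is a minimal illustration under my own assumptions, not the authors' implementation: the names `PromptPredictor`, `time_weight`, and `training_step` are hypothetical, and the choice to apply the anti-bias prompt additively to the intermediate sample is an assumption, since the abstract does not specify how the prompt is injected.

```python
# Hypothetical sketch of a prompt-learning setup against exposure bias.
# All names and the additive correction are assumptions, not the paper's code.
import torch
import torch.nn as nn


class PromptPredictor(nn.Module):
    """Lightweight network mapping (noisy sample, timestep) to an anti-bias prompt."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast the timestep as an extra input channel.
        t_map = t.view(-1, 1, 1, 1).float().expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))


def time_weight(t: torch.Tensor, T: int = 1000) -> torch.Tensor:
    """Hypothetical time-dependent loss weight (here: emphasize noisier steps)."""
    return (t.float() / T) ** 2


def training_step(prompt_model, x_t_train, x_t_sampled, t, T=1000):
    """One optimization step on simulated exposure bias.

    x_t_train:   intermediate sample taken from the training trajectory
    x_t_sampled: the corresponding sample reached by actually running the sampler,
                 i.e. a simulation of the exposure bias the prompt should undo
    """
    prompt = prompt_model(x_t_sampled, t)             # predicted anti-bias prompt
    corrected = x_t_sampled + prompt                  # rectified intermediate sample
    w = time_weight(t, T).view(-1, 1, 1, 1)
    loss = (w * (corrected - x_t_train) ** 2).mean()  # weighted regression objective
    return loss


if __name__ == "__main__":
    model = PromptPredictor()
    x_train = torch.randn(4, 3, 32, 32)
    x_sampled = x_train + 0.1 * torch.randn_like(x_train)  # stand-in "biased" samples
    t = torch.randint(0, 1000, (4,))
    print(training_step(model, x_train, x_sampled, t).item())
```

At sampling time, the same corrected sample would be fed to the pretrained denoiser at each step; because the prompt network is small relative to the denoiser, the extra cost stays in line with the reported ~5% overhead.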