

In-Person Poster presentation / poster accept

Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation

Ye Zhu · Yu Wu · Kyle Olszewski · Jian Ren · Sergey Tulyakov · Yan Yan

MH1-2-3-4 #26

Keywords: [ image synthesis ] [ Contrastive Diffusion ] [ Conditioned Generations ] [ music generation ] [ Applications ]


Abstract:

Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and the generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route---we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining diffusion training and contrastive learning for the first time by connecting them to the conventional variational objectives. We demonstrate the efficacy of our approach on diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis. On each task, we improve input-output correspondence and achieve higher or competitive overall synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks and significantly increasing inference speed.
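To make the idea of augmenting a denoising objective with a contrastive, mutual-information-maximizing term more concrete, here is a minimal sketch of one way such a combined loss could look. This is not the authors' CDCD implementation; the function names, encoders, tensor shapes, and the simple pooling-based similarity are all illustrative assumptions, and the contrastive term is a generic InfoNCE over mismatched conditions.

```python
# Minimal sketch (not the paper's released code): a discrete denoising loss plus an
# InfoNCE-style contrastive term that ties the prediction to its conditioning input.
# All module names and shapes below are illustrative assumptions.
import torch
import torch.nn.functional as F


def contrastive_diffusion_loss(denoiser, cond_encoder, x_t, t, x0_target_probs,
                               cond, negative_conds, temperature=0.1, lam=1.0):
    """Combine a denoising (variational) term with a contrastive (InfoNCE) term.

    denoiser:         predicts logits over discrete tokens of x_0 given (x_t, t, cond)
    cond_encoder:     maps a condition (e.g. dance video, text) to an embedding
    x_t, t:           noised sample and diffusion step
    x0_target_probs:  target distribution over the clean tokens x_0
    cond:             the paired ("positive") condition for this sample
    negative_conds:   list of mismatched conditions drawn from the batch
    """
    # 1) Conventional denoising term: match the predicted x_0 distribution.
    pred_logits = denoiser(x_t, t, cond)                                  # (B, L, V)
    denoise_loss = F.kl_div(F.log_softmax(pred_logits, dim=-1),
                            x0_target_probs, reduction="batchmean")

    # 2) Contrastive term: the prediction under the true condition should score
    #    higher than under mismatched conditions (InfoNCE over conditions).
    def score(c):
        # Similarity between a pooled prediction embedding and the condition embedding.
        pred_emb = F.normalize(pred_logits.mean(dim=1), dim=-1)           # (B, D) pooled
        cond_emb = F.normalize(cond_encoder(c), dim=-1)                   # (B, D) assumed same dim
        return (pred_emb * cond_emb).sum(dim=-1) / temperature            # (B,)

    pos = score(cond).unsqueeze(-1)                                       # (B, 1)
    negs = torch.stack([score(c) for c in negative_conds], dim=-1)        # (B, K)
    logits = torch.cat([pos, negs], dim=-1)                               # (B, 1+K)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    contrastive_loss = F.cross_entropy(logits, labels)

    # Total objective: denoising term plus weighted contrastive term.
    return denoise_loss + lam * contrastive_loss
```

The design intuition follows the abstract: the first term is the usual variational denoising objective, while the second is a lower bound on the mutual information between the condition and the generated output, so minimizing the sum explicitly strengthens input-output correspondence rather than relying on the conditioning prior alone.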
