

Poster

Think while You Generate: Discrete Diffusion with Planned Denoising

Sulin Liu · Juno Nam · Andrew Campbell · Hannes Stärk · Yilun Xu · Tommi Jaakkola · Rafael Gomez-Bombarelli

Hall 3 + Hall 2B #157
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Discrete diffusion has achieved state-of-the-art performance, outperforming or approaching autoregressive models on standard benchmarks. In this work, we introduce Discrete Diffusion with Planned Denoising (DDPD), a novel framework that separates the generation process into two models: a planner and a denoiser. At inference time, the planner selects which positions to denoise next by identifying the most corrupted positions in need of denoising, including both those that were initially corrupted and those requiring additional refinement. This plan-and-denoise approach enables more efficient reconstruction during generation by iteratively identifying and denoising corruptions in the optimal order. DDPD outperforms traditional denoiser-only mask diffusion methods, achieving superior results on language modeling benchmarks such as text8 and OpenWebText, and on token-based generation for ImageNet 256 × 256. Notably, in language modeling, DDPD significantly reduces the performance gap between diffusion-based and autoregressive methods in terms of generative perplexity. Code is available at github.com/liusulin/DDPD.
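To make the plan-and-denoise idea concrete, the sketch below shows a minimal, hypothetical sampling loop: a planner scores each position's probability of being corrupted, and a denoiser proposes a clean token for the position the planner picks. The function and model names (`planner`, `denoiser`, `plan_and_denoise_sample`) and their signatures are illustrative assumptions, not the released DDPD implementation; see github.com/liusulin/DDPD for the authors' code.

```python
import torch

def plan_and_denoise_sample(planner, denoiser, seq_len, mask_id,
                            num_steps=128, device="cpu"):
    """Illustrative plan-and-denoise sampling loop (a sketch, not official DDPD code).

    Assumes:
      - planner(x) returns per-position "this token is corrupted" logits,
        shape (batch, seq_len);
      - denoiser(x) returns clean-token logits, shape (batch, seq_len, vocab_size).
    Both interfaces are hypothetical stand-ins for the paper's two models.
    """
    # Start from a fully corrupted (all-mask) sequence.
    x = torch.full((1, seq_len), mask_id, dtype=torch.long, device=device)

    for _ in range(num_steps):
        with torch.no_grad():
            # Planner: estimate how likely each position is to be corrupted.
            corrupt_prob = torch.sigmoid(planner(x))           # (1, seq_len)
            # Denoise the position the planner considers most corrupted,
            # whether it is still masked or was previously denoised badly.
            pos = corrupt_prob.argmax(dim=-1)                  # (1,)
            token_logits = denoiser(x)[0, pos[0]]              # (vocab_size,)
            # Denoiser: sample a replacement token for that position.
            new_token = torch.distributions.Categorical(logits=token_logits).sample()
            x[0, pos[0]] = new_token
    return x
```

Selecting one position per step keeps the sketch simple; in practice the number of positions updated per step and the stopping rule are design choices of the sampler.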
