

Poster

Direct Distributional Optimization for Provable Alignment of Diffusion Models

Ryotaro Kawata · Kazusato Oko · Atsushi Nitanda · Taiji Suzuki

Hall 3 + Hall 2B #456
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: We introduce a novel alignment method for diffusion models from a distributional optimization perspective, together with rigorous convergence guarantees. We first formulate the problem as a generic regularized loss minimization over probability distributions and directly optimize the distribution using the Dual Averaging method. Next, we enable sampling from the learned distribution by approximating its score function via Doob's h-transform technique. The proposed framework is supported by rigorous convergence guarantees and an end-to-end bound on the sampling error, which imply that when the original distribution's score is known accurately, the complexity of sampling from shifted distributions is independent of isoperimetric conditions. This framework is broadly applicable to general distribution optimization problems, including alignment tasks in Reinforcement Learning with Human Feedback (RLHF), Direct Preference Optimization (DPO), and Kahneman-Tversky Optimization (KTO). We empirically validate its performance on synthetic and image datasets using the DPO objective.
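
The regularized formulation the abstract refers to can be sketched as follows. The notation below (the loss ℓ, regularization weight λ, and reference model μ_ref) is assumed for illustration and is not taken from the paper; this is a minimal sketch of the standard KL-regularized objective, not the authors' exact setup.

```latex
% A minimal sketch of a regularized loss minimization over probability
% distributions, as described in the abstract. The symbols (\ell, \lambda,
% \mu_{\mathrm{ref}}) are illustrative assumptions, not the paper's notation.
\[
  \min_{\mu \in \mathcal{P}(\mathbb{R}^d)}
  \; \mathbb{E}_{x \sim \mu}\bigl[\ell(x)\bigr]
  \;+\; \lambda\, \mathrm{KL}\bigl(\mu \,\|\, \mu_{\mathrm{ref}}\bigr)
\]
% With KL regularization, the minimizer is a Gibbs tilt of the reference
% distribution:
\[
  \mu^{\star}(x) \;\propto\; \mu_{\mathrm{ref}}(x)\,
  \exp\bigl(-\ell(x)/\lambda\bigr)
\]
% so the score of the tilted distribution differs from the reference score
% by an explicit correction term, which is the quantity a Doob h-transform
% style construction makes available for sampling:
\[
  \nabla \log \mu^{\star}(x)
  \;=\; \nabla \log \mu_{\mathrm{ref}}(x)
  \;-\; \tfrac{1}{\lambda}\,\nabla \ell(x)
\]
```

Under this reading, an accurate score for μ_ref is exactly what is needed to sample from the shifted distribution, which is consistent with the abstract's claim that the sampling complexity then avoids isoperimetric conditions.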
