

Poster

Adding Conditional Control to Diffusion Models with Reinforcement Learning

Yulai Zhao · Masatoshi Uehara · Gabriele Scalia · Sunyuan Kung · Tommaso Biancalani · Sergey Levine · Ehsan Hajiramezanali

Hall 3 + Hall 2B #179
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Diffusion models are powerful generative models that allow precise control over the characteristics of the generated samples. While diffusion models trained on large datasets have achieved notable success, downstream applications often require introducing additional controls through fine-tuning, treating these powerful models as pre-trained diffusion models. This work presents a novel method based on reinforcement learning (RL) to add such controls using an offline dataset of inputs and labels. We formulate this task as an RL problem, with a classifier learned from the offline dataset and the KL divergence against the pre-trained model together serving as the reward function. Our method, CTRL (Conditioning pre-Trained diffusion models with Reinforcement Learning), produces soft-optimal policies that maximize this reward. We formally demonstrate that our method enables sampling from the conditional distribution with additional controls during inference. Our RL-based approach offers several advantages over existing methods. Compared to classifier-free guidance, it improves sample efficiency and can greatly simplify dataset construction by leveraging conditional independence between the inputs and the additional controls. Additionally, unlike classifier guidance, it eliminates the need to train classifiers that predict the additional controls from intermediate (noisy) states. The code is available at https://github.com/zhaoyl18/CTRL.
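To illustrate why a soft-optimal policy can recover conditional sampling, here is a minimal sketch of the standard KL-regularized objective the abstract alludes to; the notation below (the pre-trained marginal p^{pre}, the learned classifier q(y|x) for the additional control y, and the temperature alpha) is illustrative and not necessarily the paper's own:

    \max_{p}\; \mathbb{E}_{x \sim p}\!\left[\log q(y \mid x)\right] \;-\; \alpha\, \mathrm{KL}\!\left(p \,\|\, p^{\mathrm{pre}}\right)
    \;\;\Longrightarrow\;\;
    p^{\star}(x) \;\propto\; p^{\mathrm{pre}}(x)\, q(y \mid x)^{1/\alpha}.

Under this sketch, with alpha = 1 and q(y|x) matching the true conditional, Bayes' rule gives p^{\star}(x) \propto p^{\mathrm{pre}}(x \mid y), i.e., samples follow the conditional distribution under the additional control, consistent with the claim in the abstract.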
