

Poster

RB-Modulation: Training-Free Stylization using Reference-Based Modulation

Litu Rout · Yujia Chen · Nataniel Ruiz · Abhishek Kumar · Constantine Caramanis · Sanjay Shakkottai · Wen-Sheng Chu

Hall 3 + Hall 2B #161
[ Project Page ]
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 4D
Fri 25 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution for training-free personalization of diffusion models. Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted content leakage from reference style images, and (c) effective composition of style and content. RB-Modulation is built on a novel stochastic optimal controller in which a style descriptor encodes the desired attributes through a terminal cost. The resulting drift not only overcomes the difficulties above, but also ensures high fidelity to the reference style and adherence to the given text prompt. We also introduce a cross-attention-based feature aggregation scheme that allows RB-Modulation to decouple content and style from the reference image. With theoretical justification and empirical evidence, our test-time optimization framework demonstrates precise extraction and control of content and style in a training-free manner. Furthermore, our method enables seamless composition of content and style, marking a departure from the dependency on external adapters or ControlNets. See the project page https://rb-modulation.github.io/ for code and further details.
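The core idea of the abstract, steering the reverse diffusion with a drift derived from a terminal style cost, can be illustrated with a deliberately minimal sketch. This is not the paper's implementation: the style descriptor `psi` here is a toy random linear map (the paper uses a learned style feature extractor), and the denoiser estimate `x0_hat` is a placeholder. Only the structure is shown: each reverse step adds a control drift equal to the negative gradient of the terminal cost `||psi(x0_hat) - psi(x_ref)||^2`.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4  # toy latent dim and style-feature dim (illustrative only)

# Toy "style descriptor" psi: a fixed random linear map standing in for a
# real style feature extractor.
W = rng.normal(size=(k, d)) / np.sqrt(d)

def psi(x):
    return W @ x

def terminal_cost(x, ref_feat):
    """||psi(x) - psi(x_ref)||^2: style mismatch penalized at the terminal state."""
    diff = psi(x) - ref_feat
    return float(diff @ diff)

def controlled_reverse_steps(x_T, x_ref, steps=50, lr=0.1):
    """Reverse-time steps with control drift = -grad of the terminal cost."""
    ref_feat = psi(x_ref)
    x = x_T.copy()
    for _ in range(steps):
        x0_hat = x  # stand-in for the denoiser's estimate of x0 at this step
        # Analytic gradient of the quadratic terminal cost w.r.t. x0_hat.
        drift = -2.0 * W.T @ (psi(x0_hat) - ref_feat)
        x = x + lr * drift
    return x

x_ref = rng.normal(size=d)   # "reference image" latent (toy)
x_T = rng.normal(size=d)     # initial noise
x_out = controlled_reverse_steps(x_T, x_ref)

# The controlled trajectory should end closer to the reference style.
print(terminal_cost(x_out, psi(x_ref)) < terminal_cost(x_T, psi(x_ref)))
```

Under these toy assumptions, the drift provably shrinks the style mismatch at every step (gradient descent on a quadratic); in the actual method this role is played by test-time optimization against the style descriptor during sampling.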
