Poster

Text-to-Image Rectified Flow as Plug-and-Play Priors

Xiaofeng Yang · Cheng Chen · xulei yang · Fayao Liu · Guosheng Lin

Hall 3 + Hall 2B #157
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Large-scale diffusion models have achieved remarkable performance in generative tasks. Beyond their original training objectives, these models can function as versatile plug-and-play priors; for instance, 2D diffusion models can serve as loss functions for optimizing 3D implicit models. Rectified Flow, a newer class of generative models, surpasses diffusion-based methods in both generation quality and efficiency across various domains. In this work, we present theoretical and experimental evidence that rectified-flow-based methods offer similar functionality: they, too, can serve as effective priors. Beyond the generative capabilities of diffusion priors, and motivated by the unique time-symmetry properties of rectified flow models, a variant of our method can additionally perform image inversion. Experimentally, our rectified-flow-based priors outperform their diffusion counterparts, the SDS and VSD losses, in text-to-3D generation. Our method also achieves competitive performance in image inversion and editing. Code is available at: https://github.com/yangxiaofeng/rectifiedflowprior.
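The abstract's core claim, that a pretrained rectified-flow model can act as an SDS-style plug-and-play loss, can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' implementation: `toy_velocity_model`, the sampling range for `t`, and the step size are all illustrative assumptions. It uses only the standard rectified-flow interpolation x_t = (1 - t)·x + t·ε, whose straight-line target velocity is ε - x; the "prior gradient" is the discrepancy between the model's predicted velocity and that target.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_velocity_model(x_t, t):
    # Stand-in for a pretrained rectified-flow network. For this toy
    # example the data distribution is a point mass at zero, for which
    # the exact rectified-flow velocity field is v(x_t, t) = x_t / t.
    return x_t / max(t, 1e-3)

def rf_prior_gradient(x, t, rng):
    # Rectified-flow interpolation: x_t = (1 - t) x + t eps,
    # with straight-line target velocity eps - x.
    eps = rng.standard_normal(x.shape)
    x_t = (1.0 - t) * x + t * eps
    v_pred = toy_velocity_model(x_t, t)
    # SDS-style gradient: mismatch between the prior's predicted
    # velocity and the target velocity implied by the current sample.
    return v_pred - (eps - x)

x = rng.standard_normal((4, 8))   # parameters being optimized
x_init = x.copy()
lr = 0.02
for _ in range(100):
    t = rng.uniform(0.05, 0.95)   # random timestep, as in SDS
    x = x - lr * rf_prior_gradient(x, t, rng)
# x is pulled toward the toy prior's mode (zero)
```

In the full method the optimized variable would be the parameters of a 3D representation rendered to images, and the velocity model a text-conditioned rectified-flow network; here a closed-form toy prior keeps the sketch self-contained and verifiable.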
