Poster

Diffusion Bridge AutoEncoders for Unsupervised Representation Learning

Yeongmin Kim · Kwanghyeon Lee · Minsang Park · Byeonghu Na · Il-chul Moon

Hall 3 + Hall 2B #179
[ Project Page ]
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: Diffusion-based representation learning has attracted substantial attention due to its promising capabilities in latent representation and sample generation. Recent studies have employed an auxiliary encoder to identify a corresponding representation from data and to adjust the dimensionality of a latent variable z. Meanwhile, this auxiliary structure invokes an *information split problem*: the information of each data instance x_0 is divided between the diffusion endpoint x_T and the encoded z, because there exist two inference paths starting from the data. The latent variable modeled by the diffusion endpoint x_T has some disadvantages: x_T is computationally expensive to obtain and inflexible in dimensionality. To address this problem, we introduce Diffusion Bridge AutoEncoders (DBAE), which enable z-dependent endpoint x_T inference through a feed-forward architecture. This structure creates an information bottleneck at z, so x_T becomes dependent on z in its generation, and z therefore holds the full information of the data. We propose an objective function for DBAE that enables both reconstruction and generative modeling, with theoretical justification. Empirical evidence supports the effectiveness of the intended design in DBAE, which notably enhances downstream inference quality, reconstruction, and disentanglement. Additionally, DBAE generates high-fidelity samples in unconditional generation. Our code is available at https://github.com/aailab-kaist/DBAE.
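The information-bottleneck idea in the abstract can be illustrated with a minimal sketch (this is not the paper's actual architecture; the dimensions, weights, and layer shapes below are hypothetical): an encoder maps data x_0 to a low-dimensional latent z, and a feed-forward head then maps z to the diffusion endpoint x_T, so any information reaching x_T must pass through the bottleneck z.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Hypothetical random weights; a real DBAE model would train these.
    return rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)

data_dim, latent_dim = 16, 4          # assumed toy dimensions

W_enc = linear(data_dim, latent_dim)  # encoder: x_0 -> z
W_head = linear(latent_dim, data_dim) # feed-forward head: z -> x_T

x0 = rng.standard_normal(data_dim)    # a data instance x_0
z = np.tanh(x0 @ W_enc)               # bottleneck latent (dim 4 << 16)
xT = z @ W_head                       # endpoint x_T depends only on z

# Unlike standard diffusion, x_T is produced in one feed-forward pass
# from z, rather than by simulating the forward process from x_0.
assert z.shape == (latent_dim,) and xT.shape == (data_dim,)
```

Because x_T is a deterministic function of z here, no information about x_0 bypasses the latent, which is the "information split" the paper argues is avoided by this design.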
