

Poster

Lifting Architectural Constraints of Injective Flows

Peter Sorrenson · Felix Draxler · Armand Rousselot · Sander Hummerich · Lea Zimmermann · Ullrich Koethe

Halle B #43

Abstract:

Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data. However, real data is typically only supported on a lower-dimensional manifold, leading the model to expend significant compute on modeling noise. Injective Flows fix this by jointly learning a manifold and the distribution on it. So far, they have been limited by restrictive architectures and/or high computational cost. We lift both constraints with a new, efficient estimator for the maximum likelihood loss that is compatible with free-form bottleneck architectures. We further show that naively learning both the data manifold and the distribution on it can lead to divergent solutions, and we use this insight to motivate a stable maximum likelihood training objective. We perform extensive experiments on toy, tabular, and image data, demonstrating the competitive performance of the resulting model.
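
For readers unfamiliar with injective flows, the "maximum likelihood loss" above usually refers to the negative log-likelihood under the rectangular change-of-variables formula; this is a standard result for injective models and is not taken from the abstract itself. For a decoder $g: \mathbb{R}^d \to \mathbb{R}^D$ with Jacobian $J_g$ and an encoder $f$ producing the latent code $z = f(x)$, the loss for a point $x$ on the learned manifold reads

\[
-\log p_X(x) \;=\; -\log p_Z\big(f(x)\big) \;+\; \tfrac{1}{2}\log\det\!\Big(J_g\big(f(x)\big)^\top J_g\big(f(x)\big)\Big).
\]

The $\tfrac{1}{2}\log\det(J_g^\top J_g)$ term is what makes this objective expensive for free-form bottleneck architectures; the efficient estimator mentioned in the abstract targets this term, although the abstract does not specify its exact form.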
