Poster

Multi-Scale Fusion for Object Representation

Rongzhen Zhao · Vivienne Huiling Wang · Juho Kannala · Joni Pajarinen

Hall 3 + Hall 2B #125
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Representing images or videos as object-level feature vectors, rather than pixel-level feature maps, facilitates advanced visual tasks. Object-Centric Learning (OCL) achieves this primarily by reconstructing the input under the guidance of a Variational Autoencoder (VAE) intermediate representation, driving so-called slots to aggregate as much object information as possible. However, existing VAE guidance does not explicitly address the fact that objects vary in pixel size while models typically excel at specific pattern scales. We propose Multi-Scale Fusion (MSF) to enhance VAE guidance for OCL training. To ensure that objects of all sizes fall within the VAE's comfort zone, we adopt an image pyramid, which produces intermediate representations at multiple scales; to foster scale invariance/variance in object super-pixels, we devise inter-/intra-scale fusion, which augments low-quality object super-pixels at one scale with corresponding high-quality super-pixels from another scale. On standard OCL benchmarks, our technique improves mainstream methods, including state-of-the-art diffusion-based ones. The source code is available at https://github.com/Genera1Z/MultiScaleFusion.
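The two ingredients named in the abstract, an image pyramid that yields representations at multiple scales and a fusion step that swaps in higher-quality super-pixels from another scale, can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the 2x average-pooling pyramid, the function names, and the externally supplied per-super-pixel quality scores are all assumptions made for the example.

```python
import numpy as np

def image_pyramid(img, num_scales=3):
    """Build a pyramid by repeated 2x average pooling.
    img: (H, W, C) array; H and W are assumed divisible by 2**(num_scales-1)."""
    pyr = [img]
    for _ in range(num_scales - 1):
        x = pyr[-1]
        h, w, c = x.shape
        # Average each non-overlapping 2x2 block into one pixel.
        x = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        pyr.append(x)
    return pyr

def inter_scale_fusion(feat_a, feat_b, quality_a, quality_b):
    """Keep each super-pixel feature from feat_a unless the spatially
    corresponding feature in feat_b has a higher quality score.
    feat_*: (N, D) super-pixel features at matching positions;
    quality_*: (N,) scalar quality scores (a hypothetical proxy)."""
    mask = (quality_b > quality_a)[:, None]  # broadcast over feature dim
    return np.where(mask, feat_b, feat_a)
```

Here the pyramid halves the spatial resolution at each level, so an 8x8 image yields 8x8, 4x4, and 2x2 representations; the fusion step is a per-position selection between two aligned feature sets.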
