

Poster
in
Workshop: First Workshop on Representational Alignment (Re-Align)

The benefits of Incorporating Shape Priors in Contrastive Learning

Junru Zhao · Tianqin Li · Tai Lee

Keywords: [ shape bias ] [ contrastive learning ] [ self-supervised learning ]


Abstract:

Human babies develop the ability to perform figure-ground segregation based on motion, luminance, and color cues early in infancy. The availability of the global form or shape of objects is known to facilitate rapid learning of lexical categories in babies. Here, we explored the use of shape prototypes, computed by momentum clustering of the global forms of objects, to bootstrap a form of self-supervised learning, called contrastive learning, to mimic human learning. We found that shape prototypes can speed up representation learning by highlighting the importance of object boundaries and forms in the initial learning phase, but may hinder the learning of the detailed features needed for object recognition. Thus, a hybrid "coarse-to-fine" or "shape-to-texture" training regime that fosters learning of both global shapes and local features produces high-performance object recognition systems with global shape sensitivity.
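To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how shape prototypes maintained by momentum clustering could be folded into a contrastive objective, together with a "shape-to-texture" weighting schedule. All class and function names (ShapePrototypeBank, prototype_contrastive_loss, shape_loss_weight), the EMA update rule, and the hyperparameters are illustrative assumptions.

```python
# Sketch, assuming PyTorch: shape prototypes via momentum (EMA) clustering,
# used as targets in a prototype-level contrastive loss.
import torch
import torch.nn.functional as F


class ShapePrototypeBank:
    """Keeps K prototype vectors updated by momentum clustering of shape embeddings."""

    def __init__(self, num_prototypes: int, dim: int, momentum: float = 0.99):
        self.prototypes = F.normalize(torch.randn(num_prototypes, dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def assign_and_update(self, shape_emb: torch.Tensor) -> torch.Tensor:
        """Assign each shape embedding to its nearest prototype and EMA-update that prototype."""
        shape_emb = F.normalize(shape_emb, dim=1)
        sim = shape_emb @ self.prototypes.T          # (B, K) cosine similarities
        assign = sim.argmax(dim=1)                   # hard cluster assignment
        for k in assign.unique():
            centroid = shape_emb[assign == k].mean(dim=0)
            self.prototypes[k] = F.normalize(
                self.momentum * self.prototypes[k] + (1 - self.momentum) * centroid,
                dim=0,
            )
        return assign


def prototype_contrastive_loss(img_emb, assignments, prototypes, temperature=0.1):
    """Pull image embeddings toward their shape cluster's prototype and away from
    the other prototypes (InfoNCE over the prototype set)."""
    img_emb = F.normalize(img_emb, dim=1)
    logits = img_emb @ prototypes.T / temperature    # (B, K)
    return F.cross_entropy(logits, assignments)


def shape_loss_weight(epoch: int, warmup_epochs: int = 20) -> float:
    """'Shape-to-texture' schedule: weight the shape-prototype term heavily early on,
    then anneal it so later epochs emphasise the standard instance-level contrastive loss."""
    return max(0.0, 1.0 - epoch / warmup_epochs)
```

In use, the total loss per batch would be something like `shape_loss_weight(epoch) * prototype_contrastive_loss(...) + instance_contrastive_loss(...)`, so early training is dominated by the coarse shape signal and later training by fine-grained features.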
