

Poster

Learning and Evaluating Representations for Deep One-Class Classification

Kihyuk Sohn · Chun-Liang Li · Jinsung Yoon · Minho Jin · Tomas Pfister

Virtual

Keywords: [ self-supervised learning ] [ deep one-class classification ]


Abstract:

We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations. The framework not only learns better representations, but also permits building one-class classifiers that are faithful to the target task. We argue that classifiers inspired by the statistical perspective of generative or discriminative models are more effective than existing approaches, such as a normality score from a surrogate classifier. We thoroughly evaluate different self-supervised representation learning algorithms under the proposed framework for one-class classification. Moreover, we present a novel distribution-augmented contrastive learning method that extends the training distribution via data augmentation to obstruct the uniformity of contrastive representations. In experiments, we demonstrate state-of-the-art performance on visual-domain one-class classification benchmarks, including novelty and anomaly detection. Finally, we present visual explanations confirming that the decision-making process of deep one-class classifiers is intuitive to humans. The code is available at https://github.com/google-research/deep_representation_one_class.
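For readers who want a concrete picture of the two-stage recipe, the sketch below illustrates it under loose assumptions; it is not the authors' released code. Stage one is stood in for by a frozen torchvision ResNet-18 backbone (a placeholder for whatever self-supervised encoder is trained on the one-class data), and stage two fits simple generative- and discriminative-style one-class classifiers (a kernel density estimator and a one-class SVM from scikit-learn) on the frozen features. All names, hyperparameters, and the random tensors used as data are illustrative only.

```python
# Minimal sketch of the two-stage framework described in the abstract (not the paper's code).
import numpy as np
import torch
from torchvision import models
from sklearn.neighbors import KernelDensity
from sklearn.svm import OneClassSVM

# --- Stage 1: frozen representation extractor (placeholder for a self-supervised encoder) ---
encoder = models.resnet18(weights=None)
encoder.fc = torch.nn.Identity()   # drop the classification head, keep 512-d features
encoder.eval()

def embed(images: torch.Tensor) -> np.ndarray:
    """Map a batch of images (N, 3, H, W) to L2-normalized feature vectors."""
    with torch.no_grad():
        feats = encoder(images)
    feats = torch.nn.functional.normalize(feats, dim=1)
    return feats.cpu().numpy()

# Stand-ins for one-class training data and test queries.
train_images = torch.randn(64, 3, 224, 224)
test_images = torch.randn(8, 3, 224, 224)
z_train, z_test = embed(train_images), embed(test_images)

# --- Stage 2: one-class classifiers built on the learned representations ---
# Generative-style score: log-density under a KDE fit to in-distribution features.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(z_train)
kde_scores = kde.score_samples(z_test)        # higher = more "normal"

# Discriminative-style score: signed distance from a one-class SVM boundary.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(z_train)
svm_scores = ocsvm.decision_function(z_test)  # higher = more "normal"
```

In this sketch the encoder and the one-class classifier are decoupled, which is the point of the two-stage framework: the scoring stage can be swapped (KDE, one-class SVM, Gaussian density, etc.) without retraining the representation.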
