Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models

Joan Serrà, David Álvarez, Vicenç Gómez, Olga Slizovskaia, José F. Núñez, Jordi Luque

Keywords: generative models, reliability, robustness

Abstract: Likelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we posit that this problem is due to the excessive influence that input complexity has on generative models' likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be interpreted as a likelihood ratio, akin to Bayesian model comparison. We find this score performs comparably to, or even better than, existing OOD detection approaches across a wide range of data sets, models, model sizes, and complexity estimates.
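The score described in the abstract combines a model's negative log-likelihood with a compression-based complexity estimate. A minimal sketch of this idea, assuming the model's negative log-likelihood is given in bits per dimension and using off-the-shelf zlib compression as the (interchangeable) complexity estimate:

```python
import zlib

import numpy as np


def complexity_bits_per_dim(x_bytes: bytes, num_dims: int) -> float:
    """Estimate input complexity L(x) as compressed length in bits per dimension."""
    return 8 * len(zlib.compress(x_bytes, 9)) / num_dims


def ood_score(nll_bits_per_dim: float, x_bytes: bytes, num_dims: int) -> float:
    """S(x) = -log p(x) - L(x), both in bits per dimension.

    High when the model assigns the input a likelihood that is low relative
    to what the input's compressibility alone would suggest (OOD signal).
    """
    return nll_bits_per_dim - complexity_bits_per_dim(x_bytes, num_dims)


# Illustration: a constant image compresses far better than uniform noise,
# so its complexity estimate is much lower.
dims = 32 * 32 * 3
flat = bytes(dims)  # all-zero "image"
noise = np.random.default_rng(0).integers(0, 256, dims, dtype=np.uint8).tobytes()
print(complexity_bits_per_dim(flat, dims) < complexity_bits_per_dim(noise, dims))
```

The compressor here is a placeholder: any general-purpose estimate of description length (PNG, FLIF, etc.) can be substituted, since the score only requires a proxy for the input's Kolmogorov complexity.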

Similar Papers

Understanding the Limitations of Conditional Generative Models
Ethan Fetaya, Joern-Henrik Jacobsen, Will Grathwohl, Richard Zemel
Novelty Detection Via Blurring
Sungik Choi, Sae-Young Chung