Towards neural networks that provably know when they don't know

Alexander Meinke, Matthias Hein

Keywords: ReLU networks

Abstract: It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data. Thus, ReLU networks do not know when they don't know, even though this property is highly important in safety-critical applications. In the context of out-of-distribution (OOD) detection there have been a number of proposals to mitigate this problem, but none of them provides mathematical guarantees. In this paper we propose a new approach to OOD detection which overcomes both shortcomings. Our approach can be used with ReLU networks and yields provably low-confidence predictions far away from the training data, as well as the first certificates of low confidence in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting, whereas our model can guarantee its performance while retaining state-of-the-art OOD detection performance.
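The abstract does not spell out the construction, so the following is only a minimal illustrative sketch, not the paper's method: it shows (1) that a plain ReLU network can be extremely confident on inputs far from any data, and (2) one generic way to force confidence towards the uniform value 1/K far away, by mixing the network's softmax output with a uniform prediction weighted by density estimates of "in" and "out" data. The network weights, the Gaussian-mixture in-density, and the flat out-density below are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
K = 4          # number of classes (assumed)
D = 2          # input dimension (assumed)

# A tiny random ReLU network: x -> softmax(W2 @ relu(W1 @ x)).
W1 = rng.normal(size=(32, D))
W2 = rng.normal(size=(K, 32))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def relu_net_conf(x):
    """Softmax output of the ReLU network for inputs x of shape (N, D)."""
    h = np.maximum(x @ W1.T, 0.0)
    return softmax(h @ W2.T)

# Crude density estimates (stand-ins for properly fitted mixture models).
mu_in = rng.normal(size=(K, D))            # "in-distribution" centers (assumed)

def p_in(x):
    # Isotropic Gaussian mixture around the in-distribution centers.
    d2 = ((x[:, None, :] - mu_in[None]) ** 2).sum(-1)
    return np.exp(-0.5 * d2).mean(axis=1)

def p_out(x):
    # Flat background density for "everything else" (assumed).
    return np.full(x.shape[0], 1e-3)

def combined_conf(x):
    """Mix the network prediction with the uniform distribution, weighted by densities."""
    w_in, w_out = p_in(x), p_out(x)
    mix = (w_in[:, None] * relu_net_conf(x) + w_out[:, None] * (1.0 / K))
    return mix / (w_in + w_out)[:, None]

# Far from the centers, p_in -> 0, so the combined confidence -> 1/K,
# while the plain ReLU network typically saturates to confidence near 1.
x_far = 1e6 * rng.normal(size=(5, D))
print("plain ReLU max-confidence:", relu_net_conf(x_far).max(axis=1))
print("combined max-confidence:  ", combined_conf(x_far).max(axis=1))
print("uniform baseline 1/K =", 1.0 / K)

Because the mixing weights are densities that decay away from the training data, the maximum confidence of the combined predictor can be bounded in terms of those densities; the paper's certificates concern such bounds, whereas this sketch only illustrates the qualitative effect.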

Similar Papers

Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models
Joan Serrà, David Álvarez, Vicenç Gómez, Olga Slizovskaia, José F. Núñez, Jordi Luque
PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction
Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee
Novelty Detection Via Blurring
Sungik Choi, Sae-Young Chung