Poster
Verification of Non-Linear Specifications for Neural Networks
Chongli Qin · Krishnamurthy Dvijotham · Brendan O'Donoghue · Rudy R Bunel · Robert Stanforth · Sven Gowal · Jonathan Uesato · Grzegorz Swirszcz · Pushmeet Kohli
Great Hall BC #33
Keywords: [ convex optimization ] [ verification ] [ adversarial robustness ]
Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to certify richer properties of neural networks. To do this, we introduce the class of convex-relaxable specifications: nonlinear specifications that can be verified using a convex relaxation. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system, semantic consistency of a classifier's output labels under adversarial perturbations, and bounding errors in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method can effectively verify these specifications. Moreover, the evaluation exposes failure modes in models that cannot be verified to satisfy these specifications, emphasizing the importance of training models not just to fit the training data but also to be consistent with specifications.
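To make the idea of certifying a nonlinear specification concrete, here is a minimal, illustrative sketch, not the authors' algorithm: it propagates interval bounds through a small ReLU network and then bounds a nonlinear (quadratic) function of the output, standing in for an energy-style specification. The network weights, the perturbation radius, and the "energy budget" below are all hypothetical choices made for the example.

```python
# Illustrative sketch only (assumed setup, not the paper's method): certify a
# nonlinear output specification by interval bound propagation plus a simple
# relaxation of a quadratic function of the network output.
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate the box [lower, upper] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return (W_pos @ lower + W_neg @ upper + b,
            W_pos @ upper + W_neg @ lower + b)

def interval_relu(lower, upper):
    """Elementwise ReLU of an interval box."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

def interval_square_sum(lower, upper):
    """Bound sum_i y_i^2 over the box: a simple relaxation of the quadratic."""
    lo = np.where((lower <= 0.0) & (upper >= 0.0), 0.0,
                  np.minimum(lower**2, upper**2))
    hi = np.maximum(lower**2, upper**2)
    return lo.sum(), hi.sum()

# Hypothetical 2-layer ReLU network mapping a state to a predicted next state.
rng = np.random.default_rng(0)
W1, b1 = 0.3 * rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = 0.3 * rng.normal(size=(4, 8)), np.zeros(4)

x0 = rng.normal(size=4)        # nominal input state
eps = 0.05                     # perturbation radius to certify over (assumed)
l, u = x0 - eps, x0 + eps

l, u = interval_relu(*interval_affine(l, u, W1, b1))
l, u = interval_affine(l, u, W2, b2)

# Nonlinear specification (illustrative): the predicted state's "energy"
# ||y||^2 must not exceed the input energy plus a tolerance.
e_lo, e_hi = interval_square_sum(l, u)
budget = float(x0 @ x0) + 0.1
print("certified" if e_hi <= budget
      else "not certified (bound too loose or specification violated)")
```

The point of the sketch is the structure of the argument: a sound outer bound on the network output, followed by a relaxation of the nonlinear specification evaluated on that bound, yields a certificate whenever the relaxed bound already satisfies the specification. The paper's convex-relaxable specifications formalize this with tighter convex relaxations than the crude interval reasoning used here.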