

In-Person Poster presentation / poster accept

Learning ReLU networks to high uniform accuracy is intractable

Julius Berner · Philipp Grohs · Felix Voigtlaender

MH1-2-3-4 #161

Keywords: [ Theory ] [ Teacher-Student Learning ] [ Hardness Results ] [ Sample Complexity ] [ ReLU Networks ] [ Learning Theory ]


Abstract:

Statistical learning theory provides bounds on the number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class. This accuracy is typically measured in terms of a generalization error, that is, an expected value of a given loss function. However, for several applications --- for example in a security-critical context or for problems in the computational sciences --- accuracy in this sense is not sufficient. In such cases, one would like guarantees of high accuracy on every input value, that is, with respect to the uniform norm. In this paper we precisely quantify the number of training samples needed for any conceivable training algorithm to guarantee a given uniform accuracy on any learning problem formulated over target classes containing (or consisting of) ReLU neural networks of a prescribed architecture. We prove that, under very general assumptions, the minimal number of training samples for this task scales exponentially both in the depth and the input dimension of the network architecture.
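To make the distinction in the abstract concrete, the following display is an illustrative sketch with notation of our choosing (not taken verbatim from the paper): the generalization error averages a loss over the data distribution, whereas uniform accuracy controls the worst-case deviation over the whole domain.

\[
\mathcal{E}_{\mu}(\hat f) \;=\; \mathbb{E}_{x \sim \mu}\!\left[\,\ell\big(\hat f(x),\, f(x)\big)\right]
\qquad \text{versus} \qquad
\big\|\hat f - f\big\|_{L^\infty} \;=\; \sup_{x \in [0,1]^d} \big|\hat f(x) - f(x)\big|.
\]

A small generalization error only bounds the loss on average over inputs drawn from \( \mu \), while a small uniform error guarantees accuracy at every input. The paper's result states that any algorithm guaranteeing the latter, for every target in a class of ReLU networks of a prescribed architecture, requires a number of samples growing exponentially in the network depth and the input dimension \( d \); see the paper for the precise statement and constants.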
