Poster

The inductive bias of ReLU networks on orthogonally separable data

Mary Phuong · Christoph H Lampert

Keywords: [ gradient descent ] [ inductive bias ] [ implicit bias ] [ ReLU networks ] [ max-margin ] [ extremal sector ]


Abstract: We study the inductive bias of two-layer ReLU networks trained by gradient flow. We identify a class of easy-to-learn ('orthogonally separable') datasets, and characterise the solution that ReLU networks trained on such datasets converge to. Irrespective of network width, the solution turns out to be a combination of two max-margin classifiers: one corresponding to the positive data subset and one corresponding to the negative data subset. The proof is based on the recently introduced concept of extremal sectors, for which we prove a number of properties in the context of orthogonal separability. In particular, we prove that the activation patterns become stationary from some time $T$ onwards, which enables a reduction of the ReLU network to an ensemble of linear subnetworks.
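To make the setting concrete, below is a minimal, self-contained sketch (not the authors' code). It trains a two-layer ReLU network on a synthetic orthogonally separable dataset, using gradient descent with a small step size as a stand-in for gradient flow. The dataset construction, network width, learning rate, and step count are all illustrative assumptions, as is the definition of orthogonal separability used in the comments (same-label points have positive inner products, opposite-label points have non-positive ones).

```python
# Illustrative sketch: two-layer ReLU network on orthogonally separable data.
# Gradient descent with a small step size approximates gradient flow.
import numpy as np

rng = np.random.default_rng(0)

def make_orthogonally_separable(n=50, spread=0.3):
    # Positives clustered around +e1, negatives around -e1, inside narrow cones,
    # so same-label inner products are > 0 and cross-label ones are < 0.
    pos = np.stack([np.ones(n), spread * rng.uniform(-1, 1, n)], axis=1)
    neg = np.stack([-np.ones(n), spread * rng.uniform(-1, 1, n)], axis=1)
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(n), -np.ones(n)])
    return X, y

X, y = make_orthogonally_separable()
N = len(y)

# Two-layer ReLU network f(x) = sum_k a_k * relu(w_k . x), small initialisation.
width = 64
W = 0.01 * rng.standard_normal((width, X.shape[1]))
a = 0.01 * rng.standard_normal(width)

def forward(X, W, a):
    return np.maximum(X @ W.T, 0.0) @ a

# Logistic loss log(1 + exp(-y f)); its derivative w.r.t. f is -y / (1 + exp(y f)).
lr = 0.05
for step in range(20000):
    H = np.maximum(X @ W.T, 0.0)               # hidden activations, (N, width)
    f = H @ a                                   # network outputs, (N,)
    g = -y / (1.0 + np.exp(np.clip(y * f, -30, 30)))
    grad_a = H.T @ g / N
    mask = (X @ W.T > 0).astype(float)          # ReLU activation pattern
    grad_W = ((g[:, None] * mask) * a).T @ X / N
    a -= lr * grad_a
    W -= lr * grad_W

print("training accuracy:", np.mean(np.sign(forward(X, W, a)) == y))

# Sanity check suggested by the result: aggregating neurons by output sign
# should recover two linear classifiers. For this toy geometry the max-margin
# direction of the positive subset is roughly +e1 (and -e1 for the negatives).
w_pos = (a[a > 0][:, None] * W[a > 0]).sum(axis=0)
print("positive-part direction:", np.round(w_pos / np.linalg.norm(w_pos), 3))
```

In this toy geometry the max-margin classifier of the positive subset points along +e1 and that of the negative subset along -e1, so, if the paper's characterisation applies, the aggregated positive-output and negative-output neuron directions should align with ±e1 regardless of the chosen width. The activation-pattern stationarity mentioned in the abstract can likewise be observed here by checking that `mask` stops changing late in training.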
