Poster
Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse
Arthur Jacot · Peter Súkeník · Zihan Wang · Marco Mondelli
Hall 3 + Hall 2B #348
Oral presentation: Oral Session 1E
Wed 23 Apr 7:30 p.m. – 9 p.m. PDT
Thu 24 Apr midnight – 2:30 a.m. PDT
Abstract:
Deep neural networks (DNNs) at convergence consistently represent the training data in the last layer via a geometric structure referred to as neural collapse. This empirical evidence has spurred a line of theoretical research aimed at proving the emergence of neural collapse, mostly focusing on the unconstrained features model. Here, the features of the penultimate layer are free variables, which makes the model data-agnostic and calls into question its ability to capture DNN training. Our work addresses this issue, moving away from unconstrained features and studying DNNs that end with at least two linear layers. We first prove generic guarantees on neural collapse that assume \emph{(i)} low training error and balancedness of the linear layers (for within-class variability collapse), and \emph{(ii)} bounded conditioning of the features before the linear part (for orthogonality of class-means, and their alignment with weight matrices). The balancedness refers to the fact that $W_{\ell+1}^\top W_{\ell+1} \approx W_\ell W_\ell^\top$ for any pair of consecutive weight matrices $W_\ell, W_{\ell+1}$ of the linear part, and the bounded conditioning requires a well-behaved ratio between the largest and smallest non-zero singular values of the features. We then show that such assumptions hold for gradient descent training with weight decay: \emph{(i)} for networks with a wide first layer, we prove low training error and balancedness, and \emph{(ii)} for solutions that are either nearly optimal or stable under large learning rates, we additionally prove the bounded conditioning. Taken together, our results are the first to show neural collapse in the end-to-end training of DNNs.
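The quantities in the abstract are easy to probe numerically. The sketch below is illustrative and not from the paper: it shows, under the assumption that one has extracted the weight matrices of the final linear part, the penultimate-layer features, and the class labels from one's own trained model, how the balancedness gap, the feature conditioning, and an NC1-style within-class variability ratio might be measured. All arrays here are random placeholders.

```python
# Illustrative sketch (not from the paper): measuring the three quantities
# discussed in the abstract on placeholder data.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins; in practice these come from a trained network.
# W_l, W_lp1: two consecutive weight matrices of the final linear part.
# H: penultimate features (n_samples x dim); y: class labels.
W_l   = rng.standard_normal((64, 128))
W_lp1 = rng.standard_normal((10, 64))
H     = rng.standard_normal((500, 128))
y     = rng.integers(0, 10, size=500)

def balancedness_gap(W_next, W_prev):
    """Frobenius norm of W_next^T W_next - W_prev W_prev^T (zero iff exactly balanced)."""
    return np.linalg.norm(W_next.T @ W_next - W_prev @ W_prev.T)

def feature_condition_number(H, tol=1e-8):
    """Ratio of the largest to the smallest non-zero singular value of the features."""
    s = np.linalg.svd(H, compute_uv=False)   # singular values in descending order
    s = s[s > tol * s[0]]                    # discard numerically zero singular values
    return s[0] / s[-1]

def within_class_variability(H, y):
    """NC1-style surrogate: trace of within-class scatter over between-class scatter."""
    mu_global = H.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        mu_c = Hc.mean(axis=0)
        sw += ((Hc - mu_c) ** 2).sum()
        sb += len(Hc) * ((mu_c - mu_global) ** 2).sum()
    return sw / sb

print("balancedness gap :", balancedness_gap(W_lp1, W_l))
print("feature cond. num:", feature_condition_number(H))
print("NC1 (Sw / Sb)    :", within_class_variability(H, y))
```

On random placeholder data these values are large; for a network exhibiting neural collapse one would expect the balancedness gap and the NC1 ratio to be close to zero, with the feature condition number remaining bounded.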