Neural Networks as Kernel Learners: The Silent Alignment Effect

Alexander Atanasov · Blake Bordelon · Cengiz Pehlevan

Keywords: [ feature learning ] [ neural tangent kernel ]

Abstract


Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich, feature-learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen, due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolve in eigenstructure while the network outputs are still small and before the loss appreciably decreases, and grow only in overall scale afterwards. We show that such an effect takes place in homogeneous neural networks with small initialization and whitened data, and we provide an analytical treatment of this effect in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training and then evolves in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The rate of this early spectral learning of the kernel depends on network depth. We also demonstrate that non-whitened data can weaken the silent alignment effect.
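The abstract's central claim lends itself to a quick numerical check. Below is a minimal NumPy sketch, assuming a two-layer linear network with scalar output; the architecture and all hyperparameters are illustrative choices, not the paper's exact experiments. It trains the network from small initialization on whitened data, tracks the kernel-target alignment and overall scale of the empirical tangent kernel during training, and then verifies that the trained network's predictions agree with kernel regression using the final NTK.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (not taken from the paper)
d, h, n, n_test = 5, 512, 20, 5   # input dim, width, train/test sizes
sigma = 1e-2                      # small initialization scale
lr, steps = 0.02, 5_000

# Whitened training inputs: enforce X^T X = n I_d via an SVD projection
X = rng.standard_normal((n, d))
U, _, Vt = np.linalg.svd(X, full_matrices=False)
X = np.sqrt(n) * (U @ Vt)

beta = rng.standard_normal(d) / np.sqrt(d)   # linear teacher
y = X @ beta
X_test = rng.standard_normal((n_test, d))

# Two-layer linear network f(x) = a . (W x), initialized small
W = sigma * rng.standard_normal((h, d)) / np.sqrt(d)
a = sigma * rng.standard_normal(h) / np.sqrt(h)

def ntk(Xa, Xb, W, a):
    # Empirical tangent kernel: grad_a f = W x and grad_W f = a x^T, so
    # K(x, x') = (W x) . (W x') + (a . a) (x . x')
    return (Xa @ W.T) @ (Xb @ W.T).T + (a @ a) * (Xa @ Xb.T)

def alignment(K, y):
    # Cosine similarity between K and the target kernel y y^T
    return (y @ K @ y) / (np.linalg.norm(K) * (y @ y))

for t in range(steps):
    f = (X @ W.T) @ a
    if t % 500 == 0:
        K = ntk(X, X, W, a)
        print(f"step {t:5d}  loss {0.5 * np.mean((f - y) ** 2):.3e}  "
              f"alignment {alignment(K, y):.3f}  scale {np.linalg.norm(K):.3e}")
    r = (f - y) / n                  # residual of the MSE loss
    grad_a = (X @ W.T).T @ r         # = W X^T r
    grad_W = np.outer(a, r @ X)      # = a (X^T r)^T
    a -= lr * grad_a
    W -= lr * grad_W

# Silent alignment check: since the initial function is ~0 at small init,
# the trained net should match kernel regression with its *final* NTK.
K_train = ntk(X, X, W, a)
K_cross = ntk(X_test, X, W, a)
f_net = (X_test @ W.T) @ a
f_krr = K_cross @ np.linalg.pinv(K_train) @ y   # pinv: linear-net NTK is low rank
print("max |network - kernel regression| on test:", np.max(np.abs(f_net - f_krr)))
```

In runs of this sketch one should see the alignment rise during the early phase, while the loss is still near its initial value and the kernel norm remains small; the kernel's overall scale grows only afterwards, and the final discrepancy between the network and the kernel regression solution should be small.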
