Poster
Learning Hierarchical Polynomials of Multiple Nonlinear Features
Hengyu Fu · Zihao Wang · Eshaan Nichani · Jason Lee
Hall 3 + Hall 2B #340
Fri 25 Apr midnight PDT — 2:30 a.m. PDT
Abstract:
In deep learning theory, a critical question is to understand how neural networks learn hierarchical features. In this work, we study the learning of hierarchical polynomials of multiple nonlinear features using three-layer neural networks. We examine a broad class of functions of the form f⋆ = g⋆∘p, where p: R^d → R^r represents multiple quadratic features with r ≪ d and g⋆: R^r → R is a polynomial of degree p. This can be viewed as a nonlinear generalization of the multi-index model, and also an extension of prior work on nonlinear feature learning that focused only on a single feature (i.e., r = 1). Our primary contribution shows that a three-layer neural network trained via layerwise gradient descent suffices for
- complete recovery of the space spanned by the nonlinear features, and
- efficient learning of the target function f⋆ = g⋆∘p, or transfer learning of f = g∘p with a different link function,
within Õ(d^4) samples and polynomial time. For such hierarchical targets, our result substantially improves on the sample complexity Θ(d^{2p}) of kernel methods, demonstrating the power of efficient feature learning. We highlight that our results rely on novel techniques and thereby go beyond all prior settings, such as single-index and multi-index models as well as models depending on only one nonlinear feature, contributing to a more comprehensive understanding of feature learning in deep learning.
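As an illustration of the function class above, the following sketch constructs a synthetic target f⋆ = g⋆∘p with r quadratic features in dimension d. The specific choices here (random symmetric matrices A_k for the features, a particular degree-2 link g⋆) are hypothetical and only meant to show the hierarchical structure, not the exact setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 20, 3  # ambient dimension d, number of features r << d

# Hypothetical quadratic features p_k(x) = x^T A_k x with random symmetric A_k.
A = rng.standard_normal((r, d, d))
A = (A + A.transpose(0, 2, 1)) / 2  # symmetrize each A_k

def features(x):
    """p: R^d -> R^r, the r quadratic features of the input x."""
    return np.einsum('i,kij,j->k', x, A, x)

def link(z):
    """g*: R^r -> R, a degree-2 polynomial link (an illustrative choice)."""
    return z[0] ** 2 + z[1] * z[2] - z[0]

def f_star(x):
    """Hierarchical target f* = g* o p: a degree-4 polynomial in x."""
    return link(features(x))

x = rng.standard_normal(d)
y = f_star(x)
```

Since each feature is quadratic in x and the link has degree 2, f⋆ has total degree 4 in x; more generally, a degree-p link over quadratic features yields total degree 2p, which is what drives the Θ(d^{2p}) kernel sample complexity quoted above.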