Poster
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot · Seok Hoan Choi · Yuxiao Wen
Hall 3 + Hall 2B #352
Thu 24 Apr, midnight – 2:30 a.m. PDT
Abstract:
We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_1$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for compositionality, and the $F_1$-norm (or the related Barron norm) for large width adaptivity. We show that the global minimizer of the regularized loss of DNNs can fit, for example, the composition of two functions $f^* = h \circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the quotient map of the symmetries of $f^*$), so that $h$ can be learned in spite of its low regularity. The measure of regularity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_1$-norm. We compute scaling laws empirically and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory.
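For context, a sketch of the two central objects in the abstract; the exact normalization of the $F_1$-norm below is a standard one from the shallow-network literature and is an assumption here, not a quotation from the paper:

```latex
% Compositional target: a low-regularity outer map h applied to a
% smooth, dimension-reducing inner map g (e.g. a quotient map of symmetries).
\[
  f^\ast(x) \;=\; h\bigl(g(x)\bigr), \qquad
  g : \mathbb{R}^{d} \to \mathbb{R}^{k}, \quad k \ll d .
\]

% F_1-norm (variation norm): the minimal total outer-weight mass over all
% shallow-network representations of f; finiteness gives width adaptivity.
\[
  \|f\|_{F_1} \;=\; \inf \Bigl\{ \sum_{j} |a_j| \;:\;
      f(x) = \sum_{j} a_j\, \sigma\!\bigl(w_j^{\top} x + b_j\bigr),
      \ \|w_j\|_2 \le 1 \Bigr\}.
\]
```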
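And a minimal sketch of the kind of scaling-law experiment the abstract describes, assuming a synthetic radial target in which $g$ is the quotient map of the rotation symmetries of $f^*$ and $h$ has kinks (low Sobolev regularity); the architecture, optimizer, and all hyperparameters are illustrative assumptions, not the authors' setup:

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 16  # input dimension; g reduces it to a 1-dimensional feature

def g(x):
    # Smooth, dimension-reducing inner map: the radius is the quotient
    # map of the rotation symmetries O(d) acting on the input.
    return x.norm(dim=1, keepdim=True)

def h(z):
    # Low-regularity outer map: the |sin| kinks limit Sobolev smoothness.
    return torch.abs(torch.sin(3 * z))

def f_star(x):
    return h(g(x))

def train(n, width=256, depth=4, wd=1e-4, steps=3000):
    """Fit a weight-decay-regularized DNN to n samples of f*; return test MSE."""
    xtr, xte = torch.randn(n, d), torch.randn(4096, d)
    ytr, yte = f_star(xtr), f_star(xte)
    layers, dim = [], d
    for _ in range(depth - 1):
        layers += [nn.Linear(dim, width), nn.ReLU()]
        dim = width
    layers.append(nn.Linear(dim, 1))
    net = nn.Sequential(*layers)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=wd)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(xtr) - ytr) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((net(xte) - yte) ** 2).mean().item()

# Empirical scaling law: test error vs. sample size on a log-log grid.
ns = [256, 512, 1024, 2048, 4096]
errs = [train(n) for n in ns]
slope = np.polyfit(np.log(ns), np.log(errs), 1)[0]
print(dict(zip(ns, errs)))
print(f"estimated scaling exponent: {slope:.2f}")  # test error ~ n^slope
```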
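Sweeping the roughness of $h$ against the regularity of $g$ in a setup like this is how one would probe the phase transition the abstract mentions: the measured exponent should track whichever of the two components is harder to learn.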