

Virtual presentation / poster accept

Approximation and non-parametric estimation of functions over high-dimensional spheres via deep ReLU networks

Namjoon Suh · Tian-Yi Zhou · Xiaoming Huo

Keywords: [ Approximation Theory ] [ High-dimensional sphere ] [ Deep ReLU Neural networks ] [ Asymptotic ] [ non-parametric regression ] [ Deep Learning and representational learning ]


Abstract: We develop a new approximation and estimation analysis of deep feed-forward neural networks (FNNs) with the Rectified Linear Unit (ReLU) activation. The functions of interest for the approximation and estimation are assumed to be from Sobolev spaces defined over the $d$-dimensional unit sphere with smoothness index $r>0$. In the regime where $r$ is of constant order (i.e., $r=O(1)$), we show that at most $d^d$ active parameters are required to achieve a $d^{-C}$ approximation rate for some constant $C>0$. In contrast, in the regime where the index $r$ grows in the order of $d$ (i.e., $r=O(d)$) asymptotically, we prove that the approximation error decays at the rate $d^{-d^{\beta}}$ with $0<\beta<1$, up to a constant factor independent of $d$, and that the number of active parameters required for the approximation increases only polynomially in $d$ as $d\to\infty$. In addition, we show that the bound on the excess risk has a $d^d$ factor when $r=O(1)$, whereas it has a $d^{O(1)}$ factor when $r=O(d)$. We emphasize our findings by making comparisons to the results on approximation and estimation errors of deep ReLU FNNs when the functions are from Sobolev spaces defined over the $d$-dimensional cube. Here, we show that under the current state-of-the-art results, the $d^d$ factor remains in both the approximation and estimation errors, regardless of the order of $r$.
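Schematically, the two regimes described above can be summarized as follows (constants, logarithmic factors, and the precise norms and network classes are suppressed; the notation $f_{\mathrm{NN}}$ for a ReLU FNN approximant is ours, and the exact statements are in the paper):

$$
\inf_{f_{\mathrm{NN}}} \|f - f_{\mathrm{NN}}\| \;\lesssim\;
\begin{cases}
d^{-C}, & r = O(1), \text{ using at most } d^{d} \text{ active parameters},\\[2pt]
d^{-d^{\beta}},\ 0<\beta<1, & r = O(d), \text{ using } \mathrm{poly}(d) \text{ active parameters}.
\end{cases}
$$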
