Poster
Minimum width for universal approximation using ReLU networks on compact domain
Namjun Kim · Chanho Min · Sejun Park
Halle B #229
Abstract:
It has been shown that deep neural networks of a large enough width are universal approximators but they are not if the width is too small. There were several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them found the exact values. In this work, we show that the minimum width for $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb{R}^{d_y}$ is exactly $\max\{d_x, d_y, 2\}$ if an activation function is ReLU-Like (e.g., ReLU, GELU, Softplus). Compared to the known result for ReLU networks, $w_{\min} = \max\{d_x+1, d_y\}$ when the domain is $\mathbb{R}^{d_x}$, our result first shows that approximation on a compact domain requires smaller width than on $\mathbb{R}^{d_x}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min} \ge d_y + 1$ if $d_x < d_y \le 2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximations for general activation functions and input/output dimensions.
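For quick reference, the bounds stated in the abstract can be collected as follows; this is a summary sketch in the abstract's notation, where $w_{\min}$ denotes the minimum width enabling universal approximation (requires the amsmath package).

```latex
% Summary of the width bounds stated in the abstract.
% w_min = minimum width enabling universal approximation.
\begin{align*}
  &\text{$L^p$ approximation, ReLU-Like activation, domain } [0,1]^{d_x}:
    && w_{\min} = \max\{d_x,\, d_y,\, 2\},\\
  &\text{$L^p$ approximation, ReLU activation, domain } \mathbb{R}^{d_x}:
    && w_{\min} = \max\{d_x + 1,\, d_y\},\\
  &\text{uniform approximation, general activation (incl.\ ReLU), } d_x < d_y \le 2d_x:
    && w_{\min} \ge d_y + 1.
\end{align*}
```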