

Poster

Minimum width for universal approximation using ReLU networks on compact domain

Namjun Kim · Chanho Min · Sejun Park

Halle B #229

Abstract: It has been shown that deep neural networks of sufficiently large width are universal approximators, but they are not if the width is too small. There have been several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them found the exact values. In this work, we show that the minimum width for $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb{R}^{d_y}$ is exactly $\max\{d_x, d_y, 2\}$ if the activation function is ReLU-Like (e.g., ReLU, GELU, Softplus). Compared to the known result for ReLU networks, $w_{\min} = \max\{d_x+1, d_y\}$ when the domain is $\mathbb{R}^{d_x}$, our result is the first to show that approximation on a compact domain requires a smaller width than approximation on $\mathbb{R}^{d_x}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min} \ge d_y + 1$ if $d_x < d_y \le 2d_x$.
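To make the width formula concrete, below is a minimal Python/PyTorch sketch (the library choice and the helper names `min_width` and `narrow_relu_mlp` are illustrative, not from the paper) that computes $w_{\min} = \max\{d_x, d_y, 2\}$ and builds a deep ReLU network with exactly that hidden width. The theorem is an existence statement about approximators at this width for sufficient depth; it does not prescribe any training procedure.

```python
import torch
import torch.nn as nn

def min_width(d_x: int, d_y: int) -> int:
    """Minimum width for L^p universal approximation on [0,1]^{d_x}
    with ReLU-Like activations, per the paper's main result:
    w_min = max{d_x, d_y, 2}."""
    return max(d_x, d_y, 2)

def narrow_relu_mlp(d_x: int, d_y: int, depth: int) -> nn.Sequential:
    """Build a deep ReLU network whose hidden width equals w_min.
    For large enough depth, networks of this width can approximate
    any L^p function [0,1]^{d_x} -> R^{d_y} in the L^p norm."""
    w = min_width(d_x, d_y)
    layers = [nn.Linear(d_x, w), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(w, w), nn.ReLU()]
    layers.append(nn.Linear(w, d_y))
    return nn.Sequential(*layers)

# Example: d_x = 3, d_y = 2 gives w_min = max{3, 2, 2} = 3, one less
# than the width max{d_x + 1, d_y} = 4 required on all of R^{d_x}.
net = narrow_relu_mlp(d_x=3, d_y=2, depth=8)
x = torch.rand(16, 3)   # inputs in the compact domain [0,1]^3
print(net(x).shape)     # torch.Size([16, 2])
```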
