Poster

Deep, Skinny Neural Networks are not Universal Approximators

Jesse Johnson

Great Hall BC #19

Keywords: [ neural network ] [ universality ] [ expressivity ]


Abstract:

To choose a neural network architecture that will be effective for a particular modeling problem, one must understand the limitations imposed by each of the potential options. These limitations are typically described in terms of information-theoretic bounds, or by comparing, across architectures, the relative complexity needed to approximate example functions. In this paper, we examine the topological constraints that the architecture of a neural network imposes on the level sets of all the functions it is able to approximate. This approach is novel both in the nature of the limitations and in the fact that they are independent of network depth for a broad family of activation functions.
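The abstract's claim lends itself to a quick empirical check. The sketch below is a loose illustration, not code from the paper: roughly, a network whose hidden layers are no wider than the input dimension is constrained to have unbounded level-set components, so it should struggle to carve out a bounded decision region such as a disk. The dataset, network widths, activation, and scikit-learn training setup are all assumptions chosen for demonstration.

```python
# Illustrative sketch (not from the paper): compare a deep "skinny" network,
# whose hidden layers match the 2-D input width, against a wider network on
# a task whose positive region is bounded (the unit disk).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4000, 2))
y = (np.linalg.norm(X, axis=1) < 1).astype(int)  # label: inside the unit disk

# Skinny network: eight hidden layers, each only as wide as the input.
skinny = MLPClassifier(hidden_layer_sizes=(2,) * 8, activation="relu",
                       max_iter=2000, random_state=0).fit(X, y)

# Wider network: a single hidden layer wider than the input dimension.
wide = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                     max_iter=2000, random_state=0).fit(X, y)

# The skinny model's accuracy often plateaus near the base rate of the
# majority class, while the wider model typically fits the disk closely.
print("skinny accuracy:", skinny.score(X, y))
print("wide accuracy:  ", wide.score(X, y))
```

Exact numbers vary with initialization and training, but the qualitative gap is the point: no amount of added depth at this width lets the skinny model bound a region, which is the depth-independent limitation the paper formalizes.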
