Spotlight
On the Optimal Memorization Power of ReLU Neural Networks
Gal Vardi · Gilad Yehudai · Ohad Shamir
Abstract:
We study the memorization power of feedforward ReLU neural networks. We show that such networks can memorize any N points that satisfy a mild separability assumption using Õ(√N) parameters. Known VC-dimension upper bounds imply that memorizing N samples requires Ω(√N) parameters, and hence our construction is optimal up to logarithmic factors. We also give a generalized construction for networks with depth bounded by 1 ≤ L ≤ √N, for memorizing N samples using Õ(N/L) parameters. This bound is also optimal up to logarithmic factors. Our construction uses weights with large bit complexity. We prove that having such a large bit complexity is both necessary and sufficient for memorization with a sub-linear number of parameters.
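To see why large bit complexity can substitute for parameter count, here is a minimal sketch (not the paper's construction) of the underlying phenomenon: a single high-precision weight can store N binary labels in its binary expansion, and each label can be read back with simple arithmetic. The names `labels` and `read_label` are illustrative, and exact rational arithmetic stands in for the high-precision weights assumed in such constructions.

```python
# Sketch: one weight with N bits of precision memorizes N binary labels.
# This illustrates the bit-complexity idea only; it is not the authors'
# Õ(√N)-parameter ReLU network construction.
from fractions import Fraction

labels = [1, 0, 1, 1, 0, 0, 1, 0]  # N = 8 binary labels to memorize
N = len(labels)

# Encode all labels into one "weight": w = sum_i y_i * 2^{-(i+1)}.
w = sum(Fraction(y, 2 ** (i + 1)) for i, y in enumerate(labels))

def read_label(w, i):
    # Shift the binary expansion of w left by i+1 bits; the parity of
    # the integer part is exactly the i-th stored label.
    return int(w * 2 ** (i + 1)) % 2

assert all(read_label(w, i) == y for i, y in enumerate(labels))
print([read_label(w, i) for i in range(N)])  # recovers `labels`
```

A network realizing this kind of decoding must represent w to N bits, which matches the paper's point that sub-linear parameter counts force, and are enabled by, weights of large bit complexity.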