Spotlight
On the Optimal Memorization Power of ReLU Neural Networks
Gal Vardi · Gilad Yehudai · Ohad Shamir
Abstract:
We study the memorization power of feedforward ReLU neural networks. We show that such networks can memorize any $N$ points that satisfy a mild separability assumption using $\tilde{O}(\sqrt{N})$ parameters. Known VC-dimension upper bounds imply that memorizing $N$ samples requires $\Omega(\sqrt{N})$ parameters, and hence our construction is optimal up to logarithmic factors. We also give a generalized construction for networks with depth bounded by $1 \le L \le \sqrt{N}$, for memorizing $N$ samples using $\tilde{O}(N/L)$ parameters. This bound is also optimal up to logarithmic factors. Our construction uses weights with large bit complexity. We prove that having such a large bit complexity is both necessary and sufficient for memorization with a sub-linear number of parameters.
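For context on what "memorization" means here, below is a minimal sketch (not the paper's construction) of the classical baseline: a one-hidden-layer ReLU network with $N$ hidden units, i.e. $O(N)$ parameters, that exactly fits $N$ labeled points whose projections onto a random direction are distinct. The paper's contribution is to reduce this to roughly $\sqrt{N}$ parameters using depth and high-bit-complexity weights; the function and variable names here are illustrative only.

```python
# Hypothetical sketch, assuming the classical O(N)-parameter baseline:
# f(x) = sum_j c_j * relu(a.x - b_j) with one hidden ReLU unit per sample.
# This is NOT the paper's sqrt(N)-parameter construction.
import numpy as np

def memorize(X, y, rng):
    """Fit a width-N ReLU network exactly interpolating (X, y).

    Assumes the random projection a separates the points, which holds
    almost surely when the N points are distinct.
    """
    N, d = X.shape
    a = rng.normal(size=d)                 # random projection direction
    t = X @ a                              # 1-D projections of the points
    t_sorted = np.sort(t)
    eps = np.diff(t_sorted).min() / 2      # offset smaller than the min gap
    b = t_sorted - eps                     # one ReLU kink just below each point
    H = np.maximum(t[:, None] - b[None, :], 0.0)  # (N, N) hidden activations
    # In sorted row order, H is lower triangular with positive diagonal,
    # hence invertible, so the output weights c solve the system exactly.
    c = np.linalg.solve(H, y)
    return lambda Z: np.maximum(Z @ a[:, None] - b[None, :], 0.0) @ c

# Usage: memorize 50 random points and check that all labels are recovered.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)
f = memorize(X, y, rng)
print(np.max(np.abs(f(X) - y)))            # should be ~0 up to floating point
```

This baseline uses one parameterized unit per sample; the point of the paper's result is that, under the separability assumption, a deeper network can achieve the same exact fit with only $\tilde{O}(\sqrt{N})$ parameters, matching the VC-dimension lower bound up to logarithmic factors.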