

Poster

On the Optimal Memorization Capacity of Transformers

Tokio Kajitsuka · Issei Sato

Hall 3 + Hall 2B #433
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Recent research in the field of machine learning has increasingly focused on the memorization capacity of Transformers, but how efficient they are is not yet well understood. We demonstrate that Transformers can memorize labels with $\tilde{O}(\sqrt{N})$ parameters in a next-token prediction setting for $N$ input sequences of length $n$, which is proved to be optimal up to logarithmic factors. This indicates that Transformers can efficiently perform memorization with little influence from the input length $n$, owing to the benefit of parameter sharing. We also analyze the memorization capacity in the sequence-to-sequence setting, and find that $\tilde{O}(\sqrt{nN})$ parameters are not only sufficient, but also necessary, at least for Transformers with hardmax. These results suggest that while self-attention mechanisms can efficiently identify input sequences, the feed-forward network becomes a bottleneck when associating a label with each token.
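To make the scaling concrete, here is a rough numerical reading of the two bounds in LaTeX; the specific values of $N$ and $n$ are illustrative assumptions, not figures from the paper, and logarithmic factors are ignored throughout.

\[
\underbrace{\tilde{O}(\sqrt{N})}_{\text{next-token prediction}}
\qquad \text{vs.} \qquad
\underbrace{\tilde{O}(\sqrt{nN})}_{\text{sequence-to-sequence}}
\]
\[
\text{e.g. } N = 10^{6},\; n = 10^{2}:\quad
\sqrt{N} = 10^{3},\qquad
\sqrt{nN} = 10^{4},
\]
so the sequence-to-sequence setting costs an extra factor of $\sqrt{n}$ in parameters, while the next-token bound is essentially independent of the input length $n$.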
