Oral in Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

Contributed Talk 3: What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks

Xingwu Chen · Difan Zou


Abstract:

We study the capabilities of the transformer architecture with varying depth. Specifically, we design a novel set of sequence learning tasks to systematically evaluate and understand how the depth of a transformer affects its ability to perform memorization, reasoning, generalization, and contextual generalization. We show that a transformer with only one attention layer can excel in memorization but falls short in the other tasks. We then show that exhibiting reasoning and generalization ability requires the transformer to have at least two attention layers, while contextual generalization may necessitate three attention layers. Additionally, we identify a class of simple operations that a single attention layer can execute, and show that complex tasks can be approached as combinations of these simple operations and thus can be resolved by stacking multiple attention layers. This sheds light on studying more practical and complex tasks beyond our design. Numerical experiments corroborate our theoretical findings.
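To make the "stacking" idea concrete, the sketch below shows a minimal single-head softmax attention layer in NumPy; composing complex tasks then corresponds to applying such layers in sequence. This is an illustration only, not the paper's construction — the weight names (`Wq`, `Wk`, `Wv`) and the random inputs are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(X, Wq, Wk, Wv):
    """One attention layer: each output token is a convex combination
    of value vectors, weighted by softmax(Q K^T / sqrt(d))."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return scores @ V

# Stacking two layers = composing two simple attention operations.
rng = np.random.default_rng(0)
n, d = 5, 4                      # 5 tokens, dimension 4 (arbitrary)
X = rng.normal(size=(n, d))
params = [tuple(rng.normal(size=(d, d)) for _ in range(3)) for _ in range(2)]

H = X
for Wq, Wk, Wv in params:
    H = attention_layer(H, Wq, Wk, Wv)

print(H.shape)  # each layer preserves the (tokens, dim) shape
```

Each layer maps a sequence of token vectors to another sequence of the same shape, which is what makes depth-wise composition of simple operations possible.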