
Poster

Transformer Learns Optimal Variable Selection in Group-Sparse Classification

Chenyang Zhang · Xuran Meng · Yuan Cao

Hall 3 + Hall 2B #147
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Transformers have demonstrated remarkable success across various applications. However, this success has not been well understood in theory. In this work, we present a case study of how transformers can be trained to learn a classic statistical model with "group sparsity", where the input variables form multiple groups and the label depends only on the variables from one of the groups. We theoretically demonstrate that a one-layer transformer trained by gradient descent can correctly leverage the attention mechanism to select variables, disregarding irrelevant ones and focusing on those beneficial for classification. We also demonstrate that a well-pretrained one-layer transformer can be adapted to new downstream tasks to achieve good prediction accuracy with a limited number of samples. Our study sheds light on how transformers effectively learn structured data.
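To make the setting concrete, below is a minimal illustrative sketch of the group-sparse setup the abstract describes: inputs split into G groups of p variables, a label depending only on one group g_star, and a heavily simplified one-layer attention model trained by gradient descent. All names, dimensions, and the specific parameterization (per-group attention logits plus a linear value head) are assumptions for illustration, not the paper's actual construction or analysis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic group-sparse data (illustrative dimensions).
# Each input has G groups of p variables; only group g_star carries signal.
G, p, n = 8, 4, 512
g_star = 2
w = torch.randn(p)                        # ground-truth direction within the relevant group
X = torch.randn(n, G, p)                  # each sample is a sequence of G group-tokens
y = (X[:, g_star, :] @ w > 0).float()     # label depends only on group g_star

class OneLayerAttention(nn.Module):
    """Simplified stand-in for a one-layer transformer: softmax attention
    over groups followed by a linear head on the attended token."""
    def __init__(self, G, p):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(G))      # attention logit per group
        self.value = nn.Linear(p, 1, bias=False)     # linear classifier head

    def forward(self, X):
        attn = torch.softmax(self.pos, dim=-1)       # (G,) attention over groups
        pooled = torch.einsum('g,ngp->np', attn, X)  # attention-weighted token
        return self.value(pooled).squeeze(-1)        # classification logit

model = OneLayerAttention(G, p)
opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# After training, attention mass should concentrate on the relevant group,
# mirroring the variable-selection behavior the abstract describes.
with torch.no_grad():
    print(torch.softmax(model.pos, dim=-1))  # peak expected at index g_star
```

In this toy version, gradient descent increases the attention logit of the signal-carrying group because only that group's variables reduce the classification loss, so the irrelevant groups are progressively down-weighted.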
