
In-Person Poster presentation / top 25% paper

A Primal-Dual Framework for Transformers and Neural Networks

Tan Nguyen · Tam Nguyen · Nhat Ho · Andrea Bertozzi · Richard Baraniuk · Stanley J Osher

MH1-2-3-4 #79

Keywords: [ neural network ] [ transformer ] [ primal ] [ dual ] [ support vector regression ] [ attention ] [ Deep Learning and representational learning ]


Self-attention is key to the remarkable success of transformers in sequence modeling tasks, including many applications in natural language processing and computer vision. Like neural network layers, these attention mechanisms are often developed by heuristics and experience. To provide a principled framework for constructing attention layers in transformers, we show that self-attention corresponds to the support vector expansion derived from a support vector regression (SVR) problem, whose primal formulation has the form of a neural network layer. Using our framework, we derive popular attention layers used in practice and propose two new attention mechanisms: 1) Batch Normalized Attention (Attention-BN), derived from the batch normalization layer, and 2) Attention with Scaled Head (Attention-SH), derived from using less training data to fit the SVR model. We empirically demonstrate the advantages of Attention-BN and Attention-SH in reducing head redundancy, increasing the model's accuracy, and improving the model's efficiency in a variety of practical applications, including image and time-series classification.
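For orientation, here is a minimal sketch (in NumPy, not the authors' code) of standard scaled dot-product self-attention, the object the abstract refers to. The weighted sum of value vectors computed by the softmax attention matrix has the same structural form as a kernel support vector expansion f(x) = Σᵢ aᵢ k(x, xᵢ) + b; the paper's contribution is to make this correspondence precise via a primal-dual SVR derivation, which is not reproduced here.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Standard scaled dot-product self-attention (single head).

    Each output row is a weighted sum of value vectors, with weights given
    by a softmax over query-key similarities. This weighted-sum form is what
    the paper identifies with a support vector expansion from an SVR problem.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # attention matrix
    return A @ V

# Toy usage: 5 tokens with 8-dimensional embeddings, head dimension 4.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```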
