Poster
Kolmogorov-Arnold Transformer
Xingyi Yang · Xinchao Wang
Hall 3 + Hall 2B #224
Transformers stand as the cornerstone of modern deep learning. Traditionally, these models rely on multi-layer perceptron (MLP) layers to mix the information between channels. In this paper, we introduce the Kolmogorov–Arnold Transformer (KAT), a novel architecture that replaces MLP layers with Kolmogorov–Arnold Network (KAN) layers to enhance the expressiveness and performance of the model. Integrating KANs into transformers, however, is no easy feat, especially when scaled up. Specifically, we identify three key challenges: (C1) Base function. The standard B-spline function used in KANs is not optimized for parallel computing on modern hardware, resulting in slower inference speeds. (C2) Parameter and computation inefficiency. KAN requires a unique learnable function for each input–output pair, making the parameter count and computation extremely large. (C3) Weight initialization. The initialization of weights in KANs is particularly challenging due to their learnable activation functions, which are critical for achieving convergence in deep neural networks. To overcome these challenges, we propose three key solutions: (S1) Rational basis. We replace B-spline functions with rational functions to improve compatibility with modern GPUs; by implementing this in CUDA, we achieve faster computation. (S2) Group KAN. We share activation weights across groups of neurons to reduce the computational load without sacrificing performance. (S3) Variance-preserving initialization. We carefully initialize the activation weights so that the activation variance is maintained across layers. With these designs, KAT scales effectively and readily outperforms traditional MLP-based transformers. We demonstrate the advantages of KAT across various tasks, including image recognition, object detection, and semantic segmentation, where it consistently enhances performance over standard transformer architectures of different model sizes.
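To make (S1) and (S2) concrete, below is a minimal PyTorch sketch of a grouped rational-activation layer in the spirit the abstract describes. The class name GroupRationalKAN, the polynomial degrees, and the initialization scale are illustrative assumptions, not the authors' released implementation (which the abstract says is written in CUDA).

```python
import torch
import torch.nn as nn

def poly(x, coeffs):
    # Horner evaluation of a polynomial; coeffs are ordered from the
    # constant term upward: coeffs[0] + coeffs[1]*x + ...
    out = torch.zeros_like(x)
    for c in coeffs.flip(0):
        out = out * x + c
    return out

class GroupRationalKAN(nn.Module):
    """Illustrative grouped rational-activation layer (hypothetical API).

    (S1) Each activation is a rational function P(x)/Q(x), which maps to
    dense GPU arithmetic instead of B-spline evaluation.
    (S2) The rational coefficients are shared by all channels in a group,
    so their count grows with `groups`, not with every input-output pair.
    """
    def __init__(self, dim_in, dim_out, groups=4, p_degree=5, q_degree=4):
        super().__init__()
        assert dim_in % groups == 0
        self.groups = groups
        # One coefficient set per group; the small random scale here is a
        # stand-in for the paper's (S3) variance-preserving initialization.
        self.a = nn.Parameter(0.1 * torch.randn(groups, p_degree + 1))
        self.b = nn.Parameter(0.1 * torch.randn(groups, q_degree))
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        # x: (..., dim_in). Split channels into groups and apply each
        # group's shared rational function elementwise.
        out = []
        for g, chunk in enumerate(x.chunk(self.groups, dim=-1)):
            p = poly(chunk, self.a[g])
            # Q(x) = 1 + |b1*x + ... + bq*x^q| stays positive, so the
            # rational function has no poles on the real line.
            q = 1.0 + (chunk * poly(chunk, self.b[g])).abs()
            out.append(p / q)
        # Linear mixing across channels, as in a transformer channel mixer.
        return self.linear(torch.cat(out, dim=-1))

# Usage on a batch of token embeddings: (batch, tokens, channels).
layer = GroupRationalKAN(dim_in=64, dim_out=64, groups=4)
y = layer(torch.randn(8, 16, 64))
```

Keeping the denominator strictly positive is one common way to make rational activations safe to train; the actual KAT kernels and initialization details are in the paper and its CUDA code.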