

Poster

Sparse Learning for State Space Models on Mobile

Xuan Shen · Hangyu Zheng · Yifan Gong · Zhenglun Kong · Changdi Yang · Zheng Zhan · Yushu Wu · Xue Lin · Yanzhi Wang · Pu Zhao · Wei Niu

Hall 3 + Hall 2B #631
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: Transformer models have been widely investigated across domains for their long-range dependency handling and global contextual awareness, driving popular AI applications such as ChatGPT, Gemini, and Alexa. State Space Models (SSMs) have emerged as strong contenders in sequential modeling, challenging the dominance of Transformers. SSMs incorporate a selective mechanism that allows dynamic parameter adjustment based on the input, enhancing their performance. However, this mechanism also increases computational complexity and bandwidth demands, posing challenges for deployment on resource-constrained mobile devices. To address these challenges without sacrificing the accuracy of the selective mechanism, we propose a sparse learning framework that integrates architecture-aware compiler optimizations. We introduce an end-to-end solution built on Cn4 kernel sparsity, which prunes n elements from every four contiguous weights, and develop a compiler-based acceleration solution to ensure efficient execution of this sparsity pattern on mobile devices. Based on the kernel sparsity, our framework generates optimized sparse models targeting specific sparsity or latency requirements for various model sizes. We further leverage the pruned weights to compensate the remaining weights, enhancing downstream task performance. For practical hardware acceleration, we propose Cn4-specific optimizations combined with a layout transformation elimination strategy. This approach mitigates the inefficiencies arising from fine-grained pruning in linear layers and improves performance across other operations. Experimental results demonstrate that our method achieves superior task performance compared to other semi-structured pruning methods and up to a 7× speedup over the llama.cpp framework on mobile devices.
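To make the Cn4 pattern concrete, below is a minimal sketch of pruning n elements out of every four contiguous weights, assuming a simple magnitude-based selection within each group. The function name `cn4_prune` and the magnitude criterion are illustrative assumptions; the paper's actual selection rule and its weight-compensation step may differ.

```python
import numpy as np

def cn4_prune(weights: np.ndarray, n: int = 2) -> np.ndarray:
    """Zero out the n smallest-magnitude elements in every group of four
    contiguous weights (hypothetical magnitude criterion for illustration)."""
    flat = weights.reshape(-1, 4)                  # groups of 4 contiguous weights
    drop = np.argsort(np.abs(flat), axis=1)[:, :n] # indices of the n smallest |w| per group
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)   # mark pruned positions
    return (flat * mask).reshape(weights.shape)

# Example: a linear-layer weight matrix whose size is a multiple of 4
W = np.random.randn(8, 16).astype(np.float32)
W_sparse = cn4_prune(W, n=2)                       # keeps 2 of every 4 weights (50% sparsity)
```

The semi-structured layout keeps a fixed number of nonzeros per group of four, which is what allows the compiler-side kernels described in the abstract to exploit the sparsity with predictable memory access patterns.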
