Poster

Hymba: A Hybrid-head Architecture for Small Language Models

Xin Dong · Yonggan Fu · Shizhe Diao · Wonmin Byeon · Zijia Chen · Ameya Mahabaleshwarkar · Shih-Yang Liu · Matthijs Van Keirsbilck · Min-Hung Chen · Yoshi Suhara · Yingyan Celine Lin · Jan Kautz · Pavlo Molchanov

Hall 3 + Hall 2B #571
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: We propose Hymba, a family of small language models featuring a hybrid-head parallel architecture that integrates attention mechanisms and state space models (SSMs) within the same layer, offering parallel and complementary processing of the same inputs. In this hybrid-head module, attention heads provide high-resolution recall, while SSM heads facilitate efficient context summarization. Additionally, we introduce learnable meta tokens, which are prepended to prompts to store critical meta information, guiding subsequent tokens and alleviating the “forced-to-attend” burden associated with attention mechanisms. Thanks to the global context summarized by SSMs, the attention heads in our model can be further optimized through cross-layer key-value (KV) sharing and a mix of global and local attention, resulting in a compact cache size without compromising accuracy. Notably, Hymba achieves state-of-the-art performance among small LMs: our Hymba-1.5B-Base model surpasses all sub-2B public models and even outperforms Llama-3.2-3B, achieving 1.32% higher average accuracy, an 11.67× reduction in cache size, and 3.49× higher throughput.
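The abstract describes attention heads and SSM heads processing the same input in parallel within one layer, with learnable meta tokens prepended to the sequence. The sketch below illustrates that layout only; it is not the Hymba implementation. The SSM branch is a toy diagonal linear recurrence standing in for the actual SSM heads, and the module names, dimensions, mean fusion, and number of meta tokens are all assumptions for illustration.

```python
# Minimal sketch of a hybrid-head block: attention and an SSM-style branch
# run in parallel over the same (meta tokens + prompt) sequence, and their
# outputs are fused. All specifics here are illustrative assumptions.
import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Toy per-channel linear recurrence standing in for an SSM head."""

    def __init__(self, dim):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(dim))  # per-channel decay logits
        self.in_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq, dim)
        a = torch.sigmoid(self.decay)               # decay in (0, 1)
        u = self.in_proj(x)
        state = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):                  # sequential scan over tokens
            state = a * state + (1 - a) * u[:, t]
            outs.append(state)
        return self.out_proj(torch.stack(outs, dim=1))


class HybridHeadBlock(nn.Module):
    """Attention and SSM branches process the same input in parallel."""

    def __init__(self, dim, n_heads, n_meta_tokens=8):
        super().__init__()
        self.meta = nn.Parameter(torch.randn(1, n_meta_tokens, dim) * 0.02)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ssm = SimpleSSM(dim)

    def forward(self, x):  # x: (batch, seq, dim)
        # Prepend learnable meta tokens so every token can attend to them.
        meta = self.meta.expand(x.size(0), -1, -1)
        h = self.norm(torch.cat([meta, x], dim=1))
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        ssm_out = self.ssm(h)
        fused = 0.5 * (attn_out + ssm_out)          # simple mean fusion (assumed)
        return x + fused[:, meta.size(1):]          # drop meta positions, residual


if __name__ == "__main__":
    block = HybridHeadBlock(dim=256, n_heads=4)
    tokens = torch.randn(2, 32, 256)
    print(block(tokens).shape)  # torch.Size([2, 32, 256])
```

The key design point conveyed by the abstract is that the two branches are complementary: the attention path retains precise token-level recall while the recurrent path summarizes global context, which in turn allows the attention side to use a smaller KV cache (cross-layer KV sharing, mixed global/local attention) without losing accuracy.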
