

Poster

Scaling Transformers for Low-Bitrate High-Quality Speech Coding

Julian Parker · Anton Smirnov · Jordi Pons · CJ Carr · Zack Zukowski · Zach Evans · Xubo Liu

Hall 3 + Hall 2B #47
[ Project Page ]
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: The tokenization of audio with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally, such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work we show that by applying a large parameter-count transformer architecture to this problem, together with a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bitrates of 400 or 700 bits per second. The trained models strongly outperform existing baselines in both objective and subjective tests.
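For context, Finite Scalar Quantization replaces a learned vector-quantization codebook with per-dimension rounding onto a small fixed grid, so the product of the per-dimension level counts defines an implicit codebook. The sketch below is a minimal, generic PyTorch illustration of that idea, not the authors' implementation; the function name, level counts, and tensor shapes are illustrative assumptions.

```python
import torch

def fsq_quantize(z, levels):
    """Minimal Finite Scalar Quantization (FSQ) sketch.

    z: latent tensor of shape (..., d), with one entry of `levels`
    per latent dimension (odd level counts assumed for simplicity).
    Each dimension is squashed into a bounded range, scaled to its
    number of levels, and rounded to the nearest grid point.
    Gradients pass through via the straight-through estimator.
    """
    levels = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (levels - 1) / 2
    bounded = torch.tanh(z) * half       # squash each dim into (-half, half)
    quantized = torch.round(bounded)     # snap to the integer grid
    # Straight-through estimator: forward pass uses the rounded value,
    # backward pass uses the gradient of `bounded`.
    return bounded + (quantized - bounded).detach()

# Illustrative usage: a 4-dimensional bottleneck with 5 levels per
# dimension yields an implicit codebook of 5**4 = 625 entries.
z = torch.randn(2, 10, 4, requires_grad=True)  # (batch, time, dim)
tokens = fsq_quantize(z, levels=[5, 5, 5, 5])
```

In this scheme the bitrate is set by the token rate and the grid size rather than by a learned codebook, which is what makes the bottleneck flexible: changing the level counts changes the bits per token without retraining a codebook.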
