

Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM

Eliya Nachmani · Alon Levkovitch · Roy Hirsch · Julian Salazar · Chulayuth Asawaroengchai · Soroosh Mariooryad · Ehud Rivlin · RJ Skerry-Ryan · Michele Tadmor Ramanovich

Halle B #57
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT


We present Spectron, a novel approach to adapting pre-trained large language models (LLMs) to perform spoken question answering (QA) and speech continuation. By endowing the LLM with a pre-trained speech encoder, our model can take speech inputs and generate speech outputs. The entire system is trained end-to-end and operates directly on spectrograms, which simplifies our architecture. Key to our approach is a training objective that jointly supervises speech recognition, text continuation, and speech synthesis using only paired speech-text data, enabling a 'cross-modal' chain of thought within a single decoding pass. Our method surpasses existing spoken language models in speaker preservation and semantic coherence. Furthermore, the proposed model improves upon direct initialization in retaining the knowledge of the original LLM, as demonstrated through spoken QA datasets. We release our audio samples and spoken QA dataset via our website.
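The joint objective and single-pass 'cross-modal' decoding described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the function names, separator tokens (`<eot>`, `<speech>`), and loss weights are all hypothetical.

```python
# Hypothetical sketch of Spectron-style supervision (illustrative names only).
# One autoregressive pass is supervised on three targets in order:
# the input transcript (recognition), its text continuation, and the
# continuation's spectrogram frames (synthesis).

def build_target_sequence(transcript_tokens, continuation_tokens, spectrogram_frames):
    """Concatenate the three targets into one decoding sequence.

    The separator tokens are assumed placeholders marking where the
    model switches from recognized text to continued text to speech.
    """
    return (list(transcript_tokens) + ["<eot>"]
            + list(continuation_tokens) + ["<speech>"]
            + list(spectrogram_frames))


def joint_loss(asr_loss, text_loss, spec_loss,
               w_asr=1.0, w_text=1.0, w_spec=1.0):
    """Weighted sum of the three supervision terms.

    The weights are assumed hyperparameters; the abstract does not
    report specific values.
    """
    return w_asr * asr_loss + w_text * text_loss + w_spec * spec_loss
```

The ordering in `build_target_sequence` is what makes the chain of thought 'cross-modal': the model must first produce the text (recognition and continuation) that conditions its own subsequent speech output, all within a single decoding pass.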
