

Poster

No Need to Talk: Asynchronous Mixture of Language Models

Anastasiia Filippova · Angelos Katharopoulos · David Grangier · Ronan Collobert

Hall 3 + Hall 2B #304
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

We introduce SMALLTALK LM, an innovative method for training a mixture of language models in an almost asynchronous manner. Each model of the mixture specializes in distinct parts of the data distribution, without the need for high-bandwidth communication between the nodes training each model. At inference, a lightweight router directs a given sequence to a single expert according to a short prefix. This inference scheme naturally uses only a fraction of the parameters of the overall mixture model. Unlike prior works on asynchronous LLM training, our routing method does not rely on full corpus clustering or access to metadata, making it more suitable for real-world applications. Our experiments on language modeling demonstrate that SMALLTALK LM achieves significantly lower perplexity than dense model baselines for the same total training FLOPs and an almost identical inference cost. Finally, in our downstream evaluations we outperform the dense baseline on 75% of the tasks.
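The prefix-based routing described above can be sketched compactly. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): a small classifier reads only a short prefix of the input, picks a single expert, and generation then touches only that expert's parameters. The names PrefixRouter, generate_with_mixture, and the assumption that each expert exposes a .generate() method are made up for illustration.

```python
import torch
import torch.nn as nn


class PrefixRouter(nn.Module):
    """Illustrative lightweight router: scores a short prefix and selects one expert."""

    def __init__(self, vocab_size: int, hidden_dim: int, num_experts: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_experts)

    def forward(self, prefix_ids: torch.Tensor) -> torch.Tensor:
        # prefix_ids: (batch, prefix_len) -- only the first few tokens of each sequence
        h = self.embed(prefix_ids).mean(dim=1)      # cheap pooled prefix representation
        return self.classifier(h).argmax(dim=-1)    # index of the single expert per sequence


def generate_with_mixture(prefix_ids, router, experts, max_new_tokens=64):
    """Route each sequence to one expert LM, so inference uses only that expert's parameters."""
    expert_idx = router(prefix_ids)
    outputs = []
    for i, idx in enumerate(expert_idx.tolist()):
        expert = experts[idx]  # hypothetical expert LM exposing a .generate() method
        outputs.append(expert.generate(prefix_ids[i : i + 1], max_new_tokens=max_new_tokens))
    return outputs
```

Because each sequence is served by exactly one expert, the active parameter count at inference stays close to that of a single dense model, which is what makes the comparison at near-identical inference cost meaningful.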
