

Poster

Generative Representational Instruction Tuning

Niklas Muennighoff · Hongjin Su · Liang Wang · Nan Yang · Furu Wei · Tao Yu · Amanpreet Singh · Douwe Kiela

Hall 3 + Hall 2B #210
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

All text-based language problems can be reduced to either generation or embedding. Current models perform well at only one or the other. We introduce generative representational instruction tuning (GRIT), whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM-7B is among the top models on the Massive Text Embedding Benchmark (MTEB) and outperforms various models up to its size on a range of generative tasks. By scaling up further, GritLM-8x7B achieves even stronger generative performance while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or only embedding data, so both can be unified with no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by more than 60% for long documents by no longer requiring separate retrieval and generation models. Models, code, and other materials are freely available at https://github.com/ContextualAI/gritlm.
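To make the single-model idea concrete, below is a minimal sketch of serving both modes from one set of weights using plain Hugging Face transformers: an embedding path that mean-pools the final hidden states, and a generation path that does ordinary causal decoding. The checkpoint name GritLM/GritLM-7B and the <|user|>/<|embed|>/<|assistant|> prompt format are assumptions taken from the linked repository; the official gritlm package wraps these details, so treat this as an illustration rather than the reference implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME = "GritLM/GritLM-7B"  # assumed Hub checkpoint name from the linked repository
tok = AutoTokenizer.from_pretrained(NAME)
tok.pad_token = tok.pad_token or tok.eos_token  # needed for batched padding
model = AutoModelForCausalLM.from_pretrained(NAME, torch_dtype=torch.bfloat16)
model.eval()


def embed(texts, instruction=""):
    """Embedding mode: encode texts with an instruction prefix and mean-pool hidden states."""
    # The "<|user|>"/"<|embed|>" prompt format is an assumption based on the repository README.
    prompts = [f"<|user|>\n{instruction}\n<|embed|>\n{t}" for t in texts]
    batch = tok(prompts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model(**batch, output_hidden_states=True).hidden_states[-1]
    mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # one vector per input text


def generate(prompt, max_new_tokens=128):
    """Generative mode: ordinary causal decoding from the same weights."""
    inputs = tok(f"<|user|>\n{prompt}\n<|assistant|>\n", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)


# The same parameters produce both retrieval embeddings and generated answers,
# which is what lets a RAG pipeline drop the separate retriever model.
doc_vecs = embed(["GRIT unifies embedding and generation."],
                 instruction="Represent the document for retrieval.")
answer = generate("What does GRIT unify?")
```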
