Poster
MMTEB: Massive Multilingual Text Embedding Benchmark
Kenneth Enevoldsen · Isaac Chung · Imene Kerboua · Márton Kardos · Ashwin Mathur · David Stap · Jay Gala · Wissam Siblini · Dominik Krzemiński · Genta Winata · Saba Sturua · Saiteja Utpala · Mathieu Ciancone · Marion Schaeffer · Diganta Misra · Shreeya Dhakal · Jonathan Rystrøm · Roman Solomatin · Ömer Çağatan · Akash Kundu · Martin Bernstorff · Shitao Xiao · Akshita Sukhlecha · Bhavish Pahwa · Rafał Poświata · Kranthi Kiran GV · Shawon Ashraf · Daniel Auras · Björn Plüster · Jan Harries · Loïc Magne · Isabelle Mohr · Dawei Zhu · Hippolyte Gisserot-Boukhlef · Tom Aarsen · Jan Kostkan · Konrad Wojtasik · Taemin Lee · Marek Suppa · Crystina Zhang · Roberta Rocca · Mohammed Hamdy · Andrianos Michail · John Yang · Manuel Faysse · Aleksei Vatolin · Nandan Thakur · Manan Dey · Dipam Vasani · Pranjal Chitale · Simone Tedeschi · Nguyen Tai · Artem Snegirev · Mariya Hendriksen · Michael Günther · Mengzhou Xia · Weijia Shi · Xing Han Lu · Jordan Clive · Gayatri K · Maksimova Anna · Silvan Wehrli · Maria Tikhonova · Henil Panchal · Aleksandr Abramov · Malte Ostendorff · Zheng Liu · Simon Clematide · Lester James V. Miranda · Alena Fenogenova · Guangyu Song · Ruqiya Bin Safi · Wen-Ding Li · Alessia Borghini · Federico Cassano · Lasse Hansen · Sara Hooker · Chenghao Xiao · Vaibhav Adlakha · Orion Weller · Siva Reddy · Niklas Muennighoff
Hall 3 + Hall 2B #314
Text embeddings are typically evaluated on a narrow set of tasks, limited in terms of languages, domains, and task types. To circumvent this limitation and to provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) -- a large-scale community-driven initiative expanding MTEB to over 500 quality-controlled evaluation tasks across 1,000+ languages. MMTEB includes a wide range of challenging novel tasks such as instruction following, long-document retrieval, and code retrieval, and represents the largest multilingual collection of evaluation tasks for embedding models to date. We use this collection to construct multiple highly multilingual benchmarks, on which we evaluate a representative set of models. Our findings indicate that, while LLM-based models can achieve state-of-the-art performance on a subset of languages, the best-performing publicly available model across languages is the notably smaller multilingual-e5-large-instruct. Massive benchmarks often impose high computational demands, limiting accessibility, particularly for low-resource communities. To address this, we downsample tasks based on inter-task correlation (i.e., selecting only a diverse set of tasks) while preserving relative rankings. We further optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks at a significantly lower computational cost. For instance, we introduce a new zero-shot English benchmark that maintains a similar ordering at a fraction of the cost.
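The abstract does not spell out the downsampling procedure, so the following is only a minimal sketch of one plausible reading: a greedy loop that repeatedly drops the task whose removal least perturbs the models' mean-score ranking, measured with Spearman correlation against the full task set. The function name `downsample_tasks` and the matrix layout are illustrative assumptions, not part of the MTEB/MMTEB codebase.

```python
# Illustrative sketch (not the authors' exact algorithm): greedily remove the
# task whose removal best preserves the aggregate model ranking, until only
# n_keep tasks remain.
import numpy as np
from scipy.stats import spearmanr


def downsample_tasks(scores: np.ndarray, task_names: list[str], n_keep: int) -> list[str]:
    """scores: (n_models, n_tasks) matrix of per-task evaluation scores."""
    keep = list(range(scores.shape[1]))
    full_ranking = scores.mean(axis=1)  # reference ranking from the full task set
    while len(keep) > n_keep:
        best_corr, best_drop = -np.inf, None
        for t in keep:
            candidate = [i for i in keep if i != t]
            # Rank correlation between the full-benchmark ranking and the
            # ranking induced by the candidate (reduced) task set.
            corr, _ = spearmanr(full_ranking, scores[:, candidate].mean(axis=1))
            if corr > best_corr:  # dropping t changes the model order the least
                best_corr, best_drop = corr, t
        keep.remove(best_drop)
    return [task_names[i] for i in keep]
```

Under this reading, highly correlated (redundant) tasks are the first to be removed, which is how a much smaller benchmark can preserve the relative rankings reported above; the hard-negative subsampling of retrieval corpora mentioned in the abstract is a separate optimization not shown here.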