

Poster

On Speeding Up Language Model Evaluation

Jin Zhou · Christian Belardi · Ruihan Wu · Travis Zhang · Carla Gomes · Wen Sun · Kilian Weinberger

Hall 3 + Hall 2B #225
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Developing prompt-based methods with Large Language Models (LLMs) requires making numerous decisions, which together give rise to a combinatorial search problem over hyper-parameters. Evaluating every configuration exhaustively can be time-consuming and costly. In this paper, we propose an adaptive approach to explore this space. We exploit the fact that often only a few samples are needed to identify clearly superior or inferior settings, and that many evaluation tests are highly correlated. We lean on multi-armed bandits to sequentially identify the next (method, validation sample)-pair to evaluate and utilize low-rank matrix factorization to fill in missing evaluations. We carefully assess the efficacy of our approach on several competitive benchmark problems and show that it can identify the top-performing method using only 5-15% of the typical resources, resulting in 85-95% LLM cost savings. Our code is available at https://github.com/kilian-group/banditeval.
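The combination described above (bandit-driven selection of which (method, sample) pair to evaluate next, plus low-rank completion of the partially observed score matrix) could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the rank, the UCB-style exploration rule, and the helper names (low_rank_impute, adaptive_eval, evaluate) are assumptions made for this example; see the linked repository for the actual method.

```python
# Illustrative sketch only: a UCB-style bandit picks which (method, sample)
# pair to evaluate next, and a simple alternating-least-squares low-rank
# completion step imputes scores that were never evaluated. All names and
# constants here are assumptions, not the paper's exact algorithm.
import numpy as np

def low_rank_impute(scores, mask, rank=2, iters=50, reg=1e-2):
    """Fill in missing entries of a (methods x samples) score matrix
    via alternating least squares on the observed entries only."""
    m, n = scores.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(m, rank))
    V = rng.normal(scale=0.1, size=(n, rank))
    for _ in range(iters):
        for i in range(m):                      # update method factors
            obs = mask[i]
            if obs.any():
                A = V[obs].T @ V[obs] + reg * np.eye(rank)
                U[i] = np.linalg.solve(A, V[obs].T @ scores[i, obs])
        for j in range(n):                      # update sample factors
            obs = mask[:, j]
            if obs.any():
                A = U[obs].T @ U[obs] + reg * np.eye(rank)
                V[j] = np.linalg.solve(A, U[obs].T @ scores[obs, j])
    return U @ V.T

def adaptive_eval(evaluate, n_methods, n_samples, budget, c=1.0):
    """UCB-style loop: evaluate(i, j) returns the score of method i on
    validation sample j (e.g., an accuracy in [0, 1])."""
    scores = np.zeros((n_methods, n_samples))
    mask = np.zeros((n_methods, n_samples), dtype=bool)
    counts = np.zeros(n_methods)
    for t in range(budget):
        # Impute the full matrix from the evaluations gathered so far.
        filled = low_rank_impute(scores, mask) if mask.any() else scores
        means = filled.mean(axis=1)
        bonus = c * np.sqrt(np.log(t + 2) / (counts + 1))   # exploration bonus
        i = int(np.argmax(means + bonus))                    # promising method
        unseen = np.flatnonzero(~mask[i])
        j = int(np.random.choice(unseen)) if unseen.size else int(np.random.randint(n_samples))
        scores[i, j] = evaluate(i, j)
        mask[i, j] = True
        counts[i] += 1
    final = low_rank_impute(scores, mask)
    return int(np.argmax(final.mean(axis=1)))   # index of estimated best method
```

The key design point the abstract describes is that the two pieces reinforce each other: the bandit concentrates the evaluation budget on methods that still look competitive, while the low-rank completion exploits correlations across validation samples so that unevaluated (method, sample) entries can still inform the ranking.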
