

Spotlight
in
Workshop: Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

Exploring Demonstration Ensembling for In-context Learning

Muhammad Khalifa · Lajanugen Logeswaran · Moontae Lee · Honglak Lee · Lu Wang

Keywords: [ prompting ] [ in-context learning ] [ few-shot learning ]


Abstract:

In-context learning (ICL) operates by showing language models (LMs) examples of input-output pairs for the desired task, i.e., demonstrations. The standard approach to ICL is to prompt the LM with the concatenated demonstrations followed by the test input. This approach suffers from several issues. First, concatenation offers almost no control over the contribution of each demonstration to the model's prediction, which can be sub-optimal when some demonstrations are not very relevant to the test example. Second, due to the input length limit of transformer models, it can be infeasible to fit many examples into the context, especially for long-input tasks. In this work, we explore Demonstration Ensembling (DENSE) as an alternative to simple concatenation. DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations and then combines the output probabilities resulting from each subset to produce the final prediction. We study different ensembling methods using GPT-J and experiment on 7 different language tasks. Our experiments show max ensembling to outperform concatenation by an average of 3.8 points.
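The max-ensembling variant described in the abstract can be sketched as follows: each bucket of demonstrations yields a probability distribution over labels, and the final score for each label is the maximum probability any bucket assigns to it. This is a minimal illustration, not the paper's implementation; the `score_fn` argument is a hypothetical stand-in for an LM call that returns the log-probability of a label given a bucket's prompt and the test input.

```python
import math

def max_ensemble_predict(demo_buckets, test_input, labels, score_fn):
    """Sketch of DENSE with max ensembling.

    demo_buckets: list of demonstration subsets (buckets).
    score_fn(bucket, test_input, label): hypothetical LM scorer returning
    a log-probability-like score for `label` under that bucket's prompt.
    """
    best = {label: float("-inf") for label in labels}
    for bucket in demo_buckets:
        # Score every label under this bucket's prompt.
        logps = [score_fn(bucket, test_input, label) for label in labels]
        # Normalize scores into a distribution over labels (softmax).
        z = max(logps)
        exps = [math.exp(lp - z) for lp in logps]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Max ensembling: keep the highest probability any bucket gives.
        for label, p in zip(labels, probs):
            best[label] = max(best[label], p)
    # Predict the label with the largest max-over-buckets probability.
    return max(best, key=best.get)

# Toy usage with a fabricated scorer: a "relevant" bucket favors label "A",
# while an "irrelevant" bucket mildly favors "B".
def toy_score(bucket, test_input, label):
    if "relevant" in bucket:
        return 1.0 if label == "A" else 0.0
    return 0.5 if label == "B" else 0.0

pred = max_ensemble_predict(
    demo_buckets=[["relevant"], ["irrelevant"]],
    test_input="some test input",
    labels=["A", "B"],
    score_fn=toy_score,
)
```

Because each bucket contributes its own distribution, a confident prediction from one relevant bucket can dominate the final answer even when other buckets contain less relevant demonstrations, which is the control that plain concatenation lacks.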
