

Poster in Workshop: Workshop on Reasoning and Planning for Large Language Models

MIR-Bench: Benchmarking LLM's Long-Context Intelligence via Many-Shot In-Context Inductive Reasoning

Kai Yan · Zhan Ling · Kang Liu · Yifan Yang · Ting-Han Fan · Lingfeng Shen · Zhengyin Du · Jiecao Chen


Abstract:

Inductive Reasoning (IR), the ability to summarize rules from existing examples and apply them to new ones, has long been viewed as a primary ability for general intelligence and has been widely studied by cognitive science and AI researchers. Many benchmarks have been proposed to measure this ability in Large Language Models (LLMs); however, they all focus on few-shot settings (typically around 10 examples) and thus do not evaluate the aggregation of many pieces of information from long contexts. On the other hand, the ever-growing context length of LLMs has brought forth the novel paradigm of many-shot In-Context Learning (ICL), which addresses new tasks with hundreds to thousands of examples, without expensive and inefficient fine-tuning. However, many-shot evaluations mostly focus on classification (a very limited aspect of IR), and popular long-context LLM tasks such as Needle-In-A-Haystack (NIAH) are more tracking tasks than tasks that require intelligence. To address the issues from both sides, we propose MIR-Bench, the first many-shot in-context inductive reasoning benchmark, which asks LLMs to induce outputs from input-output examples of underlying functions with diverse input-output formats. Based on this benchmark, we study many novel problems for inductive reasoning and many-shot ICL, including robustness against erroneous shots and the effect of Chain-of-Thought (CoT) prompting, and obtain insightful findings.
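To make the many-shot inductive-reasoning setup concrete, below is a minimal sketch (not the authors' code) of how such a task instance could be built: input-output pairs are sampled from a hidden underlying function, concatenated into a long-context prompt, and the model is asked to predict the output for a held-out query. The function names, the example rule, and the prompt wording are illustrative assumptions, not taken from MIR-Bench.

```python
# Hypothetical sketch of constructing a many-shot in-context inductive
# reasoning prompt from an underlying function (assumed example, not MIR-Bench).
import random


def hidden_function(x: list[int]) -> list[int]:
    """Example underlying rule the model must induce (here: sort descending)."""
    return sorted(x, reverse=True)


def build_many_shot_prompt(num_shots: int, seed: int = 0) -> tuple[str, str]:
    """Return (prompt, expected_answer) with `num_shots` in-context examples."""
    rng = random.Random(seed)
    lines = ["Infer the rule from the examples and answer the final query."]
    for _ in range(num_shots):
        inp = [rng.randint(0, 99) for _ in range(5)]
        lines.append(f"Input: {inp} -> Output: {hidden_function(inp)}")
    # Held-out query whose output the model must predict.
    query = [rng.randint(0, 99) for _ in range(5)]
    lines.append(f"Input: {query} -> Output:")
    return "\n".join(lines), str(hidden_function(query))


if __name__ == "__main__":
    prompt, answer = build_many_shot_prompt(num_shots=500)
    print(prompt.splitlines()[1])   # first of the many in-context shots
    print("Expected:", answer)      # exact-match target for evaluation
```

Scaling `num_shots` from tens to hundreds or thousands of examples is what turns this from a conventional few-shot IR probe into a long-context, many-shot one.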
