Poster
Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers
Zhenting Qi · Mingyuan Ma · Jiahang Xu · Li Lyna Zhang · Fan Yang · Mao Yang
Hall 3 + Hall 2B #208
This paper introduces rStar, a self-play mutual reasoning approach that significantly improves the reasoning capabilities of small language models (SLMs) without fine-tuning or reliance on superior models. rStar decouples reasoning into a self-play mutual generation-discrimination process. First, a target SLM augments Monte Carlo Tree Search (MCTS) with a rich set of human-like reasoning actions to construct higher-quality reasoning trajectories. Next, another SLM, with capabilities similar to the target SLM, acts as a discriminator to verify each trajectory generated by the target SLM. Trajectories on which both SLMs agree are considered mutually consistent and are thus more likely to be correct. Extensive experiments across five SLMs demonstrate that rStar can effectively solve diverse reasoning problems across GSM8K, GSM-Hard, MATH, SVAMP, and StrategyQA. Remarkably, rStar boosts GSM8K accuracy from 12.51% to 63.91% for LLaMA2-7B, from 36.46% to 81.88% for Mistral-7B, and from 74.53% to 91.13% for LLaMA3-8B-Instruct. Code is available at https://github.com/zhentingqi/rStar.
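The generation-discrimination loop described in the abstract can be sketched roughly as follows. This is an illustrative outline, not the authors' implementation (see the linked repository for that); the names `Trajectory`, `generate_trajectories`, `complete_from_partial`, and the partial-trajectory masking ratio are assumptions introduced here for clarity.

```python
# Illustrative sketch of rStar's mutual generation-discrimination idea.
# Assumptions: `generate_trajectories` stands in for the target SLM's
# MCTS-based candidate generation, and `complete_from_partial` stands in
# for the discriminator SLM finishing a partially revealed trajectory.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Trajectory:
    steps: List[str]  # intermediate reasoning steps
    answer: str       # final answer extracted from the last step


def select_mutually_consistent(
    question: str,
    generate_trajectories: Callable[[str], List[Trajectory]],  # target SLM + MCTS
    complete_from_partial: Callable[[str, List[str]], str],    # discriminator SLM
    mask_ratio: float = 0.5,  # assumed fraction of steps shown to the discriminator
) -> Optional[Trajectory]:
    """Return a candidate trajectory whose answer the discriminator reproduces."""
    candidates = generate_trajectories(question)
    for traj in candidates:
        # Hide the tail of the trajectory and ask the second SLM to finish it.
        keep = max(1, int(len(traj.steps) * mask_ratio))
        discriminator_answer = complete_from_partial(question, traj.steps[:keep])
        # Mutual consistency: both SLMs arrive at the same final answer.
        if discriminator_answer.strip() == traj.answer.strip():
            return traj
    return None
```

In the paper, the final trajectory is additionally ranked using the MCTS reward statistics; the sketch above keeps only the agreement check to show the core mutual-consistency step.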