

Poster in Workshop: Workshop on Large Language Models for Agents

Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models

Andy Zhou · Kai Yan · Michal Shlapentokh-Rothman · Haohan Wang · Yu-Xiong Wang


Abstract:

While language models (LMs) have shown potential on a range of decision-making tasks, their reliance on simple acting processes limits their broad deployment as autonomous agents. In this paper, we introduce Language Agent Tree Search (LATS) -- the first general framework that synergizes the capabilities of LMs in reasoning, acting, and planning. By leveraging the in-context learning ability of LMs, we integrate Monte Carlo tree search into LATS to enable LMs as agents, along with LM-powered value functions and self-reflections, for more effective exploration and thus enhanced decision-making. A key feature of our approach is the incorporation of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that surpasses the constraints of existing techniques. Our experimental evaluation across diverse domains, including programming, interactive QA, web navigation, and math, validates the effectiveness and generality of LATS in decision-making while maintaining competitive or improved reasoning performance. Notably, LATS achieves state-of-the-art pass@1 accuracy (94.4%) for programming on HumanEval with GPT-4 and demonstrates gradient-free performance (average score of 75.9) comparable to gradient-based fine-tuning for web navigation on WebShop with GPT-3.5.
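To make the search loop described in the abstract concrete, below is a minimal, illustrative sketch of a Monte Carlo tree search over agent trajectories in which an LM proposes actions, an LM-powered value function scores states, environment feedback supplies rewards, and failed trajectories trigger a self-reflection step. This is an assumption-laden sketch, not the authors' implementation: the callables `propose_actions`, `step_env`, `evaluate_state`, and `reflect` are hypothetical placeholders the caller would supply (e.g., backed by an LM API and a task environment).

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    state: str                       # serialized trajectory / observation so far
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    visits: int = 0
    total_value: float = 0.0         # accumulated backed-up value
    reward: float = 0.0              # environment reward (meaningful at terminal nodes)
    terminal: bool = False

def uct(child: Node, parent: Node, c: float = 1.4) -> float:
    """Upper-confidence bound used to select promising children during descent."""
    if child.visits == 0:
        return float("inf")
    return child.total_value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def lats_search(root_state, propose_actions, step_env, evaluate_state, reflect,
                iterations: int = 30, branching: int = 5) -> str:
    """Hypothetical LATS-style loop: select, expand, evaluate, backpropagate, reflect."""
    root = Node(state=root_state)
    for _ in range(iterations):
        # 1) Selection: descend via UCT until reaching a leaf or terminal node.
        node = root
        while node.children and not node.terminal:
            node = max(node.children, key=lambda ch: uct(ch, node))

        # 2) Expansion + evaluation: the LM proposes actions, the environment
        #    returns feedback, and an LM-powered value function scores new states.
        if node.terminal:
            leaf, value = node, node.reward
        else:
            for action in propose_actions(node.state, n=branching):
                next_state, reward, done = step_env(node.state, action)
                node.children.append(
                    Node(state=next_state, parent=node, reward=reward, terminal=done))
            scored = [(ch.reward if ch.terminal else evaluate_state(ch.state), ch)
                      for ch in node.children]
            value, leaf = max(scored, key=lambda pair: pair[0])

        # 3) Backpropagation: update visit counts and values along the path to the root.
        cursor = leaf
        while cursor is not None:
            cursor.visits += 1
            cursor.total_value += value
            cursor = cursor.parent

        # 4) Self-reflection: on a failed terminal trajectory, store a verbal critique
        #    that later calls to propose_actions can condition on.
        if leaf.terminal and leaf.reward <= 0:
            reflect(leaf.state)

    # Return the most-visited trajectory from the root as the final answer.
    return max(root.children, key=lambda ch: ch.visits).state if root.children else root.state
```

The sketch omits details the paper would specify (how reflections are fed back into the prompt, how many nodes are evaluated per expansion, termination criteria), but it captures the claimed combination of reasoning (LM value estimates), acting (environment steps), and planning (tree search) in one loop.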
