

Poster in Workshop: Workshop on Large Language Models for Agents

Beyond A*: Better LLM planning via Search Dynamics Bootstrapping

Lucas Lehnert · Sainbayar Sukhbaatar · Paul McVay · Michael Rabbat · Yuandong Tian


Abstract: While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision-making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks and present *Searchformer*, a Transformer model that solves previously unseen Sokoban puzzles 93.5% of the time while using up to 12.7% fewer search steps than standard $A^*$ search. Searchformer is an encoder-decoder Transformer model trained to predict the search dynamics of $A^*$ and then fine-tuned via expert iterations to perform fewer search steps while still generating the optimal plan. In our training method, $A^*$'s search dynamics are expressed as a token sequence outlining when search states are added to and removed from the search tree during symbolic planning. In an ablation study on maze navigation, we find that Searchformer significantly outperforms baselines that predict the optimal plan directly, with a 5-10x smaller model size and a 10x smaller training set. Searchformer also scales to larger and more complex decision-making tasks.
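To illustrate the idea of expressing search dynamics as a token sequence, here is a minimal sketch, not the authors' released code or exact tokenization: it runs plain $A^*$ on a tiny maze and emits a token whenever a state is added to or removed from the frontier, followed by the plan tokens. The token names (`create`, `close`, `plan`), the grid encoding, and the unit step cost are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' implementation): run A* on a small
# grid and log its search dynamics -- states added to (pushed) and removed
# from (popped) the frontier -- as a flat token sequence a sequence model
# could be trained on. Token names and maze encoding are assumptions.

import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar_trace(grid, start, goal):
    """Return (plan_tokens, trace_tokens) for A* on a 0/1 occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(manhattan(start, goal), 0, start, [start])]  # (f, g, node, path)
    closed = set()
    trace = [f"create {start[0]} {start[1]} c0 h{manhattan(start, goal)}"]
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node in closed:
            continue
        closed.add(node)
        trace.append(f"close {node[0]} {node[1]} c{g} h{f - g}")  # removed from frontier
        if node == goal:
            return [f"plan {r} {c}" for r, c in path], trace
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            child = (node[0] + dr, node[1] + dc)
            nr, nc = child
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and child not in closed:
                h = manhattan(child, goal)
                heapq.heappush(frontier, (g + 1 + h, g + 1, child, path + [child]))
                trace.append(f"create {nr} {nc} c{g + 1} h{h}")  # added to frontier
    return [], trace

# Example: a 3x3 maze with a wall segment; 0 = free cell, 1 = wall.
maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
plan, trace = astar_trace(maze, start=(0, 0), goal=(2, 0))
print(" ".join(trace + plan))  # search-dynamics tokens followed by the plan
```

Training on the full trace teaches the model *how* the search unfolds, not just the final plan; the expert-iteration stage described in the abstract then shortens these traces while keeping the plan optimal.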
