

Poster in Workshop: Workshop on Reasoning and Planning for Large Language Models

LLMs Aren't Good Strategists, Yet Can Accumulate Episodes for Improved Planning

Yi Wu · Zhimin Hu


Abstract:

Strategic reasoning in dynamic environments such as games requires balancing long-term planning with short-term objectives. Despite advances in artificial intelligence (AI) for game playing, significant challenges remain. Purpose-trained AI agents can reach top-tier or superhuman performance, yet they often lack explainability and adaptability and depend heavily on extensive data and computational resources. Methods based on large language models (LLMs) are lighter, more generalizable, and more explainable, but their performance is limited by struggles with strategic consistency. To address these shortcomings, we introduce EpiCStaR, an LLM-based agent enhanced with cognitive-inspired memory modules: episodic memory for sustaining a coherent long-term strategy, working memory for adaptive short-term exploration, semantic memory for general game knowledge, and procedural rules for more efficient decision-making. Operating within a similar token budget, EpiCStaR competes effectively against the built-in AI at Level 6 difficulty, surpassing its predecessor's performance at Level 5. Our approach not only improves adaptability but also maintains a consistent strategic trajectory, underscoring the importance of cognitive-inspired memory mechanisms for strategic reasoning in complex environments.
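
The paper's implementation is not reproduced on this page, so the following is only a minimal, hypothetical sketch of how the four memory modules named in the abstract (episodic, working, semantic, procedural) could be wired into an LLM agent's decision loop. The `llm(prompt) -> str` callable, all class and function names, and the example memory contents are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: an LLM agent that accumulates episodes across games.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class MemoryModules:
    episodic: list = field(default_factory=list)                      # past episodes: summaries of strategy and outcome
    working: deque = field(default_factory=lambda: deque(maxlen=8))   # recent observation/action pairs
    semantic: str = "General game knowledge goes here."               # static domain knowledge
    procedural: str = "If under attack, prioritize defense."          # compact decision rules


def build_prompt(mem: MemoryModules, observation: str) -> str:
    """Assemble one prompt from all memory modules plus the new observation."""
    episodes = "\n".join(f"- {e}" for e in mem.episodic[-3:]) or "- none yet"
    recent = "\n".join(f"- {w}" for w in mem.working) or "- none yet"
    return (
        f"Game knowledge:\n{mem.semantic}\n\n"
        f"Decision rules:\n{mem.procedural}\n\n"
        f"Relevant past episodes:\n{episodes}\n\n"
        f"Recent context:\n{recent}\n\n"
        f"Current observation:\n{observation}\n\n"
        "Choose the next action, staying consistent with the long-term strategy."
    )


def step(llm, mem: MemoryModules, observation: str) -> str:
    """One decision step: query the LLM, then update working memory."""
    action = llm(build_prompt(mem, observation))
    mem.working.append(f"obs: {observation} -> action: {action}")
    return action


def end_of_episode(mem: MemoryModules, summary: str) -> None:
    """Store a finished episode so later games can reuse the lesson, then reset short-term context."""
    mem.episodic.append(summary)
    mem.working.clear()
```

In this sketch, episodic memory grows across games (the "accumulated episodes" of the title), working memory is a short sliding window for in-game adaptation, and semantic and procedural memory stay fixed within the prompt budget; how the real system retrieves, summarizes, or prunes these stores is not specified here.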
