Poster in Workshop on Large Language Models for Agents
MathChat: Converse to Tackle Challenging Math Problems with LLM Agents
Yiran Wu · Feiran Jia · Shaokun Zhang · Hangyu Li · Erkang Zhu · Yue Wang · Yin Tat Lee · Richard Peng · Qingyun Wu · Chi Wang
Employing Large Language Models (LLMs) to address mathematical problems is an intriguing research endeavor, considering the abundance of math problems expressed in natural language across numerous science and engineering fields. LLMs, with their generalized abilities, serve as foundation models for building AI agents for different tasks. In this paper, we study the effectiveness of utilizing LLM agents to solve math problems through conversations. We propose MathChat, a conversational problem-solving framework designed for math problems. MathChat consists of an LLM agent and a user proxy agent, which is responsible for tool execution and additional guidance. This synergy facilitates a collaborative problem-solving process in which the agents engage in a dialogue to solve the problem. We evaluate MathChat on difficult high school competition problems from the MATH dataset. Utilizing Python, we show that MathChat further improves on previous tool-using prompting methods by 6%.
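To make the two-agent setup concrete, below is a minimal sketch of the kind of conversation loop described above: an LLM agent proposes reasoning and Python code, and a user proxy executes the code and feeds the results back until a final answer is produced. This is an illustrative assumption of the workflow, not the paper's actual implementation; the `query_llm` callable, the code-extraction regex, and the `\boxed{}` stopping convention are all hypothetical choices.

```python
import re
import subprocess
import sys


def run_python_blocks(reply: str) -> str:
    """User-proxy step: extract ```python ...``` blocks from the LLM agent's
    reply, execute each one, and return the captured output as feedback."""
    blocks = re.findall(r"```python\n(.*?)```", reply, flags=re.DOTALL)
    outputs = []
    for code in blocks:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=30,
        )
        outputs.append(result.stdout + result.stderr)
    return "\n".join(outputs)


def mathchat_loop(problem: str, query_llm, max_turns: int = 10):
    """Conversational solving loop (hypothetical sketch).

    `query_llm` is a caller-supplied function that takes the message history
    and returns the LLM agent's reply as a string. The proxy runs any Python
    the agent writes and returns the output; the loop stops when the agent
    emits a final answer (here assumed to be marked with \\boxed{...}).
    """
    messages = [{
        "role": "user",
        "content": f"Solve the problem step by step, writing Python where helpful:\n{problem}",
    }]
    for _ in range(max_turns):
        reply = query_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if "\\boxed{" in reply:  # agent signals a final answer
            return reply
        feedback = run_python_blocks(reply) or "Continue solving the problem."
        messages.append({"role": "user", "content": feedback})
    return None  # no final answer within the turn budget
```

In practice, a framework like this would also handle execution errors and malformed code, and the user proxy's follow-up messages can carry additional guidance (e.g., asking the agent to correct a failed run) rather than raw output alone.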