MemoGraph: Augmenting LLMs with Explicit Episodic Memory for Multi-step Mathematical Reasoning
Abstract
Large Language Models (LLMs) struggle with complex, multi-step mathematical reasoning because their implicit parametric memory is volatile, leading to context drift and hallucination. Existing paradigms, which rely on linear generation or static retrieval, fail to maintain a precise, persistent record of the evolving proof state. To address this, we propose \textbf{MemoGraph}, a neuro-symbolic framework that augments LLMs with an explicit episodic memory layer. We formulate reasoning as the dynamic maintenance of a heterogeneous graph, enabling state-aware reading that uses graph-structural encoding to retrieve relevant principles from a verified semantic memory. We further introduce a write-gating verification module that intercepts invalid deductions before they are consolidated into the reasoning context. By ensuring memory integrity, MemoGraph significantly outperforms strong baselines in both accuracy and robustness across multiple benchmarks, establishing a scalable paradigm for trustworthy reasoning agents.
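To make the abstract's write-gating idea concrete, the following is a minimal Python sketch, not the paper's actual implementation: all class, function, and variable names here (\texttt{EpisodicGraphMemory}, \texttt{Step}, \texttt{write}, the toy verifier) are hypothetical illustrations. It models episodic memory as a graph whose nodes are consolidated facts and whose edges record which premises each deduction was derived from; a write gate rejects candidate deductions with unconsolidated premises or that fail an external check.

\begin{verbatim}
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    """One candidate deduction proposed by the LLM."""
    claim: str
    premises: tuple[str, ...]  # ids of memory nodes it depends on

@dataclass
class EpisodicGraphMemory:
    """Toy episodic memory: nodes are verified facts, edges are
    'derived-from' links. The write gate blocks steps whose premises
    are not yet consolidated or that fail an external verifier."""
    verifier: Callable[[Step], bool]
    nodes: dict[str, str] = field(default_factory=dict)         # id -> claim
    edges: list[tuple[str, str]] = field(default_factory=list)  # (premise, step)

    def write(self, step_id: str, step: Step) -> bool:
        # Gate 1: every premise must already exist in verified memory.
        if any(p not in self.nodes for p in step.premises):
            return False
        # Gate 2: external check (stand-in for a solver / proof checker).
        if not self.verifier(step):
            return False
        # Consolidate the verified deduction into the graph.
        self.nodes[step_id] = step.claim
        self.edges.extend((p, step_id) for p in step.premises)
        return True

# Toy usage; the verifier here is a trivially permissive placeholder.
mem = EpisodicGraphMemory(verifier=lambda s: "=" in s.claim)
mem.nodes["a0"] = "x = 2"                           # problem statement
ok = mem.write("d1", Step("x + 3 = 5", ("a0",)))    # accepted
bad = mem.write("d2", Step("x + 3 = 9", ("a7",)))   # rejected: unknown premise
print(ok, bad)  # True False
\end{verbatim}

In the framework the abstract describes, the verifier would presumably be a symbolic checker rather than this placeholder, but the gating structure, validate before consolidating into the reasoning context, is the same.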