Belief Engine: Bayesian Memory for Configurable Opinion Dynamics in LLM Agents
Abstract
Large Language Model (LLM) agents can debate fluently, but they do not reliably maintain beliefs across long interactions. This makes it difficult to use them for opinion-dynamics studies, where trajectories must be stable, interpretable, and reproducible. We introduce the Belief Engine, a configurable belief architecture that externalises belief state and updates it from extracted arguments. The engine stores adjudicated evidence in memory and updates a bounded stance score using a simple Bayesian log-odds rule with tunable parameters controlling evidence sensitivity, anchoring, and asymmetric weighting. In controlled two-agent debates across topics, we show that the Belief Engine produces stance trajectories that are smoother and more reproducible than LLM-based (Agentic) updating, and that its parameters provide monotonic control over persuadability and resistance. By separating what an agent says from how its beliefs are updated, the framework enables traceable and controllable opinion dynamics in LLM-agent simulations.
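To make the update rule concrete, the following is a minimal sketch of a Bayesian log-odds stance update with the three kinds of tunable parameters the abstract names: evidence sensitivity, anchoring toward a prior, and asymmetric weighting of supporting versus opposing evidence. All function and parameter names here are illustrative assumptions, not the paper's implementation.

```python
import math

def update_stance(logit, evidence_llr, sensitivity=1.0, anchor=0.1,
                  prior_logit=0.0, weight_pro=1.0, weight_con=1.0):
    """One log-odds belief update (illustrative sketch, not the paper's code).

    logit        -- current stance in log-odds space
    evidence_llr -- log-likelihood ratio of the new (adjudicated) argument
    sensitivity  -- scales how strongly evidence moves the stance
    anchor       -- in [0, 1]; pulls the stance back toward prior_logit
    weight_pro/weight_con -- asymmetric weights for positive/negative evidence
    Returns (new_logit, stance) where stance is bounded in (0, 1).
    """
    # Asymmetric weighting: supporting and opposing evidence can count differently.
    w = weight_pro if evidence_llr >= 0 else weight_con
    # Bayesian-style additive update in log-odds space, scaled by sensitivity.
    logit = logit + sensitivity * w * evidence_llr
    # Anchoring: interpolate back toward the prior log-odds.
    logit = (1.0 - anchor) * logit + anchor * prior_logit
    # The sigmoid bounds the stance score to (0, 1).
    stance = 1.0 / (1.0 + math.exp(-logit))
    return logit, stance
```

With these defaults, a neutral agent (logit 0) receiving evidence with log-likelihood ratio 1.0 moves to logit 0.9 after anchoring, i.e. a stance of roughly 0.71; raising `sensitivity` or `weight_pro` moves it further, while raising `anchor` holds it closer to neutral, giving the monotonic persuadability/resistance control the abstract describes.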