

Poster in Workshop: From Cells to Societies: Collective Learning Across Scales

Summarizing Societies: Agent Abstraction in Multi-Agent Reinforcement Learning

Matt Riemer · Maximilian Puelma Touzel · Amin Memarian · Rupali Bhati · Irina Rish

Keywords: multi-agent


Abstract:

In many agent societies with vast and complex interactions, agents cannot make sense of the world by directly considering small-scale, low-level agent identities; rather, it is imperative to recognize emergent collective identities among populations of agents. In this paper, we take a first step towards developing a framework for recognizing this structure in low-level agents so that they can be modeled as a much smaller number of high-level agents, a process that we call agent abstraction. Specifically, we build on the literature on bisimulation metrics for state abstraction in reinforcement learning and take steps to broaden the scope of this theory to the setting of multi-agent reinforcement learning, in which an agent necessarily faces a non-stationary environment arising from the presence of other learning agents. We formulate a new set of bisimulation metrics on the joint action space of the other agents and analyze a straightforward, if crude, abstraction based on a metric that distinguishes experienced joint actions. We show that this joint action space abstraction improves the minimax regret of a reinforcement learning agent by a transparent factor, which inspires a measure of the utility of abstracting the joint action space of a subset of agents. We then test this measure on a large dataset of human play of the popular social dilemma game Diplomacy and find that it correlates strongly with the degree of ground-truth abstraction of low-level units into the human players who control them, and that it reveals key moments of stronger top-down control during the game.
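To make the "crude" abstraction above concrete, here is a minimal Python sketch of one plausible reading of a metric that distinguishes only experienced joint actions; the function name and construction are our own illustration under that assumption, not the paper's implementation. Each observed joint action keeps its own identity while every unobserved joint action collapses into a single catch-all abstract action, which shrinks the effective joint action space the ego agent must reason over.

```python
def crude_joint_action_abstraction(experienced_joint_actions):
    """Assign each distinct experienced joint action of the other agents
    its own abstract label; every unseen joint action shares one label.

    Illustrative sketch only: this reduces the effective joint action
    space from the full product of the other agents' action sets down to
    the (typically far smaller) set of joint actions actually observed,
    plus one catch-all class.
    """
    labels = {}
    for joint_action in experienced_joint_actions:
        key = tuple(joint_action)
        if key not in labels:
            labels[key] = len(labels)
    unseen_label = len(labels)  # single bucket for all unseen joint actions

    def phi(joint_action):
        return labels.get(tuple(joint_action), unseen_label)

    return phi


# Hypothetical usage: three other agents, each with actions {0, 1, 2}.
history = [(0, 1, 2), (0, 1, 2), (2, 2, 0)]
phi = crude_joint_action_abstraction(history)
assert phi((0, 1, 2)) != phi((2, 2, 0))  # experienced actions stay distinct
assert phi((1, 1, 1)) == phi((2, 0, 1))  # unseen actions collapse together
```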
