Poster

BlendRL: A Framework for Merging Symbolic and Neural Policy Learning

Hikaru Shindo · Quentin Delfosse · Devendra Singh Dhami · Kristian Kersting

Hall 3 + Hall 2B #390
Thu 24 Apr, 12:00 a.m. – 2:30 a.m. PDT

Abstract:

Humans can leverage both symbolic reasoning and intuitive responses. In contrast, reinforcement learning policies are typically encoded either in opaque systems like neural networks or in symbolic systems that rely on predefined symbols and rules. This disjointed approach severely limits the agents' capabilities, as they often lack either the flexible low-level reactions characteristic of neural agents or the interpretable reasoning of symbolic agents. To overcome this challenge, we introduce BlendRL, a neuro-symbolic RL framework that harmoniously integrates both paradigms. We empirically demonstrate that BlendRL agents outperform both neural and symbolic baselines in standard Atari environments, and showcase their robustness to environmental changes. Additionally, we analyze the interaction between neural and symbolic policies, illustrating how their hybrid use lets each compensate for the other's limitations.

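The abstract leaves the blending mechanism unspecified. As a rough, hypothetical illustration of the general idea only, the sketch below combines an opaque neural policy with a hand-written rule-based policy by mixing their action distributions. The class names, rule set, and `mix_weight` parameter are assumptions made for this sketch, not BlendRL's actual interface or method.

```python
# A minimal, hypothetical sketch of blending a neural and a symbolic policy.
# Nothing here reproduces BlendRL's implementation; the rules, names, and
# fixed mixing weight are illustrative assumptions.
import torch
import torch.nn as nn

N_ACTIONS = 4  # e.g., NOOP, LEFT, RIGHT, FIRE


class NeuralPolicy(nn.Module):
    """Opaque policy: maps raw observations to action logits."""

    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def symbolic_policy(symbolic_state: dict) -> torch.Tensor:
    """Interpretable policy: hand-written rules over extracted symbols.
    Returns unnormalized action scores."""
    scores = torch.zeros(N_ACTIONS)
    # Rule: if an enemy is to the left, prefer moving right (index 2).
    if symbolic_state.get("enemy_left", False):
        scores[2] += 1.0
    # Rule: if the target is visible, prefer firing (index 3).
    if symbolic_state.get("target_visible", False):
        scores[3] += 1.0
    return scores


def blended_action_dist(neural_logits, symbolic_scores, mix_weight=0.5):
    """Blend the two policies as a convex combination of their action
    distributions (one plausible scheme; the paper's may differ)."""
    p_neural = torch.softmax(neural_logits, dim=-1)
    p_symbolic = torch.softmax(symbolic_scores, dim=-1)
    return mix_weight * p_symbolic + (1.0 - mix_weight) * p_neural


if __name__ == "__main__":
    policy = NeuralPolicy(obs_dim=8)
    obs = torch.randn(8)  # stand-in for a real observation
    state = {"enemy_left": True, "target_visible": False}
    dist = blended_action_dist(policy(obs), symbolic_policy(state))
    action = torch.multinomial(dist, num_samples=1).item()
    print("action distribution:", dist.tolist(), "-> action", action)
```

In a full agent one would expect the mixing weight itself to be state-dependent and learned, so the agent can fall back on fast neural reactions where no rule applies; the fixed weight here is kept constant only for brevity.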