

Poster

Soft Q-Learning with Mutual-Information Regularization

Jordi Grau-Moya · Felix Leibfried · Peter Vrancx

Great Hall BC #1

Keywords: [ entropy ] [ regularization ] [ mutual information ] [ reinforcement learning ]


Abstract:

We propose a reinforcement learning (RL) algorithm that uses mutual-information regularization to optimize a prior action distribution for better performance and exploration. Entropy-based regularization has previously been shown to improve both exploration and robustness in challenging sequential decision-making tasks; it does so by encouraging policies to put probability mass on all actions. However, entropy regularization can be undesirable when actions differ significantly in importance. In this paper, we propose a theoretically motivated framework that dynamically weights the importance of actions using the mutual information between states and actions. In particular, we express the RL problem as an inference problem in which the prior probability distribution over actions is itself subject to optimization. We show that optimizing this prior introduces a mutual-information regularizer into the RL objective. The regularizer encourages the policy to stay close to a non-uniform distribution that assigns higher probability mass to more important actions. We empirically demonstrate that our method significantly improves over entropy-regularized and unregularized methods.
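
To make the abstract's description concrete, below is a minimal, hypothetical NumPy sketch (not the authors' reference implementation) of the kind of update it describes: a tabular soft Q-learning backup whose soft value is a log-partition weighted by a learned action prior rho, with rho moved toward the marginal of the induced policy so that the KL penalty against rho acts like a mutual-information regularizer. The function name mi_soft_q_update and the hyperparameters beta, gamma, lr, and prior_lr are illustrative assumptions, not taken from the paper.

    import numpy as np

    def mi_soft_q_update(Q, rho, s, a, r, s_next, beta=5.0, gamma=0.99,
                         lr=0.1, prior_lr=0.01):
        """One transition update of the Q-table and of the action prior rho (sketch)."""
        # Soft value of the next state, weighted by the prior:
        # V(s') = (1/beta) * log sum_a rho(a) * exp(beta * Q(s', a)).
        logits = beta * Q[s_next] + np.log(rho)
        v_next = np.logaddexp.reduce(logits) / beta

        # Soft Bellman target and TD update for Q(s, a).
        target = r + gamma * v_next
        Q[s, a] += lr * (target - Q[s, a])

        # Boltzmann policy at s induced by Q and the prior rho.
        pi_logits = beta * Q[s] + np.log(rho)
        pi = np.exp(pi_logits - np.logaddexp.reduce(pi_logits))

        # Move the prior toward the empirical marginal of the policy; at the
        # optimum the KL(pi || rho) penalty becomes a mutual-information term.
        rho[:] = (1.0 - prior_lr) * rho + prior_lr * pi
        rho /= rho.sum()
        return Q, rho

    # Example usage on a toy problem with 3 states and 2 actions.
    Q = np.zeros((3, 2))
    rho = np.ones(2) / 2
    Q, rho = mi_soft_q_update(Q, rho, s=0, a=1, r=1.0, s_next=2)

The design intuition, as stated in the abstract, is that the learned prior rho concentrates on actions that are useful across many states, so the regularizer penalizes deviations from an importance-weighted (non-uniform) distribution rather than from the uniform one implied by plain entropy regularization.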
