Virtual oral in Affinity Workshop: Tiny Papers Showcase Day (a DEI initiative)

The Point to Which Soft Actor-Critic Converges

Jianfei Ma


Abstract:

Soft actor-critic is a successful successor to soft Q-learning. Although both live under the maximum entropy framework, the relationship between them remains unclear. In this paper, we prove that in the limit they converge to the same solution. This is appealing because it replaces an arduous optimization with an easier one. The same justification also applies to other regularizers, such as the KL divergence.
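
For readers unfamiliar with the framework, the shared target can be made concrete. Under maximum entropy reinforcement learning (the setting of both soft Q-learning and soft actor-critic), the optimal policy is the Boltzmann distribution over the soft Q-values. The equations below are the standard formulation with temperature $\alpha$, following Haarnoja et al.; the notation is an assumption, since the abstract does not write it out:

$$V_{\mathrm{soft}}(s) = \alpha \log \int_{\mathcal{A}} \exp\!\Big(\tfrac{1}{\alpha}\, Q_{\mathrm{soft}}(s, a)\Big)\, da,$$

$$Q_{\mathrm{soft}}(s, a) = r(s, a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\big[V_{\mathrm{soft}}(s')\big],$$

$$\pi^{*}(a \mid s) = \exp\!\Big(\tfrac{1}{\alpha}\big(Q_{\mathrm{soft}}(s, a) - V_{\mathrm{soft}}(s)\big)\Big).$$

Soft Q-learning approaches this fixed point by iterating the soft Bellman backup directly, whereas soft actor-critic alternates soft policy evaluation with a KL-projected policy improvement step; the abstract's claim is that both procedures recover the same $\pi^{*}$ in the limit.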
