
In-Person Poster presentation / top 25% paper

Learning Soft Constraints From Constrained Expert Demonstrations

Ashish Gaurav · Kasra Rezaee · Guiliang Liu · Pascal Poupart

MH1-2-3-4 #104

Keywords: [ Reinforcement Learning ] [ constraint learning ] [ inverse reinforcement learning ]


Inverse reinforcement learning (IRL) methods assume that the expert data is generated by an agent optimizing some reward function. However, in many settings, the agent may optimize a reward function subject to some constraints, where the constraints induce behaviors that may otherwise be difficult to express with just a reward function. We consider the setting where the reward function is given and the constraints are unknown, and propose a method that recovers these constraints from the expert data. While previous work has focused on recovering hard constraints, our method can recover cumulative soft constraints that the agent satisfies on average per episode. In IRL fashion, our method solves this problem by iteratively adjusting the constraint function through a constrained optimization procedure, until the agent behavior matches the expert behavior. We demonstrate our approach on synthetic environments, robotics environments, and real-world highway driving scenarios.
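The alternation the abstract describes (solve the penalized control problem, then adjust the constraint until agent and expert behavior match) can be sketched in a toy form. The following is an illustrative sketch only, not the authors' exact algorithm: in a 3x3 gridworld with a known reward (a goal bonus plus a distractor bonus at cell (1,1)), the expert avoids (1,1) because of an unknown constraint. We recover a per-state soft-constraint cost by repeatedly raising the cost wherever the penalized agent visits more often than the expert. The gridworld, reward values, and update rule are all invented for this sketch.

```python
import numpy as np

N = 3
GOAL = (2, 2)
ACTIONS = [(0, 1), (1, 0)]  # right, down (so the MDP is acyclic)

def reward(s):
    # Known reward: goal bonus plus a distractor bonus at the constrained cell.
    return (1.0 if s == GOAL else 0.0) + (0.5 if s == (1, 1) else 0.0)

def valid_actions(s):
    return [a for a in ACTIONS if s[0] + a[0] < N and s[1] + a[1] < N]

def greedy_rollout(cost):
    """Greedy policy on (reward - constraint cost); returns state visitation."""
    V = np.zeros((N, N))
    for i in range(N - 1, -1, -1):      # one backward sweep is exact here,
        for j in range(N - 1, -1, -1):  # since moves only increase i and j
            s = (i, j)
            if s == GOAL:
                continue
            V[s] = max(reward((i + a[0], j + a[1]))
                       - cost[i + a[0], j + a[1]]
                       + V[i + a[0], j + a[1]]
                       for a in valid_actions(s))
    visits = np.zeros((N, N))
    s = (0, 0)
    visits[s] += 1
    while s != GOAL:
        s = max(((s[0] + a[0], s[1] + a[1]) for a in valid_actions(s)),
                key=lambda ns: reward(ns) - cost[ns] + V[ns])
        visits[s] += 1
    return visits

# Expert demonstrations: a path that detours around the constrained cell (1,1).
expert_visits = np.zeros((N, N))
for s in [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]:
    expert_visits[s] += 1

# IRL-style alternation: solve the penalized control problem, then adjust the
# constraint so the agent's visitation moves toward the expert's.
cost = np.zeros((N, N))
for _ in range(50):
    agent_visits = greedy_rollout(cost)
    cost = np.clip(cost + 0.1 * (agent_visits - expert_visits), 0.0, None)

print(cost[1, 1] > 0.0)                   # True: the avoided cell acquires a cost
print(greedy_rollout(cost)[1, 1] == 0.0)  # True: the penalized agent avoids it
```

The learned cost at (1,1) grows until it offsets the distractor bonus, after which the agent's greedy path coincides with the expert's and the update reaches a fixed point. This captures the soft-constraint idea in miniature: the constraint is a cumulative cost the expert keeps low on average, not a hard state exclusion.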
