Poster
Studying the Interplay Between the Actor and Critic Representations in Reinforcement Learning
Samuel Garcin · Trevor McInroe · Pablo Samuel Castro · Christopher Lucas · David Abel · Prakash Panangaden · Stefano Albrecht
Hall 3 + Hall 2B #392
Extracting relevant information from a stream of high-dimensional observations is a central challenge for deep reinforcement learning agents. Actor-critic algorithms add further complexity to this challenge, as it is often unclear whether the same information will be relevant to both the actor and the critic. To this end, we explore the principles that underlie effective representations for the actor and for the critic. We focus our study on understanding whether an actor and a critic benefit from a decoupled, rather than shared, representation. Our primary finding is that, when decoupled, the representations of the actor and critic systematically specialise in extracting different types of information from the environment: the actor's representation tends to focus on action-relevant information, while the critic's representation specialises in encoding value and dynamics information. Finally, we demonstrate how these insights help select representation learning objectives that play to the actor's and critic's respective specialisations, and improve performance in terms of agent returns.
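To make the shared-versus-decoupled distinction concrete, below is a minimal sketch of the two architectures under study, assuming a PyTorch-style discrete-action setup; the class, layer, and dimension names are illustrative placeholders, not taken from the authors' code.

import torch
import torch.nn as nn

class SharedRepActorCritic(nn.Module):
    # Actor and critic heads read from a single shared encoder,
    # so one representation must serve both objectives.
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor_head = nn.Linear(hidden, act_dim)   # action logits
        self.critic_head = nn.Linear(hidden, 1)        # state value

    def forward(self, obs):
        z = self.encoder(obs)                          # one representation for both
        return self.actor_head(z), self.critic_head(z)

class DecoupledRepActorCritic(nn.Module):
    # Actor and critic each learn their own encoder, letting their
    # representations specialise independently (action-relevant
    # information vs. value/dynamics information).
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.actor_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.critic_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor_head = nn.Linear(hidden, act_dim)
        self.critic_head = nn.Linear(hidden, 1)

    def forward(self, obs):
        logits = self.actor_head(self.actor_encoder(obs))
        value = self.critic_head(self.critic_encoder(obs))
        return logits, value

obs = torch.randn(8, 16)                               # batch of 8 observations
logits, value = DecoupledRepActorCritic(16, 4)(obs)

In the decoupled variant, a representation learning objective can be attached to each encoder separately, which is what allows the objective to match each component's specialisation.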