

Poster in Workshop: Workshop on Agent Learning in Open-Endedness

Agent, do you see it now? systematic generalisation in deep reinforcement learning

Borja G. Leon · Murray Shanahan · Francesco Belardinelli


Abstract:

Systematic generalisation, i.e., the algebraic capacity to understand and execute unseen tasks by combining already known primitives, is one of the most desirable features of a computational model. Good adaptation to novel tasks in open-ended settings relies heavily on an agent's ability to reuse past experience and recombine meaningful pieces of learning to tackle new goals. In this work, we analyse how the architecture of the convolutional layers affects the performance of autonomous agents when generalising zero-shot to unseen tasks while executing human instructions. Our findings suggest that a convolutional architecture suited to the environment the agent will interact with may matter more than training a generic convolutional network in that environment.
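
To make the architectural comparison concrete, the following is a minimal sketch, not taken from the paper, of two image encoders an instruction-following agent might use: a generic off-the-shelf CNN versus one whose receptive fields are tuned to the observation space. The layer sizes, the 84x84 RGB observation shape, and the class names GenericEncoder and EnvironmentSuitedEncoder are illustrative assumptions.

# Minimal PyTorch sketch (assumptions, not the authors' architectures):
# two convolutional encoders that map image observations to a feature
# vector an RL policy could consume.
import torch
import torch.nn as nn


class GenericEncoder(nn.Module):
    """Common off-the-shelf CNN (Nature-DQN style strides and kernels)."""

    def __init__(self, in_channels: int = 3, out_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened size from a dummy 84x84 observation.
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, in_channels, 84, 84)).shape[1]
        self.head = nn.Linear(flat, out_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(obs))


class EnvironmentSuitedEncoder(nn.Module):
    """Smaller kernels and strides, assumed here to better match
    grid-like observations where small objects carry the task-relevant
    information (an illustrative assumption)."""

    def __init__(self, in_channels: int = 3, out_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, in_channels, 84, 84)).shape[1]
        self.head = nn.Linear(flat, out_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(obs))


if __name__ == "__main__":
    obs = torch.zeros(4, 3, 84, 84)  # batch of dummy RGB observations
    for enc in (GenericEncoder(), EnvironmentSuitedEncoder()):
        print(type(enc).__name__, enc(obs).shape)  # both yield (4, 256)

In a study of the kind the abstract describes, the two encoders would be swapped into an otherwise identical agent and compared on held-out instruction combinations; everything downstream of the feature vector stays fixed so that differences in zero-shot performance can be attributed to the convolutional design.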
