Mon May 6th 09:45 AM -- 06:30 PM @ Room R04
Structure & Priors in Reinforcement Learning (SPiRL)
Pierre-Luc Bacon · Marc Deisenroth · Chelsea Finn · Erin Grant · Thomas L Griffiths · Abhishek Gupta · Nicolas Heess · Michael L. Littman · Junhyuk Oh


Generalization and sample complexity remain unresolved problems in reinforcement learning (RL), limiting the applicability of these methods to real-world problem settings. A powerful solution to these challenges lies in the deliberate use of inductive bias, which can allow RL algorithms to acquire solutions from significantly fewer samples and with greater generalization performance [Ponsen et al., 2009]. However, what form this inductive bias should take in the context of RL remains an open question. Should it be provided as a prior distribution for use in Bayesian inference [Ghavamzadeh et al., 2015], learned wholly from data in a multi-task or meta-learning setup [Taylor and Stone, 2009], specified as structural constraints (such as temporal abstraction [Parr and Russell, 1998, Dietterich, 2000, Sutton et al., 1999] or hierarchy [Singh, 1992, Dayan and Hinton, 1992]), or some combination thereof?

The computational cost of recently successful applications of RL to complex domains such as gameplay [Silver et al., 2016, Silver et al., 2017, OpenAI, 2018] and robotics [Levine et al., 2018, Kalashnikov et al., 2018] has led to renewed interest in answering this question, most notably in the specification and learning of structure [Vezhnevets et al., 2017, Frans et al., 2018, Andreas et al., 2017] and priors [Duan et al., 2016, Wang et al., 2016, Finn et al., 2017]. In response to this trend, the ICLR 2019 workshop on "Structure & Priors in Reinforcement Learning" (SPiRL) aims to revitalize a multi-disciplinary approach to investigating the role of structure and priors as a way of specifying inductive bias in RL.

Beyond machine learning, other disciplines such as neuroscience and cognitive science have traditionally played, or have the potential to play, a role in identifying useful structure [Botvinick et al., 2009, Boureau et al., 2015] and priors [Trommershauser et al., 2008, Gershman and Niv, 2015, Dubey et al., 2018] for use in RL. As such, we expect attendees from a wide variety of backgrounds (including RL and machine learning, Bayesian methods, cognitive science, and neuroscience), which should aid the (re-)discovery of commonalities and under-explored research directions.

09:45 AM Opening remarks (Talk)
09:50 AM TBA (Invited talk)
Pieter Abbeel
10:20 AM Efficient off-policy meta-reinforcement learning via probabilistic context variables (Contributed talk)
Kate Rakelly, Aurick Zhou
10:30 AM Poster Session #1 (Break)
11:00 AM Meta-reinforcement learning: Quo vadis? (Invited talk)
Matthew Botvinick
11:30 AM Directions and challenges in multi-task reinforcement learning (Invited talk)
Katja Hofmann
12:00 PM Self-supervised object-centric representations for reinforcement learning (Invited talk)
Tejas Kulkarni
12:30 PM TBA (Invited talk)
Timothy Lillicrap
03:20 PM Task-agnostic priors for reinforcement learning (Invited talk)
Karthik Narasimhan
03:50 PM Priors for exploration and robustness (Contributed talk)
Ben Eysenbach, Lisa Lee, Jacob Tyo
04:00 PM Poster Session #2 (Break)
04:30 PM TBA (Invited talk)
05:00 PM Learning and development of structured, causal priors (Invited talk)
Jane Wang
05:30 PM Discussion Panel & Closing Remarks (Discussion)
Timothy Lillicrap, Tejas Kulkarni, Karthik Narasimhan, Jane Wang