Workshop
Mon May 06 07:45 AM -- 04:30 PM (PDT) @ Room R04
Structure & Priors in Reinforcement Learning (SPiRL)
Pierre-Luc Bacon · Marc Deisenroth · Chelsea Finn · Erin Grant · Thomas L. Griffiths · Abhishek Gupta · Nicolas Heess · Michael L. Littman · Junhyuk Oh
Abstract
Generalization and sample complexity remain unresolved problems in reinforcement learning (RL), limiting the applicability of these methods to real-world settings. A powerful remedy lies in the deliberate use of inductive bias, which can enable RL algorithms to acquire solutions from significantly fewer samples and with better generalization [Ponsen et al., 2009]. However, the question of what form this inductive bias should take in RL remains open. Should it be provided as a prior distribution for use in Bayesian inference [Ghavamzadeh et al., 2015], learned wholly from data in a multi-task or meta-learning setup [Taylor and Stone, 2009], specified as structural constraints (such as temporal abstraction [Parr and Russell, 1998; Dietterich, 2000; Sutton et al., 1999] or hierarchy [Singh, 1992; Dayan and Hinton, 1992]), or some combination thereof?
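To make one of these structural alternatives concrete, the sketch below illustrates temporal abstraction in the options framework of Sutton et al. [1999], in which an option couples an initiation set, an intra-option policy, and a termination condition, and executing an option to termination induces a semi-MDP over the base environment. This is a minimal illustration under an assumed gym-like environment interface; the names `Option`, `run_option`, and `env.step` are hypothetical and do not correspond to any particular library.

```python
# A minimal sketch of temporal abstraction via the options framework
# (Sutton et al., 1999). All names here are illustrative assumptions,
# not any library's API.
import random
from dataclasses import dataclass
from typing import Any, Callable, Set

@dataclass
class Option:
    initiation_set: Set[Any]             # states in which the option may be invoked
    policy: Callable[[Any], Any]         # intra-option policy: state -> action
    termination: Callable[[Any], float]  # beta(s): probability of terminating in state s

def run_option(env, state, option, gamma=0.99, rng=random):
    """Execute an option until it terminates; return the resulting state,
    the discounted return accumulated along the way, and the elapsed steps.
    Treating each such multi-step trajectory as a single decision is what
    turns the underlying MDP into a semi-MDP."""
    assert state in option.initiation_set, "option unavailable in this state"
    total_return, discount, steps = 0.0, 1.0, 0
    done = False
    while not done:
        action = option.policy(state)
        state, reward, done = env.step(action)  # assumed gym-like interface
        total_return += discount * reward
        discount *= gamma
        steps += 1
        if rng.random() < option.termination(state):
            break
    return state, total_return, steps
```

A hierarchical agent can then choose among such options (and primitive actions) at the top level, committing to each choice for multiple timesteps, which is one way the structural constraints above restrict the hypothesis space of policies.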
The computational cost of recent successful applications of RL to complex domains such as gameplay [Silver et al., 2016; Silver et al., 2017; OpenAI, 2018] and robotics [Levine et al., 2018; Kalashnikov et al., 2018] has renewed interest in answering this question, most notably through the specification and learning of structure [Vezhnevets et al., 2017; Frans et al., 2018; Andreas et al., 2017] and priors [Duan et al., 2016; Wang et al., 2016; Finn et al., 2017]. In response to this trend, the ICLR 2019 workshop on "Structure & Priors in Reinforcement Learning" (SPiRL) aims to revitalize a multi-disciplinary approach to investigating the role of structure and priors as a way of specifying inductive bias in RL.
Beyond machine learning, other disciplines such as neuroscience and cognitive science have traditionally played, or have the potential to play, a role in identifying useful structure [Botvinick et al., 2009; Boureau et al., 2015] and priors [Trommershauser et al., 2008; Gershman and Niv, 2015; Dubey et al., 2018] for use in RL. We therefore expect attendees from a broad range of backgrounds, including RL and machine learning, Bayesian methods, cognitive science, and neuroscience, a diversity that should aid the (re-)discovery of commonalities and of under-explored research directions.