

Poster

DOM-Q-NET: Grounded RL on Structured Language

Sheng Jia · Jamie Kiros · Jimmy Ba

Great Hall BC #11

Keywords: [ web navigation ] [ reinforcement learning ] [ graph neural networks ]


Abstract:

Building agents to interact with the web would allow for significant improvements in knowledge understanding and representation learning. However, web navigation tasks are difficult for current deep reinforcement learning (RL) models due to the large discrete action space and the varying number of actions across states. In this work, we introduce DOM-Q-NET, a novel architecture for RL-based web navigation that addresses both of these problems. It parametrizes Q functions with separate networks for the different action categories: clicking a DOM element and typing a string input. Our model uses a graph neural network to represent the tree-structured HTML of a standard web page. We demonstrate the capabilities of our model on the MiniWoB environment, where it matches or outperforms existing work without the use of expert demonstrations. Furthermore, we show a 2x improvement in sample efficiency when training in the multi-task setting, allowing our model to transfer learned behaviours across tasks.
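The abstract describes two key ideas: embedding DOM nodes with a graph neural network over the HTML tree, and factorizing the Q function into separate heads for the click and type action categories. The sketch below illustrates that structure in PyTorch; the module names, dimensions, the simple message-passing step, and the goal summary are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of GNN node embeddings + factorized Q heads (click vs. type).
# All names and shapes are hypothetical; they only mirror the abstract's description.
import torch
import torch.nn as nn

class DomGNN(nn.Module):
    """A few rounds of neighbour aggregation over the DOM tree."""
    def __init__(self, in_dim, hid_dim, steps=3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.msg = nn.Linear(hid_dim, hid_dim)
        self.steps = steps

    def forward(self, node_feats, adj):
        # node_feats: (N, in_dim) per-DOM-node features (tag/text/attributes embedded upstream)
        # adj: (N, N) adjacency of the HTML tree (parent-child edges, symmetrized)
        h = torch.relu(self.proj(node_feats))
        for _ in range(self.steps):
            h = torch.relu(h + self.msg(adj @ h))  # aggregate messages from tree neighbours
        return h  # (N, hid_dim) DOM-node embeddings

class FactorizedQ(nn.Module):
    """Separate Q heads for the two action categories: click(DOM node) and type(token)."""
    def __init__(self, hid_dim, vocab_size):
        super().__init__()
        self.q_click = nn.Linear(hid_dim, 1)            # Q-value per DOM node
        self.token_emb = nn.Embedding(vocab_size, hid_dim)
        self.q_type = nn.Bilinear(hid_dim, hid_dim, 1)  # Q-value per (state summary, token) pair

    def forward(self, node_h, state_h, token_ids):
        q_click = self.q_click(node_h).squeeze(-1)                           # (N,)
        tok_h = self.token_emb(token_ids)                                    # (V, hid_dim)
        q_type = self.q_type(state_h.expand_as(tok_h), tok_h).squeeze(-1)    # (V,)
        return q_click, q_type

# Toy usage: 8 DOM nodes, 16-dim node features, 10 candidate strings to type.
gnn = DomGNN(in_dim=16, hid_dim=32)
qnet = FactorizedQ(hid_dim=32, vocab_size=10)
feats, adj = torch.randn(8, 16), torch.eye(8)
node_h = gnn(feats, adj)
state_h = node_h.mean(dim=0)  # crude global state summary, for illustration only
q_click, q_type = qnet(node_h, state_h, torch.arange(10))
action = ("click", int(q_click.argmax())) if q_click.max() > q_type.max() else ("type", int(q_type.argmax()))
```

Because each head scores the candidates available in the current page (DOM nodes for clicking, strings for typing), the agent handles a varying number of actions per state without a fixed-size output layer, which is the factorization the abstract refers to.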
