

Poster

WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning

Zehan Qi · Xiao Liu · Iat Long Iong · Hanyu Lai · Xueqiao Sun · Jiadai Sun · Xinyue Yang · Yu Yang · Shuntian Yao · Wei Xu · Jie Tang · Yuxiao Dong

Hall 3 + Hall 2B #264
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Large language models (LLMs) have shown remarkable potential as autonomous agents, particularly in web-based tasks. However, existing LLM web agents face significant limitations: high-performing agents rely on expensive proprietary LLM APIs, while open LLMs lack the necessary decision-making capabilities. This paper introduces WebRL, a novel self-evolving online curriculum reinforcement learning framework designed to train high-performance web agents using open LLMs. Our approach addresses key challenges in this domain, including the scarcity of training tasks, sparse feedback signals, and policy distribution drift in online learning. WebRL incorporates a self-evolving curriculum that generates new tasks from unsuccessful attempts, a robust outcome-supervised reward model (ORM), and adaptive reinforcement learning strategies to ensure consistent improvement. We apply WebRL to transform Llama-3.1 models into proficient web agents, achieving remarkable results on the WebArena-Lite benchmark. Our Llama-3.1-8B agent improves from an initial 4.8% success rate to 42.4%, while the Llama-3.1-70B agent achieves a 47.3% success rate across five diverse websites. These results surpass the performance of GPT-4-Turbo (17.6%) by a relative margin of over 160% and significantly outperform the previous state-of-the-art web agent trained on open LLMs (AutoWebGLM, 18.2%). Our findings demonstrate WebRL's effectiveness in bridging the gap between open and proprietary LLM-based web agents, paving the way for more accessible and powerful autonomous web interaction systems.
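The abstract outlines a training loop built from three pieces: a self-evolving curriculum that derives new tasks from unsuccessful attempts, an outcome-supervised reward model (ORM) that scores rollouts, and an adaptive RL update that limits policy distribution drift across phases. The sketch below is a minimal, illustrative rendering of that loop under our own assumptions; it is not the authors' implementation, and every name here (propose_tasks_from_failures, orm_score, policy_update, the KL coefficient, the stub bodies) is hypothetical.

    # Minimal, illustrative sketch of a WebRL-style self-evolving curriculum
    # RL loop. All names, stubs, and constants are assumptions for exposition,
    # not the paper's code.
    import random

    def propose_tasks_from_failures(failures, policy):
        """Stub: in WebRL, new tasks are generated from unsuccessful attempts."""
        return [f"variant of: {task}" for task, _, _ in failures]

    def rollout(policy, env, task):
        """Stub: the agent interacts with a website to attempt the task."""
        return [("observe", task), ("act", policy)]

    def orm_score(orm, task, trajectory):
        """Stub: the outcome-supervised reward model judges the final outcome."""
        return random.choice([0.0, 1.0])

    def policy_update(policy, batch, kl_coef):
        """Stub: an adaptive policy update constrained to stay near the
        previous policy (e.g., via a KL term) to limit distribution drift."""
        return policy

    def webrl_training_loop(policy, orm, seed_tasks, env, num_phases=3):
        tasks = list(seed_tasks)
        failures, replay = [], []   # failed attempts seed the curriculum;
                                    # past successes are replayed for stability
        for _ in range(num_phases):
            # 1. Self-evolving curriculum: derive new tasks from earlier failures.
            tasks = propose_tasks_from_failures(failures, policy) + tasks

            successes, failures = [], []
            for task in tasks:
                traj = rollout(policy, env, task)
                reward = orm_score(orm, task, traj)   # sparse outcome reward
                (successes if reward > 0 else failures).append((task, traj, reward))

            # 2. Adaptive RL update on fresh successes plus replayed ones.
            policy = policy_update(policy, successes + replay, kl_coef=0.1)
            replay.extend(successes)
        return policy

    if __name__ == "__main__":
        webrl_training_loop(policy="llama-3.1-8b", orm="orm",
                            seed_tasks=["book a flight"], env=None)

In this reading, the curriculum, the ORM, and the drift-constrained update each address one of the three challenges named in the abstract (task scarcity, sparse feedback, and policy drift); how WebRL actually instantiates them is detailed in the paper itself.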
