

Workshop

Workshop on Reasoning and Planning for Large Language Models

Zhiyuan Hu · Yilun Zhao · Xidong Feng · Min-Yen Kan · Nouha Dziri · Yali Du · Pang Wei Koh · Bryan Hooi · Arman Cohan

Garnet 212-213

Sun 27 Apr, 5:30 p.m. PDT

This workshop explores the growing capabilities of large language models (LLMs), such as OpenAI's o1 model, in reasoning, planning, and decision-making, highlighting recent advances and challenges. We aim to examine how reinforcement learning methods, post-training optimization, and efficient inference techniques can further enhance LLMs' reasoning capabilities. Topics include training approaches for enhancing reasoning and planning abilities, scaling inference for complex tasks, developing robust benchmarks, and extending LLMs to multi-modal and embodied environments. We will also discuss broader themes such as causal reasoning, collaborative multi-agent systems, uncertainty, and explainability to offer insights and guidance for the further development of reasoning and planning in LLMs.


Timezone: America/Los_Angeles
