Poster in Workshop on Large Language Models for Agents

If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

Ke Yang · Jiateng Liu · John Wu · Chaoqi Yang · Yi Fung · Sha Li · Zixuan Huang · Xu Cao · Xingyao Wang · Heng Ji · ChengXiang Zhai


Abstract:

The prominent large language models (LLMs) of today differ from past language models not only in size, but also in that they are trained on a combination of natural language and code. As a medium between humans and computers, code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity. In this survey, we present an overview of the various benefits of integrating code into LLMs' training data. In addition, we trace how these code-derived capabilities have led LLMs to emerge as intelligent agents (IAs). Finally, we present several key challenges and future directions for empowering code-LLMs to serve as IAs.