

Oral in Workshop on Large Language Models for Agents

AutoAct: Automatic Agent Learning from Scratch via Self-Planning

Shuofei Qiao · Ningyu Zhang · Runnan Fang · Yujie Luo · Wangchunshu Zhou · Yuchen Jiang · Chengfei Lv · Huajun Chen


Abstract:

Language agents have achieved considerable performance on various complex tasks. Despite continued exploration in this field, existing language agent systems still depend on costly, non-reproducible data and face the challenge of forcing a single model to serve multiple functions. To this end, we introduce AutoAct, an automatic agent learning framework that does not rely on large-scale annotated data or on synthetic trajectories from closed-source models (e.g., GPT-4). Given limited data and a tool library, AutoAct first automatically synthesizes planning trajectories without any assistance from humans or strong closed-source models. Then, AutoAct leverages a division-of-labor strategy to automatically differentiate the base model according to the target task information and the synthesized trajectories, producing a group of sub-agents that collaborate to complete the task. We conduct comprehensive experiments with different LLMs, which demonstrate that AutoAct yields performance better than or comparable to various strong baselines. Further analysis confirms the effectiveness of the division-of-labor strategy, with the trajectory quality generated by AutoAct generally outperforming that of other methods.
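As a rough illustration of the division-of-labor idea described above, the following is a minimal, hypothetical sketch: a single base model is differentiated into specialized sub-agents (here a plan agent, a tool agent, and a reflect agent — role names assumed for illustration), each handling one stage of the task. The rule-based functions stand in for fine-tuned LLM sub-agents; none of this code is from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SubAgent:
    """One specialized sub-agent in the division-of-labor pipeline (illustrative)."""
    role: str
    act: Callable[[str], str]

def plan(task: str) -> str:
    # Plan-agent stand-in: pick a tool from the library based on the task.
    return "search" if "who" in task.lower() else "calculate"

def use_tool(tool: str) -> str:
    # Tool-agent stand-in: invoke the selected tool from a toy tool library.
    tools: Dict[str, str] = {"search": "searched result", "calculate": "42"}
    return tools[tool]

def reflect(observation: str) -> str:
    # Reflect-agent stand-in: turn the tool observation into a final answer.
    return f"Answer: {observation}"

def run_pipeline(task: str) -> str:
    # Each sub-agent handles one stage; output of one feeds the next.
    agents: List[SubAgent] = [
        SubAgent("plan", plan),
        SubAgent("tool", use_tool),
        SubAgent("reflect", reflect),
    ]
    state = task
    for agent in agents:
        state = agent.act(state)
    return state

print(run_pipeline("Who wrote Hamlet?"))  # -> Answer: searched result
```

In AutoAct the sub-agents are fine-tuned variants of the same base model rather than hand-written functions, but the control flow — specialized roles passing intermediate state along a pipeline — follows this shape.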
