

Virtual presentation / poster accept

Automatic Chain of Thought Prompting in Large Language Models

Zhuosheng Zhang · Aston Zhang · Mu Li · Alex Smola

Keywords: [ Arithmetic Reasoning ] [ Chain of Thought Prompting ] [ Commonsense Reasoning ] [ in-context learning ] [ large language models ] [ few-shot learning ] [ Symbolic Reasoning ] [ Applications ]


Abstract:

Large Language Models (LLMs) can carry out complex reasoning tasks by generating intermediate reasoning steps. These steps are triggered by chain-of-thought (CoT) prompting, which comes in two flavors: one leverages a simple prompt like "Let's think step by step" to facilitate step-by-step reasoning before answering a question (Zero-Shot-CoT); the other uses manual demonstrations, each composed of a question and a reasoning chain that leads to an answer (Manual-CoT). Unfortunately, the superior performance of the latter strategy hinges on manually crafting task-specific demonstrations, which makes it far less scalable and more dependent on the skill of the CoT engineer. We show that such manual effort can be eliminated by leveraging the LLM itself to generate the reasoning chains. Since these generated chains often contain mistakes, we propose a number of mitigation strategies. Our proposed Auto-CoT method automatically samples diverse questions and applies post-processing quality control to generate usable reasoning chains with Zero-Shot-CoT. On ten public benchmark reasoning tasks, Auto-CoT performs on par with Manual-CoT without any human intervention. Code is available at https://github.com/amazon-research/auto-cot.
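Concretely, the pipeline the abstract describes can be sketched in a few lines of Python. The sketch below is illustrative, not the authors' released code: the `llm` function is a hypothetical stand-in for any text-completion model or API, TF-IDF embeddings substitute for the sentence encoder used in the paper, and the step-count filter loosely mirrors the paper's quality-control heuristics.

```python
# Illustrative sketch of Auto-CoT demonstration construction, under the
# assumptions stated above (not the code released in the linked repository).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def llm(prompt: str) -> str:
    """Hypothetical LLM call; plug in your own model or API client here."""
    raise NotImplementedError


def build_auto_cot_demos(questions, k=8, max_steps=5):
    # 1) Embed and cluster the questions so the sampled demos cover
    #    diverse question types (TF-IDF stands in for a sentence encoder).
    vecs = TfidfVectorizer().fit_transform(questions)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vecs)

    demos = []
    for c in range(k):
        members = [i for i, label in enumerate(km.labels_) if label == c]
        # 2) Take the question closest to the cluster centroid as the
        #    representative for this cluster.
        rep = min(
            members,
            key=lambda i: np.linalg.norm(vecs[i].toarray() - km.cluster_centers_[c]),
        )
        q = questions[rep]
        # 3) Elicit a reasoning chain with the Zero-Shot-CoT prompt.
        chain = llm(f"Q: {q}\nA: Let's think step by step.")
        # 4) Crude quality control: discard long chains, which are likelier
        #    to contain mistakes (the paper filters on step count and
        #    question length).
        if chain.count("\n") < max_steps:
            demos.append(f"Q: {q}\nA: Let's think step by step. {chain}")
    return demos
```

At inference time, the returned demonstrations would be concatenated in front of a new test question to form the final few-shot CoT prompt, replacing the hand-written examples that Manual-CoT requires.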
