
🏸 Auto CoT Prompting

Abstract

This section covers Auto CoT (Automatic Chain-of-Thought) prompting, a technique for automatically constructing CoT demonstrations instead of relying on manually written examples.

🦜 Video lecture for this chapter - Link

✒️ Overview

Two popular CoT prompting techniques are few-shot CoT and zero-shot CoT. Zero-shot CoT is simple and task-agnostic, requiring nothing more than appending "Let's think step by step" to the prompt. Although it needs no manually crafted examples and shows decent zero-shot reasoning capability, the reasoning chains it generates can be inaccurate.
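
For illustration, a zero-shot CoT prompt simply appends the trigger phrase to the question; the arithmetic question below is only an illustrative example:

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3
   tennis balls. How many tennis balls does he have now?
A: Let's think step by step.
```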

Few-shot CoT achieves better performance than zero-shot CoT but requires significant manual effort to write examples with reasoning chains. Auto CoT prompting addresses the shortcomings of both techniques by automatically constructing demonstrations with reasoning chains.
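
To make the goal concrete, a few-shot CoT demonstration pairs a question with a worked reasoning chain, as in the illustrative example below; Auto-CoT aims to produce such demonstrations without a human writing the rationale:

```
Q: There are 15 trees in the grove. Grove workers plant trees today. After they are
   done, there are 21 trees. How many trees did the grove workers plant?
A: Let's think step by step. There were originally 15 trees. After planting, there are
   21 trees. So the workers planted 21 - 15 = 6 trees. The answer is 6.
```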

✒️ How it works

Auto-CoT consists of two main steps.

  • The first step partitions the questions of a given dataset into a few clusters.
  • The second step selects a representative question from each cluster and generates its reasoning chain using zero-shot CoT with simple heuristics.

As discussed earlier, LLMs are only decent zero-shot reasoners, and the reasoning chains they generate are prone to errors. Selecting diverse questions, one representative per cluster, reduces the impact of any single incorrect reasoning chain on the overall set of demonstrations.
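
The following is a minimal sketch of these two stages, assuming the sentence-transformers library for question embeddings, scikit-learn k-means for clustering, and a placeholder llm() function that stands in for whichever text-completion API is available; the filtering thresholds (short questions, short rationales) are illustrative simplifications of the simple heuristics mentioned above.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

def llm(prompt: str) -> str:
    """Placeholder for a text-completion call; replace with your model API."""
    raise NotImplementedError

def build_demonstrations(questions, k=8, max_question_words=60, max_steps=5):
    # Stage 1: cluster the questions so the selected demonstrations are diverse.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(questions)
    kmeans = KMeans(n_clusters=k, random_state=0, n_init=10).fit(embeddings)

    demos = []
    for c in range(k):
        members = np.where(kmeans.labels_ == c)[0]
        # Rank cluster members by distance to the centroid (most typical first).
        dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
        for i in members[np.argsort(dists)]:
            question = questions[i]
            if len(question.split()) > max_question_words:  # heuristic: keep questions short
                continue
            # Stage 2: generate the reasoning chain with zero-shot CoT.
            rationale = llm(f"Q: {question}\nA: Let's think step by step.")
            if rationale.count(".") > max_steps:  # heuristic: keep rationales short
                continue
            demos.append(f"Q: {question}\nA: Let's think step by step. {rationale}")
            break  # keep one demonstration per cluster
    return demos
```

The returned demonstrations can then be prepended to a new test question, followed by its own "Let's think step by step." trigger, to form the final few-shot CoT prompt.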

✒️ Pros

  • Reduced manual effort - Automates the error-prone and laborious task of manual demonstration creation.
  • Scalability - Enables generating demonstrations for a vast number of tasks without manual intervention.

✒️ Cons

  • Incorrect reasoning chains - The LLM might generate incorrect or incomplete reasoning steps.

To summarize, Auto-CoT prompting automates the task of creating demonstrations and achieves results on par with few-shot CoT prompting.

📝 Note: The above image is from the "Auto CoT Prompting" paper.