Auto CoT Prompting
Abstract
This section covers "Auto CoT Prompting".
Video lecture for this chapter - Link
Overview
Two popular CoT prompting techniques are Few-Shot-CoT and Zero-Shot-CoT. Zero-Shot-CoT is simple and task-agnostic, requiring only that "Let's think step by step" be appended to the prompt. Although it needs no manually crafted examples and shows decent zero-shot reasoning capability, the reasoning chains it generates can be inaccurate.
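As a minimal sketch, the whole technique amounts to one prompt template. The `call_llm` stub below is a hypothetical placeholder for whatever completion API is in use; it is not part of the method itself.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call; swap in a real API."""
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # Zero-Shot-CoT: append the trigger phrase and let the model produce
    # the reasoning chain itself; no hand-written demonstrations are needed.
    prompt = f"Q: {question}\nA: Let's think step by step."
    return call_llm(prompt)
```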
Few-Shot-CoT achieves better performance than Zero-Shot-CoT but demands significant manual effort: every demonstration is a question paired with a hand-written reasoning chain. Auto CoT prompting addresses the shortcomings of both techniques by constructing such demonstrations automatically.
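For contrast, here is a minimal Few-Shot-CoT sketch reusing the hypothetical `call_llm` stub above. The single demonstration is purely illustrative; real prompts typically carry several such examples.

```python
# Few-Shot-CoT: prepend hand-written demonstrations (question, reasoning
# chain, answer) to the test question.
DEMO = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def few_shot_cot(question: str) -> str:
    return call_llm(DEMO + f"Q: {question}\nA:")  # call_llm as sketched above
```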
How it works
Auto-CoT consists of two main steps.
- The first step involves partitioning the questions of a given dataset into a few clusters.
- The second step involves selecting a representative question from each cluster and generating its reasoning chain using Zero-Shot-CoT with simple heuristics.
As noted above, LLMs are only decent zero-shot reasoners, so the generated reasoning chains are prone to errors. Selecting diverse questions, one per cluster, reduces the impact of any single incorrect reasoning chain on the final demonstration set.
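To make the two steps concrete, here is a condensed sketch in Python. It assumes the sentence-transformers and scikit-learn packages for the clustering step and reuses the `zero_shot_cot` helper sketched above. The filter approximates the paper's simple heuristics (prefer short questions with short rationales) with crude proxies, so treat the thresholds as placeholders rather than the paper's exact values.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def build_demonstrations(questions: list[str], k: int = 8) -> list[str]:
    # Step 1: partition the questions into k clusters by embedding similarity.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(questions)
    km = KMeans(n_clusters=k, n_init=10).fit(embeddings)

    demos = []
    for c in range(k):
        # Step 2: walk each cluster's questions from nearest-to-centroid
        # outward, and keep the first one whose Zero-Shot-CoT output passes
        # the simple heuristics (short question, few reasoning steps).
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[idx] - km.cluster_centers_[c], axis=1)
        for i in idx[np.argsort(dists)]:
            q = questions[i]
            chain = zero_shot_cot(q)  # "Let's think step by step." generation
            # Crude proxies: word count for question length, sentence count
            # (periods) for the number of reasoning steps.
            if len(q.split()) <= 60 and chain.count(".") <= 5:
                demos.append(f"Q: {q}\nA: Let's think step by step. {chain}")
                break
    return demos  # prepend these to a test question, Few-Shot-CoT style
```

Because each demonstration comes from a different cluster, a faulty chain produced by Zero-Shot-CoT for one representative question is unlikely to share its error pattern with the others, which is why diversity mitigates the weakness noted above.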
Pros
Reduced manual effort
- Automates the error-prone and laborious task of manual demonstration creation.

Scalability
- Enables generating demonstrations for a vast number of tasks without manual intervention.
Cons
Incorrect Reasoning Chains
- The LLM might generate incorrect or incomplete reasoning steps.
To summarize, Auto-CoT prompting automates the task of creating demonstrations and achieves results on par with few-shot CoT prompting.
Note: The above image is from the Auto-CoT paper, "Automatic Chain of Thought Prompting in Large Language Models".