CoT Prompting
Abstract
This section covers "CoT Prompting".
Video lecture for this chapter - Link
Overview
CoT prompting involves presenting the model with examples that pair inputs and outputs with the reasoning chains connecting them. These reasoning chains guide the LLM to solve the given problem in a step-by-step manner. The technique is especially useful for tasks that require logical reasoning, problem-solving, or deep understanding.
This technique not only helps in obtaining more accurate and contextually relevant answers but also makes the model's decision-making process transparent and easier to understand.
How it works
Consider the human thought process when solving a complicated reasoning task such as a multi-step math word problem. It is quite common to decompose the problem into intermediate steps and solve each before giving the final answer: "After Jane gives 2 flowers to her mom she has 10 ... then after she gives 3 to her dad she will have 7 ... so the answer is 7."
CoT prompting is heavily inspired by this observation. By providing example reasoning chains, CoT prompting instructs the LLM to solve the given problem in a step-by-step manner rather than arrive at the answer directly:
- **Show examples**: You provide a few examples (solved problems) together with their reasoning steps.
- **Learn by watching**: The LLM learns from these examples how to break the problem into steps, solve each step, and finally generate the answer.
- **Solve the new problem**: The LLM attempts to solve the given problem using the step-by-step approach learned from the examples (a minimal prompt-assembly sketch follows this list).
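To make this concrete, here is a minimal Python sketch of how such a few-shot CoT prompt might be assembled. The exemplar list, the `build_cot_prompt` helper, and the `call_llm` placeholder are illustrative assumptions for this chapter, not part of any particular library:

```python
# Minimal sketch: assembling a few-shot CoT prompt from solved exemplars.
# The exemplars are illustrative; `call_llm` is a hypothetical placeholder
# for whatever model client you actually use.

COT_EXEMPLARS = [
    {
        "question": "A coin is heads up. Ka flips the coin. Sherrie flips the coin. "
                    "Is the coin still heads up?",
        "reasoning": "The coin was flipped by Ka and Sherrie. So the coin was flipped "
                     "2 times, which is an even number. The coin started heads up, so "
                     "after an even number of flips, it will still be heads up. "
                     "So the answer is yes.",
    },
    # ... more solved examples with their reasoning chains ...
]

def build_cot_prompt(new_question: str) -> str:
    """Concatenate solved exemplars (question + reasoning chain) and the new question."""
    parts = [f"Q: {ex['question']}\nA: {ex['reasoning']}" for ex in COT_EXEMPLARS]
    parts.append(f"Q: {new_question}\nA:")  # the model continues with its own chain
    return "\n\n".join(parts)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("Replace with your model client of choice.")

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A coin is heads up. Millicent does not flip the coin. "
        "Conception flips the coin. Is the coin still heads up?"
    )
    print(prompt)                  # inspect the assembled few-shot CoT prompt
    # answer = call_llm(prompt)    # send to the LLM once a client is wired in
```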
Let us understand this with an example.
Example
Prompt
Q: A coin is heads up. Ka flips the coin. Sherrie flips the coin. Is the coin still heads up?
A: The coin was flipped by Ka and Sherrie. So the coin was flipped 2 times, which is an even number.
The coin started heads up, so after an even number of flips, it will still be heads up. So the answer is
yes.
Q: A coin is heads up. Jamey flips the coin. Teressa flips the coin. Is the coin still heads up?
A: The coin was flipped by Jamey and Teressa. So the coin was flipped 2 times, which is an even number. The
coin started heads up, so after an even number of flips, it will still be heads up. So the answer is yes.
Q: A coin is heads up. Maybelle flips the coin. Shalonda does not flip the coin. Is the coin still heads up?
A: The coin was flipped by Maybelle. So the coin was flipped 1 time, which is an odd number. The coin started
heads up, so after an odd number of flips, it will be tails up. So the answer is no.
Q: A coin is heads up. Millicent does not flip the coin. Conception flips the coin. Is the coin still heads up?
A:
Output
The coin was flipped by Conception. So the coin was flipped 1 time, which is an odd number. The coin started heads up, so after an odd number of flips, it will be tails up. So the answer is no.
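Because the exemplars end their reasoning with the phrase "So the answer is ...", the final answer can be recovered from the generated chain programmatically. The sketch below is a hypothetical extraction step, assuming the model keeps the closing phrase used in the exemplars:

```python
import re

def extract_final_answer(cot_output: str) -> str | None:
    """Pull the final answer out of a CoT response that ends with
    'So the answer is <answer>.' (the phrasing used in the exemplars above)."""
    match = re.search(r"the answer is\s+(.+?)\.?\s*$", cot_output.strip(), re.IGNORECASE)
    return match.group(1) if match else None

output = (
    "The coin was flipped by Conception. So the coin was flipped 1 time, "
    "which is an odd number. The coin started heads up, so after an odd "
    "number of flips, it will be tails up. So the answer is no."
)
print(extract_final_answer(output))  # -> "no"
```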
Pros
- Improves LLM performance on reasoning tasks (math, logic, etc.).
- Makes LLM answers more transparent and understandable.
- Helps debug LLM reasoning errors.
Cons
- Requires hand-crafted reasoning steps for each task.
- May not generalize well to all problems.
- Can be computationally expensive, since the model generates longer, multi-step outputs.
- Gives good results only with LLMs that have a large number of parameters; smaller models benefit much less.
In short, CoT prompting teaches LLMs to think step by step, boosting reasoning performance and explainability, but it demands hand-crafted examples and works best with large models. Despite these challenges, its potential for improved accuracy and transparency makes it a promising prompting technique for a wide range of applications.