πŸͺ— Zero Shot Prompting

Abstract

This section covers zero-shot prompting.

🦜 Video lecture for this chapter - Link

βœ’οΈ Overview

Zero-shot prompting is the simplest prompting technique: the model is asked to answer a question or perform a task without being given any examples. In simple terms, zero-shot prompting is prompting without examples.

βœ’οΈ How it works

Because the LLM is not given any examples, it relies entirely on the knowledge it gained during pretraining, interpreting the prompt directly to generate the desired output.
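In code, a zero-shot prompt is simply a single instruction with no demonstrations attached. The sketch below is a minimal illustration, assuming the official `openai` Python SDK and an illustrative model name; the helper function name is our own.

```python
# Minimal sketch of zero-shot prompting (assumes the official OpenAI
# Python SDK; model name and instruction are illustrative).

def build_zero_shot_messages(instruction: str) -> list[dict]:
    """Build a chat payload containing only the task instruction -- no examples."""
    return [{"role": "user", "content": instruction}]

messages = build_zero_shot_messages(
    "Write a poem about friendship in 4 short lines."
)

# To actually query a model (requires an API key in OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(response.choices[0].message.content)
```

Note that the payload holds exactly one user message: the defining trait of zero-shot prompting is that nothing else (no input/output pairs) is sent alongside the instruction.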

🍿 Example

Prompt

Write a poem about friendship in 4 short lines.

Output

In laughter and in sorrow's night,
A friend's embrace, a guiding light,
Together in our dreams we soar,
True friendship lasts forevermore.

Here the model is prompted without any examples; it uses only its pretrained knowledge to interpret the prompt and generate the desired output, i.e., a poem about friendship in four short lines.

βœ’οΈ Pros

  • Minimal effort - Just provide a clear instruction or question.
  • Accessible to everyone - No technical expertise needed.
  • Lower computational cost - No additional training data or fine-tuning steps are required.
  • Broad task range - Applies to many tasks without significant modification, offering versatility.

βœ’οΈ Cons

  • Limited performance - May not match the accuracy of few-shot prompting or more advanced prompting techniques.
  • Prompt dependence - Relies heavily on carefully crafted prompts for optimal results.
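The first con is easiest to see side by side: a few-shot prompt sends the same instruction plus a handful of worked input/output pairs, while zero-shot sends the instruction alone. This is a hedged sketch with illustrative example pairs, not output from a real model.

```python
# Contrast of zero-shot vs. few-shot chat payloads for the same task.
# The review texts and labels below are illustrative examples.

task = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery died in a day.'"
)

# Zero-shot: the instruction alone.
zero_shot = [{"role": "user", "content": task}]

# Few-shot: the same instruction preceded by demonstration pairs.
few_shot = [
    {"role": "user", "content": "Classify the sentiment of this review as positive or negative: 'I loved it.'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify the sentiment of this review as positive or negative: 'Total waste of money.'"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": task},
]
```

The few-shot payload costs more tokens per request, but the demonstrations often improve accuracy on tasks where the zero-shot instruction alone is ambiguous.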

To summarize, zero-shot prompting is the simplest way to leverage LLMs, but it still depends on careful prompt crafting and may not give optimal results on complex tasks.