🟢 Formalizing Prompts

[Interactive diagram showing the parts of a prompt: role, instruction, context, examples, question]

We have now covered multiple types of prompts, as well as ways to combine them. This page will provide you with terms to explain different types of prompts. Although there have been approaches to formalize discourse around prompt engineering1, the field is ever changing, so we will provide you with just enough information to get started.

Parts of a Prompt

There are a few different parts of a prompt that you will see over and over again. They are roughly:

  • A role
  • An instruction/task
  • A question
  • Context
  • Examples (few shot)

We have covered roles, instructions, and examples in previous pages. A question is simply a question! (E.g. What is the capital of France?). Context is any relevant information that you want the model to use when answering the question/performing the instruction.
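
To make these parts concrete, here is a minimal sketch (our own illustration, not a standard API; all names are hypothetical) of how the parts might be assembled into a single prompt string:

```python
# Minimal sketch: assemble optional prompt parts into one string.
# All names here are illustrative; there is no standard prompt-building API.

def assemble_prompt(role=None, instruction=None, context=None,
                    examples=None, question=None):
    """Join whichever parts are present, separated by blank lines."""
    parts = []
    if role:
        parts.append(role)
    if instruction:
        parts.append(instruction)
    if context:
        parts.append(context)
    if examples:
        parts.extend(examples)  # examples is a list of exemplar strings
    if question:
        parts.append(question)
    return "\n\n".join(parts)

print(assemble_prompt(
    role="You are a doctor.",
    instruction="Read this medical history and predict risks for the patient:",
    context="March 1, 2022: Sustained a concussion in a car accident.",
))
```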

Not all of these occur in every prompt, and when some do occur, there is no standard order for them. For example, the following two prompts, which each contain a role, an instruction, and context, will do roughly the same thing:

Prompt 1:

You are a doctor. Read this medical history and predict risks for the patient:

January 1, 2000: Fractured right arm playing basketball. Treated with a cast.
February 15, 2010: Diagnosed with hypertension. Prescribed lisinopril.
September 10, 2015: Developed pneumonia. Treated with antibiotics and recovered fully.
March 1, 2022: Sustained a concussion in a car accident. Admitted to the hospital and monitored for 24 hours.

Prompt 2:

January 1, 2000: Fractured right arm playing basketball. Treated with a cast.
February 15, 2010: Diagnosed with hypertension. Prescribed lisinopril.
September 10, 2015: Developed pneumonia. Treated with antibiotics and recovered fully.
March 1, 2022: Sustained a concussion in a car accident. Admitted to the hospital and monitored for 24 hours.

You are a doctor. Read this medical history and predict risks for the patient:

However, the second prompt is likely preferable, since the instruction is the last part of the prompt. This ordering is good because the LLM is less likely to simply write more context instead of following the instruction. For example, if given the first prompt, the LLM might add a new line: March 15, 2022: Follow-up appointment scheduled with neurologist to assess concussion recovery progress.
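
As a quick sketch of the two orderings (hypothetical variable names, shortened context), the only difference is where the instruction goes:

```python
# Hypothetical sketch of the two orderings discussed above.
# With the instruction last, the model is less likely to continue the
# context instead of carrying out the task.

history = (
    "January 1, 2000: Fractured right arm playing basketball. Treated with a cast.\n"
    "February 15, 2010: Diagnosed with hypertension. Prescribed lisinopril."
)
task = "You are a doctor. Read this medical history and predict risks for the patient:"

instruction_first = f"{task}\n\n{history}"  # model may just extend the history
instruction_last = f"{history}\n\n{task}"   # generally the safer ordering
```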

A "Standard" Prompt

We have seen a few different prompt formats thus far. Now, we will quickly jump back to the beginning and define a "standard" prompt. Following Kojima et al.2, we will refer to prompts that consist solely of a question as "standard" prompts. We also consider prompts in the QA format that consist solely of a question to be "standard" prompts.

Why should I care?

Many articles/papers that we reference use this term. We are defining it so we can discuss new types of prompts in contrast to standard prompts.

Two examples of standard prompts:

Standard Prompt

What is the capital of France?

Standard Prompt in QA format

Q: What is the capital of France?

A:
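
Here is a small sketch of producing the QA format from a bare question; the commented-out complete call is a stand-in for whatever completion API you use, not a real function:

```python
# Hypothetical sketch: wrap a bare question in the QA format.

def to_qa_format(question: str) -> str:
    return f"Q: {question}\nA:"

prompt = to_qa_format("What is the capital of France?")
print(prompt)
# answer = complete(prompt, stop=["\n"])  # `complete` is a stand-in for your
#                                         # model API; stopping at the newline
#                                         # keeps the model from writing more
#                                         # Q/A pairs
```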

Few Shot Standard Prompts

Few shot standard prompts3 are just standard prompts that have exemplars in them. Exemplars are examples of the task that the prompt is trying to solve, which are included in the prompt itself4. In research, few shot standard prompts are sometimes referred to simply as standard prompts (though we attempt not to do so in this guide).

Two examples of few shot standard prompts:

Few Shot Standard Prompt

What is the capital of Spain?
Madrid
What is the capital of Italy?
Rome
What is the capital of France?

Few Shot Standard Prompt in QA format

Q: What is the capital of Spain?
A: Madrid
Q: What is the capital of Italy?
A: Rome
Q: What is the capital of France?
A:
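
As a sketch of how such prompts can be built programmatically (our own helper, not from the cited papers), the exemplars are just question/answer pairs prepended to the final question:

```python
# Illustrative helper: build a few shot standard prompt from exemplar pairs.

def few_shot_prompt(exemplars, question, qa_format=False):
    lines = []
    for q, a in exemplars:
        lines.append(f"Q: {q}\nA: {a}" if qa_format else f"{q}\n{a}")
    lines.append(f"Q: {question}\nA:" if qa_format else question)
    return "\n".join(lines)

exemplars = [
    ("What is the capital of Spain?", "Madrid"),
    ("What is the capital of Italy?", "Rome"),
]
# Reproduces the QA-format example above.
print(few_shot_prompt(exemplars, "What is the capital of France?", qa_format=True))
```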

Few shot prompts facilitate "few shot" learning, also known as "in-context" learning, which is the ability to learn without parameter updates5.


  1. White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.
  2. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.
  3. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys. https://doi.org/10.1145/3560815
  4. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners.
  5. Zhao, T. Z., Wallace, E., Feng, S., Klein, D., & Singh, S. (2021). Calibrate Before Use: Improving Few-Shot Performance of Language Models.