📄️ 🟢 Introduction
This chapter introduces simple prompting techniques as well as key terminology. To understand prompting and prompt engineering, you first need to understand some very basic AI concepts. If you already know about the topics below, feel free to skip ahead to the next article.
📄️ 🟢 Prompting
In the previous chapter, we discussed AI and how humans can instruct AIs to perform tasks.
📄️ 🟢 Giving Instructions
One of the simplest prompting methods is just giving instructions (sometimes called instruction prompting)(@efrat2020turking)(@mishra2022reframing). We already saw a simple instruction prompt in the previous chapter.
📄️ 🟢 Role Prompting
Another prompting technique is to assign a role to the AI. For example, your prompt could begin with a role assignment such as "You are a doctor" before asking the model a question.
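Role prompting can be sketched as a small prompt-building helper; the role and question below are illustrative placeholders, not examples from the course:

```python
# A minimal sketch of role prompting: prepend a persona to the task so the
# model answers in that role. Role and question here are made up.
def role_prompt(role: str, question: str) -> str:
    return f"You are {role}. {question}"

prompt = role_prompt("a food critic", "Is this pizza good?")
print(prompt)  # -> You are a food critic. Is this pizza good?
```

The resulting string would then be sent to the model as the prompt.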
📄️ 🟢 Few-Shot Prompting
Yet another prompting strategy is few-shot prompting, which is essentially just showing the model a few examples (called shots) of what you want it to do.
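A few-shot prompt can be assembled by prepending labeled examples to the new input. This is a sketch, not the course's code; the sentiment examples and the `Input:`/`Label:` format are assumptions chosen for illustration:

```python
# Build a few-shot prompt: each (text, label) pair becomes one shot, and the
# final input is left unlabeled for the model to complete.
def build_few_shot_prompt(examples, new_input):
    lines = [f"Input: {text}\nLabel: {label}" for text, label in examples]
    lines.append(f"Input: {new_input}\nLabel:")
    return "\n\n".join(lines)

examples = [
    ("Great product, highly recommend!", "positive"),
    ("It broke after one day.", "negative"),
]
print(build_few_shot_prompt(examples, "Works fine, nothing special."))
```

The model then continues the pattern, producing a label for the final input.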
📄️ 🟢 Combining Techniques
As we have seen in the previous pages, prompts can have varying formats and complexity. They can include context, instructions, and multiple input-output examples. However, thus far, we have only examined separate classes of prompts. Combining these different prompting techniques can lead to more powerful prompts.
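One way to picture such a combination is a prompt that stacks a role, an instruction, and a few examples. This is a hedged sketch; the section layout and the sample role, instruction, and examples are assumptions for illustration:

```python
# Combine role prompting, instruction prompting, and few-shot examples
# into a single prompt, separated by blank lines.
def combined_prompt(role, instruction, examples, new_input):
    shots = [f"Input: {text}\nOutput: {label}" for text, label in examples]
    parts = [f"You are {role}.", instruction] + shots
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

prompt = combined_prompt(
    role="a movie critic",
    instruction="Classify the sentiment of each review.",
    examples=[("Loved every minute!", "positive")],
    new_input="I walked out halfway through.",
)
print(prompt)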
📄️ 🟢 Formalizing Prompts
We have now covered multiple types of prompts, as well as ways to combine them. This page will provide you with terms to describe different types of prompts. Although there have been attempts to formalize discourse around prompt engineering(@white2023prompt), the field is ever-changing, so we will provide you with just enough information to get started.
📄️ 🟢 Chatbot Basics
Thus far, this course has mostly used GPT-3 for examples. GPT-3 is an LLM that has no memory: when you ask it a question (a prompt), it does not remember anything that you have previously asked it. In contrast, chatbots like ChatGPT are able to remember your conversation history. This can be useful for applications such as customer service, or simply if you want to have a conversation with an LLM!
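Under the hood, a chatbot's "memory" is usually just the full message history being re-sent with every request. A minimal sketch, where `fake_reply` is a made-up stand-in for a real model call:

```python
# Chatbots "remember" by resending the whole conversation each turn.
history = []

def fake_reply(messages):
    # Placeholder for an LLM call; a real chatbot would send `messages`
    # (all previous turns) to the model and return its response.
    return f"(reply to: {messages[-1]['content']})"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Ada.")
chat("What is my name?")  # earlier turns are still present in `history`
print(len(history))  # -> 4 (two user turns, two assistant turns)
```

Because every turn is appended to `history`, the second question arrives at the model alongside the first, which is how the chatbot can answer it.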
📄️ 🟢 Pitfalls of LLMs
LLMs are extremely powerful, but they are by no means perfect. There are many pitfalls that you should be aware of when using them.
📄️ 🟢 LLM Settings
The output of LLMs can be affected by configuration hyperparameters, which control various aspects of the model, such as how 'random' it is. These hyperparameters can be adjusted to produce more creative, diverse, and interesting output. In this section, we will discuss two important configuration hyperparameters and how they affect the output of LLMs.
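One of these hyperparameters, temperature, rescales the model's token probabilities before sampling. A self-contained sketch of that mechanism (the logit values are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (more mass on the top
    # token, more deterministic output); higher temperature flattens it
    # (more random, more "creative" output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, temperature=0.5))  # peaked distribution
print(softmax_with_temperature(logits, temperature=2.0))  # flatter distribution
```

At low temperature the top token dominates; at high temperature the probabilities move closer together, so less likely tokens get sampled more often.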
📄️ 🟢 Understanding AI Minds
There are a few simple things you should know about different AIs and how they work before you start reading the rest of the course.
📄️ 🟢 Starting Your Journey
Now that you have learned the basics of prompt engineering, you are ready to start prompting on your own. The rest of this course contains additional techniques and resources, but the best way to learn prompt engineering is to start experimenting with your own prompts. This page will show you how to get started with solving an arbitrary prompt engineering problem.