Prompt engineering is the process of guiding generative models to produce your desired outputs. To be honest, not enough is yet known about it to really be called “engineering,” so sometimes we just say “prompting.” Below is a collection of information and resources to get you confidently developing prompts that consistently and reliably deliver the written content you are looking for.
Stable prompt → the holy grail:
Our end goal is to create something called a stable prompt: one that reliably delivers the content you want even when you swap out certain key words. If you are repeating a task, using prompts to achieve a single goal with some variation, it is worth investing effort into creating a stable prompt. Unlike a disposable, one-time prompt, a stable prompt can be re-used over and over with reliable output: you can modify a few key words and it will still deliver content of equal quality. Creating a stable prompt tailored to your needs is what prompt engineering is all about.
Preparing for Prompt Engineering
- Do a cost-benefit analysis: Before you even start composing your prompt, you need to ask yourself: do you already know exactly what you want to write?
- If yes: Skip the AI. If you know exactly what you want to write, AI is not the right tool for you. The likelihood of it accidentally writing the thing that you want is very low.
- If no: Putting in the effort of composing a prompt is useful if you’re brainstorming ideas, or you know the general direction of what you want to write but not the details. Or if you have two thirds of something done and need to consider options for bringing it all together. Or if you have something already written and want to reformat it. All of these are great use cases for AI, and are worth the effort of designing a prompt.
- Give maximum specificity to your goals: The majority of the work in developing a prompt is clearly stating your intention. Before you start, ask yourself whether it is crystal clear to you what you are trying to achieve.
- Use available resources: Can you reuse pre-existing text from elsewhere? If you have suitable language already prepared, ready to copy into the prompt, that can save you a lot of time. Use this content by giving your instructions and then telling the AI to “replicate the style in the quoted excerpt below”.
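As a concrete illustration, here is a minimal Python sketch of reusing prepared text as a style reference. The task string and the excerpt placeholder are illustrative, not content from this guide:

```python
# Sketch: combine an instruction with a quoted style sample.
# The task and excerpt below are placeholders; swap in your own content.

def build_style_prompt(task: str, excerpt: str) -> str:
    """Pair an instruction with a quoted excerpt whose style should be copied."""
    return (
        f"{task}\n\n"
        "Replicate the style in the quoted excerpt below:\n\n"
        f'"{excerpt}"'
    )

prompt = build_style_prompt(
    task="Write a short announcement about our new lab schedule.",
    excerpt="<paste your previously prepared text here>",
)
print(prompt)
```

The resulting string is what you would paste into the chat window or send through an API.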
Specifiers to mention in your prompt:
While writing up your prompt, try specifying the following characteristics to guide the AI into creating what you want (these work for instruct models only, not completion models):
- Tone - formal, casual, informative, persuasive, etc.
- Format - essay, bullet points outline, dialogue, etc.
- Speak as - expert, critic, enthusiast, etc.
- Objective - inform, persuade, entertain, etc.
- Context - describe the wider context. For example:
- Instead of writing: "Write 10 ideas about sharing lab space in universities."
- Write: "I am working in an engineering department within a university that has recently become an R2 research university. The amount of lab space is fixed, but the number of total faculty, as well as the faculty responsible for research, is increasing. List 10 ideas on how we can solve this problem of limited lab space while still ensuring safety, research quality, equipment access and faculty autonomy."
- Keywords - list keywords or phrases you want to include.
- Examples - offer examples in the prompt, starting with the statement "here is an example."
- Audience - specify the target audience to which the content is tailored.
- Analogies - you can explicitly request analogies (this can be achieved with a chain of prompts).
- Call to action - you can explicitly request that the model end with a call to action.
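To make the specifiers concrete, here is a small Python sketch that assembles them into a single prompt. The field names and all example values are illustrative, not a prescribed format:

```python
# Sketch: turn the specifiers listed above into a reusable prompt template.
# Every value passed in below is a placeholder example.

def build_prompt(tone, fmt, persona, objective, audience, context, task):
    """Assemble specifier fields into one prompt string."""
    return (
        f"Speak as {persona}. Tone: {tone}. Format: {fmt}. "
        f"Objective: {objective}. Audience: {audience}.\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    tone="informative",
    fmt="bullet-point outline",
    persona="a university facilities expert",
    objective="inform",
    audience="engineering faculty",
    context="Our department's lab space is fixed while faculty numbers grow.",
    task="List 10 ideas for sharing limited lab space safely.",
)
print(prompt)
```

Templating like this is also the first step toward a stable prompt: the fields you parameterize are exactly the key words you expect to swap out between runs.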
We recommend using the Library of Babel to document your prompt engineering journey. This tool allows you to build a repository of effective prompts, track your progress, record your successes and cross-pollinate them within a growing community. By creating this valuable resource, you can contribute to the collective knowledge in the field of prompt engineering and share your insights with the broader AI community.
Library of Babel
Library of Babel is a place to create and share prompts across Large Language Models (LLMs), in a community setting. It’s a place to collectively explore these models and their capability for augmenting human creativity, utility, absurdity, and more.
Prompt Engineering Tactics
Here we will delve into different approaches you can take to enhance the output of your AI models. These principles will provide a solid foundation for improving the quality, relevance, and accuracy of the AI's output. Prompt engineering is a blend of art and science, involving a deep understanding of language nuances, context, and the specific capabilities of the AI model.
Few Shot Learning
Few shot learning means you give the model a couple of examples and it riffs off of those to create the content you are looking for. In these examples, you want things to be as consistent as possible, with style, time frame, length and other features remaining similar to each other across examples. The only thing you want to be different from example to example is the part you want the generator to innovate on.
The example set shows the model which elements of the generated content should emulate the style of the examples. The elements that differ from example to example, meanwhile, signal the feature the model should adapt to your specification: whatever varies across the examples is what the model will innovate on in its output.
Example: I know this sounds like a difficult concept to wrap your mind around, so let me give an example. I love Pablo Neruda's poetry. Neruda is a Chilean poet, and one of the first things I ever did with generative AI was make a poem generator replicating his style. How did I do that? I started by taking a couple of real Pablo Neruda poems. Neruda's poems are especially well suited for this exercise because he has a collection of poems which all start with the title "Ode to <something>", each dedicated to that topic.
In the worksheet, you can find three of these poems. So we have one poem actually written by Pablo Neruda called "Ode to My Socks" and an "Ode to an Artichoke" and so on. These examples are perfect because they all start with the same title, and the poem style and theme is consistent across examples. All of them are dedicated to the object which is unique from title to title - be it socks, or artichokes or whatever.
So because we offered multiple examples whose titles start with “Ode to <Blank>”, the model will know to start the title of its generated poem with “Ode to”, and because the examples are all written in short stanzas and fixated on the subject of the title, the model will know to repeat those elements too. But because each example is an ode to a different object, the model will likewise vary what its ode is about: it will dedicate the generated poem to whatever you put in the title after “Ode to…”. So once you set this up, you have a poem generator that will spit out an ode to whatever subject you specify, while keeping stable elements such as style and length true to a Neruda poem.
Try it yourself: we have a very convenient platform for you to test this process. Head over to an AI platform we developed exclusively for this, called Mesopotamia.ai. Once you’ve created an account, head to “Text” in the top right corner, and then select “Create” from the menu that appears. You will see pre-created slots for your examples, followed by a slot for a title. After you’re done, specify a title on the subject matter you want to generate content about. What you will get is content on the subject of the title, but in the style and format of the examples you offered. The separate boxes for examples and titles are just training wheels: you can re-create this process in ChatGPT or any other model by pasting all of the content together into the prompt window and ending with “Now generate content similar to the examples but on the subject of <your subject matter>”.
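If you prefer to script it, the few-shot prompt described above can be assembled programmatically. A minimal Python sketch follows; the poem bodies are placeholders, so paste in the real example texts yourself:

```python
# Sketch of the few-shot setup: paste in consistent examples, then ask
# for new content on a chosen subject. The poem bodies below are
# placeholders, not actual Neruda poems.

examples = [
    ("Ode to My Socks", "<full text of the poem goes here>"),
    ("Ode to an Artichoke", "<full text of the poem goes here>"),
]

def build_few_shot_prompt(examples, subject):
    # Each example keeps the same shape: an "Ode to <X>" title, then the body.
    parts = [f"Here is an example:\n{title}\n{body}" for title, body in examples]
    parts.append(
        f"Now generate content similar to the examples but on the subject of {subject}."
    )
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(examples, "coffee")
print(prompt)
```

Only the subject varies between runs; everything the examples hold constant (title shape, stanza style, length) is what the model is being asked to keep stable.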
Chain-of-Thought Prompting
In chain-of-thought prompting, you reach your end product through multiple steps that feed into each other: you create a prompt for one step, then use that step’s output as part of the prompt for the next. Amazingly enough, even though you are not providing the model with anything it didn’t already know, prompting it to generate guidelines for itself is incredibly powerful in improving its output.
Example: Let me give an example of how this is done. Let's say you want to write a story about a subject like cats, in the style of Ernest Hemingway. Rather than listing out the characteristics typical of Hemingway’s writing yourself, you ask the AI to do it for you. So you would start a chain of prompts where the first link is: “List the characteristics of Ernest Hemingway’s writing”, and then step 2: “Write a story about cats with the characteristics listed above.” This works because the model takes everything written and generated beforehand as part of the prompt: it doesn’t just read your last message, but considers everything earlier in the thread as well.
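The two-step chain above can be sketched in code. Here `ask_model` is a hypothetical stand-in for whatever LLM API you use; it simply echoes the last line so the flow is visible without a network call:

```python
# Sketch of a two-step chain-of-thought. `ask_model` is a hypothetical
# placeholder: replace its body with a real API call.

def ask_model(conversation: str) -> str:
    # The model sees the entire thread, not just the last prompt.
    return f"[model reply to: {conversation.splitlines()[-1]}]"

thread = "List the characteristics of Ernest Hemingway's writing."
step1 = ask_model(thread)

# Feed everything so far back in as context for the next step.
thread += "\n" + step1
thread += "\nWrite a story about cats with the characteristics listed above."
step2 = ask_model(thread)
print(step2)
```

In a chat interface this bookkeeping is automatic, since the interface resends the whole conversation each turn; with a raw API you manage the growing context yourself.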
Try it yourself: Use the worksheet provided below and fill in the blanks. This is a more complicated example of a chain of prompts.
You see here that we offer the example: “List five <some ideas> on <a topic of your choosing>”. In the spaces blocked off by angle brackets you can specify any topics you want. After you shape that first prompt and have the model respond to it, repeat the process with an adjacent idea and topic. Then, as your third prompt, ask the model to connect the elements from the first list and the second one: ask it to connect one of the concepts from list one with one concept from list two.
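The three-prompt chain can be sketched the same way. Again, `ask_model` is a hypothetical placeholder for a real API call, and the topics are arbitrary examples:

```python
# Sketch of a three-prompt chain: two list prompts, then a prompt that
# connects them. `ask_model` is a hypothetical placeholder.

def ask_model(prompt: str) -> str:
    return f"[model reply to: {prompt.splitlines()[-1]}]"  # swap in a real API call

history = []

def chain_step(prompt: str) -> str:
    # Each step re-sends the full thread so earlier output stays in context.
    history.append(prompt)
    reply = ask_model("\n".join(history))
    history.append(reply)
    return reply

chain_step("List five marketing ideas on community gardens.")
chain_step("List five outreach ideas on local libraries.")
final = chain_step("Connect one concept from the first list with one concept from the second.")
print(final)
```

Because the third prompt refers back to "the first list" and "the second" by position, keeping the earlier exchanges in the context is what makes the final step work.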
Model Parameters
If you are working with a developer API, such as platform.openai.com, you will get a set of toggles which affect the output. The terminology here is quite universal across models and providers.
- Model - The type of AI model used.
- Ex.: As of this writing, OpenAI offers text-davinci-003 as its latest model, but it also offers older models trained on different data sets, such as curie (the older models are cheaper to use and sometimes faster to respond).
- Temperature - How creative the AI gets with its output; typically ranges from 0 to 1. High temperature makes for out-of-the-box, sometimes erratic outputs; low temperature makes for rote responses which often repeat themselves. Another way to think of temperature is how predictable the output of the model is. 0.5-0.7 is the typical default.
- How to choose your temperature: Your preferred temperature depends on what kind of output you are looking for.
- If you are writing something concrete, like question answering where you want the model to inform you on factual data, where you are looking for a specific answer, you want to keep your temperature low.
- If you are writing something creative, like drafting an email or writing an intriguing piece, where you want to bank on diversity and intrigue, you want to keep your temperature high.
- Maximum length - the maximum combined length of the input and output, measured in tokens; generation stops once the limit is reached.
- Top P - another way to control the randomness of the text, by specifying a cut-off percentile on the probability curve. A value of 1 is often optimal. This toggle is not used often, because the temperature toggle usually gets the job done.
- Frequency penalty - how much the model will try to avoid repeating the same words or phrases; the penalty grows the more often a word has already appeared.
- Presence penalty - a stricter variant of the frequency penalty: it penalizes any word already present in the output or input, regardless of how many times it has appeared.
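Pulling these toggles together, here is a sketch of a request using the legacy OpenAI-style parameter names described above. No API call is made, and the values are just illustrative defaults:

```python
# Sketch: the toggles above as a single request, using legacy
# OpenAI-style parameter names. Values are illustrative only.

request = {
    "model": "text-davinci-003",  # or an older, cheaper model such as curie
    "prompt": "Write a two-line poem about shared lab space.",
    "temperature": 0.7,           # 0 = predictable, 1 = adventurous
    "max_tokens": 256,            # token budget for the generated completion
    "top_p": 1.0,                 # usually left at 1; temperature does the work
    "frequency_penalty": 0.5,     # discourage repeating the same words or phrases
    "presence_penalty": 0.0,      # stricter: penalizes any word already seen
}

# With the legacy OpenAI SDK this would be sent as, e.g.:
#   import openai
#   response = openai.Completion.create(**request)
print(sorted(request))
```

For a factual question-answering task you would drop the temperature toward 0; for creative drafting you would raise it, as discussed above.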
Playgrounds for prompt engineering:
Now sail off and try out these strategies. Test different prompts and see if they generate the right content. Remember, if you don’t get the desired result, don’t jump to the conclusion that you have found the limits of AI. Don’t blame the hammer for not hitting the nail. Keep trying.
Other prompting guides: