What Is an AI Prompt?
An AI prompt is any input, such as text or images, that you give to a language model to carry on a conversation and get the information you need from it.
Some prompts can be as short and simple as “Suggest three names for a new restaurant”, but a prompt can be as long as your language model's context window allows.
A one-sentence instruction is enough for a language model to know what to do in most cases. However, if you want more control over how the answer is generated, you will need to provide context.
Prompt = Instruction + Context
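The formula above can be sketched in code. This is a minimal illustration, with a hypothetical `build_prompt` helper, of how an instruction and an optional context block combine into one prompt string:

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Compose a prompt from an instruction and optional context.

    Prompt = Instruction + Context: the context (if any) is placed
    before the instruction, separated by a blank line.
    """
    if not context:
        return instruction
    return f"{context}\n\n{instruction}"

prompt = build_prompt(
    instruction="Suggest three names for a new restaurant.",
    context="The restaurant serves vegan Mexican food in Austin, Texas.",
)
```

With no context, the prompt is just the bare instruction; with context, the model sees the extra facts before the task.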
Contextual statements fall into two categories: styling context and informational context.
Styling context covers elements like tone, perspective, and emotion that impact the language style. Some examples are:
- Tone/style - Specify the desired tone or writing style for the response, for example formal, conversational, or funny.
- Point of view - Ask the AI to respond from a certain perspective, like "Answer as if you are a lawyer giving legal advice."
- Emotional state - Convey a particular emotion you want to be reflected, like excitement, thoughtfulness, or caution.
- Reader profile - Relevant details about the intended audience, such as age, interests, and cultural background.
We can refer to each of these styling statements as style traits. You can combine multiple traits into an AI Persona, which defines the styling context of a prompt.
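The trait-to-persona idea above can be sketched in a few lines. This is a hypothetical illustration (the trait names and templates are assumptions, not a real API) of rendering several style traits into one styling-context block:

```python
# Hypothetical templates mapping each style trait to a styling statement.
TRAIT_TEMPLATES = {
    "tone": "Write in a {value} tone.",
    "point_of_view": "Answer as if you are {value}.",
    "emotional_state": "Convey a sense of {value}.",
    "reader_profile": "Write for {value}.",
}

def persona(**traits: str) -> str:
    """Combine named style traits into an AI persona: one styling
    statement per trait, joined into a single context block."""
    lines = [
        TRAIT_TEMPLATES[name].format(value=value)
        for name, value in traits.items()
        if name in TRAIT_TEMPLATES
    ]
    return "\n".join(lines)

styling_context = persona(
    tone="formal",
    point_of_view="a lawyer giving legal advice",
    reader_profile="small-business owners with no legal background",
)
```

The resulting block can be prepended to any instruction, so the same persona is reusable across prompts.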
Informational context is any additional facts or opinions that you want the language model to know about before generating a response.
Any language model is limited by the dataset that was used to train it. These datasets are generally a much smaller subset of everything that is currently known.
Below are some knowledge gaps that you may want to cover with informational context:
- Opinions - Training datasets are typically vetted (for example via red teaming) to reduce bias, so models tend to avoid taking strong positions. If you want a response to reflect a particular opinion or stance, you need to state it in the context.
- Recent facts - A language model only knows about events up to its knowledge cutoff, the point in time when its training data was collected. You can provide additional text fragments that contain up-to-date statements to make up for this gap.
- Specialized facts - Finance, medical, and legal information is usually removed from the training datasets of general-purpose language models, so supply it explicitly when a task depends on it.
- Private information - Any sort of private, personal or confidential information about you, your business or your brand that isn’t available publicly.
The most efficient way to provide informational context is to use a semantic library. A semantic library splits large bodies of information into fragments and indexes them semantically. Based on your prompt instruction, you can then retrieve only the relevant text fragments and fill the context window with them.
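The retrieval step can be sketched with a toy example. Here a bag-of-words count vector stands in for the real embedding model a semantic library would use, and cosine similarity ranks fragments against the instruction; everything here is an assumption for illustration, not a real library's API:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector. A real semantic
    library would use a learned embedding model instead."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, fragments: list[str], k: int = 2) -> list[str]:
    """Return the k fragments most relevant to the query."""
    q = embed(query)
    return sorted(fragments, key=lambda f: cosine(q, embed(f)), reverse=True)[:k]

fragments = [
    "Our refund policy allows returns within 30 days.",
    "The company was founded in 1998 in Lisbon.",
    "Refunds are issued to the original payment method.",
]
top = retrieve("What is the refund policy?", fragments, k=2)
```

Only the top-ranked fragments are placed in the context window, so the prompt stays within the model's length limit while still carrying the facts the instruction needs.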
Being able to save and reuse prompt templates is very beneficial when you work with language models:
- Saves time - Creating effective prompts can take time and iterative experimentation. With templates, you don't have to start from scratch every time or have to recreate prompts you've fine-tuned.
- Promotes consistency - A library of saved prompt templates lets you provide the same well-tested instruction and context every time a workflow runs.
- Encourages refinement - Prompt crafting is a skill that can be refined over time as you learn what works best. Saved templates let you progressively improve and tweak prompts.
- Enables organization - Templates can be organized by project, client, business function, etc., making it efficient to locate and reuse the right prompt for the job.
- Facilitates auditing - Having a library of prompts makes it easier to audit over time and identify areas for improvement, new templates needed, etc.
- Promotes accountability - Templates create a record of the types of prompts crafted and used over time.
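The benefits above can be sketched as a small template library. The class name, tagging scheme, and `$variable` placeholders are assumptions made for this illustration; it shows saving a template once, organizing it by tag, and reusing it with per-task values:

```python
from string import Template

class PromptLibrary:
    """Hypothetical sketch of a prompt-template library: save templates
    once, organize them by tag, and reuse them with fill-in variables."""

    def __init__(self) -> None:
        self._templates: dict[str, tuple[Template, frozenset]] = {}

    def save(self, name: str, text: str, tags=()) -> None:
        """Store a template under a name, with optional organizing tags."""
        self._templates[name] = (Template(text), frozenset(tags))

    def find(self, tag: str) -> list[str]:
        """Locate template names by project/client/function tag."""
        return [n for n, (_, tags) in self._templates.items() if tag in tags]

    def render(self, name: str, **variables: str) -> str:
        """Reuse a saved template, filling in the per-task variables."""
        template, _ = self._templates[name]
        return template.substitute(**variables)

library = PromptLibrary()
library.save(
    "naming",
    "Suggest three names for a new $business. Audience: $audience.",
    tags=("marketing",),
)
prompt = library.render("naming", business="restaurant", audience="young families")
```

Because the template is stored centrally, every teammate sends the same refined instruction, and the library itself becomes the audit trail of which prompts were used.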