When you're building a custom AI application using a GPT API you'll probably want the model to respond in a way that fits your application or company. You can achieve this using the system prompt.
What is the system prompt?
Requests to and from a GPT API generally have 3 types of messages, also known as roles or prompts:
1. User
User messages are any messages that your application has sent to the model.
2. Assistant
Assistant messages are any messages that the model has sent back to your application.
3. System
The system prompt is sent with every request to the API and instructs the model how it should respond to each request.
When we don't set a system prompt the user can tell the model to act however they would like it to:
Note: Depending on the model you're using, you may need to be more firm with your system prompt for the model to listen. Test your prompt using OpenAI's Playground before deploying.
For more information on system prompts, see OpenAI's documentation, or use their playground to start testing your own!