Overview
The Jinja templating feature allows you to create powerful, dynamic System Prompts. By using variables that are resolved with real-time data before a call begins, you can build personalized and context-aware conversations. This guide explains how to use Jinja templates, the validation rules in place, and the benefits of this approach.
Core Concept: Understanding Variable Types
Our system uses two distinct types of curly braces to handle variables from different sources. Understanding this distinction is crucial for building prompts correctly.
**1. Server-Side Variables: `{{variable_name}}`**

These variables are placeholders for data that you provide to the system before the call is initiated.
- Source: The value for these variables must come from one of two places:
  - Dynamic API Response: Data fetched from your API endpoint just before the call.
  - Pre-call Variables: Data you provide when triggering the call via an API, especially during testing.
- Use Case: Ideal for injecting customer-specific data such as names, appointment details, order history, or account status.
- Example: `Hello {{customer_name}}, your appointment is scheduled for {{booking_date}}.`
**2. User-Message Variables: `{variable_name}`**

These variables are placeholders for information that is extracted directly from the user’s spoken message during the conversation.
- Source: The value is populated by the system based on the user’s utterance.
- Use Case: Perfect for capturing dynamic user choices or inputs within a conversation turn.
- Example: If the prompt is “To confirm, you selected `{user_selected_option}`, right?”, the value of `{user_selected_option}` is filled in from what the user said in their previous turn.
Variable Reference Table
Syntax | Example | Source of Value | Required in API/Pre-call Variables?
---|---|---|---
`{{variable_name}}` | `{{customer_name}}` | Dynamic API response or Pre-call variable | Yes
`{variable_name}` | `{user_selected_option}` | Extracted directly from the User’s Message | No
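To make server-side substitution concrete, here is a minimal sketch using the open-source `jinja2` Python library. The platform performs this resolution server-side before the call begins; the variable values below are hypothetical stand-ins for data from your API or pre-call variables.

```python
# Illustrative sketch using the open-source jinja2 library; the platform
# performs this substitution server-side before the call begins.
from jinja2 import Template

prompt = Template(
    "Hello {{ customer_name }}, your appointment is scheduled for {{ booking_date }}."
)

# Hypothetical values; in production they come from your dynamic API
# response or the pre-call variables you pass when triggering the call.
rendered = prompt.render(customer_name="Asha", booking_date="12 June")
print(rendered)
```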
Validation Rules and System Behavior
To ensure your prompts are reliable, the system validates the Jinja template syntax before you can save or test an agent.
Validation Checks: The System Prompt is validated for the following conditions:
- Correct Jinja Syntax: Ensures all brackets and statements (e.g., `{{var}}`, `{% if %}`) are correctly formatted.
- No Undeclared Variables: All variables wrapped in double curly braces (`{{...}}`) must have a corresponding value provided either from the dynamic API or pre-call variables.
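The two checks above can be approximated with the `jinja2` library's own parser. The `validate_prompt` helper below is a hypothetical sketch, not the platform's actual validator; it uses `jinja2.meta.find_undeclared_variables` to detect variables with no provided value.

```python
# Sketch of the two validation checks using the open-source jinja2 library.
# validate_prompt is a hypothetical helper, not the platform's validator.
from jinja2 import Environment, TemplateSyntaxError, meta

env = Environment()

def validate_prompt(source, provided_vars):
    """Return a list of problems; an empty list means the prompt is valid."""
    try:
        ast = env.parse(source)  # check 1: the template must parse
    except TemplateSyntaxError as exc:
        return [f"Invalid Jinja syntax: {exc.message}"]
    # check 2: every {{...}} variable must have a provided value
    missing = meta.find_undeclared_variables(ast) - set(provided_vars)
    if missing:
        return [f"Undeclared variables: {sorted(missing)}"]
    return []

print(validate_prompt("Hello {{customer_name}}", {"customer_name"}))  # []
print(validate_prompt("Hello {{customer_name}", {"customer_name"}))
print(validate_prompt("Hi {{missing_var}}", set()))
```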
Error Handling
The system’s behavior changes based on the validation result:
Condition | Save / Test Buttons | System Message
---|---|---
Valid Syntax | ✅ Enabled | –
Invalid Jinja Syntax | 🛑 Disabled | “Invalid Jinja syntax.”
Variable Missing at Runtime (variable not in API/pre-call data) | ✅ Enabled (Save works) | “Call triggered failed.” (Error during the call)
Agent Behavior
- Existing Agents: Agents created before this feature will continue to function as normal until their System Prompt is modified and saved again.
- New & Updated Agents: All new agents, or existing agents whose prompts are updated, must follow the Jinja formatting and validation rules.
Examples
1. Valid Syntax
Hello {{customer_name}}, your booking for a {{product_name}} is confirmed for {{booking_date}}.
2. Invalid Syntax
Hello {{customer_name}}. Your booking is on {{booking_date}.
- Error: The curly braces for booking_date are not closed.
- System Behavior: The Save and Test buttons will be disabled until the syntax is corrected.
3. Advanced Example with Conditional Logic
Hello {{ customer_name }}.
{% if loyalty_status == 'Gold' %}
As a Gold member, you get a special 20% discount.
{% elif loyalty_status == 'Silver' %}
As a Silver member, you get a 10% discount.
{% else %}
Thank you for being our customer.
{% endif %}
Your order #{{ order_id }} is ready.
In this example, the agent’s greeting is personalized based on the customer’s loyalty_status variable.
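Assuming the same open-source `jinja2` library, rendering this conditional prompt with different hypothetical `loyalty_status` values shows that only the matching branch survives into the final prompt:

```python
# Sketch using the open-source jinja2 library; variable values are hypothetical.
from jinja2 import Template

conditional_prompt = Template(
    "Hello {{ customer_name }}. "
    "{% if loyalty_status == 'Gold' %}As a Gold member, you get a special 20% discount."
    "{% elif loyalty_status == 'Silver' %}As a Silver member, you get a 10% discount."
    "{% else %}Thank you for being our customer."
    "{% endif %} Your order #{{ order_id }} is ready."
)

gold = conditional_prompt.render(customer_name="Asha", loyalty_status="Gold", order_id=42)
silver = conditional_prompt.render(customer_name="Asha", loyalty_status="Silver", order_id=42)
other = conditional_prompt.render(customer_name="Asha", loyalty_status=None, order_id=42)

print(gold)  # only the Gold branch appears in the rendered prompt
```

Because only one branch is included, each rendered prompt is shorter than a prompt that spells out every loyalty tier.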
Using Jinja templates, especially with conditional blocks (`{% if %}`, `{% elif %}`, `{% else %}`), significantly optimizes your prompts:
- Reduces Input Tokens: By including only the relevant text based on the provided variables, the overall length of the prompt sent to the language model is reduced.
- Lowers Inference Costs: Fewer input tokens directly translate to lower API costs for each call.
- Improves Execution Speed: Shorter, more concise prompts are processed faster by the language model, reducing latency.
Important Note: During testing mode, if a variable exists in both the pre-call variables and the dynamic API response, the value from the pre-call variables will be used.
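A minimal sketch of that precedence rule, assuming simple dictionary merging (the variable names and the merge itself are illustrative, not the platform's actual implementation):

```python
# Hypothetical variable sources; names are illustrative only.
api_response_vars = {"customer_name": "From API", "booking_date": "12 June"}
pre_call_vars = {"customer_name": "From Pre-call"}

# In testing mode, pre-call values take precedence when a key exists in both;
# keys present only in the API response are still kept.
resolved = {**api_response_vars, **pre_call_vars}
print(resolved["customer_name"])  # From Pre-call
```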