This document serves as a guide for creating YAML templates for llmquery. Templates are the foundation of how prompts are structured, validated, and rendered for various Language Model APIs.
Templates are powered by Jinja2, a Turing-complete template engine. This allows for the creation of dynamic and flexible templates through the use of conditional statements, loops, functions, and other advanced constructs.
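For example, a template's prompt can branch on a variable using Jinja2 conditionals. This is a minimal sketch; the `tone` variable is illustrative, not a built-in:

```yaml
prompt: >
  {% if tone == "formal" %}
  Please provide a formal translation of the following text:
  {% else %}
  Translate the following text casually:
  {% endif %}
  {{ text }}

variables:
  tone: "formal"
  text: "Hello, how are you?"
```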
Each YAML template consists of the following key components:
Every template must have a unique identifier specified under the `id` key.
```yaml
id: unique-template-id
```

Metadata allows users to add descriptive information about their templates. It provides context, notes, or categorization for better organization.

```yaml
metadata:
  author: "Your Name"
  tags:
    - tag1
    - tag2
  category: "Your Category"
  description: "Your template description"
```

The `system_prompt` defines the behavior and tone of the LLM. This field describes the context and rules the model should follow.
```yaml
system_prompt: >
  You are a helpful assistant.
```

- Use `>` for multiline prompts.
- Keep instructions concise and explicit.
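A note on YAML block scalars, independent of llmquery: the folded style (`>`) joins lines with spaces, while the literal style (`|`) preserves line breaks. Choose `|` when a prompt's exact layout matters:

```yaml
# Folded (">"): lines are joined, producing "Line one Line two\n"
folded_prompt: >
  Line one
  Line two

# Literal ("|"): line breaks are kept, producing "Line one\nLine two\n"
literal_prompt: |
  Line one
  Line two
```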
The `prompt` field contains the main query or instruction sent to the model. Use variables to make the prompt dynamic.
```yaml
prompt: >
  Translate the following text from {{ source_language }} to {{ target_language }}:

  Text:
  {{ text }}
```

Variables allow the prompt to be dynamic and reusable. Define them under the `variables` key, or pass them directly when instantiating the LLMQuery class. Variables provided to the LLMQuery class overwrite those defined in the YAML template.
```yaml
variables:
  source_language: English
  target_language: Spanish
  text: "Hello, how are you?"
```

Example of providing variables in code:
```python
from llmquery import LLMQuery

query = LLMQuery(
    provider="OPENAI",
    templates_path="./templates/translate-natural-language.yaml",
    variables={"source_language": "French", "target_language": "German", "text": "Bonjour"},
    openai_api_key="your-api-key"
)

response = query.Query()
print(response)
```

Here are some examples to help you create your own YAML templates.
```yaml
id: detect-natural-language

metadata:
  author: "Your Name"
  tags:
    - language-detection
  category: "Utility"

system_prompt: >
  You're an AI assistant. You should return the expected response without any additional information. The response should be exclusively in JSON with no additional code blocks or text.

prompt: >
  Analyze the following text and identify its language:

  Return response as: {"detected_language": "LANGUAGE_NAME"}

  {{ text }}

variables:
  text: " السلام عليكم"
```

```yaml
id: find-book-author-name

metadata:
  author: "Your Name"
  tags:
    - book-info
  category: "Information Retrieval"

system_prompt: >
  You are an AI assistant that returns concise information in JSON format exclusively without any additional context, code blocks, or formatting.

prompt: >
  Who is the author of this book?

  Response should be in the format: {"author": "AUTHOR NAME"}

  Book name: {{ book }}

variables:
  book: Atomic Habits
```

```yaml
id: translate-natural-language

metadata:
  author: "Your Name"
  tags:
    - translation
    - language
  category: "Utility"

system_prompt: >
  You're an AI assistant. You should return the expected response without any additional information.

prompt: >
  Translate the following natural language from {{ source_language }} to {{ target_language }}:

  Text:
  {{ text }}

variables:
  source_language: English
  target_language: Spanish
  text: "Hello, how are you?"
```

- `id`: Unique string identifying the template.
- `metadata`: Optional descriptive information about the template.
- `system_prompt`: Explicit instructions for the LLM.
- `prompt`: The user instruction with variables embedded.
- `variables`: Key-value pairs defining dynamic components of the prompt.
- Clarity: Keep prompts clear and concise.
- JSON Responses: When applicable, enforce JSON-only outputs for structured responses.
- Error Handling: Provide explicit instructions in the `system_prompt` to handle errors gracefully.
- Dynamic Variables: Use variables to make the template reusable across different inputs.
- Validation: Test templates to ensure they produce the expected results.
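To support the JSON-responses practice above, a caller can parse the model's output defensively. This is a minimal sketch, not part of the llmquery API; `parse_json_response` is a hypothetical helper that also strips a Markdown code fence in case the model adds one despite the instructions:

```python
import json

def parse_json_response(raw: str) -> dict:
    """Parse an LLM response expected to be JSON-only (hypothetical helper).

    Tolerates a stray Markdown code fence around the JSON payload.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop surrounding backticks, then an optional "json" language tag.
        text = text.strip("`").strip()
        if text.startswith("json"):
            text = text[len("json"):]
    return json.loads(text)

# A clean JSON-only response parses directly.
print(parse_json_response('{"detected_language": "Arabic"}'))
# A fenced response is unwrapped before parsing.
print(parse_json_response('```json\n{"author": "AUTHOR NAME"}\n```'))
```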
- Start with the `id` section to uniquely identify your template.
- Add a `metadata` section to describe your template.
- Write a clear `system_prompt` to define the behavior of the model.
- Craft a `prompt` with placeholders for dynamic inputs.
- Define the `variables` with default values to test the template.
```yaml
id: your-template-id

metadata:
  author: "Your Name"
  tags:
    - example-tag
  category: "Example Category"

system_prompt: >
  [Define the behavior of the LLM here.]

prompt: >
  [Write the main instruction or query here. Use {{ variable_name }} for placeholders.]

variables:
  variable_name: "default value"
```

We welcome contributions to the llmquery repository! If you've created useful YAML templates, consider submitting them to help the community.
- Fork the repository from GitHub.
- Create a branch for your template addition.
- Add your template under the appropriate directory (e.g., `templates/`).
- Test your template to ensure it works as expected.
- Open a Pull Request (PR) with a clear description of your template and its use case.
Join us in building a library of powerful, reusable templates for everyone!
GitHub Link: https://github.com/mazen160/llmquery