This directory contains client examples demonstrating how to interact with the Template Agent's simplified streaming API. These examples show best practices for handling real-time streaming, different event types, and error scenarios.
A full-featured chat application built with Streamlit:
- Real-time chat interface with message history
- Token streaming visualization for responsive UX
- Session management with thread and session persistence
- Configuration panel for API settings and debugging
- Export functionality for conversation data
Key Features:
- Live token streaming with visual updates
- Tool call visualization with expandable details
- API health monitoring
- Conversation export to JSON
- Session state management
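The conversation-export feature can be sketched roughly as follows (a minimal sketch; `export_conversation` and the message shape are illustrative assumptions, not the app's actual code):

```python
import json
from datetime import datetime, timezone

def export_conversation(messages: list) -> str:
    """Serialize a chat history into a JSON export payload."""
    payload = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "message_count": len(messages),
        "messages": messages,
    }
    return json.dumps(payload, indent=2)

# Example: a two-turn conversation
history = [
    {"type": "human", "content": "Hello!"},
    {"type": "ai", "content": "Hi! How can I help?"},
]
print(export_conversation(history))
```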
To Run:

```bash
# Install Streamlit if not already installed
pip install streamlit requests

# Run the app
streamlit run examples/streamlit_app.py
```

Then open http://localhost:8501 in your browser.

A robust async Python client for server-to-server communication:
- Async/await support using aiohttp
- Streaming and non-streaming modes for different use cases
- Comprehensive error handling with detailed error messages
- Session management with automatic ID generation
- Health checking for API availability
Key Features:
- Generator-based streaming for memory efficiency
- Automatic session ID generation
- Built-in retry logic and timeout handling
- Example conversation flows
To Run:

```bash
# Install dependencies
pip install aiohttp

# Run the example
python examples/client_python.py
```

Usage as Library:
```python
from examples.client_python import TemplateAgentClient

client = TemplateAgentClient()

# Simple message
response, messages = await client.send_message("Hello!")

# Streaming chat
async for event in client.stream_chat("Hello!", "thread-123", "session-123", "user-123"):
    if event['type'] == 'token':
        print(event['content'], end='', flush=True)
```

All clients use the simplified request format:
```json
{
  "message": "User's input message",
  "thread_id": "Conversation thread identifier",
  "session_id": "Session identifier",
  "user_id": "User identifier",
  "stream_tokens": true
}
```

The API returns Server-Sent Events with this format:

```
{"type": "message", "content": {"type": "ai", "content": "Hello"}}
{"type": "token", "content": " world"}
{"type": "error", "content": {"message": "Error occurred", "recoverable": false}}
[DONE]
```

Event Types:

- `message` - Complete messages (AI responses, tool calls, tool results)
- `token` - Individual tokens for real-time streaming
- `error` - Error messages with recovery information
- `[DONE]` - Stream completion marker
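To make the flow concrete, here is a minimal sketch of building a request payload and dispatching on parsed event lines (`build_request` and `parse_sse_line` are illustrative names, not part of the clients in this directory):

```python
import json
import uuid

def build_request(message: str, stream_tokens: bool = True) -> dict:
    """Build the simplified request format, generating fresh IDs."""
    return {
        "message": message,
        "thread_id": str(uuid.uuid4()),
        "session_id": str(uuid.uuid4()),
        "user_id": "user-" + str(uuid.uuid4()),
        "stream_tokens": stream_tokens,
    }

def parse_sse_line(line: str):
    """Parse one stream line into an event dict; None means [DONE] or blank."""
    line = line.strip()
    if line.startswith("data:"):          # tolerate an explicit SSE "data:" prefix
        line = line[len("data:"):].strip()
    if not line or line == "[DONE]":
        return None
    return json.loads(line)

# Dispatch on the event types from the format above
for raw in [
    '{"type": "token", "content": " world"}',
    "[DONE]",
]:
    event = parse_sse_line(raw)
    if event is None:
        break                              # stream finished
    elif event["type"] == "token":
        print(event["content"], end="", flush=True)
    elif event["type"] == "error":
        print(f"\nerror: {event['content']['message']}")
```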
- Template Agent Server Running

  ```bash
  # Start the Template Agent server
  cd template-agent
  python -m uvicorn template_agent.src.main:app --reload --port 8081
  ```

- Install Client Dependencies

  ```bash
  # For Python examples
  pip install aiohttp requests streamlit

  # For TypeScript example (if using in a Node.js project)
  npm install
  ```
Test that the API is working:
```bash
# Health check
curl http://localhost:8081/health

# Simple streaming test
curl -X POST 'http://localhost:8081/stream' \
  -H 'Content-Type: application/json' \
  -H 'Accept: text/event-stream' \
  -d '{
    "message": "Hello!",
    "thread_id": "test-123",
    "session_id": "test-123",
    "user_id": "test-user",
    "stream_tokens": true
  }'
```

- Use a consistent `thread_id` for multi-turn conversations
- Use `session_id` to group related threads
- Generate UUIDs for unique identifiers
- Always handle `error` events in streams
- Check the `recoverable` flag to determine retry logic
- Implement timeout and connection error handling
- Set `stream_tokens: true` for real-time UX
- Set `stream_tokens: false` for simpler message-only handling
- Buffer tokens appropriately for UI updates
- Use appropriate timeouts for your use case
- Handle stream interruption gracefully
- Consider connection pooling for high-volume usage
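The `recoverable` flag can drive retry decisions along these lines (a sketch under the event format above; `stream_with_retry` and the backoff values are assumptions, not part of the API):

```python
import time

def stream_with_retry(stream_fn, max_retries: int = 3, base_delay: float = 0.5):
    """Consume a stream of event dicts, retrying only on recoverable errors.

    `stream_fn` is any callable returning an iterable of events in the
    {"type": ..., "content": ...} shape used by the API.
    """
    for attempt in range(max_retries + 1):
        events = []
        for event in stream_fn():
            if event["type"] == "error":
                if event["content"].get("recoverable") and attempt < max_retries:
                    time.sleep(base_delay * 2 ** attempt)  # exponential backoff
                    break                                  # retry the stream
                raise RuntimeError(event["content"]["message"])
            events.append(event)
        else:
            return events  # stream completed without an error event
    raise RuntimeError("retries exhausted")

# Usage with a fake stream that fails once, then succeeds
calls = {"n": 0}
def flaky_stream():
    calls["n"] += 1
    if calls["n"] < 2:
        yield {"type": "error", "content": {"message": "transient", "recoverable": True}}
    else:
        yield {"type": "token", "content": "Hello"}

events = stream_with_retry(flaky_stream, base_delay=0.01)
```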
All examples preserve enterprise features from the original implementation:
- SSO Authentication: Pass the `X-Token` header for enterprise auth
- Langfuse Tracing: Automatic tracing and analytics
- PostgreSQL Persistence: Conversation history and checkpointing
- Error Monitoring: Comprehensive error logging and recovery
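For example, an authenticated streaming call might attach the SSO header like this (a sketch; the helper name and token value are placeholders, and the commented call assumes the `requests` library):

```python
def make_auth_headers(sso_token: str) -> dict:
    """Headers for an authenticated streaming request."""
    return {
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
        "X-Token": sso_token,
    }

# Usage (assumes the server from the setup steps is running):
# resp = requests.post(
#     "http://localhost:8081/stream",
#     headers=make_auth_headers("my-sso-token"),
#     json={"message": "Hello!", "thread_id": "t-1", "session_id": "s-1",
#           "user_id": "u-1", "stream_tokens": True},
#     stream=True,
# )
```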
- Template Agent API Documentation
- FastAPI Documentation
- Streamlit Documentation
- LangGraph Documentation
Connection Refused
- Ensure Template Agent server is running on http://localhost:8081
- Check firewall settings and port availability
Authentication Errors
- Verify SSO token is valid (if using enterprise features)
- Check X-Token header format
Streaming Issues
- Ensure the `Accept: text/event-stream` header is set
- Check for proxy/firewall interference with streaming
- Verify timeout settings are appropriate
Token Streaming Not Working
- Confirm `stream_tokens: true` is set in the request
- Check for buffering issues in HTTP clients
- Verify WebSocket/EventSource compatibility
Enable detailed logging in examples:
```python
# Python examples
import logging
logging.basicConfig(level=logging.DEBUG)

# Streamlit
st.set_option('client.showErrorDetails', True)
```

For more help, check the main project documentation or create an issue in the repository.