The foundation layer for agentic intelligence.
Ship full-stack agentic systems the way they're meant to be built —
production-ready, secure by default, with the developer experience modern Python deserves.
Website · Documentation · Quick Start · Showcase · Discussions
From your first agent to a fleet running in production — without gluing libraries together.
- Agents that discover tools and remember context.
- MCP servers with access control, governance, and audit trails — multi-user, scalable, secure by default.
- A reasoning engine you can shape and compose, like building blocks.
- Autonomous runtime that recovers from crashes and stays within budget.
```bash
pip install promptise
```

```python
import asyncio

from promptise import build_agent, PromptiseSecurityScanner, SemanticCache
from promptise.config import HTTPServerSpec
from promptise.memory import ChromaProvider

async def main():
    agent = await build_agent(
        model="openai:gpt-5-mini",
        servers={
            "tools": HTTPServerSpec(url="http://localhost:8000/mcp"),
        },
        instructions="You are a helpful assistant.",
        memory=ChromaProvider(persist_directory="./memory"),
        guardrails=PromptiseSecurityScanner.default(),
        cache=SemanticCache(),
        observe=True,
    )
    result = await agent.ainvoke({
        "messages": [{"role": "user", "content": "What's the status of our pipeline?"}]
    })
    print(result["messages"][-1].content)
    await agent.shutdown()

asyncio.run(main())
```

Guardrails block injection and redact PII. Semantic cache serves similar queries instantly. Full observability.
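The idea behind a semantic cache is to match queries by meaning rather than by exact text. A minimal, self-contained sketch of that idea — toy bag-of-words vectors and cosine similarity stand in for a real embedding model, and `ToySemanticCache` is an illustrative class, not Foundry's actual implementation:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts (a real cache uses an embedding model).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ToySemanticCache:
    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

    def get(self, query):
        qv = embed(query)
        scored = [(cosine(qv, ev), ans) for ev, ans in self.entries]
        if scored:
            score, ans = max(scored)
            if score >= self.threshold:
                return ans  # close enough in meaning: serve the cached answer
        return None

cache = ToySemanticCache()
cache.put("what is the status of our pipeline", "Pipeline is green.")
print(cache.get("status of our pipeline"))  # similar wording hits the cache
```

A production cache would persist the entries and tune the similarity threshold per workload; the structure, though, is just nearest-neighbor lookup over query embeddings.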
Each pillar replaces an entire category of libraries you would otherwise assemble yourself.
- 🤖 Turn any LLM into a production-ready agent with one function call.
- 🧠 Compose reasoning the way you compose code. Not a black box.
- 🔧 Production server and native client for the Model Context Protocol — multi-user, scalable, secure by default.
- ⚡ The operating system for autonomous agents. 5 trigger types (cron, webhook, file watch, event, message) · crash recovery via journal replay · 5 rewind modes · 14 lifecycle hooks · budget enforcement with tool costs · health monitoring (stuck, loop, empty, error rate) · mission tracking with LLM-as-judge · secret scoping with TTL and zero-fill revocation · 14 meta-tools for self-modifying agents · 37-endpoint REST API with typed client · live agent inbox · distributed multi-node coordination.
- ✨ Prompts built like software. Not strings. 8 block types with priority-based token budgeting · conversation flows that evolve per phase · 5 composable strategies (…)
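Crash recovery via journal replay is a classic write-ahead pattern: every state change is appended to a durable journal before it is applied in memory, so a restarted process rebuilds its state by replaying the log. A minimal sketch of the pattern — the file format and `JournaledState` class are illustrative assumptions, not Foundry's actual journal:

```python
import json
import os
import tempfile

class JournaledState:
    """Tiny state store that survives crashes via an append-only journal."""

    def __init__(self, path):
        self.path = path
        self.state = {}
        # Recovery: replay every journaled event to rebuild pre-crash state.
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    self._apply(json.loads(line))

    def _apply(self, event):
        self.state[event["key"]] = self.state.get(event["key"], 0) + event["delta"]

    def add(self, key, delta):
        # Write-ahead: the event hits disk before in-memory state changes.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "delta": delta}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self._apply({"key": key, "delta": delta})

path = os.path.join(tempfile.mkdtemp(), "journal.jsonl")
first = JournaledState(path)
first.add("tokens_spent", 120)
first.add("tokens_spent", 80)
# Simulate a crash by discarding the instance and rebuilding from the journal.
recovered = JournaledState(path)
print(recovered.state["tokens_spent"])  # 200
```

Because the journal is written before the state mutates, a crash between the two steps leaves at worst one already-durable event to re-apply on restart — never a lost update.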
- Your first agent in 5 minutes.
- Architecture, design principles, the five pillars.
- Step-by-step, simple to production.
- Graphs, nodes, flags, patterns.
- Production tool servers with access control, middleware, and audit trails.
- Autonomous agents with governance, triggers, and crash recovery.
- Composable blocks, strategies, flows, and guards. Prompts built like software, not strings.
- End-to-end working patterns and reference implementations.
- Every class, method, parameter.
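Priority-based token budgeting over composable prompt blocks can be sketched simply: estimate each block's token cost, then evict the lowest-priority blocks until the prompt fits. The `Block` dataclass, the word-count heuristic, and the eviction policy below are illustrative assumptions, not the library's API:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    text: str
    priority: int  # higher = more important, evicted last

def estimate_tokens(text):
    # Rough heuristic: ~1 token per whitespace-separated word.
    return len(text.split())

def fit_to_budget(blocks, budget):
    """Drop the lowest-priority blocks until the prompt fits the token budget."""
    kept = sorted(blocks, key=lambda b: b.priority, reverse=True)
    while kept and sum(estimate_tokens(b.text) for b in kept) > budget:
        kept.pop()  # lowest-priority block goes first
    # Restore the original block ordering for the survivors.
    names = {b.name for b in kept}
    return [b for b in blocks if b.name in names]

blocks = [
    Block("system", "You are a helpful assistant.", priority=10),
    Block("history", "lots of old conversation " * 20, priority=1),
    Block("task", "Summarize the pipeline status.", priority=8),
]
prompt = fit_to_budget(blocks, budget=20)
print([b.name for b in prompt])  # history is evicted first
```

The point of treating prompts as typed blocks rather than strings is exactly this: budgeting, reordering, and per-phase evolution become ordinary operations on data.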
- + any LangChain BaseChatModel · FallbackChain for automatic failover
- Local embeddings · air-gapped model paths · prompt-injection mitigation built in
- Session ownership enforced · per-user isolation for cache and guardrails
- 8 transporters: OTel · Prometheus · Slack · PagerDuty · Webhook · HTML · JSON · Console
- Docker + seccomp + gVisor + capability dropping · Kubernetes-native health probes
- stdio · streamable HTTP · SSE · HMAC-chained audit logs
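An HMAC-chained audit log is tamper-evident because each record's MAC covers both the record and the previous record's MAC: editing any entry invalidates every MAC after it. A minimal sketch of the chaining — the key handling and record layout are illustrative, not Foundry's actual log format:

```python
import hashlib
import hmac
import json

KEY = b"demo-secret"  # illustrative; a real deployment manages keys securely

def mac(prev_mac, entry):
    # Each MAC binds the entry to the previous MAC, forming a chain.
    payload = prev_mac.encode() + json.dumps(entry, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def append(log, entry):
    prev = log[-1]["mac"] if log else "0" * 64  # genesis value for the first record
    log.append({"entry": entry, "mac": mac(prev, entry)})

def verify(log):
    prev = "0" * 64
    for rec in log:
        if rec["mac"] != mac(prev, rec["entry"]):
            return False  # tampering breaks every MAC from this point on
        prev = rec["mac"]
    return True

log = []
append(log, {"user": "alice", "tool": "search", "ok": True})
append(log, {"user": "bob", "tool": "deploy", "ok": False})
print(verify(log))  # True
log[0]["entry"]["user"] = "mallory"  # tamper with an earlier record
print(verify(log))  # False
```

Unlike a plain per-record signature, the chain also detects deletion or reordering of records, since every later MAC depends on everything before it.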
Promptise Foundry is open-source and growing fast. If it saves you time, drop a ⭐ — it genuinely helps.
Contributing · Security · License: Apache 2.0
Built by Promptise
Formerly known as DeepMCPAgent — a public preview of one sliver of this framework (MCP-native agent tooling). Promptise Foundry is the full system it previewed: reasoning engine, agent runtime, prompt engineering, sandboxed execution, governance, and observability.