Feature Request: Add a MCP (Model Context Protocol) Server inside plantuml-server #400
Description
Summary
This feature proposes adding a Model Context Protocol (MCP) Server directly inside plantuml-server, exposed as a new HTTP/JSON API under /mcp/*.
The goal is to allow AI agents (ChatGPT, Claude, AssistAI, IDE assistants…) to interact with PlantUML in a safe, structured, programmatic way, without modifying the existing public rendering API.
This MCP layer would not execute shell commands or access the filesystem. It only exposes safe PlantUML operations through a declarative JSON interface fully controlled by the server.
This issue contains the full specification.
1. Objectives
Provide a standard interface for LLM-powered tools to:
- render PlantUML diagrams safely
- analyze PlantUML sources (errors, warnings, includes…)
- introspect the server (version, security profile, limits…)
- manage ephemeral diagram “workspaces” for iterative editing
- optionally access PlantUML examples bundled in the server
This must respect all existing PlantUML server constraints, especially:
- `PLANTUML_SECURITY_PROFILE`
- `PLANTUML_LIMIT_SIZE`
- allowlist rules
- disabled filesystem access by default
The MCP API must not expand PlantUML permissions.
2. High-Level Architecture
- The MCP server is a new internal HTTP/JSON API, mounted under `/mcp/*`.
- All logic stays inside the existing server (Jetty/Tomcat or embedded).
- No external process, no WebSocket requirement.
- MCP is enabled only when configured.
Configuration (environment variables)
| Variable | Description |
|---|---|
| `PLANTUML_MCP_ENABLED` | `true`/`false` (default: `false`) |
| `PLANTUML_MCP_API_KEY` | Optional API key for clients |
| `PLANTUML_MCP_WORKSPACE_LIMIT` | Max diagrams per workspace (default: 20) |
| `PLANTUML_MCP_MAX_REQUESTS_PER_MINUTE` | Optional rate limit |
If `PLANTUML_MCP_ENABLED` is not explicitly set to `true`, every request to /mcp/* must return 404.
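As a sketch of how this configuration could be read, the following class (the name `McpConfig` and its fields are illustrative, not part of the existing codebase) parses the proposed environment variables with safe defaults, keeping MCP disabled unless explicitly enabled:

```java
import java.util.Map;

/** Illustrative sketch: reads the proposed PLANTUML_MCP_* variables with safe defaults. */
final class McpConfig {
    final boolean enabled;
    final String apiKey;                 // null = no API key required
    final int workspaceLimit;            // max diagrams per workspace
    final Integer maxRequestsPerMinute;  // null = no rate limit

    McpConfig(Map<String, String> env) {
        // MCP must be explicitly enabled; anything else keeps /mcp/* answering 404
        this.enabled = "true".equalsIgnoreCase(env.get("PLANTUML_MCP_ENABLED"));
        this.apiKey = env.get("PLANTUML_MCP_API_KEY");
        this.workspaceLimit = parseIntOr(env.get("PLANTUML_MCP_WORKSPACE_LIMIT"), 20);
        String rate = env.get("PLANTUML_MCP_MAX_REQUESTS_PER_MINUTE");
        this.maxRequestsPerMinute = rate == null ? null : Integer.valueOf(rate.trim());
    }

    private static int parseIntOr(String value, int fallback) {
        try {
            return value == null ? fallback : Integer.parseInt(value.trim());
        } catch (NumberFormatException e) {
            return fallback; // malformed values fall back to the documented default
        }
    }
}
```

In production the constructor would be fed `System.getenv()`; taking a `Map` keeps the class testable.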
3. MCP Endpoints to Implement
The MCP API exposes tools grouped in logical families.
All endpoints accept/return JSON (application/json).
3.1. /mcp/render – Render PlantUML diagram
POST /mcp/render
Input JSON:
```json
{
  "source": "@startuml\nAlice -> Bob : Hello\n@enduml",
  "format": "png",
  "configFileName": null,
  "limitSizeOverride": null
}
```

Rules / Notes:
- `format` is limited to formats already supported by plantuml-server (png/svg/txt/eps…)
- `limitSizeOverride` cannot exceed `PLANTUML_LIMIT_SIZE`
- Rendering must use the same engine/config as existing endpoints
Output JSON:
```json
{
  "status": "ok",
  "format": "png",
  "dataUrl": "data:image/png;base64,...",
  "width": 800,
  "height": 300,
  "renderTimeMs": 42,
  "sha256": "..."
}
```

3.2. /mcp/render-url – Render from encoded PlantUML URL
POST /mcp/render-url
Input:
```json
{
  "encodedUrl": "/svg/SoWkIImgAStDuU9o..."
}
```

Behavior:
- Decode the compressed diagram
- Render it in the requested format (optional `format` field)
- Return the same output as /mcp/render
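For context, the `encodedUrl` payload carries the diagram text compressed with raw DEFLATE and then transformed with PlantUML's base64-like alphabet (`0-9A-Za-z-_`). The self-contained sketch below shows a compatible encode/decode pair; it is illustrative only, since the server already ships transcoder classes for this (in the `net.sourceforge.plantuml.code` package) that MCP code would reuse:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Sketch of PlantUML's text encoding: raw DEFLATE + a custom base64-style alphabet. */
final class PlantUmlCodec {
    private static final String ALPHABET =
            "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_";

    static String encode(String text) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION, true); // true = raw stream
        deflater.setInput(text.getBytes(StandardCharsets.UTF_8));
        deflater.finish();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            compressed.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        byte[] z = compressed.toByteArray();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < z.length; i += 3) { // 3 bytes -> 4 alphabet chars
            int b1 = z[i] & 0xFF;
            int b2 = i + 1 < z.length ? z[i + 1] & 0xFF : 0;
            int b3 = i + 2 < z.length ? z[i + 2] & 0xFF : 0;
            sb.append(ALPHABET.charAt(b1 >> 2));
            sb.append(ALPHABET.charAt(((b1 & 0x03) << 4) | (b2 >> 4)));
            sb.append(ALPHABET.charAt(((b2 & 0x0F) << 2) | (b3 >> 6)));
            sb.append(ALPHABET.charAt(b3 & 0x3F));
        }
        return sb.toString();
    }

    static String decode(String encoded) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        for (int i = 0; i + 3 < encoded.length(); i += 4) { // 4 chars -> 3 bytes
            int c1 = ALPHABET.indexOf(encoded.charAt(i));
            int c2 = ALPHABET.indexOf(encoded.charAt(i + 1));
            int c3 = ALPHABET.indexOf(encoded.charAt(i + 2));
            int c4 = ALPHABET.indexOf(encoded.charAt(i + 3));
            bytes.write((c1 << 2) | (c2 >> 4));
            bytes.write(((c2 & 0x0F) << 4) | (c3 >> 2));
            bytes.write(((c3 & 0x03) << 6) | c4);
        }
        bytes.write(0); // dummy byte required by a raw-mode Inflater
        Inflater inflater = new Inflater(true);
        inflater.setInput(bytes.toByteArray());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        try {
            while (!inflater.finished()) {
                int n = inflater.inflate(buf);
                if (n == 0) break; // stream exhausted (or malformed input)
                out.write(buf, 0, n);
            }
        } catch (DataFormatException e) {
            throw new IllegalArgumentException("Not a valid encoded diagram", e);
        } finally {
            inflater.end();
        }
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

The handler would first strip the format prefix (e.g. `/svg/`) from `encodedUrl` before decoding.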
3.3. /mcp/analyze – Analyze PlantUML source
POST /mcp/analyze
Input:
```json
{ "source": "..." }
```

Output example:
```json
{
  "status": "error",
  "messages": [
    { "type": "error", "line": 3, "column": 5, "message": "Syntax Error?" }
  ],
  "includes": [
    { "raw": "!include common.puml", "resolved": null }
  ],
  "diagramType": "sequence",
  "estimatedComplexity": {
    "lineCount": 42,
    "elementCount": 12
  }
}
```

The endpoint must not read files unless allowed by the current PlantUML security profile.
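The `estimatedComplexity` block could be filled by a simple heuristic. This sketch (class and method names are illustrative) counts total lines, and counts as "elements" the non-empty lines that are neither `@start`/`@end` markers, comments, nor preprocessor directives:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative heuristic for the estimatedComplexity block of /mcp/analyze. */
final class ComplexityEstimator {
    static Map<String, Integer> estimate(String source) {
        int lineCount = 0;
        int elementCount = 0;
        for (String line : source.split("\n", -1)) {
            lineCount++;
            String t = line.trim();
            // Skip blanks, @startuml/@enduml markers, comments (') and directives (!)
            if (t.isEmpty() || t.startsWith("@") || t.startsWith("'") || t.startsWith("!")) {
                continue;
            }
            elementCount++;
        }
        Map<String, Integer> result = new LinkedHashMap<>();
        result.put("lineCount", lineCount);
        result.put("elementCount", elementCount);
        return result;
    }
}
```

A real implementation would preferably derive these numbers from PlantUML's own parse result rather than from raw text.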
3.4. Workspace API
Allow the AI agent to iterate on stored diagrams.
A workspace begins when the client provides an arbitrary sessionId.
POST /mcp/workspace/create
```json
{
  "sessionId": "abc123",
  "name": "login-sequence",
  "source": "..."
}
```

Output:

```json
{ "diagramId": "w1" }
```

POST /mcp/workspace/update

```json
{
  "sessionId": "abc123",
  "diagramId": "w1",
  "source": "..."
}
```

POST /mcp/workspace/get

```json
{ "sessionId": "abc123", "diagramId": "w1" }
```

POST /mcp/workspace/render
Same output as /mcp/render.
POST /mcp/workspace/list
Returns all diagrams visible in the workspace:
```json
{
  "diagrams": [
    { "diagramId": "w1", "name": "login-sequence" }
  ]
}
```

Workspaces are in-memory only and bounded by PLANTUML_MCP_WORKSPACE_LIMIT.
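A minimal sketch of such a bounded in-memory store could look like this (all names here are hypothetical, not from the existing codebase):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative in-memory workspace store, bounded by PLANTUML_MCP_WORKSPACE_LIMIT. */
final class WorkspaceStore {
    static final class Workspace {
        final Map<String, String> diagrams = new LinkedHashMap<>(); // diagramId -> source
        int nextId = 1;
    }

    private final Map<String, Workspace> workspaces = new HashMap<>(); // sessionId -> workspace
    private final int limit;

    WorkspaceStore(int limit) {
        this.limit = limit;
    }

    /** Creates a diagram; returns its id, or null when the workspace is full. */
    synchronized String create(String sessionId, String source) {
        Workspace ws = workspaces.computeIfAbsent(sessionId, k -> new Workspace());
        if (ws.diagrams.size() >= limit) {
            return null; // caller maps this to a structured MCP error response
        }
        String id = "w" + ws.nextId++;
        ws.diagrams.put(id, source);
        return id;
    }

    synchronized String get(String sessionId, String diagramId) {
        Workspace ws = workspaces.get(sessionId);
        return ws == null ? null : ws.diagrams.get(diagramId);
    }
}
```

Since workspaces are ephemeral, a production version would likely also evict idle sessions after a timeout.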
3.5. /mcp/info – Server Introspection
GET /mcp/info
Output example:
```json
{
  "plantumlServerVersion": "2025.2",
  "plantumlCoreVersion": "v1.2025.10",
  "securityProfile": "STANDARD",
  "limitSize": 1000000,
  "statsEnabled": false,
  "environment": {
    "backend": "jetty",
    "readOnly": true
  }
}
```

This endpoint is read-only.
3.6. /mcp/stats – Optional statistics
Available only if PLANTUML_STATS=on.
GET /mcp/stats
Return structured metrics: number of renders, average render time, diagram types observed, error counts, etc.
3.7. /mcp/examples/* – Retrieve built-in examples
GET /mcp/examples/list
Returns example names extracted from /WebUI/examples/.
GET /mcp/examples/get?name=...
Returns the PlantUML source of the example.
This is optional but very useful for AI-generated diagram templates.
4. Security Requirements
4.1. Do NOT expand PlantUML permissions
- Never bypass `PLANTUML_SECURITY_PROFILE`.
- `!include` resolution must behave exactly as it does today.
- No filesystem reads beyond what PlantUML already allows.
4.2. No command execution
- No shell, no OS operations.
4.3. Optional API key
If `PLANTUML_MCP_API_KEY` is set, requests must contain the header:

Authorization: Bearer <key>

Otherwise, MCP is open on the local network (depending on deployment).
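A sketch of the header check, with a constant-time comparison so the key cannot be probed via timing (the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

/** Illustrative check for the "Authorization: Bearer <key>" header. */
final class ApiKeyCheck {
    /** Returns true when no key is configured, or when the header carries the right key. */
    static boolean isAuthorized(String configuredKey, String authorizationHeader) {
        if (configuredKey == null || configuredKey.isEmpty()) {
            return true; // PLANTUML_MCP_API_KEY unset: MCP is open
        }
        if (authorizationHeader == null || !authorizationHeader.startsWith("Bearer ")) {
            return false;
        }
        String presented = authorizationHeader.substring("Bearer ".length());
        // MessageDigest.isEqual compares in constant time for equal-length inputs
        return MessageDigest.isEqual(
                configuredKey.getBytes(StandardCharsets.UTF_8),
                presented.getBytes(StandardCharsets.UTF_8));
    }
}
```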
4.4. Rate limiting
If PLANTUML_MCP_MAX_REQUESTS_PER_MINUTE is set, MCP requests should be throttled.
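One simple way to implement this is a fixed-window counter per client; the sketch below (hypothetical names, a token bucket would also work) takes the clock as a parameter so the logic stays testable:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative fixed-window limiter for PLANTUML_MCP_MAX_REQUESTS_PER_MINUTE. */
final class RateLimiter {
    private static final class Window {
        long windowStartMs;
        int count;
    }

    private final int maxPerMinute;
    private final Map<String, Window> windows = new HashMap<>(); // clientId -> window

    RateLimiter(int maxPerMinute) {
        this.maxPerMinute = maxPerMinute;
    }

    /** Returns true when the request is allowed; clientId could be the remote address. */
    synchronized boolean allow(String clientId, long nowMs) {
        Window w = windows.computeIfAbsent(clientId, k -> new Window());
        if (nowMs - w.windowStartMs >= 60_000) {
            w.windowStartMs = nowMs; // start a fresh one-minute window
            w.count = 0;
        }
        if (w.count >= maxPerMinute) {
            return false; // caller answers with HTTP 429
        }
        w.count++;
        return true;
    }
}
```

Rejected requests would map naturally to HTTP 429 (Too Many Requests).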
5. Implementation hints
- Implement MCP endpoints in a dedicated servlet, e.g. `McpServlet`, mapped to `/mcp/*`.
- Keep MCP logic isolated in its own package, for example `net.sourceforge.plantuml.server.mcp`.
- The servlet should:
  - dispatch on `pathInfo` (`/render`, `/analyze`, `/workspace/...`, `/info`, etc.)
  - parse/serialize JSON (Jackson or Gson)
  - enforce `PLANTUML_MCP_ENABLED`, API key checks, and rate limiting (if configured)
- Reuse existing rendering logic:
  - call into the same internal APIs used by `/png`, `/svg`, etc. (e.g. `SourceStringReader` and friends)
  - do not duplicate PlantUML core behavior in MCP-specific code
- Workspaces:
  - keep a simple in-memory structure, e.g. `Map<String, Workspace>` (sessionId → workspace)
  - each `Workspace` holds a bounded list of diagrams (`PLANTUML_MCP_WORKSPACE_LIMIT`)
  - no persistence; all MCP workspaces are ephemeral
- Error handling:
  - reuse existing exception/error mapping to extract syntax errors and warnings
  - return them as structured JSON in `/mcp/analyze` and `/mcp/workspace/render`
- Security:
  - all MCP requests must go through the same security configuration as normal rendering
  - never bypass `PLANTUML_SECURITY_PROFILE`, limit size, or allowlist behavior
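The dispatch idea can be sketched without the servlet API as a handler table keyed by `pathInfo`; inside `McpServlet.doPost` the lookup would work the same way (all names below are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Illustrative pathInfo dispatch table, kept free of the servlet API for brevity. */
final class McpDispatcher {
    // pathInfo -> handler taking the JSON request body and returning the JSON response
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    McpDispatcher() {
        // In the real servlet these would parse the body and call the PlantUML engine
        handlers.put("/info", body -> "{\"status\":\"ok\"}");
        handlers.put("/render", body -> "{\"status\":\"ok\"}");
    }

    /** Unknown paths map to a not-found answer, mirroring the 404-by-default rule. */
    String dispatch(String pathInfo, String jsonBody) {
        Function<String, String> handler = handlers.get(pathInfo);
        return handler == null ? "{\"status\":\"not-found\"}" : handler.apply(jsonBody);
    }
}
```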
6. Future Extensions (not required now)
- WebSocket support for long sessions
- AI-aware diffing (`estimateImpact`)
- Local config management
- AI-driven caching layer
These are explicitly out-of-scope for the first implementation.
Conclusion
This feature adds a safe, modern way for AI agents to interact with PlantUML through a clean, JSON-based MCP API. It preserves all current server guarantees while enabling powerful new integrations with IDEs, documentation tools, and conversational assistants.
If approved, I recommend implementing the following endpoints as the initial MVP:
- /mcp/info
- /mcp/render
- /mcp/analyze
- /mcp/workspace/*