This project implements an AI-native graphical modeling platform:
- Backend: Rust HTTP server implementing Model Context Protocol (MCP) over JSON-RPC
- Frontend: TypeScript web client with Canvas rendering
- Database: Multi-backend support (PostgreSQL, InfluxDB, Redis, SQLite)
- WASM Runtime: Full wasmtime integration for sandboxed component execution
- Simulation: Time-driven simulation framework with pipeline execution
- AI Integration: Ollama LLM for natural language processing
MCP Protocol Layer:
- Resources → Diagram model state (read-only views)
- Tools → Diagram operations (create, modify, validate)
- Prompts → AI modeling workflows (templates for common tasks)
Database Layer:
- Multi-backend factory pattern
- PostgreSQL for relational data
- InfluxDB for time-series sensor data
- Redis for caching and sessions
- SQLite for local development
- Dataset manager for sensor data abstraction
WASM Execution Engine:
- Wasmtime runtime integration
- Security scanner for component validation
- Resource limits (memory, CPU, I/O)
- WIT interface analysis
- Component discovery and management
Simulation Framework:
- Pipeline-based component execution
- Sensor data bridge from database
- Time synchronization modes
- Scenario management
- Performance monitoring
Backend setup:

```bash
cd glsp-mcp-server
cargo build
cargo run --bin server
# Server runs on http://127.0.0.1:3000
```

Package Manager Choice: npm (selected for universal compatibility and zero setup). Note: requires Node.js and npm.
Frontend setup:

```bash
cd glsp-web-client
npm install
npm run dev
# Frontend runs on http://localhost:5173
```

Database services (via Docker):

```bash
# PostgreSQL (for diagram storage)
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres:15

# InfluxDB (for sensor data)
docker run -d -p 8086:8086 influxdb:2.7

# Redis (for caching)
docker run -d -p 6379:6379 redis:7
```

Prerequisites:
- Rust: latest stable with the wasm32-wasip2 target
- Node.js: v18+ for frontend development
- npm: For frontend package management
- Docker: For database services (optional for development)
HTTP Endpoints:
- `POST /mcp/rpc` - Main MCP JSON-RPC endpoint
- `GET /health` - Server health check
- `POST /wasm/execute` - WASM component execution
- `POST /simulation/run` - Run simulation scenarios
- `GET /database/status` - Database connection status
MCP Tools:
- `create_diagram` - Create new diagrams
- `create_node` - Add nodes to diagrams
- `create_edge` - Connect nodes with edges
- `delete_element` - Remove diagram elements
- `update_element` - Modify element properties
- `apply_layout` - Auto-layout algorithms
- `export_diagram` - Export to various formats
MCP Resources:
- `diagram://model/{id}` - Diagram model data
- `diagram://validation/{id}` - Validation results
- `diagram://metadata/{id}` - Diagram metadata
- `diagram://list` - List all diagrams
MCP Prompts:
- `generate_workflow` - Create workflow diagrams
- `optimize_layout` - Improve diagram layout
- `add_error_handling` - Add error patterns
- `analyze_diagram` - Analyze for improvements
- `create_subprocess` - Extract subprocesses
- `convert_diagram` - Convert between types
Set environment variables or use defaults:

```bash
# PostgreSQL
DATABASE_URL=postgresql://user:pass@localhost/glsp

# InfluxDB
INFLUXDB_URL=http://localhost:8086
INFLUXDB_TOKEN=your-token
INFLUXDB_ORG=glsp
INFLUXDB_BUCKET=sensor-data

# Redis
REDIS_URL=redis://localhost:6379

# Or use the mock backend for testing
DATABASE_BACKEND=mock
```

Creating WASM Components:
- Use Rust with the wasm32-wasip2 target
- Define WIT interfaces in the `wit/` directory
- Implement component logic
- Build with cargo-component
- Place in component registry
```rust
// Implement the WIT interface via wit-bindgen
wit_bindgen::generate!();

struct Component;

impl Guest for Component {
    fn process(input: Input) -> Output {
        // Component logic here
        todo!()
    }
}
```

Example simulation configuration (`simulation-config.yaml`):

```yaml
simulation:
  name: "Test Scenario"
  pipelines:
    - id: "sensor-pipeline"
      stages:
        - component: "sensor-processor"
          method: "process"
    - id: "ai-pipeline"
      stages:
        - component: "object-detection"
          method: "detect"
      dependencies: ["sensor-pipeline"]
```

Run the simulation:
```bash
cargo run --bin simulator -- --config simulation-config.yaml
```

Running tests:

```bash
cargo test
cargo test --features integration
cargo test --features simulation
```

Required Services:
- Ollama: Running on http://127.0.0.1:11434 (local LLM)
- MCP-GLSP Server: Running on http://127.0.0.1:3000 (diagram backend)
- Web Client: Running on http://localhost:5173 (frontend + AI agent)
Implementation Status:
- ✅ Complete MCP-GLSP backend with full protocol support
- ✅ Multi-backend database layer with sensor data management
- ✅ WASM runtime with security scanning and resource limits
- ✅ Simulation framework with pipeline execution
- ✅ Web frontend with Canvas rendering and AI chat
- ✅ AI agent integration for natural language processing
- ✅ 20+ ADAS example components demonstrating capabilities
Performance Targets:
- API Response: <100ms for diagram operations
- WASM Execution: <20ms for component calls
- Simulation: 30+ FPS for real-time scenarios
- Database: 10k+ writes/sec for sensor data
- AI Inference: <2s for diagram generation
Key Features:
- 🤖 Natural Language Processing: Convert descriptions to diagrams
- 📊 AI-Powered Analysis: Intelligent optimization and validation
- 🔧 Component Composition: Pipeline-based WASM execution
- 🎨 Interactive Canvas: Real-time editing with theme support
- 📈 Time-Series Data: Sensor data streaming and analysis
- 🔒 Security: Sandboxed execution with resource limits
- ⚡ Performance: Async architecture with connection pooling
- 🧪 Testing: Comprehensive test framework with simulations
Troubleshooting:

```bash
# Run the server with debug logging
RUST_LOG=debug cargo run --bin server

# Test database connection
cargo run --bin test-db-connection

# Validate a component
wasm-tools validate component.wasm

# Check WIT interfaces
wasm-tools component wit component.wasm

# Run with verbose simulation output
RUST_LOG=glsp_mcp_server::wasm::simulation=trace cargo run
```

Production Considerations:
- Use environment variables for all configurations
- Enable TLS for all connections
- Set appropriate resource limits
- Configure monitoring and alerting
- Implement backup strategies
- Use connection pooling
- Enable request rate limiting
- Implement circuit breakers
Future Enhancements:
- Distributed simulation across multiple nodes
- GPU acceleration for AI components
- GraphQL API alongside REST
- Event streaming with Kafka
- Kubernetes operators for deployment
- Advanced caching strategies
- Multi-tenancy support
- Plugin marketplace