diff --git a/README.md b/README.md index 4a93a3bc5..84569cf61 100644 --- a/README.md +++ b/README.md @@ -1,192 +1,236 @@ -
- -435380213-b6253409-8741-462b-a346-834cd18565a9 - +# Cloudflare Postgres Setup for Codegen -
-
+This tool automates the setup of your local PostgreSQL database to be accessible by Codegen through Cloudflare Workers, providing a secure bridge between your local database and Codegen's AI agents. -
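At its core, the bridge is just a standard libpq-style connection URL (which the script later writes to `.env`) plus a public worker URL. As a rough sketch of how those pieces compose — assuming the tool's default database name `codegen_db` and read-only role `codegen_user`; the helper function itself is illustrative, not part of the repo:

```python
def build_database_url(user: str, password: str, host: str = "localhost",
                       port: int = 5432, database: str = "codegen_db",
                       sslmode: str = "prefer") -> str:
    """Compose a libpq-style URL like the DATABASE_URL this tool writes to .env."""
    return f"postgresql://{user}:{password}@{host}:{port}/{database}?sslmode={sslmode}"

# Example with the tool's default read-only role and a placeholder password:
print(build_database_url("codegen_user", "s3cret"))
# postgresql://codegen_user:s3cret@localhost:5432/codegen_db?sslmode=prefer
```

Codegen's Postgres settings ask for the same fields individually (host, port, database, username, password, SSL mode), so either form carries the same information.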
- Home Page | - Documentation | - Examples | - Discord | - Blog -
-
+## πŸš€ Quick Start -
+### Prerequisites -
- VoltAgent is an open source TypeScript framework for building and orchestrating AI agents.
-Escape the limitations of no-code builders and the complexity of starting from scratch. -
-
-
+1. **PostgreSQL 17** installed at `C:\Program Files\PostgreSQL\17` (Windows) +2. **Python 3.7+** installed +3. **Cloudflare account** with API access +4. **Domain configured** in Cloudflare (for custom worker URL) -
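A quick preflight check can catch the most common failures before running either setup option. The helper below is an illustrative sketch, not shipped with the repo; note that the setup script itself reads `CLOUDFLARE_GLOBAL_API_KEY` (together with `CLOUDFLARE_EMAIL`) from the environment, so the check accepts either credential variable name:

```python
import os
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def preflight() -> list:
    """Collect human-readable problems; an empty list means ready to run."""
    problems = []
    # PostgreSQL must be accepting TCP connections on the default port.
    if not port_open("localhost", 5432):
        problems.append("PostgreSQL is not listening on localhost:5432")
    # The script falls back to built-in defaults, but an explicit credential is safer.
    if not (os.getenv("CLOUDFLARE_GLOBAL_API_KEY") or os.getenv("CLOUDFLARE_API_TOKEN")):
        problems.append("No Cloudflare credential set in the environment")
    return problems

for problem in preflight():
    print(f"❌ {problem}")
```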
- -[![npm version](https://img.shields.io/npm/v/@voltagent/core.svg)](https://www.npmjs.com/package/@voltagent/core) -[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg)](CODE_OF_CONDUCT.md) -[![Discord](https://img.shields.io/discord/1361559153780195478.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://s.voltagent.dev/discord) -[![Twitter Follow](https://img.shields.io/twitter/follow/voltagent_dev?style=social)](https://twitter.com/voltagent_dev) - -
+### Environment Variables -
- -
- -flow - - -
- -## What is VoltAgent? - -> An **AI Agent Framework** provides the foundational structure and tools needed to build applications powered by autonomous agents. These agents, often driven by Large Language Models (LLMs), can perceive their environment, make decisions, and take actions to achieve specific goals. Building such agents from scratch involves managing complex interactions with LLMs, handling state, connecting to external tools and data, and orchestrating workflows. - -[![VoltAgent Developer Console Demo](https://github.com/user-attachments/assets/0adbec33-1373-4cf4-b67d-825f7baf1cb4)](https://github.com/VoltAgent/voltagent/blob/main/CONSOLE.md) - -**VoltAgent** is an open-source TypeScript framework that acts as this essential toolkit. It simplifies the development of AI agent applications by providing modular building blocks, standardized patterns, and abstractions. Whether you're creating chatbots, virtual assistants, automated workflows, or complex multi-agent systems, VoltAgent handles the underlying complexity, allowing you to focus on defining your agents' capabilities and logic. - -Instead of building everything from scratch, VoltAgent provides ready-made, modular building blocks: - -- **Core Engine (`@voltagent/core`)**: The heart of VoltAgent, providing fundamental capabilities for your AI agents Define individual agents with specific roles, tools, and memory. -- **Multi-Agent Systems**: Architect complex applications by coordinating multiple specialized agents using Supervisors. -- **Extensible Packages**: Enhance functionality with packages like `@voltagent/voice` for voice interactions. -- **Tooling & Integrations**: Equip agents with tools to connect to external APIs, databases, and services, enabling them to perform real-world tasks. 
**Supports the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) for standardized tool interactions.** -- **Data Retrieval & RAG**: Implement specialized retriever agents for efficient information fetching and **Retrieval-Augmented Generation (RAG)**. -- **Memory**: Enable agents to remember past interactions for more natural and context-aware conversations. -- **LLM Compatibility**: Works with popular AI models from OpenAI, Google, Anthropic, and more, allowing easy switching. -- **Developer Ecosystem**: Includes helpers like `create-voltagent-app`, `@voltagent/cli`, and the visual [VoltAgent Console](https://console.voltagent.dev) for quick setup, monitoring, and debugging. - -In essence, VoltAgent helps developers build sophisticated AI applications faster and more reliably, avoiding repetitive setup and the limitations of simpler tools. - -## Why VoltAgent? - -Building AI applications often involves a trade-off: - -1. **DIY Approach:** Using basic AI provider tools offers control but leads to complex, hard-to-manage code and repeated effort. -2. **No-Code Builders:** Simpler initially but often restrictive, limiting customization, provider choice, and complexity. - -VoltAgent provides a middle ground, offering structure and components without sacrificing flexibility: - -- **Build Faster:** Accelerate development with pre-built components compared to starting from scratch. -- **Maintainable Code:** Encourages organization for easier updates and debugging. -- **Scalability:** Start simple and easily scale to complex, multi-agent systems handling intricate workflows. -- **Flexibility:** Full control over agent behavior, LLM choice, tool integrations, and UI connections. -- **Avoid Lock-in:** Freedom to switch AI providers and models as needed. -- **Cost Efficiency:** Features designed to optimize AI service usage and reduce redundant calls. 
-- **Visual Monitoring:** Use the [VoltAgent Console](https://console.voltagent.dev) to track agent performance, inspect state, and debug visually. - -VoltAgent empowers developers to build their envisioned AI applications efficiently, from simple helpers to complex systems. - -## ⚑ Quick Start - -Create a new VoltAgent project in seconds using the `create-voltagent-app` CLI tool: +Set these environment variables or the script will use the provided defaults: ```bash -npm create voltagent-app@latest +CLOUDFLARE_API_TOKEN=eae82cf159577a8838cc83612104c09c5a0d6 +CLOUDFLARE_ACCOUNT_ID=2b2a1d3effa7f7fe4fe2a8c4e48681e3 +CLOUDFLARE_WORKER_NAME=neon-db +CLOUDFLARE_WORKER_URL=https://neon-db.pixeliumperfecto.workers.dev + +# Optional: PostgreSQL admin credentials (if not using defaults) +POSTGRES_ADMIN_USER=postgres +POSTGRES_ADMIN_PASSWORD=your_postgres_password ``` -This command guides you through setup. - -You'll see the starter code in `src/index.ts` to get you started with the VoltAgent framework. - -```typescript -import { VoltAgent, Agent } from "@voltagent/core"; -import { VercelAIProvider } from "@voltagent/vercel-ai"; // Example provider -import { openai } from "@ai-sdk/openai"; // Example model - -// Define a simple agent -const agent = new Agent({ - name: "my-agent", - instructions: "A helpful assistant that answers questions without using tools", - // Note: You can swap VercelAIProvider and openai with other supported providers/models - llm: new VercelAIProvider(), - model: openai("gpt-4o-mini"), -}); - -// Initialize VoltAgent with your agent(s) -new VoltAgent({ - agents: { - agent, - }, -}); -``` +### Installation & Setup -Afterwards, navigate to your project and run: - -```bash -npm run dev +#### Option 1: Windows Batch Script (Recommended) +```cmd +# Clone or download the files +# Run the batch script +setup_postgres.bat ``` -When you run the dev command, tsx will compile and run your code. 
You should see the VoltAgent server startup message in your terminal: - -``` -══════════════════════════════════════════════════ -VOLTAGENT SERVER STARTED SUCCESSFULLY -══════════════════════════════════════════════════ -βœ“ HTTP Server: http://localhost:3141 +#### Option 2: Manual Python Setup +```bash +# Install dependencies +pip install -r requirements.txt -Developer Console: https://console.voltagent.dev -══════════════════════════════════════════════════ +# Run the setup script +python cloudflare_postgres_setup.py ``` -Your agent is now running! To interact with it: +## πŸ”§ What This Tool Does -1. Open the Console: Click the [VoltAgent Console](https://console.voltagent.dev) link in your terminal output (or copy-paste it into your browser). -2. Find Your Agent: On the VoltAgent Console page, you should see your agent listed (e.g., "my-agent"). -3. Open Agent Details: Click on your agent's name. -4. Start Chatting: On the agent detail page, click the chat icon in the bottom right corner to open the chat window. -5. Send a Message: Type a message like "Hello" and press Enter. +### 1. Database Setup +- βœ… Checks if PostgreSQL is running locally +- βœ… Creates a dedicated database: `codegen_db` +- βœ… Creates a read-only user: `codegen_user` +- βœ… Sets up proper permissions (SELECT only for security) +- βœ… Generates secure random password -[![VoltAgent Developer Console Demo](https://github.com/user-attachments/assets/0adbec33-1373-4cf4-b67d-825f7baf1cb4)](https://console.voltagent.dev/) +### 2. Cloudflare Worker Deployment +- βœ… Checks if worker already exists +- βœ… Creates new worker if needed +- βœ… Deploys proxy worker with health endpoints +- βœ… Tests worker accessibility -## Key Features +### 3. 
Configuration Management +- βœ… Saves all credentials to `.env` file +- βœ… Provides connection strings for Codegen +- βœ… Shows comprehensive status report -- **Agent Core:** Define agents with descriptions, LLM providers, tools, and memory management. -- **Multi-Agent Systems:** Build complex workflows using Supervisor Agents coordinating multiple specialized Sub-Agents. -- **Tool Usage & Lifecycle:** Equip agents with custom or pre-built tools (functions) with type-safety (Zod), lifecycle hooks, and cancellation support to interact with external systems. -- **Flexible LLM Support:** Integrate seamlessly with various LLM providers (OpenAI, Anthropic, Google, etc.) and easily switch between models. -- **Memory Management:** Enable agents to retain context across interactions using different configurable memory providers. -- **Observability & Debugging:** Visually monitor agent states, interactions, logs, and performance via the [VoltAgent Console](https://console.voltagent.dev). -- **Voice Interaction:** Build voice-enabled agents capable of speech recognition and synthesis using the `@voltagent/voice` package. -- **Data Retrieval & RAG:** Integrate specialized retriever agents for efficient information fetching and **Retrieval-Augmented Generation (RAG)** from various sources. -- **Model Context Protocol (MCP) Support:** Connect to external tool servers (HTTP/stdio) adhering to the [MCP standard](https://modelcontextprotocol.io/) for extended capabilities. -- **Prompt Engineering Tools:** Leverage utilities like `createPrompt` for crafting and managing effective prompts for your agents. -- **Framework Compatibility:** Designed for easy integration into existing Node.js applications and popular frameworks. 
+## πŸ“Š Output Example -## Use Cases - -VoltAgent is versatile and can power a wide range of AI-driven applications: +``` +============================================================ +🐘 Cloudflare Postgres Exposure Setup for Codegen πŸš€ +============================================================ + +πŸ” Step 1: Checking PostgreSQL server... +βœ… PostgreSQL server is running + +πŸ”§ Step 2: Setting up database and user... +πŸ“¦ Creating database: codegen_db +πŸ‘€ Creating user: codegen_user +βœ… Database and user setup completed + +☁️ Step 3: Checking Cloudflare Worker... +πŸ“ Cloudflare Worker 'neon-db' does not exist +πŸš€ Creating Cloudflare Worker... +βœ… Cloudflare Worker 'neon-db' created successfully + +πŸ§ͺ Step 4: Testing worker deployment... +πŸ§ͺ Testing worker at: https://neon-db.pixeliumperfecto.workers.dev +βœ… Worker is accessible and healthy + +πŸ’Ύ Step 5: Saving configuration... +βœ… Environment variables saved to .env + +============================================================ +πŸ“Š SETUP STATUS REPORT +============================================================ + +🐘 DATABASE STATUS: + Host: localhost + Port: 5432 + Database: codegen_db + User: codegen_user + Password: a1b2c3d4e5f6g7h8 + SSL Mode: prefer + Status: βœ… CONNECTED + +☁️ CLOUDFLARE STATUS: + Worker Name: neon-db + Worker URL: https://neon-db.pixeliumperfecto.workers.dev + Account ID: 2b2a1d3effa7f7fe4fe2a8c4e48681e3 + Status: βœ… WORKER ACCESSIBLE + +πŸ€– CODEGEN INTEGRATION: + Copy these values to Codegen Postgres settings: + Host: localhost + Port: 5432 + Database: codegen_db + Username: codegen_user + Password: a1b2c3d4e5f6g7h8 + SSL Mode: prefer + +πŸ“ FILES CREATED: + .env file: C:\path\to\your\project\.env + +πŸ”— USEFUL URLS: + Worker Health: https://neon-db.pixeliumperfecto.workers.dev/health + Worker DB Info: https://neon-db.pixeliumperfecto.workers.dev/db-info + +============================================================ + +πŸŽ‰ Setup completed successfully! 
+πŸ’‘ You can now use these credentials in Codegen's Postgres integration +``` -- **Complex Workflow Automation:** Orchestrate multi-step processes involving various tools, APIs, and decision points using coordinated agents. -- **Intelligent Data Pipelines:** Build agents that fetch, process, analyze, and transform data from diverse sources. -- **AI-Powered Internal Tools & Dashboards:** Create interactive internal applications that leverage AI for analysis, reporting, or task automation, often integrated with UIs using hooks. -- **Automated Customer Support Agents:** Develop sophisticated chatbots that can understand context (memory), use tools (e.g., check order status), and escalate complex issues. -- **Repository Analysis & Codebase Automation:** Analyze code repositories, automate refactoring tasks, generate documentation, or manage CI/CD processes. -- **Retrieval-Augmented Generation (RAG) Systems:** Build agents that retrieve relevant information from knowledge bases (using retriever agents) before generating informed responses. -- **Voice-Controlled Interfaces & Applications:** Utilize the `@voltagent/voice` package to create applications that respond to and generate spoken language. -- **Personalized User Experiences:** Develop agents that adapt responses and actions based on user history and preferences stored in memory. -- **Real-time Monitoring & Alerting:** Design agents that continuously monitor data streams or systems and trigger actions or notifications based on defined conditions. -- **And Virtually Anything Else...**: If you can imagine an AI agent doing it, VoltAgent can likely help you build it! 
⚑ +## πŸ” Security Features -## Learning VoltAgent +- **Read-Only Access**: Database user only has SELECT permissions +- **Secure Passwords**: Auto-generated random passwords +- **SSL Preferred**: Connections use SSL when available +- **Environment Variables**: Sensitive data stored in `.env` file +- **Worker Proxy**: Cloudflare Worker acts as secure proxy -- **[Documentation](https://voltagent.dev/docs/)**: Dive into guides, concepts, and tutorials. -- **[Examples](https://github.com/voltagent/voltagent/tree/main/examples)**: Explore practical implementations. -- **[Blog](https://voltagent.dev/blog/)**: Read more about technical insights, and best practices. +## πŸ”— Codegen Integration -## Contribution +After running the setup, use these credentials in Codegen's Postgres settings: -We welcome contributions! Please refer to the contribution guidelines (link needed if available). Join our [Discord](https://s.voltagent.dev/discord) server for questions and discussions. +1. Go to Codegen Settings β†’ Integrations β†’ Postgres +2. Add new credential with these values: + - **Host**: `localhost` + - **Port**: `5432` + - **Database**: `codegen_db` + - **Username**: `codegen_user` + - **Password**: (from output or `.env` file) + - **SSL Mode**: `prefer` -## Community β™₯️ Thanks +## πŸ› οΈ Troubleshooting -Your stars help us reach more developers! If you find VoltAgent useful, please consider giving us a star on GitHub to support the project and help others discover it. +### PostgreSQL Not Running +```bash +# Start PostgreSQL service (Windows) +net start postgresql-x64-17 -## License +# Or start via Services.msc +``` -Licensed under the MIT License, Copyright Β© 2025-present VoltAgent. +### Connection Issues +- Verify PostgreSQL is accepting connections on port 5432 +- Check Windows Firewall settings +- Ensure `pg_hba.conf` allows local connections + +### Authentication Issues +The script tries multiple authentication methods automatically: +1. 
Environment variable `POSTGRES_ADMIN_PASSWORD` +2. Common default passwords (`postgres`, `admin`, empty) +3. Windows username authentication +4. Interactive prompt for credentials + +**If authentication fails:** +- Set environment variables before running: + ```cmd + set POSTGRES_ADMIN_USER=postgres + set POSTGRES_ADMIN_PASSWORD=your_actual_password + setup_postgres.bat + ``` +- Or run the script and enter credentials when prompted +- Check your PostgreSQL installation's default user/password + +### Cloudflare Worker Issues +- Verify API token has Workers:Edit permissions +- Check account ID is correct +- Ensure domain is properly configured in Cloudflare + +### Permission Errors +- Run as Administrator if needed +- Check PostgreSQL user permissions +- Verify Python has write access to current directory + +## πŸ“ Generated Files + +- **`.env`**: Environment variables and database credentials +- **`cloudflare_postgres_setup.py`**: Main setup script +- **`requirements.txt`**: Python dependencies +- **`setup_postgres.bat`**: Windows batch script for easy setup + +## πŸ”„ Re-running Setup + +The script is idempotent - you can run it multiple times safely: +- Existing database and user won't be recreated +- Existing Cloudflare Worker will be detected +- Configuration will be updated with current values + +## 🌐 Worker Endpoints + +Your Cloudflare Worker provides these endpoints: + +- **`/health`**: Health check and status +- **`/db-info`**: Database connection information +- **`/`**: General worker information + +## πŸ“ž Support + +If you encounter issues: + +1. Check the detailed output for specific error messages +2. Verify all prerequisites are met +3. Ensure environment variables are set correctly +4. 
Check PostgreSQL and Cloudflare service status + +## πŸ”’ Security Notes + +- This setup is designed for **development use** +- For production, consider additional security measures: + - IP whitelisting + - VPN access + - More restrictive database permissions + - Regular credential rotation diff --git a/cloudflare_postgres_setup.py b/cloudflare_postgres_setup.py new file mode 100644 index 000000000..9b86d6344 --- /dev/null +++ b/cloudflare_postgres_setup.py @@ -0,0 +1,485 @@ +#!/usr/bin/env python3 +""" +Cloudflare Postgres Exposure Automation for Codegen +Automates the setup of local Postgres database exposure via Cloudflare Workers/Tunnels +""" + +import os +import json +import subprocess +import sys +import time +import requests +from pathlib import Path +from typing import Dict, Optional, Tuple +import psycopg2 +from psycopg2 import sql +import uuid + +class CloudflarePostgresSetup: + def __init__(self): + # Cloudflare credentials from environment + self.cf_api_key = os.getenv('CLOUDFLARE_GLOBAL_API_KEY', 'eae82cf159577a8838cc83612104c09c5a0d6') + self.cf_email = os.getenv('CLOUDFLARE_EMAIL', 'your-email@example.com') # You'll need to provide this + self.cf_account_id = os.getenv('CLOUDFLARE_ACCOUNT_ID', '2b2a1d3effa7f7fe4fe2a8c4e48681e3') + self.cf_worker_name = os.getenv('CLOUDFLARE_WORKER_NAME', 'neon-db') + self.cf_worker_url = os.getenv('CLOUDFLARE_WORKER_URL', 'https://neon-db.pixeliumperfecto.workers.dev') + + # Local Postgres settings + self.local_pg_path = r"C:\Program Files\PostgreSQL\17" + self.local_host = "localhost" + self.local_port = 5432 + self.local_db = "codegen_db" + self.local_user = "codegen_user" + self.local_password = self._generate_password() + + # PostgreSQL admin credentials (for setup) + self.admin_user = os.getenv('POSTGRES_ADMIN_USER', 'postgres') + self.admin_password = os.getenv('POSTGRES_ADMIN_PASSWORD') + + # Headers for Cloudflare API (Global API Key format) + self.headers = { + 'X-Auth-Key': self.cf_api_key, + 
'X-Auth-Email': self.cf_email, + 'Content-Type': 'application/json' + } + + self.env_file = Path('.env') + + def _generate_password(self) -> str: + """Generate a secure password for the database user""" + return str(uuid.uuid4()).replace('-', '')[:16] + + def print_banner(self): + """Print a nice banner""" + print("=" * 60) + print("🐘 Cloudflare Postgres Exposure Setup for Codegen πŸš€") + print("=" * 60) + print() + + def check_postgres_running(self) -> bool: + """Check if PostgreSQL is running locally""" + # Try different authentication methods + auth_methods = [ + # Method 1: Use provided admin password + {'user': self.admin_user, 'password': self.admin_password} if self.admin_password else None, + # Method 2: Try common default passwords + {'user': 'postgres', 'password': 'postgres'}, + {'user': 'postgres', 'password': 'admin'}, + {'user': 'postgres', 'password': ''}, + # Method 3: Try Windows authentication + {'user': os.getenv('USERNAME', 'postgres'), 'password': ''}, + ] + + # Filter out None values + auth_methods = [method for method in auth_methods if method is not None] + + for i, auth in enumerate(auth_methods): + try: + print(f"πŸ” Trying authentication method {i+1}...") + conn = psycopg2.connect( + host=self.local_host, + port=self.local_port, + database="postgres", + user=auth['user'], + password=auth['password'] + ) + conn.close() + print(f"βœ… PostgreSQL server is running (authenticated as {auth['user']})") + # Store successful credentials for later use + self.admin_user = auth['user'] + self.admin_password = auth['password'] + return True + except Exception as e: + print(f" ❌ Method {i+1} failed: {str(e)[:100]}...") + continue + + # If all methods fail, prompt for password + print("\nπŸ”‘ All automatic authentication methods failed.") + print("Please provide your PostgreSQL admin credentials:") + + try: + import getpass + self.admin_user = input(f"PostgreSQL admin username (default: postgres): ").strip() or 'postgres' + self.admin_password = 
getpass.getpass("PostgreSQL admin password: ") + + # Test the provided credentials + conn = psycopg2.connect( + host=self.local_host, + port=self.local_port, + database="postgres", + user=self.admin_user, + password=self.admin_password + ) + conn.close() + print(f"βœ… PostgreSQL server is running (authenticated as {self.admin_user})") + return True + + except Exception as e: + print(f"❌ PostgreSQL server not accessible: {e}") + print(f"πŸ’‘ Make sure PostgreSQL is running at {self.local_host}:{self.local_port}") + print("πŸ’‘ Check your username and password") + return False + + def setup_database_and_user(self) -> bool: + """Create database and user for Codegen""" + try: + # Connect as superuser (postgres) + conn = psycopg2.connect( + host=self.local_host, + port=self.local_port, + database="postgres", + user=self.admin_user, + password=self.admin_password + ) + conn.autocommit = True + cur = conn.cursor() + + # Check if database exists + cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (self.local_db,)) + if not cur.fetchone(): + print(f"πŸ“¦ Creating database: {self.local_db}") + cur.execute(sql.SQL("CREATE DATABASE {}").format(sql.Identifier(self.local_db))) + else: + print(f"βœ… Database {self.local_db} already exists") + + # Check if user exists + cur.execute("SELECT 1 FROM pg_user WHERE usename = %s", (self.local_user,)) + if not cur.fetchone(): + print(f"πŸ‘€ Creating user: {self.local_user}") + cur.execute(sql.SQL("CREATE USER {} WITH PASSWORD %s").format( + sql.Identifier(self.local_user)), (self.local_password,)) + else: + print(f"βœ… User {self.local_user} already exists") + # Update password in case it changed + cur.execute(sql.SQL("ALTER USER {} WITH PASSWORD %s").format( + sql.Identifier(self.local_user)), (self.local_password,)) + + # Grant permissions (READ-ONLY for Codegen safety) + cur.execute(sql.SQL("GRANT CONNECT ON DATABASE {} TO {}").format( + sql.Identifier(self.local_db), sql.Identifier(self.local_user))) + + conn.close() + + # 
Connect to the new database and grant schema permissions + conn = psycopg2.connect( + host=self.local_host, + port=self.local_port, + database=self.local_db, + user=self.admin_user, + password=self.admin_password + ) + conn.autocommit = True + cur = conn.cursor() + + cur.execute(sql.SQL("GRANT USAGE ON SCHEMA public TO {}").format( + sql.Identifier(self.local_user))) + cur.execute(sql.SQL("GRANT SELECT ON ALL TABLES IN SCHEMA public TO {}").format( + sql.Identifier(self.local_user))) + cur.execute(sql.SQL("ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO {}").format( + sql.Identifier(self.local_user))) + + conn.close() + print("βœ… Database and user setup completed") + return True + + except Exception as e: + print(f"❌ Failed to setup database: {e}") + return False + + def check_cloudflare_worker_exists(self) -> bool: + """Check if Cloudflare Worker already exists""" + try: + url = f"https://api.cloudflare.com/client/v4/accounts/{self.cf_account_id}/workers/scripts/{self.cf_worker_name}" + response = requests.get(url, headers=self.headers) + + if response.status_code == 200: + print(f"βœ… Cloudflare Worker '{self.cf_worker_name}' already exists") + return True + elif response.status_code == 404: + print(f"πŸ“ Cloudflare Worker '{self.cf_worker_name}' does not exist") + return False + else: + print(f"⚠️ Error checking worker: {response.status_code} - {response.text}") + return False + + except Exception as e: + print(f"❌ Failed to check Cloudflare Worker: {e}") + return False + + def create_cloudflare_worker(self) -> bool: + """Create Cloudflare Worker for database proxy""" + worker_script = f''' +// Cloudflare Worker for Postgres Database Proxy +// This worker acts as a secure proxy to your local Postgres database + +export default {{ + async fetch(request, env, ctx) {{ + const url = new URL(request.url); + + // CORS headers + const corsHeaders = {{ + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, 
OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }}; + + // Handle CORS preflight + if (request.method === 'OPTIONS') {{ + return new Response(null, {{ headers: corsHeaders }}); + }} + + // Health check endpoint + if (url.pathname === '/health') {{ + return new Response(JSON.stringify({{ + status: 'healthy', + timestamp: new Date().toISOString(), + database: 'postgres-proxy', + worker: '{self.cf_worker_name}' + }}), {{ + headers: {{ ...corsHeaders, 'Content-Type': 'application/json' }} + }}); + }} + + // Database info endpoint + if (url.pathname === '/db-info') {{ + return new Response(JSON.stringify({{ + host: '{self.local_host}', + port: {self.local_port}, + database: '{self.local_db}', + user: '{self.local_user}', + ssl_mode: 'prefer', + connection_url: 'postgresql://{self.local_user}:[REDACTED]@{self.local_host}:{self.local_port}/{self.local_db}?sslmode=prefer' + }}), {{ + headers: {{ ...corsHeaders, 'Content-Type': 'application/json' }} + }}); + }} + + // Default response + return new Response(JSON.stringify({{ + message: 'Postgres Proxy Worker', + endpoints: ['/health', '/db-info'], + note: 'This worker provides database connection information for Codegen' + }}), {{ + headers: {{ ...corsHeaders, 'Content-Type': 'application/json' }} + }}); + }}, +}}; +''' + + try: + url = f"https://api.cloudflare.com/client/v4/accounts/{self.cf_account_id}/workers/scripts/{self.cf_worker_name}" + + # Create multipart form data + files = { + 'metadata': (None, json.dumps({ + 'main_module': 'worker.js', + 'compatibility_date': '2023-05-18' + }), 'application/json'), + 'worker.js': (None, worker_script, 'application/javascript+module') + } + + headers = {'X-Auth-Key': self.cf_api_key, 'X-Auth-Email': self.cf_email} + response = requests.put(url, headers=headers, files=files) + + if response.status_code in [200, 201]: + print(f"βœ… Cloudflare Worker '{self.cf_worker_name}' created successfully") + return True + else: + print(f"❌ Failed to create 
worker: {response.status_code} - {response.text}") + return False + + except Exception as e: + print(f"❌ Failed to create Cloudflare Worker: {e}") + return False + + def test_worker_deployment(self) -> bool: + """Test if the worker is accessible""" + try: + print(f"πŸ§ͺ Testing worker at: {self.cf_worker_url}") + response = requests.get(f"{self.cf_worker_url}/health", timeout=10) + + if response.status_code == 200: + data = response.json() + print(f"βœ… Worker is accessible and healthy") + print(f" Status: {data.get('status')}") + print(f" Timestamp: {data.get('timestamp')}") + return True + else: + print(f"⚠️ Worker responded with status: {response.status_code}") + return False + + except Exception as e: + print(f"❌ Failed to test worker: {e}") + return False + + def save_to_env_file(self): + """Save database credentials to .env file""" + env_content = f"""# Codegen Postgres Database Configuration +# Generated by cloudflare_postgres_setup.py + +# Database Connection Details +POSTGRES_HOST={self.local_host} +POSTGRES_PORT={self.local_port} +POSTGRES_DATABASE={self.local_db} +POSTGRES_USER={self.local_user} +POSTGRES_PASSWORD={self.local_password} +POSTGRES_SSL_MODE=prefer + +# Connection URL for Codegen +DATABASE_URL=postgresql://{self.local_user}:{self.local_password}@{self.local_host}:{self.local_port}/{self.local_db}?sslmode=prefer + +# Cloudflare Configuration +CLOUDFLARE_GLOBAL_API_KEY={self.cf_api_key} +CLOUDFLARE_EMAIL={self.cf_email} +CLOUDFLARE_ACCOUNT_ID={self.cf_account_id} +CLOUDFLARE_WORKER_NAME={self.cf_worker_name} +CLOUDFLARE_WORKER_URL={self.cf_worker_url} + +# For Codegen Integration (use these in Codegen settings) +CODEGEN_DB_HOST={self.local_host} +CODEGEN_DB_PORT={self.local_port} +CODEGEN_DB_NAME={self.local_db} +CODEGEN_DB_USER={self.local_user} +CODEGEN_DB_PASSWORD={self.local_password} +""" + + try: + with open(self.env_file, 'w') as f: + f.write(env_content) + print(f"βœ… Environment variables saved to {self.env_file}") + except Exception 
as e: + print(f"❌ Failed to save .env file: {e}") + + def print_status_report(self): + """Print comprehensive status report""" + print("\n" + "=" * 60) + print("πŸ“Š SETUP STATUS REPORT") + print("=" * 60) + + # Database Status + print("\n🐘 DATABASE STATUS:") + print(f" Host: {self.local_host}") + print(f" Port: {self.local_port}") + print(f" Database: {self.local_db}") + print(f" User: {self.local_user}") + print(f" Password: {self.local_password}") + print(f" SSL Mode: prefer") + + # Test database connection + try: + conn = psycopg2.connect( + host=self.local_host, + port=self.local_port, + database=self.local_db, + user=self.local_user, + password=self.local_password + ) + conn.close() + print(" Status: βœ… CONNECTED") + except Exception as e: + print(f" Status: ❌ CONNECTION FAILED - {e}") + + # Cloudflare Status + print(f"\n☁️ CLOUDFLARE STATUS:") + print(f" Worker Name: {self.cf_worker_name}") + print(f" Worker URL: {self.cf_worker_url}") + print(f" Account ID: {self.cf_account_id}") + + # Test worker + try: + response = requests.get(f"{self.cf_worker_url}/health", timeout=5) + if response.status_code == 200: + print(" Status: βœ… WORKER ACCESSIBLE") + else: + print(f" Status: ⚠️ WORKER RESPONDED WITH {response.status_code}") + except Exception: + print(" Status: ❌ WORKER NOT ACCESSIBLE") + + # Codegen Integration Instructions + print(f"\nπŸ€– CODEGEN INTEGRATION:") + print(" Copy these values to Codegen Postgres settings:") + print(f" Host: {self.local_host}") + print(f" Port: {self.local_port}") + print(f" Database: {self.local_db}") + print(f" Username: {self.local_user}") + print(f" Password: {self.local_password}") + print(" SSL Mode: prefer") + + print(f"\nπŸ“ FILES CREATED:") + print(f" .env file: {self.env_file.absolute()}") + + print(f"\nπŸ”— USEFUL URLS:") + print(f" Worker Health: {self.cf_worker_url}/health") + print(f" Worker DB Info: {self.cf_worker_url}/db-info") + + print("\n" + "=" * 60) + + def run_setup(self): + """Main setup process""" 
self.print_banner() + + # Step 1: Check PostgreSQL + print("πŸ” Step 1: Checking PostgreSQL server...") + if not self.check_postgres_running(): + print("❌ Setup failed: PostgreSQL not accessible") + return False + + # Step 2: Setup database and user + print("\nπŸ”§ Step 2: Setting up database and user...") + if not self.setup_database_and_user(): + print("❌ Setup failed: Could not setup database") + return False + + # Step 3: Check/Create Cloudflare Worker + print("\n☁️ Step 3: Checking Cloudflare Worker...") + worker_exists = self.check_cloudflare_worker_exists() + + if not worker_exists: + print("πŸš€ Creating Cloudflare Worker...") + if not self.create_cloudflare_worker(): + print("❌ Setup failed: Could not create Cloudflare Worker") + return False + + # Wait a moment for deployment + print("⏳ Waiting for worker deployment...") + time.sleep(3) + + # Step 4: Test worker + print("\nπŸ§ͺ Step 4: Testing worker deployment...") + if not self.test_worker_deployment(): + print("⚠️ Worker may not be fully accessible yet (this is normal)") + + # Step 5: Save configuration + print("\nπŸ’Ύ Step 5: Saving configuration...") + self.save_to_env_file() + + # Step 6: Print status report + self.print_status_report() + + print("\nπŸŽ‰ Setup completed successfully!") + print("πŸ’‘ You can now use these credentials in Codegen's Postgres integration") + + return True + +def main(): + """Main entry point""" + try: + setup = CloudflarePostgresSetup() + success = setup.run_setup() + + if success: + print("\nβœ… All done! Your local Postgres is now ready for Codegen integration.") + else: + print("\n❌ Setup encountered errors. 
Please check the output above.") + sys.exit(1) + + except KeyboardInterrupt: + print("\n\n⚠️ Setup interrupted by user") + sys.exit(1) + except Exception as e: + print(f"\n❌ Unexpected error: {e}") + sys.exit(1) + +if __name__ == "__main__": + main() diff --git a/examples/workflow-orchestration/README.md b/examples/workflow-orchestration/README.md new file mode 100644 index 000000000..f8136b12f --- /dev/null +++ b/examples/workflow-orchestration/README.md @@ -0,0 +1,45 @@ +# Workflow Orchestration Example + +This example demonstrates the unified workflow orchestration engine that consolidates workflow and task management into a single cohesive system. + +## Features Demonstrated + +- **Multiple Execution Modes**: Sequential, parallel, conditional, pipeline, and graph-based workflows +- **Advanced Task Scheduling**: Dependency management and intelligent queuing +- **State Management**: Workflow persistence and recovery +- **Event Monitoring**: Real-time workflow and task events +- **Error Handling**: Retry policies and fault tolerance +- **Integration**: Backward compatibility with SubAgentManager + +## Running the Example + +```bash +npm install +npm run dev +``` + +## Example Workflows + +### 1. Data Processing Pipeline +A sequential workflow that processes data through multiple stages. + +### 2. Parallel Analysis +Multiple agents analyze different aspects of data simultaneously. + +### 3. Conditional Processing +Workflow execution based on runtime conditions and previous task results. + +### 4. Dependency Graph +Complex workflow with task dependencies and parallel branches. + +### 5. Integration Example +Shows how to migrate from SubAgentManager to workflow orchestration. 
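+
+The dependency-graph example (4.) hinges on one idea: a task becomes runnable only once every task it depends on has completed. A minimal, framework-free sketch of that scheduling rule (the `Task` shape and `executionOrder` helper are illustrative stand-ins, not the VoltAgent API):

```typescript
// Sketch: ordering tasks by dependencies, the way a graph-mode workflow would.
// Task IDs mirror the dependency-graph example; the scheduler is hypothetical.
type Task = { id: string; dependencies?: string[] };

function executionOrder(tasks: Task[]): string[] {
  const order: string[] = [];
  const done = new Set<string>();
  let remaining = [...tasks];
  while (remaining.length > 0) {
    // A task is ready when all of its dependencies have completed.
    const ready = remaining.filter((t) =>
      (t.dependencies ?? []).every((d) => done.has(d)),
    );
    if (ready.length === 0) {
      throw new Error("Circular dependency detected");
    }
    for (const t of ready) {
      order.push(t.id);
      done.add(t.id);
    }
    remaining = remaining.filter((t) => !done.has(t.id));
  }
  return order;
}

const order = executionOrder([
  { id: "report", dependencies: ["merge"] },
  { id: "extract-a", dependencies: ["init"] },
  { id: "extract-b", dependencies: ["init"] },
  { id: "merge", dependencies: ["extract-a", "extract-b"] },
  { id: "init" },
]);
console.log(order); // "init" first, "report" last
```

Tasks that land in the same `ready` batch (here `extract-a` and `extract-b`) have no ordering constraint between them, which is exactly where the engine can run work in parallel.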
+ +## Key Concepts + +- **Workflow Definition**: Declarative workflow specification +- **Task Dependencies**: Control execution order and parallelism +- **Conditional Execution**: Dynamic workflow paths based on results +- **State Persistence**: Workflow recovery and monitoring +- **Event-Driven Monitoring**: Real-time observability + diff --git a/examples/workflow-orchestration/package.json b/examples/workflow-orchestration/package.json new file mode 100644 index 000000000..ae9c5309d --- /dev/null +++ b/examples/workflow-orchestration/package.json @@ -0,0 +1,28 @@ +{ + "name": "voltagent-example-workflow-orchestration", + "version": "0.1.0", + "private": true, + "description": "VoltAgent Workflow Orchestration Example", + "main": "src/index.ts", + "scripts": { + "dev": "tsx src/index.ts", + "build": "tsc", + "start": "node dist/index.js" + }, + "keywords": [ + "voltagent", + "workflow", + "orchestration", + "ai", + "agents" + ], + "dependencies": { + "@voltagent/core": "workspace:*", + "@voltagent/anthropic-ai": "workspace:*" + }, + "devDependencies": { + "tsx": "^4.7.0", + "typescript": "^5.3.3" + } +} + diff --git a/examples/workflow-orchestration/src/index.ts b/examples/workflow-orchestration/src/index.ts new file mode 100644 index 000000000..37901ae2c --- /dev/null +++ b/examples/workflow-orchestration/src/index.ts @@ -0,0 +1,541 @@ +import { Agent } from "@voltagent/core"; +import { AnthropicAI } from "@voltagent/anthropic-ai"; +import { + WorkflowOrchestrationEngine, + WorkflowDefinition, + createSequentialWorkflow, + createParallelWorkflow, + createPipelineWorkflow, + createGraphWorkflow, + OrchestrationIntegration, + DEFAULT_RETRY_POLICY, + DEFAULT_ERROR_HANDLING, +} from "@voltagent/core/orchestration"; + +/** + * Workflow Orchestration Example + * + * This example demonstrates the unified workflow orchestration engine + * that consolidates workflow and task management capabilities. 
+ */ + +// Initialize LLM provider +const llm = new AnthropicAI({ + apiKey: process.env.ANTHROPIC_API_KEY || "your-api-key-here", +}); + +// Create specialized agents for different tasks +const dataExtractorAgent = new Agent({ + name: "DataExtractor", + instructions: "You are a data extraction specialist. Extract and structure data from various sources.", + llm, + model: "claude-3-haiku-20240307", +}); + +const dataAnalyzerAgent = new Agent({ + name: "DataAnalyzer", + instructions: "You are a data quality analyst. Analyze data quality and provide recommendations.", + llm, + model: "claude-3-haiku-20240307", +}); + +const dataProcessorAgent = new Agent({ + name: "DataProcessor", + instructions: "You are a data processor. Process and transform high-quality data.", + llm, + model: "claude-3-haiku-20240307", +}); + +const dataCleanerAgent = new Agent({ + name: "DataCleaner", + instructions: "You are a data cleaner. Clean and fix low-quality data issues.", + llm, + model: "claude-3-haiku-20240307", +}); + +const dataTransformerAgent = new Agent({ + name: "DataTransformer", + instructions: "You are a data transformer. Transform data into required formats.", + llm, + model: "claude-3-haiku-20240307", +}); + +const reportGeneratorAgent = new Agent({ + name: "ReportGenerator", + instructions: "You are a report generator. 
Create comprehensive reports from processed data.", + llm, + model: "claude-3-haiku-20240307", +}); + +async function main() { + console.log("πŸš€ VoltAgent Workflow Orchestration Example\n"); + + // Create orchestration engine with all agents + const agents = [ + dataExtractorAgent, + dataAnalyzerAgent, + dataProcessorAgent, + dataCleanerAgent, + dataTransformerAgent, + reportGeneratorAgent, + ]; + + const engine = new WorkflowOrchestrationEngine(agents, { + maxConcurrentWorkflows: 5, + maxConcurrentTasks: 10, + persistenceEnabled: false, // Disable for example + }); + + // Set up event monitoring + engine.onWorkflowEvent((event) => { + console.log(`πŸ“Š Event: ${event.type} | Workflow: ${event.workflowId} | Task: ${event.taskId || 'N/A'}`); + }); + + await engine.start(); + console.log("βœ… Orchestration engine started\n"); + + try { + // Example 1: Sequential Data Processing Pipeline + await demonstrateSequentialWorkflow(engine); + + // Example 2: Parallel Data Analysis + await demonstrateParallelWorkflow(engine); + + // Example 3: Conditional Processing + await demonstrateConditionalWorkflow(engine); + + // Example 4: Complex Dependency Graph + await demonstrateDependencyGraph(engine); + + // Example 5: Pipeline with Data Flow + await demonstratePipelineWorkflow(engine); + + // Example 6: Integration with SubAgentManager + await demonstrateIntegration(engine); + + } catch (error) { + console.error("❌ Error during workflow execution:", error); + } finally { + await engine.stop(); + console.log("\nπŸ›‘ Orchestration engine stopped"); + } +} + +async function demonstrateSequentialWorkflow(engine: WorkflowOrchestrationEngine) { + console.log("πŸ“‹ Example 1: Sequential Data Processing Pipeline"); + console.log("=" .repeat(50)); + + const workflow = createSequentialWorkflow("sequential-processing", "Sequential Data Processing", [ + { + id: "extract", + name: "Extract Data", + agentName: "DataExtractor", + input: "Extract customer data from the CRM system", + 
timeout: 30000, + }, + { + id: "analyze", + name: "Analyze Quality", + agentName: "DataAnalyzer", + input: "Analyze the quality of the extracted customer data", + timeout: 30000, + }, + { + id: "transform", + name: "Transform Data", + agentName: "DataTransformer", + input: "Transform the analyzed data into standard format", + timeout: 30000, + }, + { + id: "report", + name: "Generate Report", + agentName: "ReportGenerator", + input: "Generate a summary report of the data processing pipeline", + timeout: 30000, + }, + ]); + + await engine.registerWorkflow(workflow); + + const result = await engine.executeWorkflow("sequential-processing", { + source: "CRM_SYSTEM", + format: "JSON", + }); + + console.log(`Status: ${result.status}`); + console.log(`Duration: ${result.duration}ms`); + console.log(`Tasks completed: ${result.taskResults.filter(t => t.status === "completed").length}/${result.taskResults.length}`); + console.log(`Final result: ${result.finalResult}\n`); +} + +async function demonstrateParallelWorkflow(engine: WorkflowOrchestrationEngine) { + console.log("⚑ Example 2: Parallel Data Analysis"); + console.log("=" .repeat(50)); + + const workflow = createParallelWorkflow("parallel-analysis", "Parallel Data Analysis", [ + { + id: "quality-analysis", + name: "Quality Analysis", + agentName: "DataAnalyzer", + input: "Perform comprehensive quality analysis on the dataset", + timeout: 30000, + }, + { + id: "extract-insights", + name: "Extract Insights", + agentName: "DataExtractor", + input: "Extract key business insights from the dataset", + timeout: 30000, + }, + { + id: "clean-data", + name: "Clean Data", + agentName: "DataCleaner", + input: "Clean and standardize the dataset", + timeout: 30000, + }, + { + id: "transform-format", + name: "Transform Format", + agentName: "DataTransformer", + input: "Transform data into multiple output formats", + timeout: 30000, + }, + ]); + + await engine.registerWorkflow(workflow); + + const result = await 
engine.executeWorkflow("parallel-analysis", { + dataset: "customer_transactions_2024", + analysisType: "comprehensive", + }); + + console.log(`Status: ${result.status}`); + console.log(`Duration: ${result.duration}ms`); + console.log(`Parallel tasks completed: ${result.taskResults.filter(t => t.status === "completed").length}/${result.taskResults.length}`); + + // Show parallel execution timing + const startTimes = result.taskResults.map(t => t.startTime.getTime()); + const maxStartDiff = Math.max(...startTimes) - Math.min(...startTimes); + console.log(`Max start time difference: ${maxStartDiff}ms (should be small for parallel execution)\n`); +} + +async function demonstrateConditionalWorkflow(engine: WorkflowOrchestrationEngine) { + console.log("πŸ”€ Example 3: Conditional Processing"); + console.log("=" .repeat(50)); + + const workflow: WorkflowDefinition = { + id: "conditional-processing", + name: "Conditional Data Processing", + version: "1.0.0", + mode: "conditional", + tasks: [ + { + id: "quality-check", + name: "Quality Check", + agentName: "DataAnalyzer", + input: "Analyze data quality and determine if it meets standards. 
Respond with 'high_quality' or 'low_quality'.", + timeout: 30000, + }, + { + id: "process-good-data", + name: "Process High Quality Data", + agentName: "DataProcessor", + input: "Process the high-quality data using advanced algorithms", + timeout: 30000, + conditions: [ + { + type: "result", + taskId: "quality-check", + operator: "contains", + value: "high_quality", + }, + ], + }, + { + id: "clean-bad-data", + name: "Clean Low Quality Data", + agentName: "DataCleaner", + input: "Clean and improve the low-quality data", + timeout: 30000, + conditions: [ + { + type: "result", + taskId: "quality-check", + operator: "contains", + value: "low_quality", + }, + ], + }, + { + id: "generate-report", + name: "Generate Final Report", + agentName: "ReportGenerator", + input: "Generate a report based on the processing results", + timeout: 30000, + conditions: [ + { + type: "status", + taskId: "process-good-data", + operator: "equals", + value: "completed", + }, + ], + }, + { + id: "generate-cleanup-report", + name: "Generate Cleanup Report", + agentName: "ReportGenerator", + input: "Generate a report on data cleaning activities", + timeout: 30000, + conditions: [ + { + type: "status", + taskId: "clean-bad-data", + operator: "equals", + value: "completed", + }, + ], + }, + ], + retryPolicy: DEFAULT_RETRY_POLICY, + errorHandling: DEFAULT_ERROR_HANDLING, + }; + + await engine.registerWorkflow(workflow); + + const result = await engine.executeWorkflow("conditional-processing", { + dataSource: "user_uploads", + qualityThreshold: 0.8, + }); + + console.log(`Status: ${result.status}`); + console.log(`Duration: ${result.duration}ms`); + + // Show which conditional paths were taken + const executedTasks = result.taskResults.filter(t => t.status === "completed"); + const skippedTasks = result.taskResults.filter(t => t.status === "skipped"); + + console.log(`Executed tasks: ${executedTasks.map(t => t.taskId).join(", ")}`); + console.log(`Skipped tasks: ${skippedTasks.map(t => 
t.taskId).join(", ")}\n`); +} + +async function demonstrateDependencyGraph(engine: WorkflowOrchestrationEngine) { + console.log("πŸ•ΈοΈ Example 4: Complex Dependency Graph"); + console.log("=" .repeat(50)); + + const workflow = createGraphWorkflow("dependency-graph", "Complex Dependency Processing", [ + { + id: "initialize", + name: "Initialize System", + agentName: "DataExtractor", + input: "Initialize the data processing system and validate connections", + timeout: 30000, + }, + { + id: "extract-source-a", + name: "Extract from Source A", + agentName: "DataExtractor", + input: "Extract data from source A (database)", + dependencies: ["initialize"], + timeout: 30000, + }, + { + id: "extract-source-b", + name: "Extract from Source B", + agentName: "DataExtractor", + input: "Extract data from source B (API)", + dependencies: ["initialize"], + timeout: 30000, + }, + { + id: "analyze-source-a", + name: "Analyze Source A Data", + agentName: "DataAnalyzer", + input: "Analyze data quality and patterns from source A", + dependencies: ["extract-source-a"], + timeout: 30000, + }, + { + id: "analyze-source-b", + name: "Analyze Source B Data", + agentName: "DataAnalyzer", + input: "Analyze data quality and patterns from source B", + dependencies: ["extract-source-b"], + timeout: 30000, + }, + { + id: "merge-data", + name: "Merge Data Sources", + agentName: "DataProcessor", + input: "Merge and reconcile data from both sources", + dependencies: ["analyze-source-a", "analyze-source-b"], + timeout: 30000, + }, + { + id: "transform-merged", + name: "Transform Merged Data", + agentName: "DataTransformer", + input: "Transform the merged data into final format", + dependencies: ["merge-data"], + timeout: 30000, + }, + { + id: "generate-final-report", + name: "Generate Final Report", + agentName: "ReportGenerator", + input: "Generate comprehensive report on the entire process", + dependencies: ["transform-merged"], + timeout: 30000, + }, + ]); + + await 
engine.registerWorkflow(workflow); + + const result = await engine.executeWorkflow("dependency-graph", { + sourceA: "postgresql://localhost/db", + sourceB: "https://api.example.com/data", + }); + + console.log(`Status: ${result.status}`); + console.log(`Duration: ${result.duration}ms`); + + // Show execution order based on dependencies + const sortedTasks = result.taskResults.sort((a, b) => a.startTime.getTime() - b.startTime.getTime()); + console.log("Execution order:"); + sortedTasks.forEach((task, index) => { + console.log(` ${index + 1}. ${task.taskId} (${task.status})`); + }); + console.log(); +} + +async function demonstratePipelineWorkflow(engine: WorkflowOrchestrationEngine) { + console.log("πŸ”„ Example 5: Pipeline with Data Flow"); + console.log("=" .repeat(50)); + + const workflow = createPipelineWorkflow("data-pipeline", "Data Processing Pipeline", [ + { + id: "extract-raw", + name: "Extract Raw Data", + agentName: "DataExtractor", + input: "Extract raw data from the source system", + timeout: 30000, + }, + { + id: "clean-raw", + name: "Clean Raw Data", + agentName: "DataCleaner", + input: "Clean and validate the raw data", + timeout: 30000, + }, + { + id: "transform-clean", + name: "Transform Clean Data", + agentName: "DataTransformer", + input: "Transform cleaned data into business format", + timeout: 30000, + }, + { + id: "analyze-transformed", + name: "Analyze Transformed Data", + agentName: "DataAnalyzer", + input: "Analyze the transformed data for insights", + timeout: 30000, + }, + { + id: "generate-insights-report", + name: "Generate Insights Report", + agentName: "ReportGenerator", + input: "Generate final insights report", + timeout: 30000, + }, + ]); + + await engine.registerWorkflow(workflow); + + const result = await engine.executeWorkflow("data-pipeline", { + input: "customer_data_2024.csv", + outputFormat: "insights_report", + }); + + console.log(`Status: ${result.status}`); + console.log(`Duration: ${result.duration}ms`); + 
console.log(`Pipeline stages completed: ${result.taskResults.filter(t => t.status === "completed").length}/${result.taskResults.length}`); + console.log(`Final pipeline result: ${result.finalResult}\n`); +} + +async function demonstrateIntegration(engine: WorkflowOrchestrationEngine) { + console.log("πŸ”— Example 6: Integration with SubAgentManager"); + console.log("=" .repeat(50)); + + // Create a supervisor agent with sub-agents + const supervisorAgent = new Agent({ + name: "DataSupervisor", + instructions: "You are a data processing supervisor that coordinates multiple specialized agents.", + llm, + model: "claude-3-haiku-20240307", + subAgents: [dataExtractorAgent, dataAnalyzerAgent, dataProcessorAgent], + }); + + // Create integration layer + const integration = new OrchestrationIntegration( + supervisorAgent, + supervisorAgent.getSubAgentManager(), + engine + ); + + await integration.start(); + + console.log("Testing enhanced delegation with orchestration..."); + + // Use enhanced delegation that can leverage orchestration + const result = await integration.enhancedDelegateTask({ + task: "Process customer feedback data and generate insights", + targetAgents: ["DataExtractor", "DataAnalyzer", "DataProcessor"], + useOrchestration: true, + executionMode: "sequential", + timeout: 30000, + context: { + dataSource: "customer_feedback_2024", + analysisType: "sentiment_and_trends", + }, + }); + + console.log(`Integration result status: ${result.status}`); + console.log(`Tasks executed: ${result.taskResults.length}`); + + // Show backward compatibility + console.log("\nTesting backward compatibility..."); + const simpleResult = await integration.enhancedDelegateTask({ + task: "Quick data validation", + targetAgents: ["DataAnalyzer"], + useOrchestration: false, // Use traditional SubAgentManager + }); + + console.log(`Simple delegation completed: ${Array.isArray(simpleResult) ? 
simpleResult.length : 1} results`); + + await integration.stop(); + console.log("Integration demonstration completed\n"); +} + +// Error handling wrapper +async function runExample() { + try { + await main(); + } catch (error) { + console.error("❌ Example failed:", error); + process.exit(1); + } +} + +// Check for required environment variables +if (!process.env.ANTHROPIC_API_KEY) { + console.error("❌ Please set ANTHROPIC_API_KEY environment variable"); + console.log("πŸ’‘ You can get an API key from: https://console.anthropic.com/"); + process.exit(1); +} + +// Run the example +runExample(); + diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts index 5f6ae5581..0983172d1 100644 --- a/packages/core/src/index.ts +++ b/packages/core/src/index.ts @@ -13,6 +13,8 @@ export * from "./tool"; export * from "./tool/reasoning/index"; export * from "./memory"; export * from "./agent/providers"; +export * from "./orchestration"; + export type { AgentOptions, AgentResponse, diff --git a/packages/core/src/orchestration/README.md b/packages/core/src/orchestration/README.md new file mode 100644 index 000000000..7c5ca1bb1 --- /dev/null +++ b/packages/core/src/orchestration/README.md @@ -0,0 +1,586 @@ +# VoltAgent Workflow Orchestration Engine + +A unified workflow orchestration system that consolidates workflow and task management into a single cohesive engine, building upon VoltAgent's existing SubAgentManager while providing advanced orchestration capabilities. 
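+
+One configuration surface that recurs throughout this document is the retry policy (`maxRetries`, `backoffStrategy`, `baseDelay`, `maxDelay`). A small self-contained sketch of how retry delays could be derived from a policy of that shape (`retryDelay` is a hypothetical helper for illustration, not part of the engine's API):

```typescript
// Illustrative only: the RetryPolicy fields mirror this README's examples;
// the delay computation below is one plausible reading of them.
type BackoffStrategy = "fixed" | "linear" | "exponential";

interface RetryPolicy {
  maxRetries: number;
  backoffStrategy: BackoffStrategy;
  baseDelay: number; // delay before the first retry, in ms
  maxDelay: number;  // upper bound applied to every computed delay, in ms
}

function retryDelay(policy: RetryPolicy, attempt: number): number {
  // attempt is 1-based: attempt 1 is the first retry.
  let delay = policy.baseDelay; // "fixed"
  if (policy.backoffStrategy === "linear") {
    delay = policy.baseDelay * attempt;
  } else if (policy.backoffStrategy === "exponential") {
    delay = policy.baseDelay * 2 ** (attempt - 1);
  }
  return Math.min(delay, policy.maxDelay);
}

const policy: RetryPolicy = {
  maxRetries: 3,
  backoffStrategy: "exponential",
  baseDelay: 1000,
  maxDelay: 30000,
};

const delays = [1, 2, 3].map((a) => retryDelay(policy, a));
console.log(delays); // [ 1000, 2000, 4000 ]
```

With `exponential` backoff the delay doubles on each attempt until it hits `maxDelay`, which keeps a persistently failing task from retrying in a tight loop.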
+
+## 🎯 Overview
+
+The Workflow Orchestration Engine addresses the need for sophisticated workflow management in VoltAgent by providing:
+
+- **Multiple Execution Modes**: Sequential, parallel, conditional, pipeline, and graph-based execution
+- **Advanced Task Scheduling**: Intelligent queuing, dependency management, and resource allocation
+- **State Management**: Workflow persistence, recovery, and monitoring
+- **Comprehensive Validation**: Pre-execution validation and runtime error handling
+- **Event-Driven Architecture**: Real-time monitoring and observability
+- **Backward Compatibility**: Seamless integration with the existing SubAgentManager
+
+## 🏗️ Architecture
+
+```
+┌─────────────────────────────────────────────────────────────┐
+│                Workflow Orchestration Engine                │
+├─────────────────────────────────────────────────────────────┤
+│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────┐  │
+│  │    Workflow     │  │      Task       │  │    State    │  │
+│  │     Engine      │  │    Scheduler    │  │   Manager   │  │
+│  └─────────────────┘  └─────────────────┘  └─────────────┘  │
+│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────┐  │
+│  │    Validator    │  │   Integration   │  │   Events    │  │
+│  │                 │  │      Layer      │  │   System    │  │
+│  └─────────────────┘  └─────────────────┘  └─────────────┘  │
+├─────────────────────────────────────────────────────────────┤
+│                      SubAgentManager                        │
+│                  (Backward Compatible)                      │
+└─────────────────────────────────────────────────────────────┘
+```
+
+## 🚀 Quick Start
+
+### Basic Usage
+
+```typescript
+import {
+  WorkflowOrchestrationEngine,
+  WorkflowDefinition,
+  createSequentialWorkflow
+} from '@voltagent/core/orchestration';
+
+// Create orchestration engine with agents
+const engine = new WorkflowOrchestrationEngine([agent1, agent2, agent3]);
+await engine.start();
+
+// Define a simple workflow
+const workflow = createSequentialWorkflow('data-processing', 'Data Processing', [
+  {
+    id: 'extract',
+    name: 'Extract Data',
+    agentName: 'DataExtractor',
+    input: 'Extract data from database'
+  },
+  {
+    id: 'transform',
+    name: 'Transform Data',
+    agentName: 'DataTransformer',
+    input: 'Transform extracted data'
+  }
+]);
+
+// Register and execute
+await engine.registerWorkflow(workflow);
+const result = await engine.executeWorkflow('data-processing');
+console.log('Workflow completed:', result.status);
+```
+
+### Advanced Workflow Definition
+
+```typescript
+const advancedWorkflow: WorkflowDefinition = {
+  id: 'advanced-processing',
+  name: 'Advanced Data Processing',
+  version: '1.0.0',
+  mode: 'graph', // Dependency-based execution
+  tasks: [
+    {
+      id: 'validate',
+      name: 'Validate Input',
+      agentName: 'DataValidator',
+      input: 'Validate incoming data',
+      timeout: 30000,
+      retries: 2
+    },
+    {
+      id: 'extract',
+      name: 'Extract Data',
+      agentName: 'DataExtractor',
+      input: 'Extract validated data',
+      dependencies: ['validate']
+    },
+    {
+      id: 'analyze',
+      name: 'Analyze Quality',
+      agentName: 'QualityAnalyzer',
input: 'Analyze data quality', + dependencies: ['extract'] + }, + { + id: 'process-good', + name: 'Process High Quality Data', + agentName: 'DataProcessor', + input: 'Process high quality data', + dependencies: ['analyze'], + conditions: [{ + type: 'result', + taskId: 'analyze', + operator: 'contains', + value: 'high_quality' + }] + }, + { + id: 'clean-bad', + name: 'Clean Low Quality Data', + agentName: 'DataCleaner', + input: 'Clean low quality data', + dependencies: ['analyze'], + conditions: [{ + type: 'result', + taskId: 'analyze', + operator: 'contains', + value: 'low_quality' + }] + } + ], + retryPolicy: { + maxRetries: 3, + backoffStrategy: 'exponential', + baseDelay: 1000, + maxDelay: 30000 + }, + errorHandling: { + onTaskFailure: 'continue', + onWorkflowFailure: 'partial_complete' + } +}; +``` + +## πŸ“‹ Execution Modes + +### 1. Sequential Mode +Tasks execute one after another in order. + +```typescript +const sequentialWorkflow = createSequentialWorkflow('seq-workflow', 'Sequential Processing', [ + { id: 'step1', name: 'First Step', agentName: 'Agent1', input: 'Do step 1' }, + { id: 'step2', name: 'Second Step', agentName: 'Agent2', input: 'Do step 2' }, + { id: 'step3', name: 'Third Step', agentName: 'Agent3', input: 'Do step 3' } +]); +``` + +### 2. Parallel Mode +All tasks execute simultaneously. + +```typescript +const parallelWorkflow = createParallelWorkflow('par-workflow', 'Parallel Processing', [ + { id: 'task1', name: 'Task 1', agentName: 'Agent1', input: 'Process batch 1' }, + { id: 'task2', name: 'Task 2', agentName: 'Agent2', input: 'Process batch 2' }, + { id: 'task3', name: 'Task 3', agentName: 'Agent3', input: 'Process batch 3' } +]); +``` + +### 3. Pipeline Mode +Tasks execute sequentially with data flowing between them. 
+ +```typescript +const pipelineWorkflow = createPipelineWorkflow('pipe-workflow', 'Data Pipeline', [ + { id: 'extract', name: 'Extract', agentName: 'Extractor', input: 'Extract raw data' }, + { id: 'transform', name: 'Transform', agentName: 'Transformer', input: 'Transform data' }, + { id: 'load', name: 'Load', agentName: 'Loader', input: 'Load processed data' } +]); +``` + +### 4. Graph Mode +Tasks execute based on dependency relationships. + +```typescript +const graphWorkflow = createGraphWorkflow('graph-workflow', 'Dependency Graph', [ + { id: 'init', name: 'Initialize', agentName: 'Initializer', input: 'Initialize system' }, + { + id: 'process1', + name: 'Process 1', + agentName: 'Processor1', + input: 'Process type 1', + dependencies: ['init'] + }, + { + id: 'process2', + name: 'Process 2', + agentName: 'Processor2', + input: 'Process type 2', + dependencies: ['init'] + }, + { + id: 'finalize', + name: 'Finalize', + agentName: 'Finalizer', + input: 'Finalize results', + dependencies: ['process1', 'process2'] + } +]); +``` + +### 5. Conditional Mode +Tasks execute based on runtime conditions. 
+ +```typescript +const conditionalWorkflow: WorkflowDefinition = { + id: 'conditional-workflow', + name: 'Conditional Processing', + version: '1.0.0', + mode: 'conditional', + tasks: [ + { + id: 'check', + name: 'Check Condition', + agentName: 'Checker', + input: 'Check system status' + }, + { + id: 'action-a', + name: 'Action A', + agentName: 'ProcessorA', + input: 'Execute action A', + conditions: [{ + type: 'result', + taskId: 'check', + operator: 'equals', + value: 'condition_a_met' + }] + }, + { + id: 'action-b', + name: 'Action B', + agentName: 'ProcessorB', + input: 'Execute action B', + conditions: [{ + type: 'result', + taskId: 'check', + operator: 'equals', + value: 'condition_b_met' + }] + } + ] +}; +``` + +## πŸ”§ Configuration + +### Engine Configuration + +```typescript +const engine = new WorkflowOrchestrationEngine(agents, { + maxConcurrentWorkflows: 10, // Maximum concurrent workflows + maxConcurrentTasks: 50, // Maximum concurrent tasks + persistenceEnabled: true // Enable state persistence +}); +``` + +### Retry Policies + +```typescript +const retryPolicy: RetryPolicy = { + maxRetries: 3, + backoffStrategy: 'exponential', // 'linear', 'exponential', 'fixed' + baseDelay: 1000, // Base delay in milliseconds + maxDelay: 30000, // Maximum delay for exponential backoff + retryableErrors: ['TIMEOUT', 'NETWORK_ERROR'] // Specific errors to retry +}; +``` + +### Error Handling + +```typescript +const errorHandling: ErrorHandlingStrategy = { + onTaskFailure: 'continue', // 'stop', 'continue', 'retry', 'skip_dependents' + onWorkflowFailure: 'partial_complete', // 'stop', 'rollback', 'partial_complete' + errorNotification: true, + customErrorHandler: async (error, context) => { + console.log('Custom error handling:', error.message); + // Custom error handling logic + } +}; +``` + +## πŸ“Š Monitoring and Events + +### Event Handling + +```typescript +engine.onWorkflowEvent((event) => { + console.log(`Event: ${event.type}`, { + workflowId: event.workflowId, 
+ executionId: event.executionId, + taskId: event.taskId, + timestamp: event.timestamp, + data: event.data + }); +}); + +// Event types: +// - workflow_started +// - workflow_completed +// - workflow_failed +// - task_started +// - task_completed +// - task_failed +// - task_retried +``` + +### Execution Status + +```typescript +// Get current execution status +const status = await engine.getExecutionStatus(executionId); +console.log('Current status:', status?.status); +console.log('Completed tasks:', status?.taskResults.size); + +// Get execution history +const history = await engine.getExecutionHistory('workflow-id', 10); +console.log('Recent executions:', history.length); +``` + +## πŸ”„ Integration with SubAgentManager + +The orchestration engine provides seamless backward compatibility with existing SubAgentManager usage: + +```typescript +import { OrchestrationIntegration } from '@voltagent/core/orchestration/integration'; + +// Create integration layer +const integration = new OrchestrationIntegration( + parentAgent, + subAgentManager, + orchestrationEngine +); + +await integration.start(); + +// Enhanced delegate task with orchestration capabilities +const result = await integration.enhancedDelegateTask({ + task: 'Process complex data', + targetAgents: ['Agent1', 'Agent2', 'Agent3'], + useOrchestration: true, // Auto-detect or force orchestration + executionMode: 'parallel', // Execution mode for orchestration + timeout: 60000, + retries: 2 +}); + +// Create workflow-aware delegate tool +const enhancedTool = integration.createEnhancedDelegateTool(); +``` + +## πŸ§ͺ Testing + +### Unit Tests + +```typescript +import { WorkflowOrchestrationEngine } from '@voltagent/core/orchestration'; + +describe('Workflow Orchestration', () => { + let engine: WorkflowOrchestrationEngine; + + beforeEach(async () => { + engine = new WorkflowOrchestrationEngine(mockAgents); + await engine.start(); + }); + + afterEach(async () => { + await engine.stop(); + }); + + it('should 
execute sequential workflow', async () => { + const workflow = createSequentialWorkflow('test', 'Test', tasks); + await engine.registerWorkflow(workflow); + + const result = await engine.executeWorkflow('test'); + expect(result.status).toBe('completed'); + }); +}); +``` + +### Dry Run Testing + +```typescript +// Validate workflow without execution +const result = await engine.executeWorkflow('workflow-id', {}, { + dryRun: true +}); + +console.log('Validation result:', result.status); +console.log('Would execute tasks:', result.taskResults.length); +``` + +## πŸ” Validation + +The orchestration engine includes comprehensive validation: + +```typescript +import { WorkflowValidator } from '@voltagent/core/orchestration'; + +const validator = new WorkflowValidator(); + +// Validate workflow definition +const validation = await validator.validateWorkflow(workflow, availableAgents); + +if (!validation.isValid) { + console.error('Validation errors:', validation.errors); + console.warn('Validation warnings:', validation.warnings); +} + +// Validate agent compatibility +const compatibility = validator.validateAgentCompatibility(workflow, agent); +``` + +## πŸ’Ύ State Management + +### Persistence Configuration + +```typescript +// Enable persistence for workflow recovery +const engine = new WorkflowOrchestrationEngine(agents, { + persistenceEnabled: true +}); + +// State is automatically saved to: +// - Browser: localStorage +// - Node.js: .voltagent/workflow-state/ directory +``` + +### Manual State Operations + +```typescript +import { WorkflowStateManager } from '@voltagent/core/orchestration'; + +const stateManager = new WorkflowStateManager(true); + +// Save workflow state +await stateManager.saveWorkflowState(context); + +// Load workflow state +const context = await stateManager.loadWorkflowState(executionId); + +// List active workflows +const activeWorkflows = await stateManager.listActiveWorkflows(); + +// Get storage statistics +const stats = await 
stateManager.getStorageStats(); +console.log('Storage usage:', stats); +``` + +## πŸŽ›οΈ Advanced Features + +### Custom Conditions + +```typescript +const customCondition: TaskCondition = { + type: 'custom', + operator: 'equals', + value: true, + customEvaluator: (context) => { + // Custom logic to determine if task should execute + const previousResult = context.taskResults.get('previous-task'); + return previousResult?.result?.includes('success'); + } +}; +``` + +### Workflow Composition + +```typescript +// Compose complex workflows from simpler ones +const subWorkflow1 = createSequentialWorkflow('sub1', 'Sub Workflow 1', tasks1); +const subWorkflow2 = createParallelWorkflow('sub2', 'Sub Workflow 2', tasks2); + +const compositeWorkflow: WorkflowDefinition = { + id: 'composite', + name: 'Composite Workflow', + version: '1.0.0', + mode: 'graph', + tasks: [ + // Tasks that reference sub-workflows + { + id: 'execute-sub1', + name: 'Execute Sub Workflow 1', + agentName: 'WorkflowExecutor', + input: JSON.stringify({ workflowId: 'sub1', input: {} }) + }, + { + id: 'execute-sub2', + name: 'Execute Sub Workflow 2', + agentName: 'WorkflowExecutor', + input: JSON.stringify({ workflowId: 'sub2', input: {} }), + dependencies: ['execute-sub1'] + } + ] +}; +``` + +## πŸ“ˆ Performance Considerations + +### Resource Management + +- **Concurrent Limits**: Configure `maxConcurrentWorkflows` and `maxConcurrentTasks` based on system resources +- **Memory Usage**: Monitor workflow state size, especially for long-running workflows +- **Persistence Overhead**: Consider disabling persistence for short-lived workflows + +### Optimization Tips + +1. **Use Parallel Mode** for independent tasks to maximize throughput +2. **Implement Timeouts** to prevent stuck workflows +3. **Configure Retry Policies** appropriately to balance reliability and performance +4. **Monitor Event Volume** in high-frequency scenarios +5. 
**Clean Up State** regularly for long-running systems + +## πŸ”§ Troubleshooting + +### Common Issues + +1. **Circular Dependencies**: Use the validator to detect dependency cycles +2. **Agent Not Found**: Ensure all referenced agents are registered +3. **Memory Leaks**: Monitor state manager and clean up completed workflows +4. **Performance Issues**: Check concurrent limits and task complexity + +### Debug Mode + +```typescript +// Enable detailed logging +process.env.VOLTAGENT_DEBUG = 'orchestration'; + +// Monitor events for debugging +engine.onWorkflowEvent((event) => { + if (event.type.includes('failed')) { + console.error('Failure event:', event); + } +}); +``` + +## πŸš€ Migration Guide + +### From SubAgentManager + +1. **Identify Patterns**: Analyze existing delegate_task usage +2. **Create Workflows**: Convert complex delegations to workflow definitions +3. **Use Integration Layer**: Leverage OrchestrationIntegration for gradual migration +4. **Test Thoroughly**: Validate behavior matches existing functionality + +### Example Migration + +```typescript +// Before: Simple delegation +const results = await subAgentManager.handoffToMultiple({ + task: 'Process data', + targetAgents: [agent1, agent2, agent3] +}); + +// After: Workflow orchestration +const workflow = createParallelWorkflow('data-processing', 'Process Data', [ + { id: 'task1', name: 'Process 1', agentName: 'Agent1', input: 'Process data part 1' }, + { id: 'task2', name: 'Process 2', agentName: 'Agent2', input: 'Process data part 2' }, + { id: 'task3', name: 'Process 3', agentName: 'Agent3', input: 'Process data part 3' } +]); + +await engine.registerWorkflow(workflow); +const result = await engine.executeWorkflow('data-processing'); +``` + +## πŸ“š API Reference + +See the TypeScript definitions in `types.ts` for complete API documentation. + +## 🀝 Contributing + +When contributing to the orchestration engine: + +1. **Add Tests**: Include comprehensive test coverage +2. 
**Update Documentation**: Keep README and code comments current +3. **Validate Backward Compatibility**: Ensure SubAgentManager integration works +4. **Performance Testing**: Validate performance with realistic workloads +5. **Error Handling**: Implement robust error handling and recovery + diff --git a/packages/core/src/orchestration/engine.spec.ts b/packages/core/src/orchestration/engine.spec.ts new file mode 100644 index 000000000..aae14fa73 --- /dev/null +++ b/packages/core/src/orchestration/engine.spec.ts @@ -0,0 +1,607 @@ +import { WorkflowOrchestrationEngine } from "./engine"; +import type { + WorkflowDefinition, + WorkflowExecutionOptions, + WorkflowExecutionResult, + TaskExecutionResult, +} from "./types"; + +// Mock Agent class +class MockAgent { + public id: string; + public name: string; + public instructions: string; + public tools: any[] = []; + public memory: any = null; + + constructor(name: string, instructions = "") { + this.id = `agent_${name.toLowerCase()}_${Date.now()}`; + this.name = name; + this.instructions = instructions; + } + + async generateText(input: string, options?: any): Promise<{ text: string; conversationId?: string }> { + // Simulate agent processing + await new Promise(resolve => setTimeout(resolve, 100)); + + // Simulate different responses based on agent name + let response = `${this.name} processed: ${input}`; + + if (this.name.includes("Analyzer")) { + response = input.includes("error") ? 
"low_quality" : "high_quality"; + } else if (this.name.includes("Processor")) { + response = `Processed data: ${input}`; + } else if (this.name.includes("Cleaner")) { + response = `Cleaned data: ${input}`; + } + + return { + text: response, + conversationId: options?.conversationId || `conv_${Date.now()}`, + }; + } +} + +describe("WorkflowOrchestrationEngine", () => { + let engine: WorkflowOrchestrationEngine; + let mockAgents: MockAgent[]; + + beforeEach(async () => { + mockAgents = [ + new MockAgent("DataExtractor", "Extracts data from sources"), + new MockAgent("DataAnalyzer", "Analyzes data quality"), + new MockAgent("DataProcessor", "Processes high quality data"), + new MockAgent("DataCleaner", "Cleans low quality data"), + new MockAgent("DataTransformer", "Transforms data format"), + ]; + + engine = new WorkflowOrchestrationEngine(mockAgents as any, { + maxConcurrentWorkflows: 5, + maxConcurrentTasks: 10, + persistenceEnabled: false, + }); + + await engine.start(); + }); + + afterEach(async () => { + await engine.stop(); + }); + + describe("Workflow Registration", () => { + it("should register a valid workflow", async () => { + const workflow: WorkflowDefinition = { + id: "test_workflow", + name: "Test Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Extract Data", + agentName: "DataExtractor", + input: "Extract from database", + }, + ], + }; + + await expect(engine.registerWorkflow(workflow)).resolves.not.toThrow(); + + const retrieved = await engine.getWorkflow("test_workflow"); + expect(retrieved).toEqual(workflow); + }); + + it("should reject invalid workflow", async () => { + const invalidWorkflow: WorkflowDefinition = { + id: "", + name: "Invalid Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [], + }; + + await expect(engine.registerWorkflow(invalidWorkflow)).rejects.toThrow(); + }); + + it("should reject workflow with non-existent agent", async () => { + const workflow: WorkflowDefinition = { + 
id: "invalid_agent_workflow", + name: "Invalid Agent Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Invalid Task", + agentName: "NonExistentAgent", + input: "Do something", + }, + ], + }; + + await expect(engine.registerWorkflow(workflow)).rejects.toThrow(); + }); + }); + + describe("Sequential Execution", () => { + it("should execute tasks sequentially", async () => { + const workflow: WorkflowDefinition = { + id: "sequential_workflow", + name: "Sequential Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "extract", + name: "Extract Data", + agentName: "DataExtractor", + input: "Extract from source", + }, + { + id: "transform", + name: "Transform Data", + agentName: "DataTransformer", + input: "Transform extracted data", + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("sequential_workflow"); + + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(2); + expect(result.taskResults[0].taskId).toBe("extract"); + expect(result.taskResults[1].taskId).toBe("transform"); + + // Check that tasks executed in order + const extractEndTime = result.taskResults[0].endTime!.getTime(); + const transformStartTime = result.taskResults[1].startTime.getTime(); + expect(transformStartTime).toBeGreaterThanOrEqual(extractEndTime); + }); + }); + + describe("Parallel Execution", () => { + it("should execute tasks in parallel", async () => { + const workflow: WorkflowDefinition = { + id: "parallel_workflow", + name: "Parallel Workflow", + version: "1.0.0", + mode: "parallel", + tasks: [ + { + id: "extract1", + name: "Extract Data 1", + agentName: "DataExtractor", + input: "Extract from source 1", + }, + { + id: "extract2", + name: "Extract Data 2", + agentName: "DataAnalyzer", + input: "Extract from source 2", + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("parallel_workflow"); 
+ + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(2); + + // Check that tasks started around the same time (parallel execution) + const startTime1 = result.taskResults[0].startTime.getTime(); + const startTime2 = result.taskResults[1].startTime.getTime(); + expect(Math.abs(startTime1 - startTime2)).toBeLessThan(1000); // Within 1 second + }); + }); + + describe("Conditional Execution", () => { + it("should execute tasks based on conditions", async () => { + const workflow: WorkflowDefinition = { + id: "conditional_workflow", + name: "Conditional Workflow", + version: "1.0.0", + mode: "conditional", + tasks: [ + { + id: "analyze", + name: "Analyze Data", + agentName: "DataAnalyzer", + input: "good data", // This will return "high_quality" + }, + { + id: "process", + name: "Process Good Data", + agentName: "DataProcessor", + input: "Process the data", + conditions: [ + { + type: "result", + taskId: "analyze", + operator: "contains", + value: "high_quality", + }, + ], + }, + { + id: "clean", + name: "Clean Bad Data", + agentName: "DataCleaner", + input: "Clean the data", + conditions: [ + { + type: "result", + taskId: "analyze", + operator: "contains", + value: "low_quality", + }, + ], + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("conditional_workflow"); + + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(3); + + // Check that analyze task completed + const analyzeResult = result.taskResults.find(r => r.taskId === "analyze"); + expect(analyzeResult?.status).toBe("completed"); + + // Check that process task executed (condition met) + const processResult = result.taskResults.find(r => r.taskId === "process"); + expect(processResult?.status).toBe("completed"); + + // Check that clean task was skipped (condition not met) + const cleanResult = result.taskResults.find(r => r.taskId === "clean"); + expect(cleanResult?.status).toBe("skipped"); + 
}); + }); + + describe("Pipeline Execution", () => { + it("should execute tasks in pipeline mode with data flow", async () => { + const workflow: WorkflowDefinition = { + id: "pipeline_workflow", + name: "Pipeline Workflow", + version: "1.0.0", + mode: "pipeline", + tasks: [ + { + id: "extract", + name: "Extract Data", + agentName: "DataExtractor", + input: "Extract from database", + }, + { + id: "transform", + name: "Transform Data", + agentName: "DataTransformer", + input: "Transform the data", + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("pipeline_workflow", { input: "initial data" }); + + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(2); + + // Check that data flowed between tasks + const extractResult = result.taskResults.find(r => r.taskId === "extract"); + const transformResult = result.taskResults.find(r => r.taskId === "transform"); + + expect(extractResult?.status).toBe("completed"); + expect(transformResult?.status).toBe("completed"); + }); + }); + + describe("Graph Execution", () => { + it("should execute tasks based on dependency graph", async () => { + const workflow: WorkflowDefinition = { + id: "graph_workflow", + name: "Graph Workflow", + version: "1.0.0", + mode: "graph", + tasks: [ + { + id: "extract", + name: "Extract Data", + agentName: "DataExtractor", + input: "Extract from source", + }, + { + id: "analyze", + name: "Analyze Data", + agentName: "DataAnalyzer", + input: "Analyze extracted data", + dependencies: ["extract"], + }, + { + id: "process", + name: "Process Data", + agentName: "DataProcessor", + input: "Process analyzed data", + dependencies: ["analyze"], + }, + { + id: "transform", + name: "Transform Data", + agentName: "DataTransformer", + input: "Transform processed data", + dependencies: ["process"], + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("graph_workflow"); + + 
expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(4); + + // Check execution order based on dependencies + const extractResult = result.taskResults.find(r => r.taskId === "extract")!; + const analyzeResult = result.taskResults.find(r => r.taskId === "analyze")!; + const processResult = result.taskResults.find(r => r.taskId === "process")!; + const transformResult = result.taskResults.find(r => r.taskId === "transform")!; + + expect(extractResult.endTime!.getTime()).toBeLessThanOrEqual(analyzeResult.startTime.getTime()); + expect(analyzeResult.endTime!.getTime()).toBeLessThanOrEqual(processResult.startTime.getTime()); + expect(processResult.endTime!.getTime()).toBeLessThanOrEqual(transformResult.startTime.getTime()); + }); + + it("should handle parallel branches in dependency graph", async () => { + const workflow: WorkflowDefinition = { + id: "parallel_graph_workflow", + name: "Parallel Graph Workflow", + version: "1.0.0", + mode: "graph", + tasks: [ + { + id: "extract", + name: "Extract Data", + agentName: "DataExtractor", + input: "Extract from source", + }, + { + id: "analyze", + name: "Analyze Data", + agentName: "DataAnalyzer", + input: "Analyze data", + dependencies: ["extract"], + }, + { + id: "clean", + name: "Clean Data", + agentName: "DataCleaner", + input: "Clean data", + dependencies: ["extract"], + }, + { + id: "process", + name: "Process Data", + agentName: "DataProcessor", + input: "Process data", + dependencies: ["analyze", "clean"], + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("parallel_graph_workflow"); + + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(4); + + // Check that analyze and clean can run in parallel after extract + const extractResult = result.taskResults.find(r => r.taskId === "extract")!; + const analyzeResult = result.taskResults.find(r => r.taskId === "analyze")!; + const cleanResult = 
result.taskResults.find(r => r.taskId === "clean")!; + const processResult = result.taskResults.find(r => r.taskId === "process")!; + + // Extract must complete before analyze and clean + expect(extractResult.endTime!.getTime()).toBeLessThanOrEqual(analyzeResult.startTime.getTime()); + expect(extractResult.endTime!.getTime()).toBeLessThanOrEqual(cleanResult.startTime.getTime()); + + // Process must wait for both analyze and clean + expect(analyzeResult.endTime!.getTime()).toBeLessThanOrEqual(processResult.startTime.getTime()); + expect(cleanResult.endTime!.getTime()).toBeLessThanOrEqual(processResult.startTime.getTime()); + }); + }); + + describe("Workflow Execution Options", () => { + it("should handle dry run execution", async () => { + const workflow: WorkflowDefinition = { + id: "dry_run_workflow", + name: "Dry Run Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Test Task", + agentName: "DataExtractor", + input: "Test input", + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("dry_run_workflow", {}, { dryRun: true }); + + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(1); + expect(result.taskResults[0].result).toContain("[DRY RUN]"); + expect(result.finalResult).toContain("[DRY RUN]"); + }); + + it("should pass user context to agents", async () => { + const workflow: WorkflowDefinition = { + id: "context_workflow", + name: "Context Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Context Task", + agentName: "DataExtractor", + input: "Process with context", + }, + ], + }; + + const userContext = new Map([["userId", "test123"], ["sessionId", "session456"]]); + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("context_workflow", {}, { userContext }); + + expect(result.status).toBe("completed"); + expect(result.taskResults).toHaveLength(1); + }); + 
}); + + describe("Error Handling", () => { + it("should handle task failures gracefully", async () => { + // Create a mock agent that always fails + const failingAgent = new MockAgent("FailingAgent"); + failingAgent.generateText = async () => { + throw new Error("Simulated task failure"); + }; + + engine.registerAgent(failingAgent as any); + + const workflow: WorkflowDefinition = { + id: "failing_workflow", + name: "Failing Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "failing_task", + name: "Failing Task", + agentName: "FailingAgent", + input: "This will fail", + }, + ], + }; + + await engine.registerWorkflow(workflow); + const result = await engine.executeWorkflow("failing_workflow"); + + expect(result.status).toBe("failed"); + expect(result.taskResults).toHaveLength(1); + expect(result.taskResults[0].status).toBe("failed"); + expect(result.taskResults[0].error).toBeDefined(); + expect(result.error).toBeDefined(); + }); + }); + + describe("Workflow Management", () => { + it("should list all registered workflows", async () => { + const workflow1: WorkflowDefinition = { + id: "workflow1", + name: "Workflow 1", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Task 1", + agentName: "DataExtractor", + input: "Test", + }, + ], + }; + + const workflow2: WorkflowDefinition = { + id: "workflow2", + name: "Workflow 2", + version: "1.0.0", + mode: "parallel", + tasks: [ + { + id: "task2", + name: "Task 2", + agentName: "DataAnalyzer", + input: "Test", + }, + ], + }; + + await engine.registerWorkflow(workflow1); + await engine.registerWorkflow(workflow2); + + const workflows = await engine.listWorkflows(); + expect(workflows).toHaveLength(2); + expect(workflows.map(w => w.id)).toContain("workflow1"); + expect(workflows.map(w => w.id)).toContain("workflow2"); + }); + + it("should unregister workflows", async () => { + const workflow: WorkflowDefinition = { + id: "temp_workflow", + name: "Temporary Workflow", + 
version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Task 1", + agentName: "DataExtractor", + input: "Test", + }, + ], + }; + + await engine.registerWorkflow(workflow); + expect(await engine.getWorkflow("temp_workflow")).toBeDefined(); + + await engine.unregisterWorkflow("temp_workflow"); + expect(await engine.getWorkflow("temp_workflow")).toBeNull(); + }); + }); + + describe("Event Handling", () => { + it("should emit workflow events", async () => { + const events: any[] = []; + + engine.onWorkflowEvent((event) => { + events.push(event); + }); + + const workflow: WorkflowDefinition = { + id: "event_workflow", + name: "Event Workflow", + version: "1.0.0", + mode: "sequential", + tasks: [ + { + id: "task1", + name: "Event Task", + agentName: "DataExtractor", + input: "Test events", + }, + ], + }; + + await engine.registerWorkflow(workflow); + await engine.executeWorkflow("event_workflow"); + + // Check that events were emitted + expect(events.length).toBeGreaterThan(0); + + const eventTypes = events.map(e => e.type); + expect(eventTypes).toContain("workflow_started"); + expect(eventTypes).toContain("task_started"); + expect(eventTypes).toContain("task_completed"); + expect(eventTypes).toContain("workflow_completed"); + }); + }); +}); + diff --git a/packages/core/src/orchestration/engine.ts b/packages/core/src/orchestration/engine.ts new file mode 100644 index 000000000..ff45dc5e2 --- /dev/null +++ b/packages/core/src/orchestration/engine.ts @@ -0,0 +1,748 @@ +import { EventEmitter } from "events"; +import type { Agent } from "../agent"; +import type { BaseMessage } from "../agent/providers"; +import type { + WorkflowDefinition, + WorkflowExecutionContext, + WorkflowExecutionResult, + WorkflowExecutionOptions, + WorkflowTask, + TaskExecutionResult, + WorkflowEvent, + WorkflowError, + IWorkflowOrchestrator, + TaskStatus, + WorkflowStatus, + QueuedTask, +} from "./types"; +import { TaskScheduler } from "./scheduler"; +import { 
WorkflowStateManager } from "./state-manager"; +import { WorkflowValidator } from "./validator"; + +/** + * Unified Workflow Orchestration Engine + * + * Consolidates workflow and task management into a single cohesive system + * that builds upon the existing SubAgentManager while providing advanced + * orchestration capabilities. + */ +export class WorkflowOrchestrationEngine extends EventEmitter implements IWorkflowOrchestrator { + private workflows = new Map<string, WorkflowDefinition>(); + private activeExecutions = new Map<string, WorkflowExecutionContext>(); + private agents = new Map<string, Agent<any>>(); + private scheduler: TaskScheduler; + private stateManager: WorkflowStateManager; + private validator: WorkflowValidator; + private isRunning = false; + + constructor( + agents: Agent<any>[] = [], + options: { + maxConcurrentWorkflows?: number; + maxConcurrentTasks?: number; + persistenceEnabled?: boolean; + } = {} + ) { + super(); + + // Register agents + agents.forEach(agent => this.registerAgent(agent)); + + // Initialize components + this.scheduler = new TaskScheduler({ + maxConcurrentWorkflows: options.maxConcurrentWorkflows || 10, + maxConcurrentTasks: options.maxConcurrentTasks || 50, + taskQueueSize: 1000, + heartbeatInterval: 5000, + cleanupInterval: 60000, + persistenceEnabled: options.persistenceEnabled || false, + }); + + this.stateManager = new WorkflowStateManager(options.persistenceEnabled || false); + this.validator = new WorkflowValidator(); + + // Set up event forwarding + this.scheduler.on("taskCompleted", this.handleTaskCompleted.bind(this)); + this.scheduler.on("taskFailed", this.handleTaskFailed.bind(this)); + this.scheduler.on("workflowCompleted", this.handleWorkflowCompleted.bind(this)); + } + + /** + * Register an agent for workflow execution + */ + public registerAgent(agent: Agent<any>): void { + this.agents.set(agent.name, agent); + } + + /** + * Unregister an agent + */ + public unregisterAgent(agentName: string): void { + this.agents.delete(agentName); + } + + /** + * Start the orchestration engine + */ + public 
async start(): Promise<void> { + if (this.isRunning) return; + + await this.scheduler.start(); + this.isRunning = true; + + // Restore active workflows from persistence + if (this.stateManager.isPersistenceEnabled()) { + await this.restoreActiveWorkflows(); + } + + this.emit("engineStarted"); + } + + /** + * Stop the orchestration engine + */ + public async stop(): Promise<void> { + if (!this.isRunning) return; + + await this.scheduler.stop(); + this.isRunning = false; + + this.emit("engineStopped"); + } + + /** + * Register a workflow definition + */ + public async registerWorkflow(definition: WorkflowDefinition): Promise<void> { + // Validate workflow definition + const validation = await this.validator.validateWorkflow(definition, this.agents); + if (!validation.isValid) { + throw new Error(`Invalid workflow definition: ${validation.errors.join(", ")}`); + } + + this.workflows.set(definition.id, definition); + this.emit("workflowRegistered", { workflowId: definition.id }); + } + + /** + * Unregister a workflow definition + */ + public async unregisterWorkflow(workflowId: string): Promise<void> { + // Cancel any active executions + const activeExecutions = Array.from(this.activeExecutions.values()) + .filter(ctx => ctx.workflowId === workflowId); + + for (const execution of activeExecutions) { + await this.cancelWorkflow(execution.executionId); + } + + this.workflows.delete(workflowId); + this.emit("workflowUnregistered", { workflowId }); + } + + /** + * Get a workflow definition + */ + public async getWorkflow(workflowId: string): Promise<WorkflowDefinition | null> { + return this.workflows.get(workflowId) || null; + } + + /** + * List all registered workflows + */ + public async listWorkflows(): Promise<WorkflowDefinition[]> { + return Array.from(this.workflows.values()); + } + + /** + * Execute a workflow + */ + public async executeWorkflow( + workflowId: string, + input: Record<string, unknown> = {}, + options: WorkflowExecutionOptions = {} + ): Promise<WorkflowExecutionResult> { + const workflow = this.workflows.get(workflowId); + if (!workflow) { + throw new 
Error(`Workflow not found: ${workflowId}`); + } + + // Create execution context + const executionId = crypto.randomUUID(); + const context: WorkflowExecutionContext = { + workflowId, + executionId, + status: "pending", + startTime: new Date(), + taskResults: new Map(), + globalContext: new Map(Object.entries(input)), + userContext: options.userContext, + conversationId: options.conversationId, + userId: options.userId, + parentAgentId: options.parentAgentId, + parentHistoryEntryId: options.parentHistoryEntryId, + }; + + this.activeExecutions.set(executionId, context); + + try { + // Dry run validation if requested + if (options.dryRun) { + return this.performDryRun(workflow, context); + } + + // Save initial state + await this.stateManager.saveWorkflowState(context); + + // Start workflow execution + context.status = "running"; + this.emitWorkflowEvent("workflow_started", context); + + // Execute based on workflow mode + const result = await this.executeWorkflowByMode(workflow, context); + + // Clean up + this.activeExecutions.delete(executionId); + await this.stateManager.deleteWorkflowState(executionId); + + return result; + + } catch (error) { + context.status = "failed"; + context.endTime = new Date(); + + const workflowError: WorkflowError = { + code: "WORKFLOW_EXECUTION_ERROR", + message: error instanceof Error ? error.message : String(error), + originalError: error instanceof Error ? 
error : undefined, + timestamp: new Date(), + }; + + this.emitWorkflowEvent("workflow_failed", context, { error: workflowError }); + + const result: WorkflowExecutionResult = { + workflowId, + executionId, + status: "failed", + startTime: context.startTime, + endTime: context.endTime, + duration: context.endTime.getTime() - context.startTime.getTime(), + taskResults: Array.from(context.taskResults.values()), + error: workflowError, + }; + + this.activeExecutions.delete(executionId); + return result; + } + } + + /** + * Pause a workflow execution + */ + public async pauseWorkflow(executionId: string): Promise<void> { + const context = this.activeExecutions.get(executionId); + if (!context) { + throw new Error(`Execution not found: ${executionId}`); + } + + context.status = "paused"; + await this.stateManager.saveWorkflowState(context); + this.emitWorkflowEvent("workflow_paused", context); + } + + /** + * Resume a paused workflow execution + */ + public async resumeWorkflow(executionId: string): Promise<void> { + const context = this.activeExecutions.get(executionId); + if (!context) { + throw new Error(`Execution not found: ${executionId}`); + } + + if (context.status !== "paused") { + throw new Error(`Workflow is not paused: ${executionId}`); + } + + context.status = "running"; + await this.stateManager.saveWorkflowState(context); + this.emitWorkflowEvent("workflow_resumed", context); + } + + /** + * Cancel a workflow execution + */ + public async cancelWorkflow(executionId: string): Promise<void> { + const context = this.activeExecutions.get(executionId); + if (!context) { + throw new Error(`Execution not found: ${executionId}`); + } + + context.status = "cancelled"; + context.endTime = new Date(); + + // Cancel any pending tasks + await this.scheduler.cancelWorkflowTasks(executionId); + + await this.stateManager.deleteWorkflowState(executionId); + this.activeExecutions.delete(executionId); + + this.emitWorkflowEvent("workflow_cancelled", context); + } + + /** + * Get execution 
status + */ + public async getExecutionStatus(executionId: string): Promise<WorkflowExecutionContext | null> { + return this.activeExecutions.get(executionId) || + await this.stateManager.loadWorkflowState(executionId); + } + + /** + * Register workflow event callback + */ + public onWorkflowEvent(callback: (event: WorkflowEvent) => void): void { + this.on("workflowEvent", callback); + } + + /** + * Get execution history + */ + public async getExecutionHistory(workflowId?: string, limit = 100): Promise<WorkflowExecutionResult[]> { + // This would typically query a persistent store + // For now, return empty array as this is a basic implementation + return []; + } + + /** + * Execute workflow based on its execution mode + */ + private async executeWorkflowByMode( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise<WorkflowExecutionResult> { + switch (workflow.mode) { + case "sequential": + return this.executeSequential(workflow, context); + case "parallel": + return this.executeParallel(workflow, context); + case "conditional": + return this.executeConditional(workflow, context); + case "pipeline": + return this.executePipeline(workflow, context); + case "graph": + return this.executeGraph(workflow, context); + default: + throw new Error(`Unsupported execution mode: ${workflow.mode}`); + } + } + + /** + * Execute tasks sequentially + */ + private async executeSequential( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise<WorkflowExecutionResult> { + for (const task of workflow.tasks) { + const result = await this.executeTask(task, context); + + if (result.status === "failed" && workflow.errorHandling?.onTaskFailure === "stop") { + break; + } + } + + return this.createWorkflowResult(workflow, context); + } + + /** + * Execute tasks in parallel + */ + private async executeParallel( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise<WorkflowExecutionResult> { + const taskPromises = workflow.tasks.map(task => this.executeTask(task, context)); + await Promise.allSettled(taskPromises); + + return 
this.createWorkflowResult(workflow, context); + } + + /** + * Execute tasks with conditional logic + */ + private async executeConditional( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise<WorkflowExecutionResult> { + for (const task of workflow.tasks) { + if (this.shouldExecuteTask(task, context)) { + await this.executeTask(task, context); + } else { + // Mark task as skipped + const result: TaskExecutionResult = { + taskId: task.id, + status: "skipped", + startTime: new Date(), + endTime: new Date(), + retryCount: 0, + }; + context.taskResults.set(task.id, result); + } + } + + return this.createWorkflowResult(workflow, context); + } + + /** + * Execute tasks in pipeline mode (data flows between tasks) + */ + private async executePipeline( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise<WorkflowExecutionResult> { + let pipelineData: unknown = context.globalContext.get("input"); + + for (const task of workflow.tasks) { + // Pass previous task result as input to next task + const taskInput = typeof task.input === "string" ? 
task.input : { ...task.input, pipelineData }; + const modifiedTask = { ...task, input: taskInput }; + + const result = await this.executeTask(modifiedTask, context); + + if (result.status === "completed") { + pipelineData = result.result; + } else if (workflow.errorHandling?.onTaskFailure === "stop") { + break; + } + } + + return this.createWorkflowResult(workflow, context); + } + + /** + * Execute tasks based on dependency graph + */ + private async executeGraph( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise<WorkflowExecutionResult> { + const taskMap = new Map(workflow.tasks.map(task => [task.id, task])); + const completed = new Set<string>(); + const running = new Set<string>(); + + while (completed.size < workflow.tasks.length) { + const readyTasks = workflow.tasks.filter(task => + !completed.has(task.id) && + !running.has(task.id) && + this.areDependenciesMet(task, completed) + ); + + if (readyTasks.length === 0) { + // Check if we're deadlocked + if (running.size === 0) { + throw new Error("Workflow deadlock detected - no tasks can proceed"); + } + // Wait for running tasks to complete + await new Promise(resolve => setTimeout(resolve, 100)); + continue; + } + + // Execute ready tasks in parallel + const taskPromises = readyTasks.map(async task => { + running.add(task.id); + try { + await this.executeTask(task, context); + completed.add(task.id); + } finally { + running.delete(task.id); + } + }); + + await Promise.allSettled(taskPromises); + } + + return this.createWorkflowResult(workflow, context); + } + + /** + * Execute a single task + */ + private async executeTask( + task: WorkflowTask, + context: WorkflowExecutionContext + ): Promise<TaskExecutionResult> { + const agent = this.agents.get(task.agentName); + if (!agent) { + throw new Error(`Agent not found: ${task.agentName}`); + } + + const result: TaskExecutionResult = { + taskId: task.id, + status: "running", + startTime: new Date(), + retryCount: 0, + }; + + context.taskResults.set(task.id, result); + context.currentTask = 
task.id; + + this.emitWorkflowEvent("task_started", context, { taskId: task.id }); + + try { + // Prepare task input + const taskInput = typeof task.input === "string" + ? task.input + : JSON.stringify(task.input); + + // Execute task using agent + const response = await agent.generateText(taskInput, { + conversationId: context.conversationId, + userId: context.userId, + parentAgentId: context.parentAgentId, + parentHistoryEntryId: context.parentHistoryEntryId, + userContext: context.userContext, + }); + + result.status = "completed"; + result.result = response.text; + result.endTime = new Date(); + result.duration = result.endTime.getTime() - result.startTime.getTime(); + result.agentConversationId = response.conversationId; + + this.emitWorkflowEvent("task_completed", context, { taskId: task.id, result: response.text }); + + } catch (error) { + result.status = "failed"; + result.error = { + code: "TASK_EXECUTION_ERROR", + message: error instanceof Error ? error.message : String(error), + taskId: task.id, + originalError: error instanceof Error ? 
error : undefined, + timestamp: new Date(), + }; + result.endTime = new Date(); + result.duration = result.endTime.getTime() - result.startTime.getTime(); + + this.emitWorkflowEvent("task_failed", context, { taskId: task.id, error: result.error }); + } + + // Update context + context.taskResults.set(task.id, result); + await this.stateManager.saveWorkflowState(context); + + return result; + } + + /** + * Check if task conditions are met + */ + private shouldExecuteTask(task: WorkflowTask, context: WorkflowExecutionContext): boolean { + if (!task.conditions || task.conditions.length === 0) { + return true; + } + + return task.conditions.every(condition => { + if (condition.type === "custom" && condition.customEvaluator) { + return condition.customEvaluator(context); + } + + if (condition.taskId) { + const taskResult = context.taskResults.get(condition.taskId); + if (!taskResult) return false; + + if (condition.type === "status") { + return this.evaluateCondition(taskResult.status, condition.operator, condition.value); + } else if (condition.type === "result") { + return this.evaluateCondition(taskResult.result, condition.operator, condition.value); + } + } + + return false; + }); + } + + /** + * Check if task dependencies are met + */ + private areDependenciesMet(task: WorkflowTask, completed: Set): boolean { + if (!task.dependencies || task.dependencies.length === 0) { + return true; + } + + return task.dependencies.every(depId => completed.has(depId)); + } + + /** + * Evaluate a condition + */ + private evaluateCondition(actual: unknown, operator: string, expected: unknown): boolean { + switch (operator) { + case "equals": + return actual === expected; + case "not_equals": + return actual !== expected; + case "contains": + return String(actual).includes(String(expected)); + case "greater_than": + return Number(actual) > Number(expected); + case "less_than": + return Number(actual) < Number(expected); + case "exists": + return actual !== undefined && actual !== 
null; + default: + return false; + } + } + + /** + * Create workflow execution result + */ + private createWorkflowResult( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): WorkflowExecutionResult { + context.status = "completed"; + context.endTime = new Date(); + + const taskResults = Array.from(context.taskResults.values()); + const hasFailedTasks = taskResults.some(result => result.status === "failed"); + + if (hasFailedTasks) { + context.status = "failed"; + } + + this.emitWorkflowEvent("workflow_completed", context); + + return { + workflowId: workflow.id, + executionId: context.executionId, + status: context.status, + startTime: context.startTime, + endTime: context.endTime, + duration: context.endTime!.getTime() - context.startTime.getTime(), + taskResults, + finalResult: this.extractFinalResult(taskResults), + }; + } + + /** + * Extract final result from task results + */ + private extractFinalResult(taskResults: TaskExecutionResult[]): unknown { + const lastCompletedTask = taskResults + .filter(result => result.status === "completed") + .sort((a, b) => (b.endTime?.getTime() || 0) - (a.endTime?.getTime() || 0))[0]; + + return lastCompletedTask?.result; + } + + /** + * Perform dry run validation + */ + private async performDryRun( + workflow: WorkflowDefinition, + context: WorkflowExecutionContext + ): Promise { + // Simulate execution without actually running tasks + const taskResults: TaskExecutionResult[] = workflow.tasks.map(task => ({ + taskId: task.id, + status: "completed" as TaskStatus, + result: `[DRY RUN] Task ${task.name} would execute with agent ${task.agentName}`, + startTime: new Date(), + endTime: new Date(), + duration: 0, + retryCount: 0, + })); + + return { + workflowId: workflow.id, + executionId: context.executionId, + status: "completed", + startTime: context.startTime, + endTime: new Date(), + duration: 0, + taskResults, + finalResult: "[DRY RUN] Workflow validation successful", + }; + } + + /** + * Restore active 
workflows from persistence + */ + private async restoreActiveWorkflows(): Promise { + const activeWorkflowIds = await this.stateManager.listActiveWorkflows(); + + for (const executionId of activeWorkflowIds) { + const context = await this.stateManager.loadWorkflowState(executionId); + if (context && context.status === "running") { + this.activeExecutions.set(executionId, context); + // Resume execution logic would go here + } + } + } + + /** + * Emit workflow event + */ + private emitWorkflowEvent( + type: WorkflowEvent["type"], + context: WorkflowExecutionContext, + data?: Record + ): void { + const event: WorkflowEvent = { + type, + workflowId: context.workflowId, + executionId: context.executionId, + taskId: data?.taskId as string, + timestamp: new Date(), + data, + }; + + this.emit("workflowEvent", event); + } + + /** + * Handle task completion from scheduler + */ + private handleTaskCompleted(taskId: string, result: unknown): void { + // Update task result in active executions + for (const context of this.activeExecutions.values()) { + const taskResult = context.taskResults.get(taskId); + if (taskResult) { + taskResult.status = "completed"; + taskResult.result = result; + taskResult.endTime = new Date(); + break; + } + } + } + + /** + * Handle task failure from scheduler + */ + private handleTaskFailed(taskId: string, error: WorkflowError): void { + // Update task result in active executions + for (const context of this.activeExecutions.values()) { + const taskResult = context.taskResults.get(taskId); + if (taskResult) { + taskResult.status = "failed"; + taskResult.error = error; + taskResult.endTime = new Date(); + break; + } + } + } + + /** + * Handle workflow completion from scheduler + */ + private handleWorkflowCompleted(executionId: string): void { + const context = this.activeExecutions.get(executionId); + if (context) { + context.status = "completed"; + context.endTime = new Date(); + this.emitWorkflowEvent("workflow_completed", context); + } + } +} + 
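The graph mode above repeatedly collects tasks whose dependencies are all completed, runs that batch in parallel, and throws on deadlock when nothing is running and nothing is ready. A minimal standalone sketch of that readiness logic (the `MiniTask` type is hypothetical; the engine itself filters `WorkflowTask`s against live `completed`/`running` sets):

```typescript
// Hypothetical minimal task shape; the real engine uses WorkflowTask.
interface MiniTask {
  id: string;
  dependencies?: string[];
}

// Mirrors areDependenciesMet() plus the readyTasks filter in executeGraph():
// a task is ready when it is neither completed nor running and every
// dependency id is already in `completed`.
function readyTasks(
  tasks: MiniTask[],
  completed: Set<string>,
  running: Set<string>,
): MiniTask[] {
  return tasks.filter(
    (t) =>
      !completed.has(t.id) &&
      !running.has(t.id) &&
      (t.dependencies ?? []).every((d) => completed.has(d)),
  );
}

// Groups tasks into parallel "waves": each wave is the batch executeGraph()
// would launch concurrently once the previous waves have finished.
function executionWaves(tasks: MiniTask[]): string[][] {
  const completed = new Set<string>();
  const waves: string[][] = [];
  while (completed.size < tasks.length) {
    const ready = readyTasks(tasks, completed, new Set());
    if (ready.length === 0) {
      // Same condition the engine reports when no task can ever proceed.
      throw new Error("Workflow deadlock detected - no tasks can proceed");
    }
    waves.push(ready.map((t) => t.id));
    for (const t of ready) completed.add(t.id);
  }
  return waves;
}
```

For a diamond-shaped graph (`extract` → `transform`/`validate` → `load`) this yields three waves, with the two middle tasks running concurrently; a cyclic graph trips the deadlock error instead of looping forever.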
diff --git a/packages/core/src/orchestration/index.ts b/packages/core/src/orchestration/index.ts new file mode 100644 index 000000000..116a5a3c6 --- /dev/null +++ b/packages/core/src/orchestration/index.ts @@ -0,0 +1,280 @@ +/** + * VoltAgent Unified Workflow Orchestration Engine + * + * This module provides a comprehensive workflow orchestration system that consolidates + * workflow and task management into a single unified engine. It builds upon the existing + * SubAgentManager while providing advanced orchestration capabilities including: + * + * - Multiple execution modes (sequential, parallel, conditional, pipeline, graph) + * - Advanced task scheduling and queuing + * - Workflow state management and persistence + * - Comprehensive validation and error handling + * - Event-driven monitoring and observability + * - Retry policies and fault tolerance + * + * @example Basic Usage + * ```typescript + * import { WorkflowOrchestrationEngine, WorkflowDefinition } from '@voltagent/core/orchestration'; + * + * // Create orchestration engine with agents + * const engine = new WorkflowOrchestrationEngine([agent1, agent2]); + * await engine.start(); + * + * // Define a workflow + * const workflow: WorkflowDefinition = { + * id: 'data-processing', + * name: 'Data Processing Pipeline', + * version: '1.0.0', + * mode: 'pipeline', + * tasks: [ + * { + * id: 'extract', + * name: 'Extract Data', + * agentName: 'DataExtractor', + * input: 'Extract data from source' + * }, + * { + * id: 'transform', + * name: 'Transform Data', + * agentName: 'DataTransformer', + * input: 'Transform the extracted data', + * dependencies: ['extract'] + * } + * ] + * }; + * + * // Register and execute workflow + * await engine.registerWorkflow(workflow); + * const result = await engine.executeWorkflow('data-processing', { source: 'database' }); + * ``` + * + * @example Advanced Workflow with Conditions + * ```typescript + * const conditionalWorkflow: WorkflowDefinition = { + * id: 
'conditional-processing', + * name: 'Conditional Processing', + * version: '1.0.0', + * mode: 'conditional', + * tasks: [ + * { + * id: 'check-data', + * name: 'Check Data Quality', + * agentName: 'QualityChecker', + * input: 'Check data quality' + * }, + * { + * id: 'process-good-data', + * name: 'Process Good Data', + * agentName: 'DataProcessor', + * input: 'Process high quality data', + * conditions: [{ + * type: 'result', + * taskId: 'check-data', + * operator: 'contains', + * value: 'high_quality' + * }] + * }, + * { + * id: 'clean-bad-data', + * name: 'Clean Bad Data', + * agentName: 'DataCleaner', + * input: 'Clean low quality data', + * conditions: [{ + * type: 'result', + * taskId: 'check-data', + * operator: 'contains', + * value: 'low_quality' + * }] + * } + * ] + * }; + * ``` + */ + +// Core orchestration engine +export { WorkflowOrchestrationEngine } from './engine'; + +// Task scheduling +export { TaskScheduler } from './scheduler'; + +// State management +export { WorkflowStateManager } from './state-manager'; + +// Validation +export { WorkflowValidator } from './validator'; +export type { ValidationResult } from './validator'; + +// Type definitions +export type { + // Core workflow types + WorkflowDefinition, + WorkflowTask, + WorkflowExecutionContext, + WorkflowExecutionResult, + WorkflowExecutionOptions, + + // Task types + TaskExecutionResult, + TaskCondition, + QueuedTask, + + // Status types + WorkflowStatus, + TaskStatus, + ExecutionMode, + + // Configuration types + RetryPolicy, + ErrorHandlingStrategy, + SchedulerConfig, + + // Event types + WorkflowEvent, + WorkflowError, + + // Interface types + IWorkflowOrchestrator, + ITaskScheduler, + IWorkflowStateManager, +} from './types'; + +/** + * Utility function to create a simple sequential workflow + */ +export function createSequentialWorkflow( + id: string, + name: string, + tasks: Array<{ + id: string; + name: string; + agentName: string; + input: string | Record; + timeout?: number; + 
retries?: number; + }> +): WorkflowDefinition { + return { + id, + name, + version: '1.0.0', + mode: 'sequential', + tasks: tasks.map(task => ({ + ...task, + description: task.name, + })), + }; +} + +/** + * Utility function to create a parallel workflow + */ +export function createParallelWorkflow( + id: string, + name: string, + tasks: Array<{ + id: string; + name: string; + agentName: string; + input: string | Record; + timeout?: number; + retries?: number; + }> +): WorkflowDefinition { + return { + id, + name, + version: '1.0.0', + mode: 'parallel', + tasks: tasks.map(task => ({ + ...task, + description: task.name, + })), + }; +} + +/** + * Utility function to create a pipeline workflow + */ +export function createPipelineWorkflow( + id: string, + name: string, + tasks: Array<{ + id: string; + name: string; + agentName: string; + input: string | Record; + timeout?: number; + retries?: number; + }> +): WorkflowDefinition { + return { + id, + name, + version: '1.0.0', + mode: 'pipeline', + tasks: tasks.map(task => ({ + ...task, + description: task.name, + })), + }; +} + +/** + * Utility function to create a graph workflow with dependencies + */ +export function createGraphWorkflow( + id: string, + name: string, + tasks: Array<{ + id: string; + name: string; + agentName: string; + input: string | Record; + dependencies?: string[]; + timeout?: number; + retries?: number; + }> +): WorkflowDefinition { + return { + id, + name, + version: '1.0.0', + mode: 'graph', + tasks: tasks.map(task => ({ + ...task, + description: task.name, + })), + }; +} + +/** + * Default retry policy for workflows + */ +export const DEFAULT_RETRY_POLICY: RetryPolicy = { + maxRetries: 3, + backoffStrategy: 'exponential', + baseDelay: 1000, // 1 second + maxDelay: 30000, // 30 seconds +}; + +/** + * Default error handling strategy + */ +export const DEFAULT_ERROR_HANDLING: ErrorHandlingStrategy = { + onTaskFailure: 'stop', + onWorkflowFailure: 'stop', + errorNotification: true, +}; + +/** + * 
Default scheduler configuration + */ +export const DEFAULT_SCHEDULER_CONFIG: SchedulerConfig = { + maxConcurrentWorkflows: 10, + maxConcurrentTasks: 50, + taskQueueSize: 1000, + heartbeatInterval: 5000, // 5 seconds + cleanupInterval: 60000, // 1 minute + persistenceEnabled: false, +}; + diff --git a/packages/core/src/orchestration/integration.ts b/packages/core/src/orchestration/integration.ts new file mode 100644 index 000000000..73dad476c --- /dev/null +++ b/packages/core/src/orchestration/integration.ts @@ -0,0 +1,455 @@ +import type { Agent } from "../agent"; +import type { SubAgentManager } from "../agent/subagent"; +import { WorkflowOrchestrationEngine } from "./engine"; +import type { + WorkflowDefinition, + WorkflowExecutionOptions, + WorkflowExecutionResult, +} from "./types"; + +/** + * Integration layer between the new Workflow Orchestration Engine + * and the existing SubAgentManager to ensure backward compatibility + * and provide a migration path. + */ +export class OrchestrationIntegration { + private orchestrationEngine: WorkflowOrchestrationEngine; + private subAgentManager: SubAgentManager; + private parentAgent: Agent; + + constructor( + parentAgent: Agent, + subAgentManager: SubAgentManager, + orchestrationEngine?: WorkflowOrchestrationEngine + ) { + this.parentAgent = parentAgent; + this.subAgentManager = subAgentManager; + + // Create orchestration engine if not provided + this.orchestrationEngine = orchestrationEngine || new WorkflowOrchestrationEngine( + subAgentManager.getSubAgents(), + { + maxConcurrentWorkflows: 5, + maxConcurrentTasks: 20, + persistenceEnabled: false, + } + ); + } + + /** + * Start the orchestration integration + */ + public async start(): Promise { + await this.orchestrationEngine.start(); + + // Register default workflows based on sub-agents + await this.registerDefaultWorkflows(); + } + + /** + * Stop the orchestration integration + */ + public async stop(): Promise { + await this.orchestrationEngine.stop(); + } + + 
/** + * Enhanced delegate task that can use either simple delegation or workflow orchestration + */ + public async enhancedDelegateTask(options: { + task: string; + targetAgents: string[]; + context?: Record; + useOrchestration?: boolean; + executionMode?: "sequential" | "parallel"; + timeout?: number; + retries?: number; + userContext?: Map; + conversationId?: string; + userId?: string; + parentAgentId?: string; + parentHistoryEntryId?: string; + }): Promise { + const { + task, + targetAgents, + context = {}, + useOrchestration = false, + executionMode = "parallel", + timeout, + retries, + userContext, + conversationId, + userId, + parentAgentId, + parentHistoryEntryId, + } = options; + + if (!useOrchestration || targetAgents.length === 1) { + // Use traditional SubAgentManager for simple cases + return this.delegateUsingSubAgentManager(options); + } + + // Use workflow orchestration for complex cases + return this.delegateUsingOrchestration(options); + } + + /** + * Create a workflow from a simple task delegation + */ + public createWorkflowFromDelegation(options: { + task: string; + targetAgents: string[]; + executionMode: "sequential" | "parallel"; + timeout?: number; + retries?: number; + }): WorkflowDefinition { + const { task, targetAgents, executionMode, timeout, retries } = options; + + const workflowId = `delegation_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`; + + return { + id: workflowId, + name: `Delegated Task: ${task.substring(0, 50)}...`, + version: "1.0.0", + mode: executionMode, + tasks: targetAgents.map((agentName, index) => ({ + id: `task_${index}`, + name: `Execute with ${agentName}`, + agentName, + input: task, + timeout, + retries, + metadata: { + originalDelegation: true, + parentAgent: this.parentAgent.name, + }, + })), + globalTimeout: timeout ? 
timeout * targetAgents.length : undefined, + }; + } + + /** + * Migrate existing sub-agent patterns to workflows + */ + public async migrateToWorkflows(): Promise { + const subAgents = this.subAgentManager.getSubAgents(); + + // Create common workflow patterns + await this.createCommonWorkflowPatterns(subAgents); + } + + /** + * Get orchestration engine for advanced usage + */ + public getOrchestrationEngine(): WorkflowOrchestrationEngine { + return this.orchestrationEngine; + } + + /** + * Check if orchestration should be used based on task complexity + */ + public shouldUseOrchestration(options: { + targetAgents: string[]; + hasConditions?: boolean; + hasDependencies?: boolean; + requiresStateManagement?: boolean; + }): boolean { + const { targetAgents, hasConditions, hasDependencies, requiresStateManagement } = options; + + // Use orchestration for complex scenarios + return ( + targetAgents.length > 3 || + hasConditions || + hasDependencies || + requiresStateManagement || + false + ); + } + + /** + * Delegate using traditional SubAgentManager + */ + private async delegateUsingSubAgentManager(options: { + task: string; + targetAgents: string[]; + context?: Record; + userContext?: Map; + conversationId?: string; + userId?: string; + parentAgentId?: string; + parentHistoryEntryId?: string; + }): Promise { + const { + task, + targetAgents, + context = {}, + userContext, + conversationId, + userId, + parentAgentId, + parentHistoryEntryId, + } = options; + + // Find target agents + const agents = targetAgents + .map(name => this.subAgentManager.getSubAgents().find(agent => agent.name === name)) + .filter(agent => agent !== undefined) as Agent[]; + + if (agents.length === 0) { + throw new Error(`No valid target agents found: ${targetAgents.join(", ")}`); + } + + // Use SubAgentManager's handoffToMultiple method + const results = await this.subAgentManager.handoffToMultiple({ + task, + targetAgents: agents, + context, + sourceAgent: this.parentAgent, + parentAgentId, + 
parentHistoryEntryId, + userContext, + conversationId, + userId, + }); + + return results.map((result, index) => ({ + agentName: agents[index].name, + response: result.result, + conversationId: result.conversationId, + status: result.status, + error: result.error, + })); + } + + /** + * Delegate using workflow orchestration + */ + private async delegateUsingOrchestration(options: { + task: string; + targetAgents: string[]; + context?: Record; + executionMode?: "sequential" | "parallel"; + timeout?: number; + retries?: number; + userContext?: Map; + conversationId?: string; + userId?: string; + parentAgentId?: string; + parentHistoryEntryId?: string; + }): Promise { + const { + task, + targetAgents, + context = {}, + executionMode = "parallel", + timeout, + retries, + userContext, + conversationId, + userId, + parentAgentId, + parentHistoryEntryId, + } = options; + + // Create workflow definition + const workflow = this.createWorkflowFromDelegation({ + task, + targetAgents, + executionMode, + timeout, + retries, + }); + + // Register workflow temporarily + await this.orchestrationEngine.registerWorkflow(workflow); + + try { + // Execute workflow + const result = await this.orchestrationEngine.executeWorkflow( + workflow.id, + { task, ...context }, + { + userContext, + conversationId, + userId, + parentAgentId, + parentHistoryEntryId, + timeout, + } + ); + + return result; + } finally { + // Clean up temporary workflow + await this.orchestrationEngine.unregisterWorkflow(workflow.id); + } + } + + /** + * Register default workflows based on sub-agents + */ + private async registerDefaultWorkflows(): Promise { + const subAgents = this.subAgentManager.getSubAgents(); + + if (subAgents.length === 0) return; + + // Create a parallel workflow for all agents + const parallelWorkflow: WorkflowDefinition = { + id: "parallel_all_agents", + name: "Parallel Execution - All Agents", + version: "1.0.0", + mode: "parallel", + tasks: subAgents.map((agent, index) => ({ + id: 
`parallel_task_${index}`, + name: `Execute with ${agent.name}`, + agentName: agent.name, + input: "{{task}}", // Template that will be replaced + timeout: 30000, // 30 seconds + retries: 1, + })), + }; + + // Create a sequential workflow for all agents + const sequentialWorkflow: WorkflowDefinition = { + id: "sequential_all_agents", + name: "Sequential Execution - All Agents", + version: "1.0.0", + mode: "sequential", + tasks: subAgents.map((agent, index) => ({ + id: `sequential_task_${index}`, + name: `Execute with ${agent.name}`, + agentName: agent.name, + input: "{{task}}", // Template that will be replaced + timeout: 30000, // 30 seconds + retries: 1, + })), + }; + + await this.orchestrationEngine.registerWorkflow(parallelWorkflow); + await this.orchestrationEngine.registerWorkflow(sequentialWorkflow); + } + + /** + * Create common workflow patterns + */ + private async createCommonWorkflowPatterns(subAgents: Agent[]): Promise { + if (subAgents.length < 2) return; + + // Create a pipeline workflow if we have multiple agents + const pipelineWorkflow: WorkflowDefinition = { + id: "agent_pipeline", + name: "Agent Pipeline", + version: "1.0.0", + mode: "pipeline", + tasks: subAgents.map((agent, index) => ({ + id: `pipeline_task_${index}`, + name: `Pipeline Step ${index + 1}: ${agent.name}`, + agentName: agent.name, + input: index === 0 ? "{{task}}" : "{{pipelineData}}", + timeout: 30000, + retries: 1, + dependencies: index > 0 ? 
[`pipeline_task_${index - 1}`] : undefined, + })), + }; + + await this.orchestrationEngine.registerWorkflow(pipelineWorkflow); + + // Create conditional workflows if we have specific agent types + const analysisAgents = subAgents.filter(agent => + agent.name.toLowerCase().includes("analy") || + agent.instructions.toLowerCase().includes("analy") + ); + + const processingAgents = subAgents.filter(agent => + agent.name.toLowerCase().includes("process") || + agent.instructions.toLowerCase().includes("process") + ); + + if (analysisAgents.length > 0 && processingAgents.length > 0) { + const conditionalWorkflow: WorkflowDefinition = { + id: "analysis_then_processing", + name: "Analysis Then Processing", + version: "1.0.0", + mode: "conditional", + tasks: [ + { + id: "analysis_task", + name: "Analysis Phase", + agentName: analysisAgents[0].name, + input: "{{task}}", + timeout: 30000, + retries: 1, + }, + { + id: "processing_task", + name: "Processing Phase", + agentName: processingAgents[0].name, + input: "Process based on analysis: {{task}}", + timeout: 30000, + retries: 1, + conditions: [{ + type: "status", + taskId: "analysis_task", + operator: "equals", + value: "completed", + }], + }, + ], + }; + + await this.orchestrationEngine.registerWorkflow(conditionalWorkflow); + } + } + + /** + * Create a workflow-aware delegate tool that can be used as a drop-in replacement + */ + public createEnhancedDelegateTool(): any { + return { + id: "enhanced_delegate_task", + name: "enhanced_delegate_task", + description: "Enhanced task delegation with workflow orchestration capabilities", + parameters: { + type: "object", + properties: { + task: { + type: "string", + description: "The task to delegate", + }, + targetAgents: { + type: "array", + items: { type: "string" }, + description: "List of agent names to delegate the task to", + }, + context: { + type: "object", + description: "Additional context for the task", + }, + useOrchestration: { + type: "boolean", + description: 
"Whether to use workflow orchestration (default: auto-detect)", + }, + executionMode: { + type: "string", + enum: ["sequential", "parallel"], + description: "Execution mode when using orchestration", + }, + timeout: { + type: "number", + description: "Task timeout in milliseconds", + }, + retries: { + type: "number", + description: "Number of retries for failed tasks", + }, + }, + required: ["task", "targetAgents"], + }, + execute: async (params: any) => { + return this.enhancedDelegateTask(params); + }, + }; + } +} + diff --git a/packages/core/src/orchestration/scheduler.ts b/packages/core/src/orchestration/scheduler.ts new file mode 100644 index 000000000..22a85025b --- /dev/null +++ b/packages/core/src/orchestration/scheduler.ts @@ -0,0 +1,383 @@ +import { EventEmitter } from "events"; +import type { + ITaskScheduler, + QueuedTask, + SchedulerConfig, + WorkflowError, + TaskStatus, +} from "./types"; + +/** + * Task Scheduler for managing task execution queues and concurrency + */ +export class TaskScheduler extends EventEmitter implements ITaskScheduler { + private config: SchedulerConfig; + private taskQueue: QueuedTask[] = []; + private runningTasks = new Map(); + private completedTasks = new Set(); + private isRunning = false; + private heartbeatTimer?: NodeJS.Timeout; + private cleanupTimer?: NodeJS.Timeout; + + constructor(config: SchedulerConfig) { + super(); + this.config = config; + } + + /** + * Start the task scheduler + */ + public async start(): Promise { + if (this.isRunning) return; + + this.isRunning = true; + this.startHeartbeat(); + this.startCleanup(); + this.processQueue(); + + this.emit("schedulerStarted"); + } + + /** + * Stop the task scheduler + */ + public async stop(): Promise { + if (!this.isRunning) return; + + this.isRunning = false; + + if (this.heartbeatTimer) { + clearInterval(this.heartbeatTimer); + this.heartbeatTimer = undefined; + } + + if (this.cleanupTimer) { + clearInterval(this.cleanupTimer); + this.cleanupTimer = 
undefined; + } + + // Wait for running tasks to complete or timeout + const timeout = 30000; // 30 seconds + const startTime = Date.now(); + + while (this.runningTasks.size > 0 && Date.now() - startTime < timeout) { + await new Promise(resolve => setTimeout(resolve, 100)); + } + + // Force cancel remaining tasks + for (const [taskId] of this.runningTasks) { + await this.cancelTask(taskId); + } + + this.emit("schedulerStopped"); + } + + /** + * Schedule a task for execution + */ + public async scheduleTask(task: QueuedTask): Promise { + if (this.taskQueue.length >= this.config.taskQueueSize) { + throw new Error("Task queue is full"); + } + + // Insert task in priority order + const insertIndex = this.findInsertionIndex(task); + this.taskQueue.splice(insertIndex, 0, task); + + this.emit("taskScheduled", task); + + // Try to process immediately if scheduler is running + if (this.isRunning) { + setImmediate(() => this.processQueue()); + } + } + + /** + * Cancel a scheduled or running task + */ + public async cancelTask(taskId: string): Promise { + // Remove from queue if not yet running + const queueIndex = this.taskQueue.findIndex(task => task.id === taskId); + if (queueIndex !== -1) { + this.taskQueue.splice(queueIndex, 1); + this.emit("taskCancelled", taskId); + return; + } + + // Cancel running task + const runningTask = this.runningTasks.get(taskId); + if (runningTask) { + this.runningTasks.delete(taskId); + this.emit("taskCancelled", taskId); + } + } + + /** + * Cancel all tasks for a specific workflow + */ + public async cancelWorkflowTasks(executionId: string): Promise { + // Cancel queued tasks + this.taskQueue = this.taskQueue.filter(task => { + if (task.executionId === executionId) { + this.emit("taskCancelled", task.id); + return false; + } + return true; + }); + + // Cancel running tasks + const runningTasksToCancel = Array.from(this.runningTasks.values()) + .filter(task => task.executionId === executionId); + + for (const task of runningTasksToCancel) { + 
await this.cancelTask(task.id); + } + } + + /** + * Get current queue status + */ + public async getQueueStatus(): Promise<{ pending: number; running: number; completed: number }> { + return { + pending: this.taskQueue.length, + running: this.runningTasks.size, + completed: this.completedTasks.size, + }; + } + + /** + * Process the task queue + */ + private async processQueue(): Promise { + if (!this.isRunning) return; + + // Check if we can start more tasks + const availableSlots = this.config.maxConcurrentTasks - this.runningTasks.size; + if (availableSlots <= 0) return; + + // Get ready tasks (dependencies met and scheduled time reached) + const readyTasks = this.getReadyTasks(); + const tasksToStart = readyTasks.slice(0, availableSlots); + + // Start tasks + for (const task of tasksToStart) { + await this.startTask(task); + } + + // Schedule next processing cycle + if (this.taskQueue.length > 0 || this.runningTasks.size > 0) { + setImmediate(() => this.processQueue()); + } + } + + /** + * Get tasks that are ready to execute + */ + private getReadyTasks(): QueuedTask[] { + const now = new Date(); + + return this.taskQueue.filter(task => { + // Check if scheduled time has arrived + if (task.scheduledTime > now) return false; + + // Check if dependencies are met + return this.areDependenciesMet(task); + }); + } + + /** + * Check if task dependencies are satisfied + */ + private areDependenciesMet(task: QueuedTask): boolean { + if (!task.dependencies || task.dependencies.length === 0) { + return true; + } + + // Check if all dependency tasks are completed + return task.dependencies.every(depId => this.completedTasks.has(depId)); + } + + /** + * Start executing a task + */ + private async startTask(task: QueuedTask): Promise { + // Remove from queue and add to running tasks + const queueIndex = this.taskQueue.findIndex(t => t.id === task.id); + if (queueIndex !== -1) { + this.taskQueue.splice(queueIndex, 1); + } + + this.runningTasks.set(task.id, task); + 
this.emit("taskStarted", task); + + try { + // Execute the task (this would integrate with the workflow engine) + const result = await this.executeTask(task); + + // Task completed successfully + this.runningTasks.delete(task.id); + this.completedTasks.add(task.id); + + this.emit("taskCompleted", task.id, result); + + } catch (error) { + // Task failed + this.runningTasks.delete(task.id); + + const workflowError: WorkflowError = { + code: "TASK_EXECUTION_ERROR", + message: error instanceof Error ? error.message : String(error), + taskId: task.id, + originalError: error instanceof Error ? error : undefined, + timestamp: new Date(), + }; + + this.emit("taskFailed", task.id, workflowError); + + // Handle retry logic + if (this.shouldRetryTask(task, workflowError)) { + await this.retryTask(task); + } + } + } + + /** + * Execute a single task (placeholder - actual execution happens in workflow engine) + */ + private async executeTask(task: QueuedTask): Promise { + // This is a placeholder - actual task execution is handled by the workflow engine + // The scheduler just manages the queuing and concurrency + return new Promise((resolve, reject) => { + // Simulate task execution + setTimeout(() => { + if (Math.random() > 0.1) { // 90% success rate for simulation + resolve(`Task ${task.id} completed successfully`); + } else { + reject(new Error(`Task ${task.id} failed`)); + } + }, Math.random() * 1000); // Random execution time + }); + } + + /** + * Determine if a task should be retried + */ + private shouldRetryTask(task: QueuedTask, error: WorkflowError): boolean { + const maxRetries = task.task.retries || 0; + const currentRetries = this.getTaskRetryCount(task.id); + + return currentRetries < maxRetries; + } + + /** + * Retry a failed task + */ + private async retryTask(task: QueuedTask): Promise { + // Create a new task instance for retry. Encode the attempt number in the + // ID so getTaskRetryCount() can parse it back; a timestamp here would make + // the recovered retry count (and the exponential backoff) meaningless. + const retryTask: QueuedTask = { + ...task, + id: `${task.id}_retry_${this.getTaskRetryCount(task.id) + 1}`, + scheduledTime: this.calculateRetryDelay(task),
+ }; + + await this.scheduleTask(retryTask); + this.emit("taskRetried", task.id, retryTask.id); + } + + /** + * Calculate retry delay based on backoff strategy + */ + private calculateRetryDelay(task: QueuedTask): Date { + const retryCount = this.getTaskRetryCount(task.id); + const baseDelay = 1000; // 1 second base delay + + let delay: number; + + // Simple exponential backoff + delay = baseDelay * Math.pow(2, retryCount); + + // Add jitter to prevent thundering herd + delay += Math.random() * 1000; + + return new Date(Date.now() + delay); + } + + /** + * Get retry count for a task + */ + private getTaskRetryCount(taskId: string): number { + // Extract retry count from task ID if it's a retry + const retryMatch = taskId.match(/_retry_(\d+)$/); + return retryMatch ? parseInt(retryMatch[1], 10) : 0; + } + + /** + * Find insertion index for task based on priority + */ + private findInsertionIndex(task: QueuedTask): number { + for (let i = 0; i < this.taskQueue.length; i++) { + if (this.taskQueue[i].priority < task.priority) { + return i; + } + } + return this.taskQueue.length; + } + + /** + * Start heartbeat monitoring + */ + private startHeartbeat(): void { + this.heartbeatTimer = setInterval(() => { + this.emit("heartbeat", { + timestamp: new Date(), + queueSize: this.taskQueue.length, + runningTasks: this.runningTasks.size, + completedTasks: this.completedTasks.size, + }); + }, this.config.heartbeatInterval); + } + + /** + * Start cleanup process + */ + private startCleanup(): void { + this.cleanupTimer = setInterval(() => { + this.performCleanup(); + }, this.config.cleanupInterval); + } + + /** + * Perform periodic cleanup + */ + private performCleanup(): void { + // Clean up old completed task IDs to prevent memory leaks + const maxCompletedTasks = 10000; + if (this.completedTasks.size > maxCompletedTasks) { + const tasksToRemove = this.completedTasks.size - maxCompletedTasks; + const taskArray = Array.from(this.completedTasks); + + for (let i = 0; i < 
tasksToRemove; i++) { + this.completedTasks.delete(taskArray[i]); + } + } + + // Check for stuck tasks (running too long) + const maxTaskDuration = 300000; // 5 minutes + const now = Date.now(); + + for (const [taskId, task] of this.runningTasks) { + const taskAge = now - task.scheduledTime.getTime(); + if (taskAge > maxTaskDuration) { + console.warn(`Task ${taskId} has been running for ${taskAge}ms, considering it stuck`); + this.cancelTask(taskId); + } + } + + this.emit("cleanupCompleted", { + timestamp: new Date(), + completedTasksCount: this.completedTasks.size, + runningTasksCount: this.runningTasks.size, + }); + } +} + diff --git a/packages/core/src/orchestration/state-manager.ts b/packages/core/src/orchestration/state-manager.ts new file mode 100644 index 000000000..b127c281b --- /dev/null +++ b/packages/core/src/orchestration/state-manager.ts @@ -0,0 +1,412 @@ +import type { + IWorkflowStateManager, + WorkflowExecutionContext, + TaskExecutionResult, +} from "./types"; + +/** + * Workflow State Manager for persistence and recovery + */ +export class WorkflowStateManager implements IWorkflowStateManager { + private persistenceEnabled: boolean; + private inMemoryStore = new Map(); + private activeWorkflows = new Set(); + + constructor(persistenceEnabled = false) { + this.persistenceEnabled = persistenceEnabled; + } + + /** + * Check if persistence is enabled + */ + public isPersistenceEnabled(): boolean { + return this.persistenceEnabled; + } + + /** + * Save workflow execution state + */ + public async saveWorkflowState(context: WorkflowExecutionContext): Promise { + if (!this.persistenceEnabled) { + // Store in memory for non-persistent mode + this.inMemoryStore.set(context.executionId, this.cloneContext(context)); + + if (context.status === "running" || context.status === "paused") { + this.activeWorkflows.add(context.executionId); + } else { + this.activeWorkflows.delete(context.executionId); + } + + return; + } + + try { + // In a real implementation, 
this would save to a database + // For now, we'll use localStorage in browser or file system in Node.js + const serializedContext = this.serializeContext(context); + + if (typeof window !== "undefined" && window.localStorage) { + // Browser environment + localStorage.setItem(`workflow_${context.executionId}`, serializedContext); + + // Update active workflows list + const activeList = this.getActiveWorkflowsList(); + if (context.status === "running" || context.status === "paused") { + activeList.add(context.executionId); + } else { + activeList.delete(context.executionId); + } + localStorage.setItem("active_workflows", JSON.stringify(Array.from(activeList))); + + } else { + // Node.js environment - use file system + const fs = await import("fs/promises"); + const path = await import("path"); + + const stateDir = path.join(process.cwd(), ".voltagent", "workflow-state"); + await fs.mkdir(stateDir, { recursive: true }); + + const filePath = path.join(stateDir, `${context.executionId}.json`); + await fs.writeFile(filePath, serializedContext, "utf8"); + + // Update active workflows list + const activeListPath = path.join(stateDir, "active-workflows.json"); + const activeList = await this.loadActiveWorkflowsFromFile(activeListPath); + + if (context.status === "running" || context.status === "paused") { + activeList.add(context.executionId); + } else { + activeList.delete(context.executionId); + } + + await fs.writeFile(activeListPath, JSON.stringify(Array.from(activeList)), "utf8"); + } + + } catch (error) { + console.error("Failed to save workflow state:", error); + // Fall back to in-memory storage + this.inMemoryStore.set(context.executionId, this.cloneContext(context)); + } + } + + /** + * Load workflow execution state + */ + public async loadWorkflowState(executionId: string): Promise { + // Try in-memory store first + const inMemoryContext = this.inMemoryStore.get(executionId); + if (inMemoryContext) { + return this.cloneContext(inMemoryContext); + } + + if 
(!this.persistenceEnabled) { + return null; + } + + try { + let serializedContext: string | null = null; + + if (typeof window !== "undefined" && window.localStorage) { + // Browser environment + serializedContext = localStorage.getItem(`workflow_${executionId}`); + } else { + // Node.js environment + const fs = await import("fs/promises"); + const path = await import("path"); + + const filePath = path.join(process.cwd(), ".voltagent", "workflow-state", `${executionId}.json`); + + try { + serializedContext = await fs.readFile(filePath, "utf8"); + } catch (error) { + // File doesn't exist + return null; + } + } + + if (!serializedContext) { + return null; + } + + return this.deserializeContext(serializedContext); + + } catch (error) { + console.error("Failed to load workflow state:", error); + return null; + } + } + + /** + * Delete workflow execution state + */ + public async deleteWorkflowState(executionId: string): Promise { + // Remove from in-memory store + this.inMemoryStore.delete(executionId); + this.activeWorkflows.delete(executionId); + + if (!this.persistenceEnabled) { + return; + } + + try { + if (typeof window !== "undefined" && window.localStorage) { + // Browser environment + localStorage.removeItem(`workflow_${executionId}`); + + // Update active workflows list + const activeList = this.getActiveWorkflowsList(); + activeList.delete(executionId); + localStorage.setItem("active_workflows", JSON.stringify(Array.from(activeList))); + + } else { + // Node.js environment + const fs = await import("fs/promises"); + const path = await import("path"); + + const filePath = path.join(process.cwd(), ".voltagent", "workflow-state", `${executionId}.json`); + + try { + await fs.unlink(filePath); + } catch (error) { + // File might not exist, ignore error + } + + // Update active workflows list + const activeListPath = path.join(process.cwd(), ".voltagent", "workflow-state", "active-workflows.json"); + const activeList = await 
this.loadActiveWorkflowsFromFile(activeListPath); + activeList.delete(executionId); + + await fs.writeFile(activeListPath, JSON.stringify(Array.from(activeList)), "utf8"); + } + + } catch (error) { + console.error("Failed to delete workflow state:", error); + } + } + + /** + * List all active workflow execution IDs + */ + public async listActiveWorkflows(): Promise { + if (!this.persistenceEnabled) { + return Array.from(this.activeWorkflows); + } + + try { + if (typeof window !== "undefined" && window.localStorage) { + // Browser environment + const activeList = this.getActiveWorkflowsList(); + return Array.from(activeList); + } else { + // Node.js environment + const path = await import("path"); + const activeListPath = path.join(process.cwd(), ".voltagent", "workflow-state", "active-workflows.json"); + const activeList = await this.loadActiveWorkflowsFromFile(activeListPath); + return Array.from(activeList); + } + } catch (error) { + console.error("Failed to list active workflows:", error); + return Array.from(this.activeWorkflows); + } + } + + /** + * Serialize workflow context for storage + */ + private serializeContext(context: WorkflowExecutionContext): string { + // Convert Maps to objects for JSON serialization + const serializable = { + ...context, + taskResults: Array.from(context.taskResults.entries()).map(([key, value]) => ({ + key, + value: { + ...value, + startTime: value.startTime.toISOString(), + endTime: value.endTime?.toISOString(), + }, + })), + globalContext: Array.from(context.globalContext.entries()), + userContext: context.userContext ? 
Array.from(context.userContext.entries()) : undefined, + startTime: context.startTime.toISOString(), + endTime: context.endTime?.toISOString(), + }; + + return JSON.stringify(serializable, null, 2); + } + + /** + * Deserialize workflow context from storage + */ + private deserializeContext(serialized: string): WorkflowExecutionContext { + const data = JSON.parse(serialized); + + // Convert objects back to Maps and Dates + const taskResults = new Map(); + if (data.taskResults) { + for (const item of data.taskResults) { + const result: TaskExecutionResult = { + ...item.value, + startTime: new Date(item.value.startTime), + endTime: item.value.endTime ? new Date(item.value.endTime) : undefined, + }; + taskResults.set(item.key, result); + } + } + + const globalContext = new Map(data.globalContext || []); + const userContext = data.userContext ? new Map(data.userContext) : undefined; + + return { + ...data, + taskResults, + globalContext, + userContext, + startTime: new Date(data.startTime), + endTime: data.endTime ? new Date(data.endTime) : undefined, + }; + } + + /** + * Clone workflow context for immutability + */ + private cloneContext(context: WorkflowExecutionContext): WorkflowExecutionContext { + return { + ...context, + taskResults: new Map(context.taskResults), + globalContext: new Map(context.globalContext), + userContext: context.userContext ? 
new Map(context.userContext) : undefined, + }; + } + + /** + * Get active workflows list from localStorage + */ + private getActiveWorkflowsList(): Set { + try { + const stored = localStorage.getItem("active_workflows"); + if (stored) { + return new Set(JSON.parse(stored)); + } + } catch (error) { + console.error("Failed to parse active workflows list:", error); + } + return new Set(); + } + + /** + * Load active workflows list from file + */ + private async loadActiveWorkflowsFromFile(filePath: string): Promise> { + try { + const fs = await import("fs/promises"); + const content = await fs.readFile(filePath, "utf8"); + return new Set(JSON.parse(content)); + } catch (error) { + // File doesn't exist or is invalid, return empty set + return new Set(); + } + } + + /** + * Clear all workflow state (for testing/cleanup) + */ + public async clearAllState(): Promise { + this.inMemoryStore.clear(); + this.activeWorkflows.clear(); + + if (!this.persistenceEnabled) { + return; + } + + try { + if (typeof window !== "undefined" && window.localStorage) { + // Browser environment - remove all workflow-related items + const keysToRemove: string[] = []; + for (let i = 0; i < localStorage.length; i++) { + const key = localStorage.key(i); + if (key && (key.startsWith("workflow_") || key === "active_workflows")) { + keysToRemove.push(key); + } + } + keysToRemove.forEach(key => localStorage.removeItem(key)); + } else { + // Node.js environment - remove state directory + const fs = await import("fs/promises"); + const path = await import("path"); + + const stateDir = path.join(process.cwd(), ".voltagent", "workflow-state"); + try { + await fs.rm(stateDir, { recursive: true, force: true }); + } catch (error) { + // Directory might not exist, ignore error + } + } + } catch (error) { + console.error("Failed to clear workflow state:", error); + } + } + + /** + * Get storage statistics + */ + public async getStorageStats(): Promise<{ + totalWorkflows: number; + activeWorkflows: number; + 
storageSize: number; + }> { + const activeCount = this.activeWorkflows.size; + let totalCount = this.inMemoryStore.size; + let storageSize = 0; + + if (this.persistenceEnabled) { + try { + if (typeof window !== "undefined" && window.localStorage) { + // Browser environment + let size = 0; + let count = 0; + for (let i = 0; i < localStorage.length; i++) { + const key = localStorage.key(i); + if (key && key.startsWith("workflow_")) { + const value = localStorage.getItem(key); + if (value) { + size += value.length; + count++; + } + } + } + totalCount = count; + storageSize = size; + } else { + // Node.js environment + const fs = await import("fs/promises"); + const path = await import("path"); + + const stateDir = path.join(process.cwd(), ".voltagent", "workflow-state"); + try { + const files = await fs.readdir(stateDir); + const workflowFiles = files.filter(file => file.endsWith(".json") && file !== "active-workflows.json"); + totalCount = workflowFiles.length; + + // Calculate total size + for (const file of workflowFiles) { + const filePath = path.join(stateDir, file); + const stats = await fs.stat(filePath); + storageSize += stats.size; + } + } catch (error) { + // Directory doesn't exist + } + } + } catch (error) { + console.error("Failed to get storage stats:", error); + } + } + + return { + totalWorkflows: totalCount, + activeWorkflows: activeCount, + storageSize, + }; + } +} + diff --git a/packages/core/src/orchestration/types.ts b/packages/core/src/orchestration/types.ts new file mode 100644 index 000000000..d28580070 --- /dev/null +++ b/packages/core/src/orchestration/types.ts @@ -0,0 +1,258 @@ +import type { Agent } from "../agent"; +import type { BaseMessage } from "../agent/providers"; + +/** + * Workflow execution status + */ +export type WorkflowStatus = + | "pending" + | "running" + | "completed" + | "failed" + | "cancelled" + | "paused"; + +/** + * Task execution status + */ +export type TaskStatus = + | "pending" + | "running" + | "completed" + | 
"failed" + | "skipped" + | "cancelled"; + +/** + * Workflow execution mode + */ +export type ExecutionMode = + | "sequential" // Execute tasks one after another + | "parallel" // Execute all tasks simultaneously + | "conditional" // Execute based on conditions + | "pipeline" // Execute in pipeline with data flow + | "graph"; // Execute based on dependency graph + +/** + * Task definition within a workflow + */ +export interface WorkflowTask { + id: string; + name: string; + description?: string; + agentName: string; + input: string | Record; + dependencies?: string[]; // Task IDs this task depends on + conditions?: TaskCondition[]; + timeout?: number; // Timeout in milliseconds + retries?: number; + metadata?: Record; +} + +/** + * Condition for conditional task execution + */ +export interface TaskCondition { + type: "result" | "status" | "custom"; + taskId?: string; // For result/status conditions + operator: "equals" | "not_equals" | "contains" | "greater_than" | "less_than" | "exists"; + value: unknown; + customEvaluator?: (context: WorkflowExecutionContext) => boolean; +} + +/** + * Workflow definition + */ +export interface WorkflowDefinition { + id: string; + name: string; + description?: string; + version: string; + mode: ExecutionMode; + tasks: WorkflowTask[]; + globalTimeout?: number; + retryPolicy?: RetryPolicy; + errorHandling?: ErrorHandlingStrategy; + metadata?: Record; +} + +/** + * Retry policy for tasks and workflows + */ +export interface RetryPolicy { + maxRetries: number; + backoffStrategy: "linear" | "exponential" | "fixed"; + baseDelay: number; // Base delay in milliseconds + maxDelay?: number; // Maximum delay for exponential backoff + retryableErrors?: string[]; // Specific error types to retry +} + +/** + * Error handling strategy + */ +export interface ErrorHandlingStrategy { + onTaskFailure: "stop" | "continue" | "retry" | "skip_dependents"; + onWorkflowFailure: "stop" | "rollback" | "partial_complete"; + errorNotification?: boolean; + 
customErrorHandler?: (error: WorkflowError, context: WorkflowExecutionContext) => Promise; +} + +/** + * Workflow execution context + */ +export interface WorkflowExecutionContext { + workflowId: string; + executionId: string; + status: WorkflowStatus; + startTime: Date; + endTime?: Date; + currentTask?: string; + taskResults: Map; + globalContext: Map; + userContext?: Map; + conversationId?: string; + userId?: string; + parentAgentId?: string; + parentHistoryEntryId?: string; +} + +/** + * Task execution result + */ +export interface TaskExecutionResult { + taskId: string; + status: TaskStatus; + result?: unknown; + error?: WorkflowError; + startTime: Date; + endTime?: Date; + duration?: number; + retryCount: number; + agentConversationId?: string; + messages?: BaseMessage[]; +} + +/** + * Workflow execution result + */ +export interface WorkflowExecutionResult { + workflowId: string; + executionId: string; + status: WorkflowStatus; + startTime: Date; + endTime?: Date; + duration?: number; + taskResults: TaskExecutionResult[]; + finalResult?: unknown; + error?: WorkflowError; + metadata?: Record; +} + +/** + * Workflow error with context + */ +export interface WorkflowError { + code: string; + message: string; + taskId?: string; + originalError?: Error; + context?: Record; + timestamp: Date; +} + +/** + * Workflow event for monitoring and observability + */ +export interface WorkflowEvent { + type: "workflow_started" | "workflow_completed" | "workflow_failed" | + "task_started" | "task_completed" | "task_failed" | "task_retried"; + workflowId: string; + executionId: string; + taskId?: string; + timestamp: Date; + data?: Record; +} + +/** + * Workflow scheduler configuration + */ +export interface SchedulerConfig { + maxConcurrentWorkflows: number; + maxConcurrentTasks: number; + taskQueueSize: number; + heartbeatInterval: number; // Milliseconds + cleanupInterval: number; // Milliseconds + persistenceEnabled: boolean; +} + +/** + * Task queue item + */ +export 
interface QueuedTask { + id: string; + workflowId: string; + executionId: string; + task: WorkflowTask; + priority: number; + scheduledTime: Date; + dependencies: string[]; + context: WorkflowExecutionContext; +} + +/** + * Workflow orchestration engine interface + */ +export interface IWorkflowOrchestrator { + // Workflow management + registerWorkflow(definition: WorkflowDefinition): Promise; + unregisterWorkflow(workflowId: string): Promise; + getWorkflow(workflowId: string): Promise; + listWorkflows(): Promise; + + // Execution management + executeWorkflow(workflowId: string, input?: Record, options?: WorkflowExecutionOptions): Promise; + pauseWorkflow(executionId: string): Promise; + resumeWorkflow(executionId: string): Promise; + cancelWorkflow(executionId: string): Promise; + getExecutionStatus(executionId: string): Promise; + + // Monitoring and observability + onWorkflowEvent(callback: (event: WorkflowEvent) => void): void; + getExecutionHistory(workflowId?: string, limit?: number): Promise; +} + +/** + * Task scheduler interface + */ +export interface ITaskScheduler { + scheduleTask(task: QueuedTask): Promise; + cancelTask(taskId: string): Promise; + getQueueStatus(): Promise<{ pending: number; running: number; completed: number }>; + start(): Promise; + stop(): Promise; +} + +/** + * Workflow execution options + */ +export interface WorkflowExecutionOptions { + priority?: number; + timeout?: number; + userContext?: Map; + conversationId?: string; + userId?: string; + parentAgentId?: string; + parentHistoryEntryId?: string; + dryRun?: boolean; // Validate workflow without executing +} + +/** + * State persistence interface + */ +export interface IWorkflowStateManager { + saveWorkflowState(context: WorkflowExecutionContext): Promise; + loadWorkflowState(executionId: string): Promise; + deleteWorkflowState(executionId: string): Promise; + listActiveWorkflows(): Promise; +} + diff --git a/packages/core/src/orchestration/validator.ts 
b/packages/core/src/orchestration/validator.ts new file mode 100644 index 000000000..4d5d3d9d2 --- /dev/null +++ b/packages/core/src/orchestration/validator.ts @@ -0,0 +1,512 @@ +import type { Agent } from "../agent"; +import type { + WorkflowDefinition, + WorkflowTask, + TaskCondition, + RetryPolicy, + ErrorHandlingStrategy, +} from "./types"; + +/** + * Validation result for workflows and tasks + */ +export interface ValidationResult { + isValid: boolean; + errors: string[]; + warnings: string[]; +} + +/** + * Workflow Validator for ensuring workflow definitions are correct + */ +export class WorkflowValidator { + /** + * Validate a complete workflow definition + */ + public async validateWorkflow( + workflow: WorkflowDefinition, + availableAgents: Map> + ): Promise { + const errors: string[] = []; + const warnings: string[] = []; + + // Basic workflow validation + this.validateBasicWorkflowStructure(workflow, errors); + + // Task validation + this.validateTasks(workflow.tasks, availableAgents, errors, warnings); + + // Dependency validation + this.validateDependencies(workflow.tasks, errors); + + // Execution mode specific validation + this.validateExecutionMode(workflow, errors, warnings); + + // Retry policy validation + if (workflow.retryPolicy) { + this.validateRetryPolicy(workflow.retryPolicy, errors, warnings); + } + + // Error handling validation + if (workflow.errorHandling) { + this.validateErrorHandling(workflow.errorHandling, errors, warnings); + } + + // Performance and resource validation + this.validatePerformanceConstraints(workflow, warnings); + + return { + isValid: errors.length === 0, + errors, + warnings, + }; + } + + /** + * Validate basic workflow structure + */ + private validateBasicWorkflowStructure(workflow: WorkflowDefinition, errors: string[]): void { + if (!workflow.id || typeof workflow.id !== "string" || workflow.id.trim() === "") { + errors.push("Workflow ID is required and must be a non-empty string"); + } + + if (!workflow.name 
|| typeof workflow.name !== "string" || workflow.name.trim() === "") { + errors.push("Workflow name is required and must be a non-empty string"); + } + + if (!workflow.version || typeof workflow.version !== "string") { + errors.push("Workflow version is required and must be a string"); + } + + if (!workflow.mode || !["sequential", "parallel", "conditional", "pipeline", "graph"].includes(workflow.mode)) { + errors.push("Workflow mode must be one of: sequential, parallel, conditional, pipeline, graph"); + } + + if (!workflow.tasks || !Array.isArray(workflow.tasks) || workflow.tasks.length === 0) { + errors.push("Workflow must have at least one task"); + } + + if (workflow.globalTimeout && (typeof workflow.globalTimeout !== "number" || workflow.globalTimeout <= 0)) { + errors.push("Global timeout must be a positive number"); + } + } + + /** + * Validate workflow tasks + */ + private validateTasks( + tasks: WorkflowTask[], + availableAgents: Map>, + errors: string[], + warnings: string[] + ): void { + const taskIds = new Set(); + + for (const task of tasks) { + // Check for duplicate task IDs + if (taskIds.has(task.id)) { + errors.push(`Duplicate task ID: ${task.id}`); + } else { + taskIds.add(task.id); + } + + // Validate individual task + this.validateTask(task, availableAgents, errors, warnings); + } + } + + /** + * Validate a single task + */ + private validateTask( + task: WorkflowTask, + availableAgents: Map>, + errors: string[], + warnings: string[] + ): void { + if (!task.id || typeof task.id !== "string" || task.id.trim() === "") { + errors.push("Task ID is required and must be a non-empty string"); + } + + if (!task.name || typeof task.name !== "string" || task.name.trim() === "") { + errors.push(`Task ${task.id}: name is required and must be a non-empty string`); + } + + if (!task.agentName || typeof task.agentName !== "string" || task.agentName.trim() === "") { + errors.push(`Task ${task.id}: agentName is required and must be a non-empty string`); + } else 
if (!availableAgents.has(task.agentName)) { + errors.push(`Task ${task.id}: agent '${task.agentName}' is not available. Available agents: ${Array.from(availableAgents.keys()).join(", ")}`); + } + + if (task.input === undefined || task.input === null) { + errors.push(`Task ${task.id}: input is required`); + } + + if (task.timeout && (typeof task.timeout !== "number" || task.timeout <= 0)) { + errors.push(`Task ${task.id}: timeout must be a positive number`); + } + + if (task.retries && (typeof task.retries !== "number" || task.retries < 0)) { + errors.push(`Task ${task.id}: retries must be a non-negative number`); + } + + // Validate task conditions + if (task.conditions) { + this.validateTaskConditions(task, errors, warnings); + } + + // Performance warnings + if (task.timeout && task.timeout > 300000) { // 5 minutes + warnings.push(`Task ${task.id}: timeout is very long (${task.timeout}ms), consider breaking into smaller tasks`); + } + + if (task.retries && task.retries > 5) { + warnings.push(`Task ${task.id}: high retry count (${task.retries}), consider reviewing task reliability`); + } + } + + /** + * Validate task conditions + */ + private validateTaskConditions(task: WorkflowTask, errors: string[], warnings: string[]): void { + if (!task.conditions || !Array.isArray(task.conditions)) { + return; + } + + for (const condition of task.conditions) { + this.validateTaskCondition(task.id, condition, errors, warnings); + } + } + + /** + * Validate a single task condition + */ + private validateTaskCondition( + taskId: string, + condition: TaskCondition, + errors: string[], + warnings: string[] + ): void { + if (!["result", "status", "custom"].includes(condition.type)) { + errors.push(`Task ${taskId}: condition type must be 'result', 'status', or 'custom'`); + } + + if (!["equals", "not_equals", "contains", "greater_than", "less_than", "exists"].includes(condition.operator)) { + errors.push(`Task ${taskId}: condition operator must be one of: equals, not_equals, 
contains, greater_than, less_than, exists`); + } + + if (condition.type === "custom") { + if (!condition.customEvaluator || typeof condition.customEvaluator !== "function") { + errors.push(`Task ${taskId}: custom condition must have a customEvaluator function`); + } + } else { + if (!condition.taskId || typeof condition.taskId !== "string") { + errors.push(`Task ${taskId}: result/status conditions must specify a taskId`); + } + } + + if (condition.value === undefined && condition.operator !== "exists") { + warnings.push(`Task ${taskId}: condition value is undefined, this may cause unexpected behavior`); + } + } + + /** + * Validate task dependencies + */ + private validateDependencies(tasks: WorkflowTask[], errors: string[]): void { + const taskIds = new Set(tasks.map(task => task.id)); + + for (const task of tasks) { + if (task.dependencies) { + for (const depId of task.dependencies) { + if (!taskIds.has(depId)) { + errors.push(`Task ${task.id}: dependency '${depId}' does not exist`); + } + + if (depId === task.id) { + errors.push(`Task ${task.id}: cannot depend on itself`); + } + } + } + } + + // Check for circular dependencies + this.validateCircularDependencies(tasks, errors); + } + + /** + * Validate circular dependencies using DFS + */ + private validateCircularDependencies(tasks: WorkflowTask[], errors: string[]): void { + const taskMap = new Map(tasks.map(task => [task.id, task])); + const visited = new Set(); + const recursionStack = new Set(); + + const hasCycle = (taskId: string, path: string[]): boolean => { + if (recursionStack.has(taskId)) { + errors.push(`Circular dependency detected: ${path.join(" -> ")} -> ${taskId}`); + return true; + } + + if (visited.has(taskId)) { + return false; + } + + visited.add(taskId); + recursionStack.add(taskId); + + const task = taskMap.get(taskId); + if (task && task.dependencies) { + for (const depId of task.dependencies) { + if (hasCycle(depId, [...path, taskId])) { + return true; + } + } + } + + 
recursionStack.delete(taskId);
+      return false;
+    };
+
+    for (const task of tasks) {
+      if (!visited.has(task.id)) {
+        hasCycle(task.id, []);
+      }
+    }
+  }
+
+  /**
+   * Validate execution mode specific requirements
+   */
+  private validateExecutionMode(workflow: WorkflowDefinition, errors: string[], warnings: string[]): void {
+    switch (workflow.mode) {
+      case "sequential":
+        this.validateSequentialMode(workflow, warnings);
+        break;
+      case "parallel":
+        this.validateParallelMode(workflow, warnings);
+        break;
+      case "conditional":
+        this.validateConditionalMode(workflow, errors, warnings);
+        break;
+      case "pipeline":
+        this.validatePipelineMode(workflow, errors, warnings);
+        break;
+      case "graph":
+        this.validateGraphMode(workflow, errors, warnings);
+        break;
+    }
+  }
+
+  /**
+   * Validate sequential execution mode
+   */
+  private validateSequentialMode(workflow: WorkflowDefinition, warnings: string[]): void {
+    if (workflow.tasks.length > 20) {
+      warnings.push("Sequential workflow has many tasks, consider breaking into smaller workflows or using parallel execution");
+    }
+
+    // Check if any tasks have dependencies (not needed in sequential mode)
+    const tasksWithDeps = workflow.tasks.filter(task => task.dependencies && task.dependencies.length > 0);
+    if (tasksWithDeps.length > 0) {
+      warnings.push("Sequential mode ignores task dependencies, consider using graph mode if dependencies are important");
+    }
+  }
+
+  /**
+   * Validate parallel execution mode
+   */
+  private validateParallelMode(workflow: WorkflowDefinition, warnings: string[]): void {
+    if (workflow.tasks.length > 50) {
+      warnings.push("Parallel workflow has many tasks, this may overwhelm system resources");
+    }
+
+    // Check if any tasks have dependencies (not ideal for parallel mode)
+    const tasksWithDeps = workflow.tasks.filter(task => task.dependencies && task.dependencies.length > 0);
+    if (tasksWithDeps.length > 0) {
+      warnings.push("Parallel mode ignores task dependencies, consider using graph mode for dependent tasks");
+    }
+  }
+
+  /**
+   * Validate conditional execution mode
+   */
+  private validateConditionalMode(workflow: WorkflowDefinition, errors: string[], warnings: string[]): void {
+    const tasksWithoutConditions = workflow.tasks.filter(task => !task.conditions || task.conditions.length === 0);
+
+    if (tasksWithoutConditions.length === workflow.tasks.length) {
+      warnings.push("Conditional workflow has no tasks with conditions, consider using sequential mode");
+    }
+
+    if (tasksWithoutConditions.length > 0) {
+      warnings.push(`${tasksWithoutConditions.length} tasks have no conditions and will always execute`);
+    }
+  }
+
+  /**
+   * Validate pipeline execution mode
+   */
+  private validatePipelineMode(workflow: WorkflowDefinition, errors: string[], warnings: string[]): void {
+    if (workflow.tasks.length < 2) {
+      warnings.push("Pipeline mode is most effective with multiple tasks that process data sequentially");
+    }
+
+    // Check if tasks are designed for data flow
+    const tasksWithObjectInput = workflow.tasks.filter(task => typeof task.input === "object");
+    if (tasksWithObjectInput.length === 0) {
+      warnings.push("Pipeline mode works best when tasks can accept structured data input");
+    }
+  }
+
+  /**
+   * Validate graph execution mode
+   */
+  private validateGraphMode(workflow: WorkflowDefinition, errors: string[], warnings: string[]): void {
+    const tasksWithDeps = workflow.tasks.filter(task => task.dependencies && task.dependencies.length > 0);
+
+    if (tasksWithDeps.length === 0) {
+      warnings.push("Graph mode is most effective when tasks have dependencies, consider using parallel mode");
+    }
+
+    // Check for isolated tasks (no dependencies and no dependents)
+    const dependentTasks = new Set();
+    workflow.tasks.forEach(task => {
+      if (task.dependencies) {
+        task.dependencies.forEach(depId => dependentTasks.add(depId));
+      }
+    });
+
+    const isolatedTasks = workflow.tasks.filter(task =>
+      (!task.dependencies || task.dependencies.length === 0) &&
+      !dependentTasks.has(task.id)
+    );
+
+    if (isolatedTasks.length > 0) {
+      warnings.push(`${isolatedTasks.length} tasks are isolated (no dependencies or dependents): ${isolatedTasks.map(t => t.id).join(", ")}`);
+    }
+  }
+
+  /**
+   * Validate retry policy
+   */
+  private validateRetryPolicy(policy: RetryPolicy, errors: string[], warnings: string[]): void {
+    if (typeof policy.maxRetries !== "number" || policy.maxRetries < 0) {
+      errors.push("Retry policy maxRetries must be a non-negative number");
+    }
+
+    if (!["linear", "exponential", "fixed"].includes(policy.backoffStrategy)) {
+      errors.push("Retry policy backoffStrategy must be 'linear', 'exponential', or 'fixed'");
+    }
+
+    if (typeof policy.baseDelay !== "number" || policy.baseDelay <= 0) {
+      errors.push("Retry policy baseDelay must be a positive number");
+    }
+
+    if (policy.maxDelay && (typeof policy.maxDelay !== "number" || policy.maxDelay <= policy.baseDelay)) {
+      errors.push("Retry policy maxDelay must be greater than baseDelay");
+    }
+
+    if (policy.maxRetries > 10) {
+      warnings.push("High retry count may indicate underlying issues with task reliability");
+    }
+
+    if (policy.baseDelay > 60000) { // 1 minute
+      warnings.push("Long base delay may significantly impact workflow execution time");
+    }
+  }
+
+  /**
+   * Validate error handling strategy
+   */
+  private validateErrorHandling(strategy: ErrorHandlingStrategy, errors: string[], warnings: string[]): void {
+    if (!["stop", "continue", "retry", "skip_dependents"].includes(strategy.onTaskFailure)) {
+      errors.push("Error handling onTaskFailure must be 'stop', 'continue', 'retry', or 'skip_dependents'");
+    }
+
+    if (!["stop", "rollback", "partial_complete"].includes(strategy.onWorkflowFailure)) {
+      errors.push("Error handling onWorkflowFailure must be 'stop', 'rollback', or 'partial_complete'");
+    }
+
+    if (strategy.customErrorHandler && typeof strategy.customErrorHandler !== "function") {
+      errors.push("Custom error handler must be a function");
+    }
+
+    if (strategy.onTaskFailure === "continue" && strategy.onWorkflowFailure === "stop") {
+      warnings.push("Inconsistent error handling: tasks continue on failure but workflow stops on failure");
+    }
+  }
+
+  /**
+   * Validate performance constraints
+   */
+  private validatePerformanceConstraints(workflow: WorkflowDefinition, warnings: string[]): void {
+    const totalTasks = workflow.tasks.length;
+    const tasksWithTimeout = workflow.tasks.filter(task => task.timeout).length;
+    const avgTimeout = workflow.tasks
+      .filter(task => task.timeout)
+      .reduce((sum, task) => sum + (task.timeout || 0), 0) / (tasksWithTimeout || 1);
+
+    if (totalTasks > 100) {
+      warnings.push(`Workflow has ${totalTasks} tasks, consider breaking into smaller workflows for better maintainability`);
+    }
+
+    if (workflow.mode === "sequential" && avgTimeout > 30000) { // 30 seconds
+      warnings.push("Sequential workflow with long-running tasks may have poor user experience");
+    }
+
+    if (workflow.mode === "parallel" && totalTasks > 20) {
+      warnings.push("Large parallel workflows may overwhelm system resources");
+    }
+
+    // Check for potential memory issues
+    const tasksWithLargeInput = workflow.tasks.filter(task =>
+      typeof task.input === "object" && JSON.stringify(task.input).length > 10000
+    );
+
+    if (tasksWithLargeInput.length > 0) {
+      warnings.push(`${tasksWithLargeInput.length} tasks have large input data, monitor memory usage`);
+    }
+  }
+
+  /**
+   * Validate workflow for specific agent capabilities
+   */
+  public validateAgentCompatibility(
+    workflow: WorkflowDefinition,
+    agent: Agent
+  ): ValidationResult {
+    const errors: string[] = [];
+    const warnings: string[] = [];
+
+    const agentTasks = workflow.tasks.filter(task => task.agentName === agent.name);
+
+    if (agentTasks.length === 0) {
+      warnings.push(`Agent ${agent.name} is not used in this workflow`);
+      return { isValid: true, errors, warnings };
+    }
+
+    // Check if agent has required tools for complex tasks
+    const complexTasks = agentTasks.filter(task =>
+      typeof task.input === "object" ||
+      (task.conditions && task.conditions.length > 0)
+    );
+
+    if (complexTasks.length > 0 && (!agent.tools || agent.tools.length === 0)) {
+      warnings.push(`Agent ${agent.name} handles complex tasks but has no tools configured`);
+    }
+
+    // Check agent memory configuration for workflows with context sharing
+    if (workflow.mode === "pipeline" || workflow.mode === "graph") {
+      if (!agent.memory) {
+        warnings.push(`Agent ${agent.name} should have memory configured for context-aware workflows`);
+      }
+    }
+
+    return {
+      isValid: errors.length === 0,
+      errors,
+      warnings,
+    };
+  }
+}
+
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 000000000..95f83eb90
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,3 @@
+psycopg2-binary>=2.9.0
+requests>=2.28.0
+python-dotenv>=0.19.0
diff --git a/setup_postgres.bat b/setup_postgres.bat
new file mode 100644
index 000000000..f95b9cbdd
--- /dev/null
+++ b/setup_postgres.bat
@@ -0,0 +1,76 @@
+@echo off
+echo ========================================
+echo Cloudflare Postgres Setup for Codegen
+echo ========================================
+echo.
+
+REM Check if Python is installed
+python --version >nul 2>&1
+if errorlevel 1 (
+    echo ERROR: Python is not installed or not in PATH
+    echo Please install Python 3.7+ and try again
+    pause
+    exit /b 1
+)
+
+REM Install required packages
+echo Installing required Python packages...
+pip install -r requirements.txt
+
+REM Set environment variables if provided
+if not "%CLOUDFLARE_GLOBAL_API_KEY%"=="" (
+    echo Using provided Cloudflare Global API Key
+) else (
+    set CLOUDFLARE_GLOBAL_API_KEY=eae82cf159577a8838cc83612104c09c5a0d6
+)
+
+if not "%CLOUDFLARE_EMAIL%"=="" (
+    echo Using provided Cloudflare Email: %CLOUDFLARE_EMAIL%
+) else (
+    echo.
+    echo WARNING: CLOUDFLARE_EMAIL not set - you may need to provide it
+    echo Set it with: set CLOUDFLARE_EMAIL=your-email@example.com
+)
+
+if not "%CLOUDFLARE_ACCOUNT_ID%"=="" (
+    echo Using provided Cloudflare Account ID
+) else (
+    set CLOUDFLARE_ACCOUNT_ID=2b2a1d3effa7f7fe4fe2a8c4e48681e3
+)
+
+if not "%CLOUDFLARE_WORKER_NAME%"=="" (
+    echo Using provided Worker Name
+) else (
+    set CLOUDFLARE_WORKER_NAME=neon-db
+)
+
+if not "%CLOUDFLARE_WORKER_URL%"=="" (
+    echo Using provided Worker URL
+) else (
+    set CLOUDFLARE_WORKER_URL=https://neon-db.pixeliumperfecto.workers.dev
+)
+
+REM PostgreSQL admin credentials (optional)
+if not "%POSTGRES_ADMIN_USER%"=="" (
+    echo Using provided PostgreSQL admin user: %POSTGRES_ADMIN_USER%
+)
+
+if not "%POSTGRES_ADMIN_PASSWORD%"=="" (
+    echo Using provided PostgreSQL admin password
+) else (
+    echo.
+    echo NOTE: If PostgreSQL connection fails, you'll be prompted for admin credentials
+)
+
+echo.
+echo Starting Postgres setup...
+echo.
+
+REM Run the Python setup script
+python cloudflare_postgres_setup.py
+
+echo.
+echo Setup completed! Check the output above for any errors.
+echo Your database credentials are saved in the .env file
+echo.
+pause
diff --git a/setup_postgres.ps1 b/setup_postgres.ps1
new file mode 100644
index 000000000..bfdc3f8e9
--- /dev/null
+++ b/setup_postgres.ps1
@@ -0,0 +1,81 @@
+# PowerShell script for Cloudflare Postgres Setup
+# Run with: powershell -ExecutionPolicy Bypass -File setup_postgres.ps1
+
+Write-Host "========================================" -ForegroundColor Cyan
+Write-Host "Cloudflare Postgres Setup for Codegen" -ForegroundColor Cyan
+Write-Host "========================================" -ForegroundColor Cyan
+Write-Host ""
+
+# Check if Python is installed
+try {
+    $pythonVersion = python --version 2>&1
+    Write-Host "βœ… Python found: $pythonVersion" -ForegroundColor Green
+} catch {
+    Write-Host "❌ Python is not installed or not in PATH" -ForegroundColor Red
+    Write-Host "Please install Python 3.7+ and try again" -ForegroundColor Yellow
+    Read-Host "Press Enter to exit"
+    exit 1
+}
+
+# Set environment variables if not already set
+if (-not $env:CLOUDFLARE_API_TOKEN) {
+    $env:CLOUDFLARE_API_TOKEN = "eae82cf159577a8838cc83612104c09c5a0d6"
+    Write-Host "Using default Cloudflare API Token" -ForegroundColor Yellow
+}
+
+if (-not $env:CLOUDFLARE_ACCOUNT_ID) {
+    $env:CLOUDFLARE_ACCOUNT_ID = "2b2a1d3effa7f7fe4fe2a8c4e48681e3"
+    Write-Host "Using default Cloudflare Account ID" -ForegroundColor Yellow
+}
+
+if (-not $env:CLOUDFLARE_WORKER_NAME) {
+    $env:CLOUDFLARE_WORKER_NAME = "neon-db"
+    Write-Host "Using default Worker Name: neon-db" -ForegroundColor Yellow
+}
+
+if (-not $env:CLOUDFLARE_WORKER_URL) {
+    $env:CLOUDFLARE_WORKER_URL = "https://neon-db.pixeliumperfecto.workers.dev"
+    Write-Host "Using default Worker URL" -ForegroundColor Yellow
+}
+
+# PostgreSQL admin credentials (optional)
+if ($env:POSTGRES_ADMIN_USER) {
+    Write-Host "Using provided PostgreSQL admin user: $env:POSTGRES_ADMIN_USER" -ForegroundColor Green
+}
+
+if ($env:POSTGRES_ADMIN_PASSWORD) {
+    Write-Host "Using provided PostgreSQL admin password" -ForegroundColor Green
+} else {
+    Write-Host ""
+    Write-Host "NOTE: If PostgreSQL connection fails, you'll be prompted for admin credentials" -ForegroundColor Yellow
+}
+
+Write-Host ""
+Write-Host "Installing required Python packages..." -ForegroundColor Blue
+# pip is a native command, so try/catch cannot trap a failure; check the exit code instead
+pip install -r requirements.txt
+if ($LASTEXITCODE -ne 0) {
+    Write-Host "❌ Failed to install dependencies" -ForegroundColor Red
+    Read-Host "Press Enter to exit"
+    exit 1
+}
+Write-Host "βœ… Dependencies installed successfully" -ForegroundColor Green
+
+Write-Host ""
+Write-Host "Starting Postgres setup..." -ForegroundColor Blue
+Write-Host ""
+
+# Run the Python setup script (again checking the exit code rather than try/catch)
+python cloudflare_postgres_setup.py
+if ($LASTEXITCODE -eq 0) {
+    Write-Host ""
+    Write-Host "βœ… Setup completed! Check the output above for any errors." -ForegroundColor Green
+    Write-Host "Your database credentials are saved in the .env file" -ForegroundColor Green
+} else {
+    Write-Host "❌ Setup script failed" -ForegroundColor Red
+}
+
+Write-Host ""
+Write-Host "You can now run 'python test_connection.py' to verify the setup" -ForegroundColor Cyan
+Write-Host ""
+Read-Host "Press Enter to exit"
diff --git a/setup_with_your_credentials.bat b/setup_with_your_credentials.bat
new file mode 100644
index 000000000..d47bae0f6
--- /dev/null
+++ b/setup_with_your_credentials.bat
@@ -0,0 +1,57 @@
+@echo off
+echo ========================================
+echo Cloudflare Postgres Setup for Codegen
+echo Using Your Provided Credentials
+echo ========================================
+echo.
+
+REM Set your specific Cloudflare credentials
+set CLOUDFLARE_GLOBAL_API_KEY=eae82cf159577a8838cc83612104c09c5a0d6
+set CLOUDFLARE_ACCOUNT_ID=2b2a1d3effa7f7fe4fe2a8c4e48681e3
+set CLOUDFLARE_WORKER_NAME=neon-db
+set CLOUDFLARE_WORKER_URL=https://neon-db.pixeliumperfecto.workers.dev
+
+REM You need to set your Cloudflare email address
+echo IMPORTANT: Please enter your Cloudflare account email address
+set /p CLOUDFLARE_EMAIL="Cloudflare Email: "
+
+if "%CLOUDFLARE_EMAIL%"=="" (
+    echo ERROR: Email is required for Global API Key authentication
+    pause
+    exit /b 1
+)
+
+echo.
+echo Using credentials:
+echo - Global API Key: %CLOUDFLARE_GLOBAL_API_KEY%
+echo - Email: %CLOUDFLARE_EMAIL%
+echo - Account ID: %CLOUDFLARE_ACCOUNT_ID%
+echo - Worker Name: %CLOUDFLARE_WORKER_NAME%
+echo.
+
+REM Check if Python is installed
+python --version >nul 2>&1
+if errorlevel 1 (
+    echo ERROR: Python is not installed or not in PATH
+    echo Please install Python 3.7+ and try again
+    pause
+    exit /b 1
+)
+
+REM Install required packages
+echo Installing required Python packages...
+pip install -r requirements.txt
+
+echo.
+echo Starting Postgres setup...
+echo.
+
+REM Run the Python setup script
+python cloudflare_postgres_setup.py
+
+echo.
+echo Setup completed! Check the output above for any errors.
+echo Your database credentials are saved in the .env file
+echo.
+pause
+
diff --git a/test_connection.py b/test_connection.py
new file mode 100644
index 000000000..baacfb0f6
--- /dev/null
+++ b/test_connection.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python3
+"""
+Test script to verify the Postgres connection and Cloudflare Worker
+"""
+
+import os
+import sys
+import requests
+import psycopg2
+from pathlib import Path
+from dotenv import load_dotenv
+
+def load_env_config():
+    """Load configuration from the .env file"""
+    env_file = Path('.env')
+    if not env_file.exists():
+        print("❌ .env file not found. Run setup first!")
+        return None
+
+    load_dotenv(env_file)
+
+    config = {
+        'host': os.getenv('POSTGRES_HOST', 'localhost'),
+        'port': int(os.getenv('POSTGRES_PORT', 5432)),
+        'database': os.getenv('POSTGRES_DATABASE', 'codegen_db'),
+        'user': os.getenv('POSTGRES_USER', 'codegen_user'),
+        'password': os.getenv('POSTGRES_PASSWORD'),
+        'worker_url': os.getenv('CLOUDFLARE_WORKER_URL')
+    }
+
+    if not config['password']:
+        print("❌ Password not found in .env file")
+        return None
+
+    return config
+
+def test_postgres_connection(config):
+    """Test the PostgreSQL database connection"""
+    print("🐘 Testing PostgreSQL connection...")
+
+    try:
+        conn = psycopg2.connect(
+            host=config['host'],
+            port=config['port'],
+            database=config['database'],
+            user=config['user'],
+            password=config['password'],
+            sslmode='prefer'
+        )
+
+        cur = conn.cursor()
+
+        # Test basic query
+        cur.execute("SELECT version();")
+        version = cur.fetchone()[0]
+        print("βœ… Connected to PostgreSQL")
+        print(f"   Version: {version}")
+
+        # Test permissions (should work)
+        cur.execute("SELECT current_user, current_database();")
+        user, db = cur.fetchone()
+        print(f"   Current user: {user}")
+        print(f"   Current database: {db}")
+
+        # Test whether we can create tables (should fail for a read-only user)
+        try:
+            cur.execute("CREATE TABLE test_table (id INTEGER);")
+            print("⚠️ Warning: User has write permissions (should be read-only)")
+            cur.execute("DROP TABLE test_table;")
+        except psycopg2.Error:
+            conn.rollback()  # clear the aborted transaction
+            print("βœ… User correctly has read-only permissions")
+
+        conn.close()
+        return True
+
+    except Exception as e:
+        print(f"❌ PostgreSQL connection failed: {e}")
+        return False
+
+def test_cloudflare_worker(config):
+    """Test Cloudflare Worker accessibility"""
+    print("\n☁️ Testing Cloudflare Worker...")
+
+    if not config['worker_url']:
+        print("❌ Worker URL not found in config")
+        return False
+
+    try:
+        # Test health endpoint
+        response = requests.get(f"{config['worker_url']}/health", timeout=10)
+
+        if response.status_code == 200:
+            data = response.json()
+            print("βœ… Worker health check passed")
+            print(f"   Status: {data.get('status')}")
+            print(f"   Worker: {data.get('worker')}")
+            print(f"   Timestamp: {data.get('timestamp')}")
+        else:
+            print(f"⚠️ Worker responded with status: {response.status_code}")
+            return False
+
+        # Test db-info endpoint
+        response = requests.get(f"{config['worker_url']}/db-info", timeout=10)
+
+        if response.status_code == 200:
+            data = response.json()
+            print("βœ… Worker db-info endpoint accessible")
+            print(f"   Host: {data.get('host')}")
+            print(f"   Port: {data.get('port')}")
+            print(f"   Database: {data.get('database')}")
+        else:
+            print(f"⚠️ db-info endpoint responded with status: {response.status_code}")
+
+        return True
+
+    except Exception as e:
+        print(f"❌ Worker test failed: {e}")
+        return False
+
+def print_codegen_instructions(config):
+    """Print instructions for Codegen integration"""
+    print("\n" + "=" * 50)
+    print("πŸ€– CODEGEN INTEGRATION INSTRUCTIONS")
+    print("=" * 50)
+    print("\n1. Open Codegen Settings")
+    print("2. Go to Integrations β†’ Postgres")
+    print("3. Click 'Add Credential'")
+    print("4. Enter these values:")
+    print("   β€’ Credential Name: My Local Database")
+    print("   β€’ Description: Local Postgres via Cloudflare")
+    print(f"   β€’ Host: {config['host']}")
+    print(f"   β€’ Port: {config['port']}")
+    print(f"   β€’ Database Name: {config['database']}")
+    print(f"   β€’ Username: {config['user']}")
+    print(f"   β€’ Password: {config['password']}")
+    print("   β€’ SSL Mode: prefer")
+    print("\n5. Click 'Test Connection' to verify")
+    print("6. Click 'Save' if the test passes")
+    print("\nβœ… Your local Postgres is now ready for Codegen!")
+
+def main():
+    """Main test function"""
+    print("πŸ§ͺ Testing Cloudflare Postgres Setup")
+    print("=" * 40)
+
+    # Load configuration
+    config = load_env_config()
+    if not config:
+        sys.exit(1)
+
+    # Test PostgreSQL
+    postgres_ok = test_postgres_connection(config)
+
+    # Test Cloudflare Worker
+    worker_ok = test_cloudflare_worker(config)
+
+    # Summary
+    print("\n" + "=" * 40)
+    print("πŸ“Š TEST SUMMARY")
+    print("=" * 40)
+    print(f"PostgreSQL: {'βœ… PASS' if postgres_ok else '❌ FAIL'}")
+    print(f"Cloudflare Worker: {'βœ… PASS' if worker_ok else '❌ FAIL'}")
+
+    if postgres_ok and worker_ok:
+        print("\nπŸŽ‰ All tests passed!")
+        print_codegen_instructions(config)
+    else:
+        print("\n❌ Some tests failed. Please check the setup.")
+        sys.exit(1)
+
+if __name__ == "__main__":
+    main()
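The `validateGraphMode` method above flags "isolated" tasks: tasks that declare no dependencies and that no other task depends on. The same check can be sketched standalone in Python (task shape simplified to a dict; this is an illustration of the idea, not code from this PR):

```python
def find_isolated_tasks(tasks):
    """Return ids of tasks with no dependencies that nothing else depends on."""
    # Every task id that appears in some other task's dependency list
    depended_on = {dep for t in tasks for dep in t.get("dependencies", [])}
    return [
        t["id"]
        for t in tasks
        if not t.get("dependencies") and t["id"] not in depended_on
    ]

tasks = [
    {"id": "fetch"},
    {"id": "transform", "dependencies": ["fetch"]},
    {"id": "audit"},  # isolated: no dependencies, and nothing depends on it
]
print(find_isolated_tasks(tasks))  # ['audit']
```

Note that `fetch` is not isolated even though it has no dependencies, because `transform` depends on it; that is exactly the two-sided condition the validator checks.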
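`validateRetryPolicy` checks the fields `maxRetries`, `backoffStrategy` (`linear` / `exponential` / `fixed`), `baseDelay`, and an optional `maxDelay` cap, but the executor that consumes them is not shown in this diff. A minimal Python sketch of one common interpretation of such a policy (the formulas and the helper name are assumptions, not this project's implementation):

```python
def retry_delay(attempt, base_delay, strategy, max_delay=None):
    """Delay in ms before retry number `attempt` (1-based), per backoff strategy."""
    if strategy == "fixed":
        delay = base_delay                       # same wait every time
    elif strategy == "linear":
        delay = base_delay * attempt             # grows by base_delay each retry
    elif strategy == "exponential":
        delay = base_delay * (2 ** (attempt - 1))  # doubles each retry
    else:
        raise ValueError("backoffStrategy must be 'linear', 'exponential', or 'fixed'")
    # Honor the optional cap (the validator requires maxDelay > baseDelay)
    return min(delay, max_delay) if max_delay is not None else delay

delays = [retry_delay(n, 1000, "exponential", max_delay=5000) for n in range(1, 5)]
print(delays)  # [1000, 2000, 4000, 5000]
```

Under this reading, the validator's warning about `baseDelay > 60000` makes sense: with exponential backoff, even a handful of retries on a one-minute base delay stalls the workflow for several minutes.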