Multi-perspective AI design and implementation through parallel architect ensembles for Claude Code.
Hivedriver spawns N independent architect teams that tackle the same design problem simultaneously -- each through a different lens, optionally with different models -- then reconciles their outputs into a single, stress-tested proposal. An implementation pipeline then builds the design through parallel implementers with multi-model code review.
```
          /hive-mind 3 "Design a notification service"
                             |
              +--------------+--------------+
              |              |              |
         Architect-1    Architect-2    Architect-3
        Lens: Baseline Lens: Resilient Lens: Minimalist
        Model: Opus    Model: Kimi    Model: MiniMax
              |              |              |
            DRAFT          DRAFT          DRAFT
              |              |              |
    +-----+----+----+ +------+----+----+ +------+----+----+
    |     |    |    | |      |    |    | |      |    |    |
  verif. sec. perf gov verif. sec. perf gov verif. sec. perf gov
  (claude-p) (ext.)   (claude-p) (ext.)   (claude-p) (ext.)
    |     |    |    | |      |    |    | |      |    |    |
    +--+--+----+---+  +---+--+----+----+ +---+--+----+----+
       |                  |                  |
   SYNTHESIZE         SYNTHESIZE         SYNTHESIZE
       |                  |                  |
       +------------------+------------------+
                          |
                   RECONCILIATION
             (divergences -> user decides)
                          |
            +-------------+-------------+
            |             |             |
         verif.      robustness    simplicity
       (claude-p)      (ext.)        (ext.)
            |             |             |
            +------+------+-------------+
                   |
              FINAL DESIGN
           (challenge-tested,
            multi-perspective,
            reconciled)
                   |
         IMPLEMENTATION PIPELINE
        planner -> implementers (xN)
        -> integrator -> review gate -> tests
```
Stage 1: Hive-Mind Design -- Multi-model ensemble that produces final-design.md. Runs the full design process: workshop, pre-seed considerations, parallel architect drafts, per-architect challenge cycles, synthesis, reconciliation, and a final challenge cycle.
Stage 2: Implementation Pipeline -- Consumes the design and builds it. Planner decomposes into parallel work units, Sonnet implementers execute, integrator merges, then a review gate (2x code review from different models, security analysis, design-outcomes check) validates before testing.
Both stages are independent -- you can run hive-mind for design-only, or feed an existing design directly into the implementation pipeline.
- Parallel architect ensembles -- 2-6 independent architect teams with varied lenses and model diversity
- Multi-model diversity -- Opus, Kimi K2.5, MiniMax M2.7, Gemini, Mistral, and more via LiteLLM
- Verification with shell access -- challengers run `--help`, check file paths, and grep configs (not just reasoning)
- Challenger lens rotation -- 3 lenses randomly drawn from 12 per architect (Essential, Standard, Elective tiers)
- Multi-model review gate -- code reviewed by both Sonnet and an external model for diverse perspective
- Graceful degradation -- if architects or challengers fail, the pipeline adapts and proceeds
- Resume mode -- interrupted runs can be resumed from the last checkpoint
- Configurable notifications -- email, webhook, or skip
- Additive installation -- installs alongside existing Claude Code agent configurations without conflicts
Hivedriver supports external LLM providers for architect slots and challenger roles. This gives genuine model diversity -- different training data, different reasoning patterns, different blind spots.
Providers whose SDKs or APIs are compatible with the Anthropic messages format get the richest integration. They can be spawned via claude -p with tool access (file reads, shell commands, grep).
How it works: Claude Code's --model flag or provider-specific CLI tools route requests to the external model while maintaining the Claude Code tool-use protocol.
Configuration:
```bash
# In hive-mind.env -- add the model to the challenger pool
CHALLENGER_MODEL_POOL="minimax,kimi,mistral-large,mistral-small"

# Models are spawned via claude -p with the --bare flag:
# claude -p --bare --model kimi "Your architect/challenger prompt here"
```

Currently supported (Pattern A):
| Provider | Model | Context | Notes |
|---|---|---|---|
| Moonshot AI | Kimi K2.5 | 256K | Full tool access via `claude -p --bare` |
| MiniMax | M2.7 | 1M | Full tool access via `claude -p --bare` |
These models participate as full architects (draft + synthesize) and as lens challengers with the same capabilities as Claude-native agents.
Providers that expose an OpenAI-compatible chat completions endpoint can be proxied through LiteLLM. These models participate as text-only challengers (no tool access).
How it works: LiteLLM runs as a local proxy, translating OpenAI-format requests to provider-native APIs. Hivedriver sends the review/challenge prompt as a user message and receives a text response.
Configuration:
```yaml
# litellm_config.yaml
model_list:
  - model_name: mistral-large
    litellm_params:
      model: mistral/mistral-large-latest
      api_key: os.environ/MISTRAL_API_KEY
  - model_name: mistral-small
    litellm_params:
      model: mistral/mistral-small-latest
      api_key: os.environ/MISTRAL_API_KEY
  - model_name: llama-3.3-70b
    litellm_params:
      model: groq/llama-3.3-70b-versatile
      api_key: os.environ/GROQ_API_KEY
```

```bash
# Start the proxy
litellm --config litellm_config.yaml --port 4000

# In hive-mind.env
CHALLENGER_MODEL_POOL="minimax,kimi,mistral-large,mistral-small"
```

Challenger prompts are sent via curl:
```bash
curl -s http://127.0.0.1:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-large", "messages": [{"role": "user", "content": "..."}]}'
```

Services like OpenRouter aggregate many providers behind a single OpenAI-compatible endpoint. Configure them through LiteLLM the same way as Pattern B:
```yaml
# litellm_config.yaml
model_list:
  - model_name: deepseek-r1
    litellm_params:
      model: openrouter/deepseek/deepseek-r1
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: https://openrouter.ai/api/v1
  - model_name: claude-3.5-sonnet-openrouter
    litellm_params:
      model: openrouter/anthropic/claude-3.5-sonnet
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: https://openrouter.ai/api/v1
  - model_name: gemini-2.5-pro
    litellm_params:
      model: openrouter/google/gemini-2.5-pro-preview
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: https://openrouter.ai/api/v1
```

```bash
# Add to the challenger pool
CHALLENGER_MODEL_POOL="minimax,kimi,deepseek-r1,gemini-2.5-pro"
```

OpenRouter models follow Pattern B (text-only challengers, no tool access). They're useful for adding diversity from providers whose API keys you don't want to manage individually.
Google's Gemini models integrate through the Gemini CLI with either OAuth authentication or a Gemini API key. When configured, one architect slot can be assigned to Gemini for cross-provider design diversity.
```bash
# In hive-mind.env
GEMINI_ENABLED="true"

# Architect slot assignment (Gemini takes slot N)
# /hive-mind 3 --m opus,kimi,gemini "Design prompt"
```

Gemini architects produce full design drafts. Gemini can also provide optional second-opinion analysis during challenge phases via MCP tools.
- Anthropic-API compatible (has tool use): add the model name to your CLI/SDK config, spawn via `claude -p --bare --model <name>`, and add it to `CHALLENGER_MODEL_POOL`.
- OpenAI-API compatible: add a model entry to `litellm_config.yaml`, start or restart the LiteLLM proxy, and add the model name to `CHALLENGER_MODEL_POOL`.
- OpenRouter: same as OpenAI-compatible, but with `api_base: https://openrouter.ai/api/v1` and an OpenRouter API key.
- Native CLI tool: follow the Gemini pattern -- configure the CLI tool and add a spawn template to the hive-mind skill.
The `CHALLENGER_MODEL_POOL` variable in `hive-mind.env` controls which external models are used for lens challengers. Models are assigned round-robin. Verification challengers always use Opus regardless of this setting.
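As a rough illustration, round-robin assignment over the pool can be sketched in Python. The helper name and logic here are assumptions for illustration, not Hivedriver's actual code:

```python
# Hypothetical sketch of round-robin challenger assignment over the
# CHALLENGER_MODEL_POOL value; illustrative only, not Hivedriver's code.
from itertools import cycle


def assign_challengers(pool_csv: str, count: int) -> list[str]:
    """Cycle through the comma-separated pool to fill `count` challenger slots."""
    pool = [m.strip() for m in pool_csv.split(",") if m.strip()]
    slots = cycle(pool)
    return [next(slots) for _ in range(count)]


# Three lens challengers from a two-model pool wrap around:
print(assign_challengers("minimax,kimi", 3))  # → ['minimax', 'kimi', 'minimax']
```

With more challengers than pool entries, models simply repeat in order, which matches the round-robin behaviour described above.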
Let Claude walk you through installation and configuration interactively:
```bash
git clone https://github.com/Stupv/hivedriver.git
cd hivedriver
claude -p < INSTALL-WITH-CLAUDE.md
```

Claude will run the installer, then help you set up your identity, deployment targets, project classifications, and standing design policies through conversation. This is the easiest way to get a fully configured setup.
```bash
git clone https://github.com/Stupv/hivedriver.git
cd hivedriver
./install.sh
```

The installer:
- Finds Claude Code installations on your machine
- Asks you to select a target (global `~/.claude/` or project-level)
- Offers two install modes:
  - Replace -- clean install, overwrites any same-named agents
  - Additive -- preserves your existing agents; conflicts are auto-namespaced with a `hive-` prefix
- Copies agent definitions, the `/hive-mind` skill, pipeline doc, and config templates
- Enables agent teams in `settings.json`
- Never overwrites your `hive-mind.env` or `user-config.md` on re-install
- Claude Code (agent teams enabled automatically by installer)
- Claude Max/Pro subscription (OAuth) or Anthropic API key (agents use Opus, Sonnet, and Haiku)
- LiteLLM (optional) -- enables Mistral, Groq, and other OpenAI-compatible providers
- Gemini CLI (optional) -- enables Gemini architect slot
- Python 3 -- used by the installer for `settings.json` merging
```
/hive-mind Design a monitoring dashboard for Docker containers
/hive-mind 3 Design a REST API for user authentication
/hive-mind 4 --ao Design a data pipeline for log aggregation
/hive-mind resume 2026-03-28T14-30-00
```
| Argument | Description |
|---|---|
| `n` | Number of parallel architect teams (default 2, max 6) |
| `prompt` | The design challenge (required) |
| `--ao` | All-Opus (disable external model slots) |
| `--ag` | All-Gemini |
| `--m opus,kimi,minimax,...` | Explicit per-slot model assignment |
| `resume <timestamp>` | Resume an interrupted run |
After a `/hive-mind` run produces a design, implement it with the full pipeline:

> "Implement the design in `<path>/final-design.md` using the implementation pipeline"

The pipeline runs: Planner (task decomposition) -> Implementers (xN parallel, Sonnet) -> Integrator (merge) -> Review Gate (2x code review + security + design-outcomes, all parallel) -> Test Lead (strategy, execution, coverage).
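The stage ordering can be sketched as a small orchestration loop. Every callable below is an illustrative placeholder standing in for an agent, not the pipeline's real interface:

```python
# Illustrative sketch of the pipeline's control flow; all function arguments
# are hypothetical stand-ins for Hivedriver's agents.
from concurrent.futures import ThreadPoolExecutor


def run_pipeline(work_units, implement, integrate, review_gate, run_tests):
    # Implementers execute work units in parallel (xN).
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(implement, work_units))
    merged = integrate(outputs)      # integrator merges the parallel outputs
    if not review_gate(merged):      # review gate must pass before testing
        raise RuntimeError("review gate rejected the integrated result")
    return run_tests(merged)         # test lead runs last


# Toy run: double each unit, sum the results, gate on the total, echo it back.
result = run_pipeline([1, 2, 3],
                      implement=lambda u: u * 2,
                      integrate=sum,
                      review_gate=lambda m: m == 12,
                      run_tests=lambda m: m)
print(result)  # → 12
```

The point of the sketch is the ordering constraint: implementation fans out, everything downstream of the integrator is sequential, and the review gate is a hard stop before tests.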
Installed to `~/.claude/hive-mind.env`. Set your name, notification preferences, and model configuration.
| Variable | Purpose | Example |
|---|---|---|
| `USER_NAME` | Your name in agent prompts | `"Alice"` |
| `HIVE_MIND_NOTIFY_CMD` | Notification on design completion | `"mail-cli send --to ..."` |
| `HIGH_SECURITY_PROJECTS` | Projects requiring deep security review | `"auth-svc,payments"` |
| `COMPLIANCE_FRAMEWORKS` | Compliance checklists to enforce | `"gdpr,hipaa"` |
| `GEMINI_ENABLED` | Enable Gemini architect slot | `"true"` |
| `EXTERNAL_CHALLENGERS_ENABLED` | Use external models for challengers | `"true"` |
| `CHALLENGER_MODEL_POOL` | Models for lens challengers (round-robin) | `"minimax,kimi,mistral-large"` |
| `DRAFT_CHALLENGER_COUNT` | Challengers per architect (Phase 3) | `"3"` |
| `FINAL_CHALLENGER_COUNT` | Challengers for final design (Phase 6) | `"6"` |
Installed to `~/.claude/skills/hive-mind/user-config.md`. Define policies that architects must respect:
```markdown
## Frontend Stack Policy
**Effective:** 2026-01-01 to 2026-06-01
**Rule:** Use Svelte 5 for new frontend projects, not React.
**Rationale:** Evaluating alternatives for developer experience.
**Escape hatch:** Revert if after 4 hours you can't render a styled page.
```

Policies are injected into every architect brief as standing constraints.
```
~/.claude/
  hive-mind.env          # Identity, model config, preferences
  agents/
    planner.md           # Task decomposer (read-only)
    implementer.md       # Work unit executor (Sonnet)
    integrator.md        # Parallel output merger (Opus)
    code-reviewer.md     # Direct code reviewer (Sonnet)
    design-reviewer.md   # Design outcomes checker (Opus)
    security-analyst.md  # Scanning + OWASP + compliance
    test-lead.md         # Test strategy + runner orchestration
    test-runner.md       # Test executor (Haiku)
  skills/
    hive-mind/
      SKILL.md           # /hive-mind orchestration
      pipeline.md        # Implementation pipeline reference
      user-config.md     # Your policies and constraints
  settings.json          # Agent teams env var added
```
Each run saves to `<project>/.pipeline-runs/<timestamp>/hive-mind/`:

```
original-prompt.md           # Your exact prompt
architect-1/                 # Baseline architect workspace
  architect-brief.md         # Brief with lens + model header
  architect-draft.md         # Initial design
  challenge-verification.md  # Fact-check results (claude -p)
  challenge-{lens}.md        # Rotated challenger reports
  architect-design.md        # Synthesised golden design
architect-2/                 # Resilient lens workspace
architect-3/                 # Minimalist lens workspace
reconciliation.md            # Unified proposal (agreements + decisions)
final-challenge/             # Final stress-test reports
final-design.md              # The definitive design document
```
Each run randomly selects 3 lenses per architect from this tiered pool. Verification is always-on and runs separately with full shell access.
**Essential**
| Lens | Focus |
|---|---|
| security | Auth, injection, secrets, OWASP top 10 |
| robustness | Failure modes, error handling, edge cases |
| ops | Deployment, monitoring, rollback, 3am scenarios |
**Standard**
| Lens | Focus |
|---|---|
| simplicity | Unnecessary complexity, YAGNI violations |
| integration | Cross-component interfaces, data flow boundaries |
| performance | Latency, throughput, resource consumption |
| data | Data models, migrations, consistency, integrity |
| migration | Upgrade paths, backwards compatibility, rollback |
| testability | Test surface area, mockability, determinism |
| ergonomics | Developer experience, API clarity, discoverability |
**Elective**
| Lens | Focus |
|---|---|
| governance | Decision traceability, audit trails, approval gates |
| observability | Logging, metrics, alerting, debuggability |
Selection is priority-tiered: Essential lenses fill first, then Standard, then Elective, with random draws within each tier. The 220+ possible lens combinations across runs ensure diverse coverage over time.
MIT