feat(sandbox): remote & multi-backend sandbox support (Vercel, Daytona, Firecracker) #942
Conversation
Refactor SandboxRouter from a single-backend model to a registry of named backends. Sessions can now be routed to different sandbox backends via per-session overrides, enabling simultaneous use of local (Docker, Podman) and remote (Vercel, Daytona — future) backends. Key changes:

**Sandbox trait (`types.rs`)**:
- Add `is_isolated()` method (default `false`). Remote/VM backends will return `true` to signal that workspace sync is needed.

**SandboxRouter (`router.rs`)**:
- Replace single `backend: Arc<dyn Sandbox>` with a `backends: HashMap` registry plus `default_backend` for lock-free default access.
- Add `backend_overrides: RwLock<HashMap<String, String>>` for per-session backend routing.
- New methods: `resolve_backend()`, `register_backend()`, `available_backends()`, `set_backend_override()`, `remove_backend_override()`.
- `is_sandboxed()` and `cleanup_session()` now use the resolved backend.
- FailoverSandbox delegates `is_isolated()`.

**Exec integration (`exec.rs`, `process.rs`, `file_system.rs`)**:
- Replace `router.backend()` with `router.resolve_backend(sk).await` in all per-session execution paths.

**Session metadata (`metadata.rs`)**:
- Add `sandbox_backend: Option<String>` field to SessionEntry.
- Add setter methods and a SQL migration for the new column.

**Gateway (`session_types.rs`, `service.rs`, `prepare_core.rs`)**:
- Add `sandboxBackend` to PatchParams (with snake_case alias).
- Handle the backend override in the session patch handler.
- Restore persisted backend overrides on startup.

This is Phase 0 of the remote sandbox plan. No new backends are added yet — all existing backends register under their names. Future phases will add Vercel, Daytona, and Firecracker backends that plug into this registry.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
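The override-then-default resolution described above can be sketched as follows (illustrative types only — the real registry stores `Arc<dyn Sandbox>` and `resolve_backend` is async):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Illustrative stand-in: backend names stand in for trait objects here.
pub struct Router {
    backends: HashMap<String, Arc<String>>,
    default_backend: Arc<String>, // held outside the lock: lock-free default access
    backend_overrides: RwLock<HashMap<String, String>>,
}

impl Router {
    pub fn new(default: &str) -> Self {
        let default_backend = Arc::new(default.to_string());
        let mut backends = HashMap::new();
        backends.insert(default.to_string(), Arc::clone(&default_backend));
        Router { backends, default_backend, backend_overrides: RwLock::new(HashMap::new()) }
    }

    pub fn register_backend(&mut self, name: &str) {
        self.backends.insert(name.to_string(), Arc::new(name.to_string()));
    }

    pub fn set_backend_override(&self, session: &str, backend: &str) {
        self.backend_overrides
            .write()
            .unwrap()
            .insert(session.to_string(), backend.to_string());
    }

    /// Per-session resolution: an override wins if it names a registered
    /// backend; anything else falls back to the default.
    pub fn resolve_backend(&self, session: &str) -> Arc<String> {
        let overrides = self.backend_overrides.read().unwrap();
        overrides
            .get(session)
            .and_then(|name| self.backends.get(name))
            .cloned()
            .unwrap_or_else(|| Arc::clone(&self.default_backend))
    }
}

fn main() {
    let mut router = Router::new("docker");
    router.register_backend("vercel");
    router.set_backend_override("sess-1", "vercel");
    assert_eq!(*router.resolve_backend("sess-1"), "vercel");
    assert_eq!(*router.resolve_backend("sess-2"), "docker"); // no override: default
}
```

Note the unknown-name case falls back to the default rather than erroring, mirroring the registry-lookup behavior.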
Add tar-based workspace synchronization for isolated sandbox backends (Vercel, Daytona, Firecracker) that don't have direct host filesystem access.

New module `crates/tools/src/sandbox/sync.rs`:
- `sync_in()`: Tars the host workspace, uploads to the sandbox, extracts
- `sync_out()`: Tars the sandbox workspace, downloads to the host, extracts
- `resolve_sync_workspace()`: Derives the host path from the home persistence config
- Graceful degradation: sync failures are warned, not fatal

Integration:
- `exec.rs`: Calls sync_in after ensure_ready on first run for isolated backends (inside the announce_prepare block)
- `router.rs`: Calls sync_out before cleanup in cleanup_session for isolated backends

Design decisions:
- Uses the host `tar` command (no new crate dependencies)
- Skips sync when the workspace is empty or persistence is disabled
- 100 MB size limit on sync-out reads (configurable later)
- Non-fatal: sync failures log warnings but don't block execution
- Only activates for backends returning `is_isolated() = true`

This is Phase 1 of the remote sandbox plan. The sync module is ready for use by Vercel/Daytona backends when they're implemented.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
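The gating rules above can be sketched as follows (the function names are illustrative stand-ins for the real `sync.rs` logic; only the 100 MB cap and skip conditions come from the commit):

```rust
use std::fs;
use std::path::Path;

/// Illustrative cap mirroring the commit's 100 MB sync-out limit.
const MAX_SYNC_OUT_BYTES: u64 = 100 * 1024 * 1024;

/// "Skip sync when the workspace is empty or persistence is disabled."
fn should_sync_in(workspace: &Path, persistence_enabled: bool) -> bool {
    if !persistence_enabled {
        return false;
    }
    match fs::read_dir(workspace) {
        Ok(mut entries) => entries.next().is_some(), // non-empty dir
        Err(_) => false,                             // missing dir: nothing to sync
    }
}

/// Guard applied to sync-out reads before downloading the tarball.
fn within_sync_out_limit(size: u64) -> bool {
    size <= MAX_SYNC_OUT_BYTES
}

fn main() {
    let tmp = std::env::temp_dir().join(format!("sync_gate_{}", std::process::id()));
    fs::create_dir_all(&tmp).unwrap();
    assert!(!should_sync_in(&tmp, true)); // empty workspace: skip
    fs::write(tmp.join("file.txt"), b"hi").unwrap();
    assert!(should_sync_in(&tmp, true));
    assert!(!should_sync_in(&tmp, false)); // persistence off: skip
    assert!(within_sync_out_limit(1024));
    assert!(!within_sync_out_limit(MAX_SYNC_OUT_BYTES + 1));
    let _ = fs::remove_dir_all(&tmp);
}
```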
Implement VercelSandbox, an isolated sandbox backend that creates
ephemeral Firecracker microVMs via the Vercel Sandbox REST API. This
is the first remote backend, unlocking sandboxed command execution on
cloud deployments where Docker-in-Docker is unavailable.
Implementation (`crates/tools/src/sandbox/vercel.rs`):
- `ensure_ready()`: POST /v1/sandboxes to create, poll for "running"
- `exec()`: POST /v1/sandboxes/{id}/cmd with wait=true NDJSON streaming,
then GET .../logs for stdout/stderr
- `write_file()`: Gzipped tar upload via POST /v1/sandboxes/{id}/fs/write
- `read_file()`: POST /v1/sandboxes/{id}/fs/read returning raw bytes
- `cleanup()`: POST /v1/sandboxes/{id}/stop
- `is_isolated() = true`: triggers workspace sync from Phase 1
Router integration:
- `backend = "vercel"` explicit selection in config
- Auto-detection: when no local runtime found and VERCEL_TOKEN is set,
auto_detect_backend falls through to Vercel
- Config from SandboxConfig fields or env vars (VERCEL_TOKEN,
VERCEL_PROJECT_ID, VERCEL_TEAM_ID)
Feature-gated: `vercel-sandbox` (default-on in both moltis-tools and
CLI crate). Depends on `flate2` for gzipped tar file writes.
Auth: Bearer token from VERCEL_TOKEN or VERCEL_OIDC_TOKEN env var.
All requests scoped to team via ?teamId= query parameter.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Implement DaytonaSandbox, an isolated sandbox backend that creates
ephemeral cloud sandboxes via the Daytona REST API. This is the
second remote backend, providing an open-source alternative to Vercel.
Implementation (`crates/tools/src/sandbox/daytona.rs`):
- `ensure_ready()`: POST /workspace to create sandbox
- `exec()`: POST /workspace/{id}/toolbox/process/execute
- `write_file()`: POST /workspace/{id}/toolbox/files/upload (multipart)
- `read_file()`: GET /workspace/{id}/toolbox/files/download
- `cleanup()`: DELETE /workspace/{id}
- `is_isolated() = true`: triggers workspace sync
Router integration:
- `backend = "daytona"` explicit selection in config
- Auto-detection: when no local runtime and DAYTONA_API_KEY is set
(checked after Vercel in the fallback chain)
- Config from SandboxConfig fields or env vars (DAYTONA_API_KEY,
DAYTONA_API_URL, DAYTONA_TARGET)
No feature gate (always compiled), since it has no extra deps beyond
reqwest, which is already required. Auth is via Bearer token.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Implement FirecrackerSandbox for VM-based isolation on Linux servers without Docker. Boots ephemeral Firecracker microVMs (~125 ms boot) with SSH-based command execution and dedicated TAP networking.

Implementation (`crates/tools/src/sandbox/firecracker.rs`):
- `ensure_ready()`: Copy rootfs, create TAP device, boot VM via the Firecracker REST API (curl over a Unix socket), wait for SSH
- `exec()`: SSH into the guest VM to run commands
- `cleanup()`: Graceful shutdown (SendCtrlAltDel), force kill, remove TAP device and rootfs copy
- `is_isolated() = true`: triggers workspace sync
- Copy-on-write rootfs via `cp --reflink=auto`
- Automatic /30 subnet allocation per VM (172.16.x.y/30)

Requirements:
- Linux only (`#[cfg(target_os = "linux")]`)
- `firecracker` binary, uncompressed kernel (`vmlinux`), ext4 rootfs with an SSH server and a `sandbox` user
- Root or CAP_NET_ADMIN for TAP device creation

Router integration:
- `backend = "firecracker"` explicit selection
- Config fields: firecracker_bin, firecracker_kernel, firecracker_rootfs, firecracker_ssh_key, firecracker_vcpus, firecracker_memory_mb

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
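The automatic /30 allocation can be sketched as follows (the slot-indexing scheme and the 172.16.0.0/16 bound are assumptions based on the commit text; each /30 holds four addresses — network, host-side TAP, guest, broadcast):

```rust
/// Illustrative /30 allocator in 172.16.0.0/16: slot n gets addresses
/// base..base+3, where base = n * 4.
fn subnet_for(slot: u16) -> Option<(String, String)> {
    let base = slot as u32 * 4;
    if base + 3 > 0xFFFF {
        return None; // overflow guard: stay inside the /16
    }
    let ip = |off: u32| format!("172.16.{}.{}", (base + off) >> 8, (base + off) & 0xFF);
    Some((ip(1), ip(2))) // (host/gateway, guest)
}

fn main() {
    assert_eq!(
        subnet_for(0),
        Some(("172.16.0.1".to_string(), "172.16.0.2".to_string()))
    );
    assert_eq!(subnet_for(64).unwrap().0, "172.16.1.1"); // crosses into the next octet
    assert!(subnet_for(16384).is_none()); // would exceed 172.16.255.255
}
```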
Add UI for configuring the Vercel and Daytona sandbox backends in the Sandbox settings page, following the existing provider setup pattern.

**TypeScript (ImagesPage.tsx)**:
- New `RemoteBackendsSection` component with cards for Vercel and Daytona
- Vercel card: token (password input), project ID, team ID fields
- Daytona card: API key (password input), API URL field
- Status badges showing configured/not-configured state
- Fetch config on page load, save with restart-required notification
- Updated `BACKEND_LABELS` and badge colors for all new backends
- Updated the disabled hint to mention remote backends as an option

**Type system (SandboxBackendId)**:
- New `SandboxBackendId` enum in Rust (`types.rs`) with a `from_name()` converter. Used in gon data and API responses instead of raw strings.
- New `SandboxBackendId` union type in TypeScript (`gon.ts`) for type-safe backend references throughout the frontend.
- `SandboxGonInfo.available_backends` uses the typed enum on both sides.

**Rust API (api.rs, lib.rs)**:
- `GET /api/sandbox/remote-backends` — returns the configured state for Vercel and Daytona (without exposing tokens)
- `PUT /api/sandbox/remote-backends` — saves backend config to moltis.toml via `update_config()`

**Config schema (tools.rs)**:
- Added `vercel_token`, `vercel_project_id`, `vercel_team_id`, `vercel_runtime`, `vercel_timeout_ms`, `vercel_vcpus` fields
- Added `daytona_api_key`, `daytona_api_url`, `daytona_target` fields
- All fields use serde(skip_serializing_if = "Option::is_none") for clean TOML
- The `From` impl in tools/types.rs now passes through config values

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Greptile Summary

This PR introduces multi-backend sandbox support (Vercel Firecracker, Daytona, local Firecracker). All P0/P1 findings from prior review rounds (register_backend never called, truncate panic, subnet overflow, wrong ip= format, write lock held over async, boot_vm process leak, shell injections, etc.) have been fixed. The remaining open issues are P2: Box::leak accumulation in Daytona, silent no-op on unknown backend in the API, plaintext token storage, and misleading partial Vercel save UX. None of these block functionality.

Key files: crates/tools/src/sandbox/daytona.rs (Box::leak workspace path accumulation), crates/web/src/api.rs (silent no-op on unknown backend, plaintext credentials).

Confidence Score: 4/5 — Safe to merge with awareness of the remaining P2 issues; all previously identified P0/P1 bugs have been resolved in this revision.
| Filename | Overview |
|---|---|
| crates/tools/src/sandbox/vercel.rs | New Vercel Sandbox backend: correct floor_char_boundary truncation, proper workspace_dir override, isolated flag set; log streaming and gzipped tar upload look correct. |
| crates/tools/src/sandbox/daytona.rs | New Daytona backend: base64 wrapper correctly separates stderr; however LAST_DAYTONA_WORKSPACE Box::leak accumulates one leaked allocation per distinct workspace path, and workspace_dir() is globally shared across all sessions. |
| crates/tools/src/sandbox/firecracker.rs | New local Firecracker backend: subnet overflow guard, corrected kernel ip= format, process killed on all boot_vm error paths, write lock dropped before async cleanup — prior critical issues addressed. |
| crates/tools/src/sandbox/sync.rs | Workspace sync module: resolve_sync_workspace now has a non-home-persistence fallback path; sync_in/sync_out use per-backend workspace_dir() so each backend's correct path is used. |
| crates/tools/src/sandbox/router.rs | SandboxRouter refactored to HashMap of backends with per-session backend_overrides; mark_synced/is_synced/clear_synced lifecycle correct; resolve_backend returns Arc clone safely. |
| crates/gateway/src/server/prepare_core/sandbox.rs | New sandbox startup helper: build_sandbox_router now registers vercel, daytona, and firecracker backends into the router, fixing the multi-backend routing that was previously broken. |
| crates/web/src/api.rs | New remote-backends API endpoints: GET correctly hides tokens, but PUT silently no-ops on unknown backend names, and stores credentials in plaintext TOML. |
| crates/web/ui/src/pages/ImagesPage.tsx | RemoteBackendsSection UI added: Save Vercel button can be submitted with only metadata fields (no token), yielding a misleading no-op; Daytona and Vercel cards otherwise render correctly. |
| crates/tools/src/exec.rs | Sync-in integrated into first-run path: resolve_backend used correctly per session, concurrent waiters unblocked via mark_synced (intentionally unconditional per the comment), overall flow correct. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant Client
    participant ExecTool
    participant SandboxRouter
    participant Backend
    participant SyncModule
    Client->>ExecTool: exec(command, session_key)
    ExecTool->>SandboxRouter: resolve_backend(sk)
    SandboxRouter-->>ExecTool: Arc<dyn Sandbox>
    ExecTool->>SandboxRouter: mark_preparing_once(sk)
    alt First caller
        ExecTool->>Backend: ensure_ready(id, image)
        Backend-->>ExecTool: VM/sandbox ready
        ExecTool->>SyncModule: sync_in(backend, id, host_workspace, workspace_dir)
        SyncModule->>Backend: write_file(tar.gz)
        SyncModule->>Backend: exec(tar -xzf ...)
        ExecTool->>SandboxRouter: mark_synced(sk)
    else Concurrent caller
        loop poll until synced or 120s timeout
            ExecTool->>SandboxRouter: is_synced(sk)
        end
    end
    ExecTool->>Backend: exec(id, command, opts)
    Backend-->>ExecTool: ExecResult
    Note over SandboxRouter: cleanup_session(sk)
    SandboxRouter->>SyncModule: sync_out(backend, id, host_workspace, workspace_dir)
    SandboxRouter->>Backend: cleanup(id)
```
Merging this PR will not alter performance
Fix all 3 P1 and 1 P2 issues from Greptile review:
P1 — Lock contention in Firecracker cleanup():
Drop RwLock write guard immediately after remove(), before async
teardown (sleep, kill, remove_tap). Matches Vercel/Daytona pattern.
P1 — Wrong kernel ip= boot arg format:
Fix field order: ip=<client>::<gw>:<netmask>::<iface>:<autoconf>
Was: ip={guest}:::{host}:30:eth0:off (empty gateway, wrong netmask)
Now: ip={guest}::{host}:255.255.255.252::eth0:off
P1 — Firecracker config not configurable from TOML:
Add firecracker_bin, firecracker_kernel, firecracker_rootfs,
firecracker_ssh_key, firecracker_vcpus, firecracker_memory_mb,
vercel_snapshot_id, daytona_image to schema::SandboxConfig.
Wire through From impl so TOML values reach the backend.
P2 — curl dependency for Firecracker API:
Add explicit check for curl availability in ensure_ready() with
a clear error message. Document the requirement.
Also fix Daytona P2: quote path in mkdir -p to handle spaces.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
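The corrected boot argument can be sketched as a small formatter (addresses are illustrative). The kernel's full layout is `ip=<client>:<server>:<gw>:<netmask>:<hostname>:<device>:<autoconf>`; the unused `<server>` and `<hostname>` slots are left empty, which is why two adjacent colons appear:

```rust
/// Builds the kernel ip= argument in the corrected field order.
fn kernel_ip_arg(guest: &str, host: &str) -> String {
    format!("ip={guest}::{host}:255.255.255.252::eth0:off")
}

fn main() {
    assert_eq!(
        kernel_ip_arg("172.16.0.2", "172.16.0.1"),
        "ip=172.16.0.2::172.16.0.1:255.255.255.252::eth0:off"
    );
}
```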
@greptile review
Fix 3 P1 issues from Greptile iteration 2:

P1 — Multi-backend registry never populated:
Register Vercel and Daytona backends in prepare_core.rs at startup when credentials are present, even if they're not the default backend. Add a public `select_backend_by_name()` to the router for creating backends by explicit name. Make the router module `pub` for gateway access.

P1 — Workspace sync targets wrong directory:
Add a `workspace_dir()` method to the Sandbox trait (default: /home/sandbox). Override in VercelSandbox (/vercel/sandbox) and DaytonaSandbox (/home/daytona). Sync calls now use `backend.workspace_dir()` instead of the hardcoded DEFAULT_SANDBOX_WORKSPACE constant (removed). FailoverSandbox delegates workspace_dir to the active backend.

P1 — Sync-in race with concurrent exec:
Add a `synced_sessions` tracking set to SandboxRouter. The first exec caller performs sync_in and calls `mark_synced()`. Concurrent callers poll `is_synced()` and wait (up to 120 s) before proceeding to exec. Cleanup clears the sync marker via `clear_synced_session()`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
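The first-caller/waiter handshake can be sketched as follows (a thread-based stand-in for the real async code; the method names mirror the commit, the 5 ms poll interval is an assumption):

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

/// Tracks which sessions have completed sync_in.
#[derive(Clone, Default)]
pub struct SyncTracker(Arc<Mutex<HashSet<String>>>);

impl SyncTracker {
    pub fn mark_synced(&self, sk: &str) {
        self.0.lock().unwrap().insert(sk.to_string());
    }

    pub fn is_synced(&self, sk: &str) -> bool {
        self.0.lock().unwrap().contains(sk)
    }

    /// Concurrent callers spin here until the first caller finishes sync_in
    /// (the real code caps this at 120 s).
    pub fn wait_synced(&self, sk: &str, timeout: Duration) -> bool {
        let start = Instant::now();
        while start.elapsed() < timeout {
            if self.is_synced(sk) {
                return true;
            }
            thread::sleep(Duration::from_millis(5));
        }
        false
    }

    pub fn clear_synced_session(&self, sk: &str) {
        self.0.lock().unwrap().remove(sk);
    }
}

fn main() {
    let tracker = SyncTracker::default();
    let t2 = tracker.clone();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(20)); // simulate sync_in in progress
        t2.mark_synced("sess");
    });
    assert!(tracker.wait_synced("sess", Duration::from_secs(2)));
    tracker.clear_synced_session("sess"); // cleanup path
    assert!(!tracker.is_synced("sess"));
}
```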
@greptile review
P1 — Panic on non-ASCII output truncation:
Use floor_char_boundary() before truncate() in both the Vercel and Daytona backends, matching the pattern used throughout the rest of the codebase.

P1 — Daytona stderr always empty:
Wrap commands to redirect stderr to a temp file, then retrieve it in a follow-up exec call. This separates stdout/stderr despite the Daytona toolbox API combining them into a single result field.

P1 — Firecracker missing from backend registration:
Add "firecracker" to the prepare_core.rs backend registration loop, detected by firecracker binary presence at /usr/local/bin/firecracker or via config.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
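A stable-Rust equivalent of the pattern, for illustration (`str::floor_char_boundary` itself is nightly-only; walking the cut point back to a UTF-8 char boundary has the same effect):

```rust
/// Truncate to at most `max_bytes`, never cutting a multi-byte char in half.
fn truncate_utf8(s: &mut String, max_bytes: usize) {
    if s.len() <= max_bytes {
        return;
    }
    let mut idx = max_bytes;
    while !s.is_char_boundary(idx) {
        idx -= 1; // never underflows: index 0 is always a boundary
    }
    s.truncate(idx);
}

fn main() {
    let mut s = "héllo".to_string(); // 'é' occupies bytes 1..3
    truncate_utf8(&mut s, 2);        // byte 2 is mid-'é', so cut back to 1
    assert_eq!(s, "h");

    let mut t = "abc".to_string();
    truncate_utf8(&mut t, 10); // shorter than the cap: untouched
    assert_eq!(t, "abc");
}
```

A plain `String::truncate(2)` on `"héllo"` would panic here, which is exactly the bug the fix addresses.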
@greptile review
P1 — Workspace sync disabled when home_persistence=Off:
resolve_sync_workspace now always returns a path for isolated backends,
falling back to data_dir/sandbox/sync/{key} when home persistence is
disabled. Workspace sync is essential for remote backends to function.
P1 — Daytona workspace_dir mismatch with dynamic path:
Store the workspace directory from create_sandbox API response in a
static OnceLock<Mutex<String>>, returned by workspace_dir(). Sync now
targets the actual directory the API assigned, not a hardcoded default.
P2 — Vercel save button requires token for project/team ID updates:
Allow saving when any field (token, project ID, or team ID) has content.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@greptile review
P1 — Firecracker panic on non-ASCII output truncation:
Use floor_char_boundary() before truncate() in ssh_run(), matching the fix already applied to the Vercel and Daytona backends.

P1 — Daytona workspace_dir() leaks memory on every call:
Change LAST_DAYTONA_WORKSPACE from Mutex<String> to Mutex<&'static str>. Leak the string only once per distinct value (typically just one path across the process lifetime), then return the cached &'static str on subsequent calls.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a `provision_packages()` method to the Sandbox trait so all backends install the same default packages configured in moltis.toml, regardless of whether they use Docker, Vercel, Daytona, or Firecracker.

Trait method (`types.rs`): Default implementation uses `apt-get install` (works for Ubuntu-based backends: Daytona, Firecracker). Called once per session after ensure_ready and sync_in for isolated backends.

Vercel override (`vercel.rs`): Vercel sandboxes run Amazon Linux 2023, which uses `dnf`, not `apt-get`. Overrides provision_packages() with `sudo dnf install` and a debian_to_amzn_package() mapping function that translates common Debian package names to their Amazon Linux equivalents (or skips packages that have no equivalent).

Exec integration (`exec.rs`): After sync_in completes for isolated backends on first run, calls `backend.provision_packages(&id, &config.packages)`. Non-fatal: failures are warned but don't block execution.

This ensures the 170+ default packages (curl, git, python3, nodejs, build-essential, ffmpeg, imagemagick, etc.) are available in remote sandboxes, matching the local Docker experience.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
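The name translation can be sketched as follows (the specific entries below are illustrative assumptions, not the actual table in `vercel.rs`):

```rust
/// Hypothetical Debian → Amazon Linux 2023 name map.
/// `None` means "no dnf equivalent, skip the package".
fn debian_to_amzn_package(name: &str) -> Option<String> {
    match name {
        // AL2023 has no build-essential meta-package; gcc is a stand-in here
        "build-essential" => Some("gcc".to_string()),
        // apt-specific tooling has no dnf counterpart
        "apt-utils" | "software-properties-common" => None,
        // most names carry over unchanged
        other => Some(other.to_string()),
    }
}

fn main() {
    assert_eq!(debian_to_amzn_package("build-essential"), Some("gcc".to_string()));
    assert_eq!(debian_to_amzn_package("apt-utils"), None);
    assert_eq!(debian_to_amzn_package("curl"), Some("curl".to_string()));
}
```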
Implement `build_image()` for the Firecracker and Vercel backends so packages are installed once and cached, avoiding slow per-session apt-get/dnf on every sandbox start.

**Firecracker `build_image()`**:
- Boots a temporary VM from the base rootfs
- Installs all configured packages via apt-get
- Shuts down and saves the rootfs as a cached "golden image"
- Content-addressed filename (SHA256 of the package list)
- Future `ensure_ready()` copies from the pre-built rootfs instead of the bare one — instant boot with all packages ready

**Vercel `build_image()`**:
- Creates a temporary sandbox
- Installs packages via sudo dnf
- Takes a snapshot (Vercel's native snapshotting feature)
- Returns the snapshot_id as the "tag"
- Future `ensure_ready()` creates sandboxes from the snapshot — millisecond boot with packages pre-installed

**Exec flow update**:
- Skip `provision_packages()` when a pre-built image/snapshot was used (detected by comparing the resolved image to DEFAULT_SANDBOX_IMAGE)
- Only run runtime provisioning for fresh/bare sandboxes

The gateway's existing background pre-build logic at startup already calls `build_image()` for the default backend — no gateway changes needed. Firecracker and Vercel now benefit from the same caching infrastructure that Docker uses.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
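The content-addressed naming can be sketched as follows (the commit hashes the package list with SHA-256; std's `DefaultHasher` stands in here so the sketch has no dependencies, and the `moltis-golden-` prefix is hypothetical):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Cache key for a package set: sorting first makes the key independent of
/// the order packages appear in config.
fn golden_image_tag(packages: &[&str]) -> String {
    let mut sorted: Vec<&str> = packages.to_vec();
    sorted.sort_unstable();
    let mut hasher = DefaultHasher::new();
    sorted.hash(&mut hasher);
    format!("moltis-golden-{:016x}", hasher.finish())
}

fn main() {
    // same set, different order: same cached rootfs
    assert_eq!(
        golden_image_tag(&["git", "curl"]),
        golden_image_tag(&["curl", "git"])
    );
    // different set: different cache entry, so the image is rebuilt
    assert_ne!(golden_image_tag(&["curl"]), golden_image_tag(&["git"]));
}
```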
1. exec.rs: Only provision packages when sync_in succeeds. If sync
failed (can't connect to sandbox), skip provisioning. mark_synced
is still called unconditionally to unblock concurrent waiters —
the sandbox is ready for exec regardless of sync outcome.
2. daytona.rs: Replace `{ cmd ; }` shell group with `(cmd)` subshell
for stderr capture. Subshells safely tolerate any command content
including `}` characters, preventing shell injection.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Single-quote the working directory in the `cd` command to handle paths with spaces or shell metacharacters safely. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. firecracker.rs: Explicitly kill+wait the Firecracker process on SSH timeout instead of just dropping it. Tokio's Child does not kill on drop, so the VM would continue running as an orphan.
2. daytona.rs: Escape single quotes in the mkdir path using the standard shell pattern (replace ' with '\''). Prevents shell injection via crafted file paths.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
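The quoting fix in item 2 can be sketched as (function name is illustrative):

```rust
/// The standard POSIX pattern from the commit: wrap in single quotes and
/// rewrite each embedded quote as '\'' (close quote, escaped quote, reopen).
fn shell_single_quote(path: &str) -> String {
    format!("'{}'", path.replace('\'', r"'\''"))
}

fn main() {
    assert_eq!(shell_single_quote("a b"), "'a b'");
    assert_eq!(shell_single_quote("it's"), r"'it'\''s'");
    // metacharacters are inert inside single quotes
    assert_eq!(shell_single_quote("$(rm -rf /)"), "'$(rm -rf /)'");
}
```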
Restructure the Sandbox settings page with 4 tabs: - General: backend banner, default image, shared home folder - Vercel: Vercel Sandbox configuration (token, project ID, team ID) - Daytona: Daytona configuration (API key, API URL for self-hosted) - Containers & Images: running containers, cached images, build custom Uses the existing TabBar component. Each tab has its own content function. Old RemoteBackendsSection removed (replaced by dedicated Vercel and Daytona tabs).
…ages
- Make project_id required for Vercel (API rejects without it):
- UI: placeholder says "required", save button disabled without it
- Rust: create_sandbox() returns clear error if project_id missing
- Docs: updated to show project_id as required
- Improve image build error messages:
- Detect Docker/Podman daemon unavailability and show clear message
instead of generic "Failed to build sandbox image"
- Includes the underlying error detail in the response
Users can now choose their preferred sandbox backend from a dropdown in Settings → Sandboxes → General (e.g., switch from Apple Container to Docker or Podman on macOS). The selection is saved to moltis.toml and takes effect on restart. The dropdown shows all available backends: auto, Docker, Podman, Apple Container, WASM, Vercel, Daytona, Restricted Host. Also exposes the current `backend` config value in the remote-backends GET response for the UI to read.
- image_cache.rs: log CLI name, tag, and context before build; log exit code, stderr, and stdout on failure at WARN level
- api.rs: log build start (DEBUG) and build result (INFO/WARN) in the API handler
- Better error message when the CLI binary is not found: "is docker installed and in PATH?"

These logs will show in the server output to diagnose build failures that previously only showed "Failed to build sandbox image" in the UI.
When the primary container CLI (e.g. podman) fails with a daemon connection error, DockerImageBuilder now automatically tries the alternative CLI (docker). This fixes image building on macOS where Apple Container is the sandbox backend but podman's daemon may not be running, while Docker Desktop is available. Also adds DockerImageBuilder::with_cli() constructor and run_build() helper for explicit CLI selection.
The "Build custom image" web UI handler now respects the configured sandbox backend when choosing the container CLI: - apple-container → uses "docker" (Apple Container delegates builds to Docker) - docker → uses "docker" - podman → uses "podman" - auto/other → uses auto-detected CLI (with daemon fallback) Previously it always used container_cli() which returned "podman" even when podman wasn't running, causing builds to fail despite Docker/OrbStack being available. The built image now lands in the correct image store that the sandbox runtime (Apple Container) can access.
Same fix as the build handler — use docker when backend is apple-container, instead of podman (which isn't running). This fixes images not showing in the web UI Containers & Images tab.
…r_backend()

Add a `for_backend(backend_str)` constructor that maps the sandbox backend config to the correct container CLI. Replace all inline `DockerImageBuilder::new()` and `container_cli()` calls in:
- web/api.rs: list, delete, prune, check-packages, build handlers
- gateway/channel_events: /sandbox image command handlers

This ensures all image operations (list, build, delete, prune) use the correct CLI for the active backend (docker for apple-container, podman when explicitly configured, etc.) with automatic fallback. Also removes a duplicate remove_dir_all call.
The sandbox image dropdown in the chat panel now shows two sections:
- **Backend**: lists all available/configured backends (Docker, Podman, Apple Container, Vercel, Daytona, etc.) with a cloud badge for remote
- **Image**: lists cached container images (as before)

Selecting a backend sends `sandboxBackend` via sessions.patch, enabling per-session backend routing. The selection is persisted and restored on session switch.

New API endpoint: `GET /api/sandbox/available-backends`
Returns which backends are installed/configured:
- Local: checks if the docker/podman/container CLI is available
- Remote: checks if VERCEL_TOKEN or DAYTONA_API_KEY is set
- Each entry has id, label, kind (local/remote), and an available flag

Also exports `is_cli_available()` from the moltis-tools sandbox module for use by the web crate.
…eded Replace the single "Active backend" label + dropdown with a visual list of all available backends (fetched from /api/sandbox/available-backends). Each backend shows as a pill/chip with the active one highlighted. The "default for new sessions" dropdown only shows backends that are actually available. No restart required — per-session backend selection in the chat panel works immediately via the multi-backend router. Also fetches backend list on page init and resets on teardown.
The Vercel/Daytona 'configured' badge in the UI now shows as configured when credentials are set via environment variables (VERCEL_TOKEN, DAYTONA_API_KEY), not just when written to moltis.toml. This matches the runtime behavior where env vars are sufficient to enable the backends.
…ual checks

Add ENV_ALIASES to the config crate's env override system. Third-party env vars like VERCEL_TOKEN, DAYTONA_API_KEY, etc. are now automatically mapped to their config equivalents during config loading:

- VERCEL_TOKEN → tools.exec.sandbox.vercel_token
- VERCEL_OIDC_TOKEN → tools.exec.sandbox.vercel_token
- VERCEL_PROJECT_ID → tools.exec.sandbox.vercel_project_id
- VERCEL_TEAM_ID → tools.exec.sandbox.vercel_team_id
- DAYTONA_API_KEY → tools.exec.sandbox.daytona_api_key
- DAYTONA_API_URL → tools.exec.sandbox.daytona_api_url
- DAYTONA_TARGET → tools.exec.sandbox.daytona_target

Aliases only apply when the config field is empty/null (explicit config takes precedence). This means setting VERCEL_TOKEN in the environment is sufficient — no need to also write it to moltis.toml.

Removed all manual std::env::var() checks for these variables from:
- crates/tools/src/sandbox/router.rs
- crates/web/src/api.rs
- crates/gateway/src/server/prepare_core/sandbox.rs

Everything now goes through the config crate as the single source of truth.
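The precedence rule can be sketched as follows (a plain map stands in for real process-environment access; the function name is illustrative):

```rust
use std::collections::HashMap;

/// An env alias fills a config field only when the field is still empty —
/// explicit config always wins.
fn apply_alias(
    config_value: Option<String>,
    env: &HashMap<String, String>,
    alias: &str,
) -> Option<String> {
    config_value.or_else(|| env.get(alias).cloned())
}

fn main() {
    let mut env = HashMap::new();
    env.insert("VERCEL_TOKEN".to_string(), "tok-from-env".to_string());

    // empty config field: the env alias applies
    assert_eq!(
        apply_alias(None, &env, "VERCEL_TOKEN"),
        Some("tok-from-env".to_string())
    );
    // explicit config: takes precedence over the env var
    assert_eq!(
        apply_alias(Some("tok-from-toml".to_string()), &env, "VERCEL_TOKEN"),
        Some("tok-from-toml".to_string())
    );
    // alias unset and config empty: field stays empty
    assert_eq!(apply_alias(None, &env, "DAYTONA_API_KEY"), None);
}
```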
Verify VERCEL_TOKEN and DAYTONA_API_KEY aliases correctly populate config fields, and that explicit config values take precedence.
- Badge: use border-based styling (border-color + text color) instead of solid background + white text. Works in both light and dark mode. - Token inputs: show "•••••••• (set via env or config)" as placeholder when the backend is already configured, making it clear the token is set even though the value isn't displayed (secrets are never returned by the API).
When VERCEL_TOKEN or DAYTONA_API_KEY are set via env vars: - Token input is disabled (greyed out) - Placeholder shows "•••••••• (set via VERCEL_TOKEN env var)" - Hint text explains: "Token managed by environment variable. Remove VERCEL_TOKEN from env to configure here." API response now includes `from_env: true/false` for each backend, allowing the UI to distinguish between env-sourced and config-sourced credentials.
- Daytona badge: fix white-on-white by using border style (same as Vercel) - Backend pills in General tab: now clickable buttons that set the default backend on click (removed the dropdown select) - Prune all button: use primary button class (provider-btn) for visibility
- Backend pills: check against defaultBackendId signal (updates immediately on click, no refresh needed) - Fixed: clicking a pill now visually updates the active state - Prune all: use provider-btn-secondary provider-btn-sm (smaller) - Don't show "_global configuration saved" message for backend switch
The reviewed change to the labels map:

```diff
-const BACKEND_LABELS: Record<string, string> = {
+const BACKEND_LABELS: Record<SandboxBackendId, string> = {
```
…kends

Remote backends (Daytona, Vercel) have different workspace paths than the generic /home/sandbox used by Docker/Apple Container:
- Daytona: /home/daytona
- Vercel: /vercel/sandbox

The exec flow passes /home/sandbox as the default working_dir for all sandboxed backends. Remote backends now remap this to their actual workspace path in their exec() methods. This fixes commands returning exit_code=-1 on Daytona, because cd /home/sandbox failed (the directory doesn't exist).
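The remap can be sketched as follows (subpath handling is an assumption — the commit only describes remapping the workspace root itself):

```rust
/// Substitute the backend's own workspace root for the generic default.
fn remap_working_dir(requested: &str, backend_workspace: &str) -> String {
    const DEFAULT: &str = "/home/sandbox";
    if requested == DEFAULT {
        backend_workspace.to_string()
    } else if let Some(rest) = requested.strip_prefix("/home/sandbox/") {
        // assumed: subdirectories of the default root are remapped too
        format!("{backend_workspace}/{rest}")
    } else {
        requested.to_string() // explicit non-default paths pass through
    }
}

fn main() {
    assert_eq!(remap_working_dir("/home/sandbox", "/vercel/sandbox"), "/vercel/sandbox");
    assert_eq!(remap_working_dir("/home/sandbox/proj", "/home/daytona"), "/home/daytona/proj");
    assert_eq!(remap_working_dir("/tmp", "/home/daytona"), "/tmp");
}
```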
The Settings → Terminal page now has a target selector dropdown that shows "Host" (default) plus any running sandbox containers. Selecting a container opens a shell inside it via docker/podman/container exec.

Backend changes:
- pty.rs: new container_terminal_command_builder() that spawns `<cli> exec -it <container> bash`
- spawn_host_terminal_runtime() accepts an optional container_target param
- WebSocket query param: ?container=<name>
- handlers.rs: passes the container query param through
- types.rs: HostTerminalWsQuery gains a container field

Frontend changes:
- TerminalPage.tsx: target selector dropdown populated from GET /api/sandbox/containers (running containers only)
- Changing the selector reconnects the WebSocket with the container param
- Shows a container emoji prefix for container options
Summary
Add support for remote and multi-backend sandboxes, enabling sandboxed command execution on cloud deployments where Docker-in-Docker is unavailable (DigitalOcean 1-click, Fly.io, Render).
Inspired by sandcastle's provider architecture (Docker, Podman, Vercel Firecracker, Daytona), this PR extends moltis's sandbox system with:
Commits
- **Multi-backend router** — Refactor `SandboxRouter` from single-backend to a `HashMap<String, Arc<dyn Sandbox>>` registry with per-session `backend_overrides`, `resolve_backend()`, `register_backend()`, `available_backends()`. Session metadata column + API + persistence.
- **Workspace sync** — `sync.rs` module with tar-based `sync_in()`/`sync_out()` for isolated backends. Integrated into the exec flow (sync-in on first run) and cleanup (sync-out before destroy).
- **Vercel Sandbox** — Full `Sandbox` trait impl using the Vercel REST API: create sandbox, NDJSON command streaming, gzipped tar file writes, raw file reads, stop. Feature-gated `vercel-sandbox` (default-on). Auto-detected from `VERCEL_TOKEN`.
- **Daytona Sandbox** — Full `Sandbox` trait impl using the Daytona REST API: workspace creation, toolbox exec, multipart file upload/download, delete. Auto-detected from `DAYTONA_API_KEY`.
- **Local Firecracker** — SSH-based exec over TAP networking, Firecracker REST API over a Unix socket for VM lifecycle, copy-on-write rootfs, automatic /30 subnet allocation. Linux-only.
- **Web UI settings** — `RemoteBackendsSection` in the Sandbox settings page with Vercel and Daytona configuration cards. `SandboxBackendId` typed enum on both the Rust and TypeScript sides.

Validation
Completed
- `cargo fmt --all -- --check`
- `cargo clippy -p moltis-tools -p moltis-sessions -p moltis-gateway --lib -- -D warnings`
- `cargo test -p moltis-tools -p moltis-sessions -p moltis-gateway --lib` (1382 tests, 0 failures)
- `biome check --write` on changed TS files

Remaining
- `./scripts/local-validate.sh` (full CI validation)

Manual QA
- `VERCEL_TOKEN`, configure `backend = "vercel"` or leave `auto` on a cloud deploy → verify sandbox creates, exec works, cleanup stops the VM
- `DAYTONA_API_KEY`, configure `backend = "daytona"` → verify sandbox lifecycle
- `sandboxBackend` in session patch
- `VERCEL_TOKEN` → verify auto-detection picks the Vercel backend