This documentation is version-locked to OpenShell v0.0.36, NemoClaw v0.0.31, and OpenClaw 2026.4.27.
OpenShell Control is a local, development-stage dashboard for operating OpenShell sandboxes and their OpenClaw gateway dashboards.
It is currently built for active development and lab use. It includes a simple password gate, but it is not a hardened production control plane yet.
- Development software. Expect fast-moving APIs and sharp edges.
- Designed to run near the OpenShell gateway host.
- Uses local shell/CLI access for sandbox lifecycle, network policy grants, file transfer, and OpenClaw dashboard proxying.
- Authentication is intentionally simple so a future dev team can replace it with a real identity provider.
- View live OpenShell sandbox inventory.
- Create, destroy, and restart sandboxes.
- Launch a sandbox-specific OpenClaw Gateway Dashboard through the local proxy.
- Approve or reject pending OpenShell network permission requests.
- Configure per-sandbox inference routes for Ollama, NIM, vLLM, and external endpoints.
- Poll Ollama for available models.
- Search the official MCP Registry, install MCP server definitions, and manage preconfigured/custom MCP servers.
- Enforce per-sandbox MCP access through a control-plane broker so sandboxes only see allowed capabilities.
- Upload files into sandboxes and download files back out.
- Open an operator terminal for the selected sandbox.
- Simple local login, setup account, forgot password, and recovery token flow.
Tested on the current development host with:
- Ubuntu/Linux host
- Node.js v22.22.2
- npm 10.9.7
- Docker 29.1.3
- OpenShell CLI 0.0.36
- NemoClaw v0.0.31
- OpenClaw 2026.4.27
Minimum expected versions:
- Node.js 20+
- npm 10+
- Docker 24+
- OpenShell CLI and gateway compatible with OpenShell 0.0.36
The app uses Next.js 14.2.35, React 18.3.1, TypeScript, Tailwind CSS, ws, node-pty, and the official MCP TypeScript SDK.
Install and verify:
```bash
node -v
npm -v
docker ps
openshell --version
```

Optional, but useful for stdio MCP servers installed through the broker:

```bash
npx --version
uvx --version
```

OpenShell must already be installed and able to reach its gateway. On this host the active gateway metadata lives under:

```
~/.config/openshell/gateways/
```

The installer does not create an OpenShell gateway for you. Start or connect OpenShell first, then install this dashboard.
From the repository root:
```bash
./install.sh
```

The installer:
- checks Node, npm, Docker, OpenShell CLI availability, sandbox inventory reachability, and default port occupancy;
- installs or verifies the MCP package runners used by bundled stdio servers: `npx` and `uvx`;
- installs npm dependencies with `npm ci` when `package-lock.json` exists;
- runs a non-blocking `npm audit` summary;
- creates `.env.local` if needed;
- generates a local dashboard password, signing secret, and recovery token if they are missing;
- adds MCP broker defaults for token TTL and request timeout;
- runs `npm run build` as a verification step.
It refuses to run as root unless --allow-root is supplied. It does not install or manage a systemd service.
Options:
```bash
./install.sh --no-build
./install.sh --no-audit
./install.sh --clean-next
./install.sh --allow-root
./install.sh --start
./install.sh --help
```

After install, read the generated local password from:

```bash
grep OPENSHELL_CONTROL_PASSWORD .env.local
```

Start the dashboard:

```bash
npm run dev
```

Open:

```
http://localhost:3000
```
Default ports:
- 3000: dashboard HTTP server
- 3001: OpenClaw dashboard websocket sidecar
- 3011: operator terminal upstream
This project currently uses a simple local password and signed HTTP-only cookie.
Configuration keys:
```
OPENSHELL_CONTROL_PASSWORD=...
OPENSHELL_CONTROL_AUTH_SECRET=...
OPENSHELL_CONTROL_RECOVERY_TOKEN=...
```

Pages:
- /login
- /setup-account
- /forgot-password
There is no email sender. Forgot-password uses OPENSHELL_CONTROL_RECOVERY_TOKEN from .env.local, which means it is a host-admin recovery flow. Anyone who can read .env.local can reset the dashboard password.
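The signed HTTP-only cookie can be illustrated with a minimal HMAC sketch. `signCookie`, `verifyCookie`, and the `value.mac` payload format here are hypothetical, not the dashboard's actual implementation; the real secret comes from OPENSHELL_CONTROL_AUTH_SECRET.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign a cookie value with HMAC-SHA256 so the server can detect tampering.
function signCookie(value: string, secret: string): string {
  const mac = createHmac("sha256", secret).update(value).digest("hex");
  return `${value}.${mac}`;
}

// Verify a signed cookie; returns the original value, or null if invalid.
function verifyCookie(signed: string, secret: string): string | null {
  const dot = signed.lastIndexOf(".");
  if (dot < 0) return null;
  const value = signed.slice(0, dot);
  const mac = Buffer.from(signed.slice(dot + 1), "hex");
  const expected = createHmac("sha256", secret).update(value).digest();
  if (mac.length !== expected.length || !timingSafeEqual(mac, expected)) return null;
  return value;
}
```

The constant-time comparison (`timingSafeEqual`) matters even in a lab setup: naive string comparison leaks how many leading bytes of the MAC matched.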
After changing .env.local, restart the server:
```bash
pkill -f 'node server.mjs|npm run dev' || true
npm run dev
```

The dashboard shells out to the OpenShell CLI for several operations:

```
openshell list
openshell sandbox exec
openshell sandbox delete
openshell rule get
openshell rule approve
openshell rule reject
```
OpenClaw dashboard access is loopback-only inside the host/sandbox context, so the UI uses local proxy routes:
- /api/openshell/dashboard/proxy
- /api/openshell/instances/[instanceId]/dashboard/proxy
The custom server in server.mjs also handles websocket upgrades for:
- operator terminal websocket traffic;
- OpenClaw dashboard websocket traffic.
Those upgrade paths are protected by the same auth cookie as the HTTP routes.
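As a sketch of that gating, an upgrade handler only needs to parse the Cookie header and validate the session before completing the handshake. The cookie name `openshell_control_session` and the `isValidSession` callback are placeholders, not the dashboard's real identifiers:

```typescript
// Minimal Cookie-header parsing used when gating websocket upgrades.
function parseCookies(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const part of header.split(";")) {
    const eq = part.indexOf("=");
    if (eq < 0) continue;
    const name = part.slice(0, eq).trim();
    if (name) out[name] = decodeURIComponent(part.slice(eq + 1).trim());
  }
  return out;
}

// Decide whether an upgrade request may proceed, given a session validator.
// A real server would write "HTTP/1.1 401" to the socket and destroy it on failure.
function upgradeAllowed(
  cookieHeader: string | undefined,
  isValidSession: (token: string) => boolean,
): boolean {
  const token = parseCookies(cookieHeader ?? "")["openshell_control_session"];
  return token !== undefined && isValidSession(token);
}
```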
The Wizards page includes Spawn a Controller Node for preparing a small OpenShell Control install on another VPS. This is intended for topologies where the browser-facing dashboard and the OpenShell gateway/sandbox hosts are not the same machine.
The wizard supports two deployment paths:
- Manual Deploy generates an SSH/bootstrap script, a controller `.env` block, the parent-controller URL, a node shared secret, OpenShell/OpenClaw routing settings, and readiness checks.
- Autodeploy connects to the remote host over SSH using a one-time password supplied in the browser, optionally runs the bootstrap with sudo, installs a systemd service when available, and returns bounded stdout/stderr plus the observed host-key fingerprint.
Autodeploy does not store the SSH password or write it into generated scripts. For host-key safety, provide an expected SHA256 host-key fingerprint or explicitly select trust-on-first-deploy on a trusted management network. After the controller is running, use its local OpenShell CLI context to manage the sandboxes reachable from that VPS.
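To obtain the expected fingerprint ahead of time, you can compute it from the base64 host-key blob (the third field of an `ssh-keyscan` output line). This helper is a sketch of the standard OpenSSH SHA256 fingerprint format, not part of the wizard itself:

```typescript
import { createHash } from "node:crypto";

// Compute an OpenSSH-style SHA256 fingerprint from a base64 public-key blob,
// e.g. the third field of a line printed by `ssh-keyscan -t ed25519 <host>`.
function hostKeyFingerprint(keyBase64: string): string {
  const digest = createHash("sha256")
    .update(Buffer.from(keyBase64, "base64"))
    .digest("base64");
  // OpenSSH omits the trailing base64 padding in fingerprints.
  return "SHA256:" + digest.replace(/=+$/, "");
}
```

Compare the result against the fingerprint Autodeploy reports before trusting the host.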
Inference endpoint configuration is development-stage.
The UI supports per-sandbox route profiles and can apply them live to OpenClaw where possible. Depending on the sandbox and provider, changes may require a sandbox restart to fully take effect inside the container.
Supported provider categories in the UI:
- Ollama
- NVIDIA NIM
- vLLM
- external HTTP-compatible endpoints
OpenShell Control can install and broker MCP servers without disclosing the full MCP inventory to sandboxes.
The MCP page supports:
- registry search with paged results;
- preconfigured servers, including Blender MCP;
- custom stdio or HTTP MCP servers;
- global enable/disable state;
- per-sandbox Disabled, Allow All, and Allow Only access policies.
The sandbox page shows an MCP indicator on each sandbox card. A sandbox lights up when at least one MCP server is allowed by policy.
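The three policy modes can be modeled as a small discriminated union; this shape is illustrative, not the broker's real schema. The sandbox-card indicator then reduces to "does any enabled server pass this check":

```typescript
// Illustrative policy model for the three per-sandbox MCP access modes.
type McpPolicy =
  | { mode: "disabled" }
  | { mode: "allow-all" }
  | { mode: "allow-only"; servers: string[] };

// Decide whether a sandbox may see a given MCP server under its policy.
function isServerAllowed(policy: McpPolicy, server: string): boolean {
  switch (policy.mode) {
    case "disabled":
      return false;
    case "allow-all":
      return true;
    case "allow-only":
      return policy.servers.includes(server);
  }
}
```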
For sandbox handoff, OpenShell Control writes:
/sandbox/openshell_control_mcp.md
That file contains only the MCP broker endpoints and a sandbox-scoped token. It does not list denied servers, launch commands, credentials, or registry metadata. The broker validates the token and enforces access policy on every capabilities and tool-call request.
Broker endpoints:
/api/mcp/broker/capabilities
/api/mcp/broker/call
Broker configuration keys:
```
MCP_BROKER_TOKEN_TTL_HOURS=168
MCP_BROKER_REQUEST_TIMEOUT_MS=45000
OPENSHELL_CONTROL_MCP_BROKER_URL=http://localhost:3000/api/mcp/broker
```

OPENSHELL_CONTROL_MCP_BROKER_URL is optional. Set it only when you need to override discovery. By default the dashboard discovers the active OpenShell Docker gateway and the selected sandbox's proxy environment before writing /sandbox/openshell_control_mcp.md.
Stdio MCP servers run on the control host. The installer verifies npx, creates or reuses a Python virtual environment, installs uvx there, and persists that venv path in .env.local so the MCP broker can launch uvx servers later. Custom MCP server launch commands, such as node or python, must also be available to the dashboard process.
The file transfer UI is scoped to safe sandbox paths:
/sandbox/tmp
The default max transfer size is 128 MiB. Override with:
```
SANDBOX_FILE_TRANSFER_MAX_BYTES=134217728
```

Development commands:

```bash
npm run dev
npm run lint
npx tsc --noEmit
npm run build
```

After running npm run build during development, restart cleanly:
```bash
pkill -f 'node server.mjs|npm run dev' || true
rm -rf .next
npm run dev
```

Copy or edit .env.local:

```bash
cp .env.example .env.local
```

Common keys:
```
NEXT_PUBLIC_DASHBOARD_PORT=3000
NEXT_PUBLIC_API_BASE=/api
NEXT_PUBLIC_ENABLE_SANDBOX_OPERATIONS=true
OPEN_SHELL_CONTAINER=openshell-cluster-nemoclaw
OPENSHELL_GATEWAY=nemoclaw
# For containerized CLI runs, when supported by the installed OpenShell/NemoClaw versions:
# OPENSHELL_GATEWAY_HOST=host.docker.internal
# OPENSHELL_GATEWAY_PORT=8080
# OPENSHELL_GATEWAY_URL=http://host.docker.internal:8080
OPENSHELL_CONTROL_PASSWORD=change-this-password
OPENSHELL_CONTROL_AUTH_SECRET=change-this-random-secret
OPENSHELL_CONTROL_RECOVERY_TOKEN=change-this-recovery-token
MCP_BROKER_TOKEN_TTL_HOURS=168
MCP_BROKER_REQUEST_TIMEOUT_MS=45000
```

The controller also accepts an OpenShell config-file form at ~/.config/openshell/gateway.json or ~/.config/openshell/config.json:
```json
{ "gateway": { "host": "host.docker.internal", "port": 8080 } }
```

Those values are translated into OPENSHELL_GATEWAY_HOST, OPENSHELL_GATEWAY_PORT, and OPENSHELL_GATEWAY_URL for controller-launched OpenShell/NemoClaw child processes.
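That translation can be sketched as a pure function over the config shape shown above. The function name and return shape are illustrative; only the env var names and the JSON fields come from the document:

```typescript
// Translate the OpenShell config-file form into the env vars consumed by
// controller-launched child processes.
function gatewayEnv(cfg: { gateway?: { host?: string; port?: number } }): Record<string, string> {
  const host = cfg.gateway?.host;
  const port = cfg.gateway?.port;
  if (!host || !port) return {}; // incomplete config: set nothing
  return {
    OPENSHELL_GATEWAY_HOST: host,
    OPENSHELL_GATEWAY_PORT: String(port),
    OPENSHELL_GATEWAY_URL: `http://${host}:${port}`,
  };
}
```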
This is not production hardened.
Known limitations:
- single shared local password;
- no user accounts or roles;
- no email reset flow;
- no rate limiting;
- no audit log persistence beyond process/container logs;
- local recovery token can reset the password;
- assumes a trusted operator host and trusted local filesystem;
- MCP stdio servers run as child processes on the control host, so only install trusted MCP servers;
- MCP broker tokens grant sandbox-scoped access until expiry or rotation.
Before exposing this outside a trusted lab network, replace auth with a real identity provider, add role-based access control, add audit logging, use TLS, and review every shell-out path.
Check OpenShell:
```bash
openshell --version
openshell list
docker ps | grep openshell
```

Check auth:

```bash
grep OPENSHELL_CONTROL_PASSWORD .env.local
grep OPENSHELL_CONTROL_RECOVERY_TOKEN .env.local
```

Check dashboard:

```bash
npm run lint
npx tsc --noEmit
npm run build
```

If the UI behaves oddly after a production build:

```bash
pkill -f 'node server.mjs|npm run dev' || true
rm -rf .next
npm run dev
```

Internal development prototype. Add the appropriate license before distribution.