Deploy OpenClaw to an existing OpenShift namespace.
- `oc` CLI installed
- `openssl` installed
- Access to an existing OpenShift namespace (edit role)
- An API key for at least one LLM provider (Anthropic, OpenAI, Gemini, or OpenRouter)
```bash
export OPENCLAW_NAMESPACE="my-namespace"
export ANTHROPIC_API_KEY="sk-ant-..."   # or OPENAI_API_KEY, GEMINI_API_KEY, OPENROUTER_API_KEY
./deploy.sh --kubeconfig /path/to/kubeconfig --show-token
```

The script creates two Secrets (gateway token in `openclaw-secrets`, API keys in `openclaw-proxy-secrets`), applies the Kustomize manifests, waits for the rollout, and prints the Route URL. API keys are only mounted in the credential proxy pod — the OpenClaw pod never receives them.
Open the Route URL printed by the deploy script (e.g. https://openclaw-my-namespace.apps.cluster.example.com).
On first connection from a new browser you'll see "pairing required". This is OpenClaw's device authentication — remote connections require a one-time approval.
With the browser tab open (so the pairing request stays active), run:
```bash
# List pending pairing requests
oc exec -n $OPENCLAW_NAMESPACE deployment/openclaw -- \
  node /app/dist/index.js devices list

# Approve by request ID (copy from the Pending table above)
oc exec -n $OPENCLAW_NAMESPACE deployment/openclaw -- \
  node /app/dist/index.js devices approve <requestId>
```

Refresh the browser after approval. The device is remembered — you won't need to pair again unless you clear browser data or switch browsers.
Paste the gateway token into the Control UI. If you deployed with `--show-token`, it was printed in the terminal. Otherwise retrieve it:
```bash
oc get secret openclaw-secrets -n $OPENCLAW_NAMESPACE \
  -o jsonpath='{.data.OPENCLAW_GATEWAY_TOKEN}' | base64 -d && echo
```

To add or rotate API keys later, re-export the keys and re-run the deploy with `--create-secret`:

```bash
export OPENCLAW_NAMESPACE="my-namespace"
export OPENAI_API_KEY="sk-..."
./deploy.sh --create-secret
./deploy.sh
```

Existing keys and the gateway token are preserved — only the keys you provide are updated.
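The gateway token itself is just a random secret. `deploy.sh` presumably generates it with `openssl` (listed in the prerequisites — the exact format is an assumption here); a minimal sketch of generating an equivalent token and round-tripping it through the base64 encoding that Secrets use:

```shell
# Generate a 32-byte random token as 64 hex characters
# (assumed format; check deploy.sh for the actual generation logic).
TOKEN=$(openssl rand -hex 32)

# Secret values are stored base64-encoded; this round-trip mirrors the
# `oc get secret ... | base64 -d` retrieval shown above.
ENCODED=$(printf '%s' "$TOKEN" | base64)
printf '%s' "$ENCODED" | base64 -d
echo
```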
OpenClaw supports Gemini models through Google Vertex AI using the `google-vertex` provider. This uses GCP Application Default Credentials (ADC) instead of direct API keys.
In the GCP Console (or with gcloud), create a service account with the Vertex AI User role and download a JSON key file.
```bash
export OPENCLAW_NAMESPACE="my-namespace"
export GCP_SA_KEY_FILE="/path/to/sa-key.json"
export GCP_PROJECT_ID="my-gcp-project"
export GCP_LOCATION="us-central1"   # optional, defaults to us-central1
./deploy.sh --kubeconfig /path/to/kubeconfig --show-token
```

The script creates a separate `openclaw-gcp-credentials` secret from the key file, and the deployment mounts it at the path referenced by `GOOGLE_APPLICATION_CREDENTIALS`.
The default model is set to `google-vertex/gemini-3-flash-preview` in `manifests/configmap.yaml`. Available Gemini models on Vertex AI include:

- `google-vertex/gemini-3-flash-preview` — fast, cost-effective (default)
- `google-vertex/gemini-3.1-pro-preview` — most capable
You can switch models in the Control UI or by editing the configmap.
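As an illustration, editing the configmap might look like the following sketch — the key layout inside `manifests/configmap.yaml` is an assumption here, so check the actual file for the real structure:

```yaml
# manifests/configmap.yaml (sketch — the surrounding key names are assumptions)
# Switch the default model from the flash preview to the pro preview:
defaults:
  model: google-vertex/gemini-3.1-pro-preview   # was: google-vertex/gemini-3-flash-preview
```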
Then redeploy:
```bash
./deploy.sh
```

If you don't have a service account yet, you can test with your personal Application Default Credentials:
```bash
gcloud auth application-default login

export OPENCLAW_NAMESPACE="my-namespace"
export GCP_SA_KEY_FILE="$HOME/.config/gcloud/application_default_credentials.json"
export GCP_PROJECT_ID="my-gcp-project"   # required — ADC user credentials don't contain a project ID
export GCP_LOCATION="us-central1"        # optional, defaults to us-central1
./deploy.sh --kubeconfig /path/to/kubeconfig --show-token
```

`GCP_PROJECT_ID` is required because personal ADC credentials don't contain a project ID (unlike service account keys, which do). `GCP_LOCATION` sets the Vertex AI region. Switch to a proper service account key for production.
If OpenClaw is already deployed and you want to add Vertex AI support later:
```bash
export OPENCLAW_NAMESPACE="my-namespace"
export GCP_SA_KEY_FILE="/path/to/sa-key.json"
./deploy.sh --create-secret
./deploy.sh
```

All API keys and integration tokens are isolated from the OpenClaw pod. This mirrors how OpenShell uses `inference.local` — a proxy endpoint that the agent calls instead of the real API, with credentials injected server-side.
OpenClaw's `models.providers.*.baseUrl` in `openclaw.json` points each LLM provider at the proxy (e.g., `http://openclaw-proxy:8080/gemini`). A dummy `apiKey` value satisfies OpenClaw's auth resolver without exposing real credentials. The proxy replaces this with the real key before forwarding upstream.
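Concretely, a provider entry in `openclaw.json` could look like this sketch — only `models.providers`, `baseUrl`, and `apiKey` are named in this document, so any other structure is an assumption:

```json
{
  "models": {
    "providers": {
      "gemini": {
        "baseUrl": "http://openclaw-proxy:8080/gemini",
        "apiKey": "dummy-not-a-real-key"
      }
    }
  }
}
```

The dummy key never leaves the cluster: the proxy strips it and substitutes the real credential before forwarding.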
```
┌─────────────────────────────┐        ┌───────────────────────────────────┐
│ OpenClaw Pod                │        │ Proxy Pod                         │
│                             │        │                                   │
│ No real API keys.           │──────> │ Has API keys (proxy-secrets).     │
│ baseUrl -> proxy:8080       │ :8080  │ Replaces dummy key with real.     │
│ apiKey = dummy placeholder  │        │ Forwards to real API endpoints.   │
└─────────────────────────────┘        └──────────────┬────────────────────┘
        │                                             │
        │ NetworkPolicy:                              │ Allowed: HTTPS to
        │ egress only to proxy + DNS                  │ api.anthropic.com,
        │                                             │ api.openai.com, etc.
        x blocked: direct internet                    │
                                                      v
                                             External LLM APIs
```
How it protects credentials:
| Layer | Protection |
|---|---|
| Secret split | openclaw-secrets has the gateway token only. openclaw-proxy-secrets has all API keys and is mounted only in the proxy pod. |
| Provider baseUrl redirect | openclaw.json sets each provider's baseUrl to the proxy. OpenClaw's SDK calls the proxy instead of the real API. This is equivalent to OpenShell's inference.local mechanism. |
| Credential injection | The proxy's nginx config replaces the dummy auth header with real credentials per upstream (e.g., x-api-key for Anthropic, Authorization: Bearer for OpenAI). |
| NetworkPolicy | The OpenClaw pod's egress is restricted to the proxy Service and DNS. Even if credentials were present, they could not be exfiltrated. |
| L7 method filtering | Each proxy endpoint restricts HTTP methods (e.g., LLM APIs allow POST only; GitHub allows GET/HEAD/OPTIONS only). |
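The NetworkPolicy layer can be sketched as follows — the labels and port numbers are assumptions, not the actual manifest, which lives under `manifests/`:

```yaml
# Sketch: pin the OpenClaw pod's egress to the proxy Service plus DNS.
# Selectors and ports are illustrative; consult the real manifest.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openclaw-egress
spec:
  podSelector:
    matchLabels:
      app: openclaw          # assumed pod label
  policyTypes: ["Egress"]
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: openclaw-proxy   # assumed proxy label
      ports:
        - port: 8080
    - ports:                 # DNS resolution
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

Because no rule matches the public internet, any attempt to reach an external API directly from the OpenClaw pod is dropped.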
| Integration | Proxy path | Auth header | Methods allowed | Status |
|---|---|---|---|---|
| Anthropic | `/anthropic/` | `x-api-key` | POST | Working via `models.providers` |
| OpenAI | `/openai/` | `Authorization: Bearer` | POST | Working via `models.providers` |
| Gemini | `/gemini/` | `x-goog-api-key` | POST | Working via `models.providers` |
| OpenRouter | `/openrouter/` | `Authorization: Bearer` | POST | Working via `models.providers` |
| GitHub API | `/github/` | `Authorization: token` | GET, HEAD, OPTIONS | Proxy ready, needs investigation |
| Telegram Bot | `/telegram/` | Token in URL path | POST | Proxy ready, needs investigation |
LLM providers are fully working. OpenClaw's models.providers.*.baseUrl redirects all model API traffic through the proxy.
Third-party services (Telegram, GitHub, etc.) have proxy routes ready with credential injection, but OpenClaw's built-in channel/tool integrations do not yet support a baseUrl override to redirect them through the proxy. This requires further investigation. However, third-party service integrations can be configured via skills — a skill's instructions can direct the agent to call the proxy URL (e.g., http://openclaw-proxy:8080/telegram/sendMessage) instead of the upstream API directly, enabling credential-proxied access to these services without changes to OpenClaw core.
To proxy an additional service:

1. Add the credential to `openclaw-proxy-secrets` (update `deploy.sh`).
2. Add a `location` block to `manifests/proxy-configmap.yaml`.
3. Add the env var to `manifests/proxy-deployment.yaml`.
4. Add a provider entry to `models.providers` in `manifests/configmap.yaml`, with `baseUrl` pointing to the new proxy path and `apiKey: "ah-ah-ah-you-didnt-say-the-magic-word"`.
5. Redeploy with `./deploy.sh`.
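The `location` block added in `manifests/proxy-configmap.yaml` might look like this sketch — the upstream host, env var name, and method filter are assumptions modeled on the existing LLM endpoints, and `${NEWSERVICE_API_KEY}` assumes the config is templated (e.g., via `envsubst`) before nginx loads it:

```nginx
# Sketch: proxy path for a hypothetical new service.
# The real key is injected here; the client's dummy header is overwritten.
location /newservice/ {
    limit_except POST { deny all; }   # L7 method filtering
    proxy_set_header Authorization "Bearer ${NEWSERVICE_API_KEY}";
    proxy_pass https://api.newservice.example/;
}
```

This mirrors the pattern in the integration table above: one path per upstream, one injected header, and an explicit method allowlist.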
Edit `manifests/configmap.yaml` to change `openclaw.json` (gateway settings) or `AGENTS.md` (agent instructions), then redeploy:
```bash
./deploy.sh
```

You can also edit the config live through the Control UI's Config tab — changes hot-reload without a restart for most settings.
```bash
./deploy.sh --delete
```

Removes all OpenClaw resources (Deployment, Service, ConfigMap, PVC, Route, Secret) from the namespace. The namespace itself is not deleted.