
Setup help needed (Telegram connection, Llama3 connection) #308

@Stef4fun

Description


My questions:

  1. Why does ironclaw not find llama3, even though Ollama is running?
  2. There was no question during onboarding about connecting a Telegram channel/bot (as described in telegram.md). How can I do this?
  3. Which key belongs in SECRETS_MASTER_KEY? Just some random key I generate myself?
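
Regarding question 3: in case any sufficiently random value is acceptable (an assumption on my part, not confirmed by the docs), I would generate one like this:

```shell
# Generate a random 32-byte value as a hex string and export it.
# Assumption: ironclaw accepts any random value as the master key.
export SECRETS_MASTER_KEY="$(openssl rand -hex 32)"
echo "${#SECRETS_MASTER_KEY}"  # 64 hex characters
```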

Below are the setup steps I took and the respective logs.
Thanks in advance for your advice and help
Stefan

  • Running Ubuntu 24 on a local machine.
  • Installed ironclaw remotely, attached via SSH.
  • Managed to get PostgreSQL running with the vector extension.
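
For context, this is roughly how the vector extension can be enabled on Ubuntu (the package name assumes the stock PostgreSQL 16 packages; my setup may differ):

```shell
# Install the pgvector package matching the running PostgreSQL major version
# (assumption: Ubuntu 24.04's default PostgreSQL 16)
sudo apt install postgresql-16-pgvector
# Enable the extension inside the ironclaw database
sudo -u postgres psql -d ironclaw -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```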

Then I ran ironclaw onboard with the following options:

╭─────────────────────────╮
│ IronClaw Setup Wizard │
╰─────────────────────────╯

Step 1/8: Database Connection
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Existing database URL: postgres://ironclaw:****@localhost:5432/ironclaw
Use this database? [Y/n]
✓ Database connection successful
Step 2/8: Security
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Secrets master key found in SECRETS_MASTER_KEY environment variable.
✓ Security configured (env var)
Step 3/8: Inference Provider
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Current provider: Ollama (local)

Keep current provider? [Y/n]
Ollama base URL (default: http://localhost:11434):
✓ Ollama configured (http://localhost:11434)
Step 4/8: Model Selection
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Available models:

Select a model:

[1] llama3:8b
[2] Custom model ID

1
✓ Selected llama3:8b
Step 5/8: Embeddings (Semantic Search)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Embeddings enable semantic search in your workspace memory.

Enable semantic search? [Y/n]
ℹ No NEAR AI session or OpenAI key found for embeddings.
ℹ Set OPENAI_API_KEY in your environment to enable embeddings.
Step 6/8: Channel Configuration
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Current tunnel configuration:
ℹ Provider: ngrok
ℹ Auth: token configured

Change tunnel configuration? [y/N]

Which channels do you want to enable?

(Use arrow keys to navigate, space to toggle, enter to confirm)

[x] CLI/TUI (always enabled)

[x] HTTP webhook

HTTP Webhook Setup:

ℹ The HTTP webhook allows external services to send messages to the agent.

Port (default: 8080):
Host (default: 0.0.0.0):
Generate a webhook secret for authentication? [Y/n]
✓ Webhook secret generated and saved to database
ℹ Retrieve it later with: ironclaw secret get http_webhook_secret
✓ HTTP webhook will listen on 0.0.0.0:8080
Step 7/8: Extensions
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Extension registry not found. Skipping tool installation.
ℹ Install tools manually with: ironclaw tool install
Step 8/8: Background Tasks
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ℹ Heartbeat runs periodic background tasks (e.g., checking your calendar,
ℹ monitoring for notifications, running scheduled workflows).

Enable heartbeat? [y/N] y
Check interval in minutes (default: 30):
Notify channel on findings (e.g., telegram): telegram
✓ Heartbeat enabled (every 30 minutes)

✓ Configuration saved to database

Configuration Summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Database: PostgreSQL (configured)
Security: environment variable
Provider: Ollama
Model: llama3:8b
Embeddings: disabled
Tunnel: ngrok (managed, starts at boot)
Channels:
- CLI/TUI: enabled
- HTTP: enabled (port 8080)
Heartbeat: every 30 minutes

To start the agent, run:
ironclaw

To change settings later:
ironclaw config set
ironclaw onboard

I have the following log when starting ironclaw:

Beforehand I started Ollama with ollama serve & (firewall is disabled):
ollama serve &
[1] 4815
stefan@stefan-ZBOX-EN1070-1060-EN1070K-1060K:~$ time=2026-02-22T15:13:50.107+01:00 level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/stefan/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-02-22T15:13:50.107+01:00 level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-02-22T15:13:50.107+01:00 level=INFO source=images.go:473 msg="total blobs: 5"
time=2026-02-22T15:13:50.108+01:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-22T15:13:50.108+01:00 level=INFO source=routes.go:1718 msg="Listening on 127.0.0.1:11434 (version 0.16.3)"
time=2026-02-22T15:13:50.109+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-22T15:13:50.109+01:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39477"
time=2026-02-22T15:13:50.200+01:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-02-22T15:13:50.200+01:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38313"
time=2026-02-22T15:13:50.317+01:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40797"
time=2026-02-22T15:13:50.433+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-a2b3289-ec0d-8fa0-45dd-48a98257ca3e filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1060" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="6.0 GiB" available="5.9 GiB"
time=2026-02-22T15:13:50.433+01:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="6.0 GiB" default_num_ctx=4096


stefan@stefan-ZBOX-EN1070-1060-EN1070K-1060K:~$ ironclaw
2026-02-22T13:48:50.423779Z INFO Starting IronClaw...
2026-02-22T13:48:50.423869Z INFO Loaded configuration for agent: ironclaw
2026-02-22T13:48:50.423885Z INFO LLM backend: ollama
2026-02-22T13:48:50.435750Z INFO PostgreSQL database connected and migrations applied
2026-02-22T13:48:50.437366Z INFO Configuration reloaded from database
2026-02-22T13:48:50.441246Z INFO Starting ngrok tunnel on 127.0.0.1:3000...
2026-02-22T13:48:50.441486Z ERROR Failed to start tunnel: No such file or directory (os error 2)
2026-02-22T13:48:50.503208Z INFO Using Ollama (base_url: http://localhost:11434, model: llama3)
2026-02-22T13:48:50.503250Z INFO LLM provider initialized: llama3
2026-02-22T13:48:50.503257Z INFO LLM retry wrapper enabled max_retries=3
2026-02-22T13:48:50.514317Z INFO Safety layer initialized
2026-02-22T13:48:50.575002Z INFO Registered 4 built-in tools
2026-02-22T13:48:50.575018Z INFO Embeddings disabled (set OPENAI_API_KEY or EMBEDDING_ENABLED=true)
2026-02-22T13:48:50.575065Z INFO Registered 4 memory tools
2026-02-22T13:48:50.637664Z INFO Registered 6 extension management tools
2026-02-22T13:48:50.637700Z INFO Extension manager initialized with in-chat discovery tools
2026-02-22T13:48:50.637934Z INFO Tool registry initialized with 14 total tools
2026-02-22T13:48:50.637966Z INFO REPL mode enabled
2026-02-22T13:48:50.638082Z INFO Orchestrator internal API listening on 0.0.0.0:50051
2026-02-22T13:48:50.641544Z INFO Lifecycle hooks initialized bundled=1 plugin=0 workspace=0 outbound_webhooks=0 errors=0
2026-02-22T13:48:50.641605Z INFO Registered 6 job management tools
2026-02-22T13:48:50.641729Z INFO Web UI: http://127.0.0.1:3000/

╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶

ironclaw v0.9.0

model llama3 via ollama
database postgres (connected)
tools 20 registered
features heartbeat (30m) sandbox routines
channels repl gateway

gateway http://127.0.0.1:3000/?token=Ntp...

╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶

/help for commands, /quit to exit

2026-02-22T13:48:50.642397Z INFO Started channel: gateway
2026-02-22T13:48:50.642517Z INFO Started channel: repl
2026-02-22T13:48:50.642560Z INFO Registered 5 routine management tools
2026-02-22T13:48:50.642611Z INFO Starting heartbeat loop with interval 1800s
› 2026-02-22T13:48:50.643225Z INFO Routines enabled: cron ticker every 15s, max 10 concurrent
2026-02-22T13:48:50.643241Z INFO Agent ironclaw ready and listening
test
› ○ Processing...
...
Error: LLM error: Provider llama3 request failed: HttpError: Invalid status code 404 Not Found with message:
{"error":"model 'llama3' not found"}
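
I notice the startup log says model: llama3, while I selected llama3:8b during onboarding, so maybe the missing :8b tag is related to question 1. These commands show which names Ollama actually serves on my machine:

```shell
# List locally pulled models by their full tagged names (e.g. llama3:8b)
ollama list
# Or ask the Ollama API directly
curl -s http://localhost:11434/api/tags
```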
