fix: restore upstream/main merge ancestry after Tier 2C squash#28
Merged
…usResearch#20752)

* feat(skills/linear): add Documents support + Python helper script

The bundled Linear skill (PR NousResearch#1230) covered issues, projects, teams, and workflow states via curl. It had no coverage for Linear's Documents API, so fetching an RFC/doc from a linear.app URL required hand-writing GraphQL against an underdocumented schema.

Adds:
- Documents section in SKILL.md explaining slugId extraction from URLs, the `content` (markdown) vs `contentState` (ProseMirror) split, and four canonical curl examples (fetch by slugId, fetch by UUID, list recent, title-search).
- scripts/linear_api.py — stdlib-only Python CLI wrapping the most common operations (whoami, list-teams, list/get/search/create/update issues, add-comment, update-status, list/get/search documents, raw GraphQL passthrough). Zero deps; reads LINEAR_API_KEY from env. The auth-header quirk (a personal key takes bare $LINEAR_API_KEY, no Bearer prefix) is already documented in the skill.

Found during RFC review: the existing skill's lack of document support forced falling back to the browser (which hit Linear's login wall). Also fixes a schema gotcha — the Document field is `contentState`, not `contentData` (which returns 400).

Tested end-to-end against the production API:

    python3 linear_api.py whoami
    python3 linear_api.py get-document 38359beef67c

Both return expected payloads.

* fix(skills/linear): point LINEAR_API_KEY setup to the correct page

The org-level Settings > API page (/settings/api) only shows OAuth apps and workspace-member keys. Personal API keys live under Account > Security & access (/settings/account/security). Update both the setup link in config.py (shown during hermes setup) and the setup step in SKILL.md so users land on a page that can create a personal key.
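The slugId extraction that the SKILL.md documents can be sketched as a small helper. This is a hypothetical illustration, not the skill's actual code: the helper name `extract_slug_id` and the workspace `acme` are invented, and the URL shape (slugId as the trailing hyphen-separated token of the last path segment) is an assumption consistent with the `get-document 38359beef67c` example above.

```python
from urllib.parse import urlparse

def extract_slug_id(doc_url: str) -> str:
    """Pull the slugId out of a linear.app document URL.

    Assumes URLs of the form
    https://linear.app/<workspace>/document/<title-slug>-<slugId>,
    where the slugId is the final hyphen-separated token.
    """
    last_segment = urlparse(doc_url).path.rstrip("/").rsplit("/", 1)[-1]
    return last_segment.rsplit("-", 1)[-1]
```

The returned token can then be fed to a `document(slugId: ...)` GraphQL lookup or to the helper script's get-document command.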
…+ publishing a skill tap (NousResearch#20800)

Two pluggable surfaces were mentioned in the interfaces map without a real authoring guide behind them:

1. **Image-gen backends** — only had 'See bundled examples' pointers. Now a full developer-guide/image-gen-provider-plugin.md (270 lines) mirroring the memory/context/model provider docs:
   - How discovery works, directory structure, plugin.yaml
   - ImageGenProvider ABC with every overridable method (name, display_name, is_available, list_models, default_model, get_setup_schema, generate)
   - Full authoring walkthrough with a working MyBackendImageGenProvider
   - Response-format reference (success_response / error_response)
   - Handling b64 vs URL output (save_b64_image helper)
   - User overrides at ~/.hermes/plugins/image_gen/<name>/
   - Testing recipe + pip distribution
   - Reference examples (openai, openai-codex, xai)

2. **Skill taps** — features/skills.md mentioned the CLI commands but never explained the repo contract for publishing a tap. Added a 'Publishing a custom skill tap' section under Skills Hub covering:
   - Repo layout (skills/<name>/SKILL.md by default)
   - Minimal working example
   - Non-default path configuration (taps.json)
   - Installing individual skills without subscribing
   - Trust-level handling
   - Full tap management CLI + in-session /skills tap commands

Wired into:
- website/sidebars.ts: image-gen-provider-plugin added to the Extending group
- website/docs/user-guide/features/plugins.md: the pluggable-interfaces table and 'What plugins can do' table now link to the real guides instead of 'See bundled examples'
- website/docs/guides/build-a-hermes-plugin.md: top info map and inline sub-sections updated, 'Full guide:' line added to the image-gen block, tap section mentions publishing

Verified: docusaurus build SUCCESS, the new page renders at /docs/developer-guide/image-gen-provider-plugin, and the anchor #publishing-a-custom-skill-tap resolves from plugins.md + build-a-hermes-plugin.md. Pre-existing zh-Hans broken links unchanged.
…ative providers (NousResearch#20802)

OpenCode Go and OpenCode Zen are flat-namespace model resellers — their /v1/models returns bare IDs (deepseek-v4-flash, minimax-m2.7), and the inference API rejects vendor-prefixed names with HTTP 401 'Model not supported'.

Two bugs fixed:

1. `switch_model` in hermes_cli/model_switch.py was silently switching the user off opencode-go to native deepseek when they typed `/model deepseek-v4-flash`. Step d found the model in opencode-go's live catalog, but step e (detect_provider_for_model) still ran and matched the bare name against deepseek's static catalog. Fix: track whether the live catalog resolved it; skip step e when it did.

2. `normalize_model_for_provider` in hermes_cli/model_normalize.py only stripped the exact `opencode-zen/` prefix, leaving arbitrary vendor prefixes like `minimax/minimax-m2.7` (commonly copied from aggregator slugs into fallback_model configs) intact — causing HTTP 401s when the fallback chain activated. Fix: opencode-go/opencode-zen strip ANY leading vendor prefix because their APIs are flat-namespace.

Tests: 11 new cases in tests/hermes_cli/test_opencode_go_flat_namespace.py covering both normalization (prefix stripping, regression guards for opencode-zen Claude hyphenation and openrouter vendor-prepending) and switch_model (bare-name resolution on opencode-go's live catalog must not trigger a cross-provider hijack).

Reported by @UFOnik via Discord; Kimi K2.6 always worked because moonshotai has no overlapping entry in a native provider's static catalog. Deepseek and minimax failed because their v4/v2.7 names existed in the native deepseek/minimax catalogs.
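The shape of fix 2 can be sketched as follows. This is a minimal illustration of the described behavior, not the real hermes_cli/model_normalize.py: the provider-set name and the exact signature are assumptions; only the function name and the strip-any-prefix rule come from the description above.

```python
# Hypothetical set name; the PR only says these two providers are flat-namespace.
FLAT_NAMESPACE_PROVIDERS = {"opencode-go", "opencode-zen"}

def normalize_model_for_provider(provider: str, model: str) -> str:
    """Flat-namespace resellers reject vendor-prefixed IDs, so strip ANY
    leading 'vendor/' prefix — not just the exact 'opencode-zen/' one."""
    if provider in FLAT_NAMESPACE_PROVIDERS and "/" in model:
        return model.rsplit("/", 1)[-1]  # 'minimax/minimax-m2.7' -> 'minimax-m2.7'
    return model
```

For any other provider the model ID passes through untouched, which is what the regression guards for openrouter vendor-prepending protect.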
…ze (NousResearch#20820) Same Hermes Teal palette as the default theme, but with baseSize 18px, lineHeight 1.65, and spacious density so the whole dashboard scales up. Gives users a one-click bigger-text preset and a copyable reference for authoring custom YAML themes with their own typography settings.
Introduce the foundation for independently selecting web search and extract backends — enabling future combinations like SearXNG for search + Firecrawl for extract.

Architecture:
- tools/web_providers/base.py: WebSearchProvider and WebExtractProvider ABCs with normalized result contracts (mirrors CloudBrowserProvider)
- tools/web_tools.py: _get_search_backend() and _get_extract_backend() read per-capability config keys, fall through to the shared web.backend
- hermes_cli/config.py: web.search_backend and web.extract_backend in DEFAULT_CONFIG (empty = inherit from web.backend)

Behavioral change:
- web_search_tool() now dispatches via _get_search_backend()
- web_extract_tool() now dispatches via _get_extract_backend()
- When the per-capability keys are empty (default), behavior is identical to before — _get_search_backend() falls through to _get_backend()

This is purely structural — no new backends are added. SearXNG and other search-only/extract-only providers can now be added as simple drop-in modules in follow-up PRs. 12 new tests; 49 existing tests pass with zero regressions.

Ref: NousResearch#19198
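The fall-through rule can be sketched in a few lines. A minimal sketch, assuming the config is available as a plain nested dict — the real `_get_search_backend()` / `_get_extract_backend()` live in tools/web_tools.py and read the loaded Hermes config; only the precedence (per-capability key wins, empty falls through to web.backend) is taken from the description above.

```python
def get_search_backend(config: dict) -> str:
    """Per-capability key wins when set; an empty value (the default)
    falls through to the shared web.backend, preserving old behavior."""
    web = config.get("web", {})
    return web.get("search_backend") or web.get("backend", "")

def get_extract_backend(config: dict) -> str:
    """Same rule for the extract side."""
    web = config.get("web", {})
    return web.get("extract_backend") or web.get("backend", "")
```

With both per-capability keys empty, each helper returns whatever web.backend held before this PR, which is why the default behavior is unchanged.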
Adds SearXNG as a free, self-hosted web search provider. SearXNG is a
privacy-respecting metasearch engine that requires no API key — just a
running instance and SEARXNG_URL pointing at it.
## What this adds
- `tools/web_providers/searxng.py` — `SearXNGSearchProvider` implementing
`WebSearchProvider` (search only; no extract capability)
- `_is_backend_available("searxng")` — gates on SEARXNG_URL
- `_get_backend()` — accepts "searxng" as a configured value; adds it to
auto-detect candidates (lower priority than paid services)
- `web_search_tool` — dispatches to SearXNG when it is the active backend
- `check_web_api_key()` — includes SearXNG in availability check
- `OPTIONAL_ENV_VARS["SEARXNG_URL"]` — registered with tools=["web_search"]
- `tools_config.py` — SearXNG appears in the `hermes tools` provider picker
- `nous_subscription.py` — `direct_searxng` detection, web_active / web_available
- `setup.py` — SEARXNG_URL listed in the missing-credential hint
- 23 tests covering: is_configured, happy-path search, score sorting, limit,
HTTP/request errors, _is_backend_available, _get_backend, check_web_api_key
## Config
```yaml
# Use SearXNG for search, any paid provider for extract
web:
  search_backend: "searxng"
  extract_backend: "firecrawl"

# Or: SearXNG as the sole backend (web_extract will use the next available)
web:
  backend: "searxng"
```
SearXNG is search-only — it does not implement WebExtractProvider. Users
who only configure SEARXNG_URL get web_search available; web_extract falls
back to the next available extract provider (or is unavailable if none).
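The availability gate described above reduces to an environment check, since SearXNG needs no API key. A minimal sketch under that assumption — the real gate is `_is_backend_available("searxng")` in tools/web_tools.py; the standalone function name here is invented for illustration:

```python
import os

def is_searxng_available() -> bool:
    """SearXNG requires no API key; availability is gated purely on
    SEARXNG_URL pointing at a running instance."""
    return bool(os.environ.get("SEARXNG_URL", "").strip())
```

This is why users who only set SEARXNG_URL get web_search but not web_extract: the search gate passes while every extract provider's own gate still fails.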
Closes NousResearch#19198 (Phase 2 Task 4 — SearXNG provider)
Ref: NousResearch#11562 (original SearXNG PR)
Closes the remaining gaps from PR NousResearch#11562 that weren't covered by the core SearXNG integration landed in NousResearch#20823.

- optional-skills/research/searxng-search/ — installable skill with SKILL.md (curl-based usage, category support, Python example) and a searxng.sh helper script for health checks and instance queries
- website/docs/user-guide/configuration.md — SearXNG added to the Web Search Backends section (5 backends, backend table, per-capability split config example, correct search-only note)
- website/docs/reference/environment-variables.md — SEARXNG_URL row
- website/docs/reference/optional-skills-catalog.md — searxng-search entry

The core SearXNG code, OPTIONAL_ENV_VARS, hermes tools picker, and tests were already on main via NousResearch#20823. This commit is purely additive docs + the optional skill scaffold.

Credits from the NousResearch#11562 salvage:
- @w4rum — original _searxng_search structure
- @nathansdev — tools_config.py integration
- @moyomartin — category support and result formatting
- @0xMihai — config/env var approach
- @nicobailon — skill and documentation structure
- @searxng-fan — error handling patterns
- @Local-First — self-hosted-first philosophy and docs
Route Feishu topic progress, status, approval, stream, and fallback messages through threaded replies by preserving the originating message id as the reply target. Add regressions for tool progress topic metadata and Feishu metadata-driven reply routing.
- Remove dead metadata.get('reply_to') fallback in _send_raw_message;
nothing in the codebase ever sets 'reply_to' inside a metadata dict —
the key only appears as a top-level send_voice() keyword argument
- Simplify _status_thread_metadata construction in run.py to use a
single dict literal instead of create-then-mutate pattern; the
or-{} guard was dead since source.thread_id implies _progress_thread_id
is also set for Feishu
- Add yuqian@zmetasoft.com to AUTHOR_MAP for contributor attribution
- Expand the migration comment to name the primary failure mode (missing-column OperationalError from NousResearch#20842) ahead of the secondary SQLite schema-reparse concern; also document the stale-cols-snapshot invariant
- Add clarifying comments on from_row() legacy fallback branches noting they are belt-and-suspenders dead code post-migration
- Add a task_events comment in an existing test explaining why the table is required by the migrator
- Add test_legacy_migration_no_legacy_columns_at_all: Scenario A — explicitly asserts the exact NousResearch#20842 crash no longer occurs and that consecutive_failures defaults to 0 on a DB that never had spawn_failures
- Add test_legacy_migration_both_columns_already_present: Scenario D — asserts the migration is a no-op when both columns already exist, preserving the existing counter value
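The Scenario A / Scenario D pair above describes an idempotent add-column migration. A minimal sketch of that pattern with sqlite3 — the table and column names are borrowed from the description, but `ensure_column` is an invented helper, not the project's migrator:

```python
import sqlite3

def ensure_column(conn, table: str, column: str, decl: str) -> None:
    """Add a column only if it is missing: a DB that never had it
    (Scenario A) gains it with its default, and re-running on a DB that
    already has it (Scenario D) is a no-op that preserves values."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY)")
ensure_column(conn, "tasks", "consecutive_failures", "INTEGER NOT NULL DEFAULT 0")
ensure_column(conn, "tasks", "consecutive_failures", "INTEGER NOT NULL DEFAULT 0")  # no-op
```

Guarding on PRAGMA table_info is what avoids the missing-column OperationalError on old DBs and the duplicate-column error on new ones.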
change: enable ruff/ty
Switch top-level concurrency to cancel-in-progress=false so every push to main gets its own SHA-tagged image published — no more discarded builds when commits land back-to-back. Guard the :latest tag with a second job that has its own concurrency group with cancel-in-progress=true plus a git-ancestor check against the revision label on the current :latest. Together these guarantee :latest only ever moves forward in history: a slower run whose commit isn't a descendant of the current :latest refuses to clobber it, and a newer push midway through the move-latest job preempts the older one before it can retag.

- Every main push publishes nousresearch/hermes-agent:sha-<commit> with an org.opencontainers.image.revision label embedded.
- The move-latest job reads that label off :latest, runs merge-base --is-ancestor, and only retags (via buildx imagetools create, registry-side, no rebuild) if our commit strictly descends.
- fetch-depth bumped to 1000 so merge-base has the history it needs.
- Release tag flow unchanged (unique tag, no race).
…ch#20827)

The previous version read like internal API docs — leading with env var tables, config YAML, and 'precedence' rules before ever explaining the product. Complete rewrite inverts the structure so readers see value first, mechanics last.

Structure now:
- Lede: 'One subscription. Every tool built in.' + pitch paragraph
- CTA: subscribe/manage button styled as a real call-to-action
- What's included: emoji-led table with expanded descriptions per tool. Image gen lists all 9 models by name (FLUX 2 Klein/Pro, Z-Image Turbo, Nano Banana Pro, GPT Image 1.5/2, Ideogram V3, Recraft V4 Pro, Qwen)
- Why it's here: value bullets — one bill, one signup, one key, same quality, bring-your-own anytime
- Get started: two-command flow (hermes model → hermes status)
- Eligibility: paid-tier note with upgrade link
- Mix and match: three realistic usage patterns
- Using individual image models: ID reference table for power users
- --- separator ---
- Configuration reference (demoted): use_gateway flag, disabling, self-hosted gateway env vars moved below the fold where they belong
- FAQ: streamlined, removed redundant content

Fact-checked against code:
- 9 FAL models confirmed from tools/image_generation_tool.py FAL_MODELS
- Status section output verified against hermes_cli/status.py
- Portal subscription URL preserved
- Self-hosted env vars (TOOL_GATEWAY_DOMAIN etc.) kept accurate

Verified: docusaurus build SUCCESS, page renders, no new broken links.
…n profile

Profile processes (kanban workers, cron subprocesses, delegated subagents) read the profile's auth.json only. If a provider was authenticated at the global root but not inside the profile, the profile's credential_pool comes back empty and the process fails with 'No LLM provider configured' — even though the credentials are sitting in ~/.hermes/auth.json. NousResearch#18594 propagated HERMES_HOME correctly, which is what surfaced this: workers now land in the right profile, and the profile turns out to shadow global with no fallback.

Semantics (read-only, per-provider shadowing):
* Profile has any entries for provider X → use profile only (global ignored).
* Profile has zero entries for provider X → fall back to global.
* Writes (write_credential_pool, _save_auth_store) still target the profile.
* Classic mode (HERMES_HOME == global root) skips the fallback entirely — _global_auth_file_path() returns None.

Also mirrors the fallback in get_provider_auth_state so OAuth singletons (nous, minimax-oauth, openai-codex, spotify) inherit cleanly — the Nous shared-token store (PR NousResearch#19712) remains the authoritative path for Nous OAuth rotation; this just makes the read side consistent with it.

Seat belt: _load_global_auth_store() refuses to read the real user's ~/.hermes/auth.json under PYTEST_CURRENT_TEST even when HERMES_HOME points to a profile-shaped path. The guard uses $HOME (stable across fixtures) rather than Path.home() (which fixtures often monkeypatch to a tmp root).

Reported by @SeedsForbidden on Twitter as the credential_pool shadowing follow-up to the NousResearch#18594 fix.
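The per-provider shadowing rule can be sketched over plain dicts. A minimal illustration under assumed data shapes (provider name mapping to a list of credential entries); the function name `resolve_pool` is invented — the real read path lives behind the credential_pool loaders:

```python
def resolve_pool(provider: str, profile_pool: dict, global_pool: dict) -> list:
    """Read-side, per-provider shadowing: any profile entries for a
    provider hide the global ones entirely; an empty (or absent) profile
    list falls back to global. Writes still target the profile only."""
    entries = profile_pool.get(provider) or []
    return entries if entries else list(global_pool.get(provider) or [])
```

Note the granularity: the fallback is per provider, not per file, so a profile that authenticates one provider still inherits every other provider from the global root.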
Adds an opt-out toggle on PlatformConfig that gates both restart lifecycle pings: the "♻ Gateway restarted" message sent to the chat that issued /restart, and the "♻️ Gateway online" home-channel startup notification. Defaults to True so existing deployments are unaffected. The motivating split is operator vs. end-user surfaces: a back-channel like Telegram should keep these pings, while a Slack workspace shared with end users should not surface gateway lifecycle noise. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Extend the gateway_restart_notification flag to cover
_notify_active_sessions_of_shutdown — the message that fires just
before drain ("⚠️ Gateway restarting — Your current task will be
interrupted. Send any message after restart and I'll try to resume
where you left off.") sent to active sessions and home channels.
Same operator/end-user reasoning: on a Slack workspace shared with
end users, "Gateway restarting" reads as "the bot is broken" — the
operator should be able to suppress it consistently with the other
two lifecycle pings rather than having a partial opt-out.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
For cherry-picked commits in PR NousResearch#20801.
ci(docker): don't cancel overlapping builds, guard :latest
* fix(tui): steady transcript scrollbar

Keep the visible scrollbar tied to the committed viewport position while virtual history can still prefetch against pending scroll targets, and preserve the drag grab offset synchronously for native-feeling scrollbar drags.

* fix(tui): smooth precision wheel scroll

Replace the opt-scroll throttle with frame-sized coalescing so modifier wheel gestures stay line-precise without stepping.
* fix(tui): restore classic CLI voice push-to-talk parity
  (cherry picked from commit 93b9ae3)
* fix(tui): harden voice push-to-talk stop flow
  Address review feedback from PR NousResearch#16189 by stopping the active recorder before background transcription, documenting single-shot voice capture, and covering the TUI gateway flags with regression tests.
* fix(tui): preserve silent voice strike tracking
  Keep single-shot voice recording's no-speech counter alive across starts so the TUI can still emit the three-strikes auto-disable event, and bind the auto-restart state at module scope for type checking.
* fix(tui): clean up voice stop failure path
  Address follow-up review by naming the TUI flow as single-shot push-to-talk and cancelling the recorder when forced stop cannot produce a WAV.
* fix(tui): report busy voice capture starts
  Return explicit start state from the voice wrapper so the TUI gateway does not report recording while forced-stop transcription is still cleaning up.
* fix(tui): handle busy voice record responses
  Apply the gateway busy status immediately in the TUI and route forced-stop voice events to the session that sent the stop request.
* fix(tui): clear voice recording on null response
  Treat a null voice.record RPC result as a failed optimistic start so the REC badge cannot stick after gateway-side errors.
* fix(tui): count silent manual voice stops
  Preserve single-shot voice no-speech strikes through forced-stop transcription so empty push-to-talk captures still trigger the three-strikes guard.

Co-authored-by: Montbra <montbra@gmail.com>
… is installed

The setup wizard dropped non-root users at a bare shell prompt when trying to start a system-scope gateway service. Previously _require_root_for_system_service called sys.exit(1), which the wizard's `except Exception` guards cannot catch (SystemExit is a BaseException). Users with a pre-existing /etc/systemd/system unit (e.g. from an earlier `sudo hermes setup` run) hit this whenever they re-ran `hermes setup` as a regular user.

- Convert _require_root_for_system_service to raise a typed SystemScopeRequiresRootError (RuntimeError subclass) instead of sys.exit(1). The direct CLI path (`hermes gateway install|start|stop|restart|uninstall` without sudo) still exits 1 cleanly via a new catch at the top of gateway_command, matching the existing UserSystemdUnavailableError pattern.
- Add a _system_scope_wizard_would_need_root() pre-check and a _print_system_scope_remediation() helper. Both setup wizards (hermes_cli/setup.py and hermes_cli/gateway.py::gateway_setup) now detect the dead end before prompting and print actionable guidance: either `sudo systemctl start <service>` this time, or uninstall the system unit and install a per-user one.
- Defense in depth: all 5 wizard prompt sites also catch SystemScopeRequiresRootError and fall back to the remediation helper if the pre-check is bypassed (race, etc.).

Tests: 12 new tests in TestSystemScopeRequiresRootError, TestSystemScopeWizardPreCheck, TestSystemScopeRemediationOutput, and TestGatewayCommandCatchesSystemScopeError covering the exception contract, the pre-check matrix (root vs non-root, system-only vs user-present vs none vs explicit system=True), the remediation output for each action, and the direct-CLI exit-1 path.
NousResearch#21390)

The May 5 refactor in d5357f8 made _message_thread_id_for_typing() symmetric with _message_thread_id_for_send() by mapping the General topic (thread id "1") to None upfront for both. That's correct for sendMessage — Telegram rejects message_thread_id=1 on sends and the topic must be omitted — but it's wrong for sendChatAction.

Observed behavior (confirmed via before/after Telegram wire traces):
- Before d5357f8: thread_id=1 → message_thread_id=1 → bubble visible in General
- After d5357f8: thread_id=1 → message_thread_id=None → no visible typing

Omitting message_thread_id on sendChatAction does NOT fall back to the General topic's view in a forum-enabled supergroup; the bubble ends up hidden from the client's General-topic pane entirely. For any user on a forum group, the typing indicator stopped appearing.

Fix: drop the symmetric "1 → None" mapping from the typing resolver. sendMessage still maps 1 → None via _message_thread_id_for_send (that side was never broken). The asymmetry is real and required by Telegram's API — document it in the resolver docstring.

Partial revert of d5357f8; restores the behavior from 0cf7d57 ("fix(telegram): restore typing indicator and thread routing for forum General topic"). Does not re-introduce the retry-without-thread fallback that 41545f7 scoped down for DM topics — with the resolver fixed, the first call already hits the right wire shape.

Test updated from test_send_typing_general_topic_uses_none_thread_id (which encoded the broken contract) to test_send_typing_preserves_general_topic_thread_id, asserting the single correct call with message_thread_id=1. The 10 other tests in the file are untouched and passing.
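The asymmetry the fix restores can be sketched as two tiny resolvers. A hedged illustration only — the real functions are _message_thread_id_for_send/_message_thread_id_for_typing in the adapter, and their signatures here are assumptions; the 1-vs-None behavior is taken from the wire traces above:

```python
GENERAL_TOPIC_ID = 1

def thread_id_for_send(thread_id):
    """sendMessage: Telegram rejects message_thread_id=1, so the General
    topic must be omitted entirely (this side was never broken)."""
    tid = int(thread_id) if thread_id is not None else None
    return None if tid == GENERAL_TOPIC_ID else tid

def thread_id_for_typing(thread_id):
    """sendChatAction: thread id 1 must be preserved, or the typing
    bubble is hidden from the General pane in a forum supergroup."""
    return int(thread_id) if thread_id is not None else None
```

Keeping the two resolvers deliberately asymmetric, with the reason in a docstring, is what stops a future "make these symmetric" cleanup from reintroducing the bug.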
The Tenacity Release — Hermes Agent now finishes what it starts.

- Durable multi-agent Kanban with heartbeat, reclaim, zombie detection, retry budgets, hallucination gate
- /goal persistent cross-turn goals (Ralph loop)
- Checkpoints v2 single-store rewrite with real pruning
- Gateway auto-resume of interrupted sessions after restart
- no_agent cron watchdog mode
- Post-write delta lint on write_file + patch
- 8 P0 security closures — redaction ON by default, CVSS 8.1 Discord fix, WhatsApp stranger rejection, MCP/auth TOCTOU, SSRF floor, cron prompt-injection skill scanning
- Google Chat (20th platform) + generic platform-plugin hooks
- ProviderProfile ABC + plugins/model-providers/
- 7 i18n locales (zh/ja/de/es/fr/uk/tr) + display.language
- video_analyze tool, xAI Custom Voices, SearXNG, OpenRouter caching
- MCP SSE transport + OAuth + image MEDIA surfacing
- 864 commits, 588 merged PRs, 295 contributors
Extends PR NousResearch#21400's resource inlining with image-specific handling: ACP resource_link and embedded blob resources with an image/* mime (or image file suffix when mime is missing) now emit an OpenAI image_url part with a base64 data URL, so vision models actually see the image instead of a [Binary file omitted] note. Non-image resources keep the existing text-inlining behavior. Adds 3 tests: local PNG via resource_link, JPEG mime inferred from suffix when client omits mimeType, and embedded blob PNG.
Both implement WebSearchProvider via tools/web_providers/ — matching the existing SearXNG pattern (commit 5c906d702). Search-only; pair with any extract provider via web.extract_backend.

- tools/web_providers/brave_free.py — Brave Search API (free tier, 2k queries/mo). Uses BRAVE_SEARCH_API_KEY as X-Subscription-Token.
- tools/web_providers/ddgs.py — DuckDuckGo via the ddgs Python package. No API key; gated on package importability.
- tools/web_tools.py: both backends added to the _get_backend() config list and auto-detect chain (trails paid providers), _is_backend_available, web_search_tool dispatch, web_extract_tool + web_crawl_tool search-only refusals, check_web_api_key, and the __main__ diagnostic. Introduces an _ddgs_package_importable() helper so tests can monkeypatch a single symbol for the ddgs availability check.
- hermes_cli/tools_config.py: picker entries for both providers; ddgs gets a post_setup handler that runs `pip install ddgs`.
- hermes_cli/config.py: BRAVE_SEARCH_API_KEY in OPTIONAL_ENV_VARS.
- scripts/release.py: AUTHOR_MAP entry for @Abd0r.
- tests: 14 new tests (brave-free) + 15 new tests (ddgs) covering provider unit behavior, backend wiring, and search-only refusals.

Salvages the brave-free + ddgs portion of PR NousResearch#19796. Not included: the inline helpers in web_tools.py (replaced with provider modules to match the shipped architecture), the lynx-based extract path (these backends should refuse extract with a clear error — users pair with a real extract provider), and scripts/start-llama-server.sh (unrelated).

Co-authored-by: Abd0r <223003280+Abd0r@users.noreply.github.com>
…ousResearch#21435)

* feat(kanban): add `specify` — auxiliary LLM fleshes out triage tasks

The Triage column shipped with a placeholder 'a specifier will flesh out the spec', but the specifier itself was never built. This wires it up as a dedicated CLI verb.

`hermes kanban specify <id>` calls the auxiliary LLM (configured under `auxiliary.triage_specifier`) to expand a rough one-liner into a concrete spec — a tightened title plus a body with Goal / Approach / Acceptance criteria / Out-of-scope sections — then atomically flips `status: triage -> todo` and recomputes ready so parent-free tasks go straight to the dispatcher on the same tick.

Surface:

    hermes kanban specify <task_id>            # single task
    hermes kanban specify --all [--tenant T]   # sweep triage column
    hermes kanban specify ... --author NAME    # audit-comment author
    hermes kanban specify ... --json           # one JSON line per task

Design choices:
- Parent gating is preserved. specify_triage_task flips to 'todo', then recompute_ready promotes to 'ready' only when parents are done — same rule as a normal parent-gated todo.
- No daemon, no background watcher. Every invocation is explicit — keeps cost predictable and doesn't fight the dispatcher loop.
- Response parse is lenient: strict JSON preferred, markdown fences tolerated, raw-body fallback on malformed JSON so the LLM can't strand a task in triage.
- All failure modes (no aux client, API error, task moved out of triage mid-call) return SpecifyOutcome(ok=False, reason=...) so --all continues past individual failures.
Changes:
- hermes_cli/kanban_db.py: + specify_triage_task()
- hermes_cli/kanban_specify.py: NEW (~220 LOC — prompt, parse, call)
- hermes_cli/kanban.py: + specify subcommand + _cmd_specify
- hermes_cli/config.py: + auxiliary.triage_specifier task slot
- website/docs/user-guide/features/kanban.md: specify + config notes
- website/docs/reference/cli-commands.md: CLI reference entry
- tests/hermes_cli/test_kanban_specify_db.py: NEW (10 tests)
- tests/hermes_cli/test_kanban_specify.py: NEW (20 tests)

Validation: 30/30 targeted tests pass. E2E: triage task -> specify -> ends in 'ready' with events [created, specified, promoted] and the audit comment recorded under the configured author.

* feat(kanban): wire specifier into dashboard and gateway slash

Follow-ups to the initial PR NousResearch#21435 — closes the two gaps I'd left as post-merge: the dashboard button and a first-class gateway surface.

Dashboard (plugins/kanban/dashboard/):
- POST /tasks/:id/specify: NEW endpoint. Thin wrapper around kanban_specify.specify_task(). Returns the CLI outcome shape ({ok, task_id, reason, new_title}); ok=false with a human-readable reason is a 200, not a 4xx, so the UI can render it inline without treating 'no aux client configured' as a crash.
- Runs sync in FastAPI's threadpool because the LLM call can take tens of seconds on reasoning models.
- Pins HERMES_KANBAN_BOARD around the specify call so the module's argless kb.connect() lands on the right board.
- dist/index.js: doSpecify callback threaded through the drawer → TaskDetail → StatusActions prop chain. The ✨ Specify button appears ONLY when task.status === 'triage' (elsewhere the backend would reject anyway — hide the button to keep the action row clean). Busy state (Specifying…) + inline success/error banner under the button using the response.reason text.
- dist/style.css: tiny hermes-kanban-msg-ok / -err classes using existing --color vars so themes reskin cleanly.
Gateway slash (/kanban specify):
- Already works via the existing run_slash → build_parser → kanban_command pipeline. No code change needed — slash commands inherit the argparse tree automatically. Added coverage: test_run_slash_specify_end_to_end (create --triage, specify, verify promotion + retitle) and test_run_slash_specify_help_is_reachable.

Tests:
- tests/plugins/test_kanban_dashboard_plugin.py: 3 new tests for the REST endpoint — happy path, non-triage rejection as ok=false 200, missing aux client as ok=false 200.
- tests/hermes_cli/test_kanban_cli.py: 2 new slash-surface tests.

Docs:
- website/docs/user-guide/features/kanban.md: the dashboard action-row description mentions ✨ Specify + all three surfaces. The REST table gains /tasks/:id/specify. Slash examples include /kanban specify.

Validation: 340/340 targeted tests pass. E2E via TestClient: create a triage task over REST → POST /specify with mocked aux client → task moves to the 'ready' column on /board with the new title and body applied.
The existing mapping pointed to the wrong GitHub user (blakejohnson, id 866695, IBM) — the email actually belongs to voteblake (id 5585957), confirmed via search/commits?author-email. Mis-credited since 323ca70.
…esearch#21494)

The kanban specifier landed in NousResearch#21435 with feature-page docs (the kanban page itself + the CLI reference table), but three other docs pages enumerate every auxiliary task slot and were missed:

- user-guide/configuration.md: Auxiliary Models section — interactive picker example + full auxiliary config reference YAML block.
- user-guide/features/fallback-providers.md: both the 'Auxiliary Tasks' and 'Fallback Reference' tables.
- user-guide/features/kanban-tutorial.md: the Triage-column bullet now mentions the ✨ Specify button + CLI + slash command.

No other docs enumerate the aux task slots (verified with grep -r 'title_generation\|auxiliary.session_search' website/docs/).
…ing (NousResearch#21455)

- Add pricing entries for Claude Opus 4.5/4.6/4.7, Sonnet 4.5/4.6, and Haiku 4.5 with updated source URLs (platform.claude.com)
- Add _normalize_anthropic_model_name() to handle dot-notation variants (e.g. claude-opus-4.7 → claude-opus-4-7) for pricing lookups
- Fix silent token loss: ensure the session row exists before UPDATE in both run_agent.py and hermes_state.py (INSERT OR IGNORE is idempotent)
- Log token persistence failures at DEBUG level instead of swallowing them silently — makes undercounted analytics diagnosable
- Surface reasoning tokens in the CLI /usage and TUI usage panel
- Add 'reasoning' and 'cost_status' fields to the TUI Usage type
channels_list was iterating directory.items() directly, yielding
("updated_at", str) and ("platforms", dict) pairs — neither passed
the isinstance(entries_list, list) check, so the inner loop never ran
and every call returned count=0 even when channel_directory.json was
populated.
The writer (gateway/channel_directory.py) wraps the payload as
{"updated_at": ..., "platforms": {...}}; every other reader in the
codebase unwraps via directory.get("platforms", {}). This aligns
channels_list with that convention.
Also tightens the existing test_channels_with_directory test, which
bypassed the bug by asserting against _load_channel_directory() directly
instead of calling channels_list. It now calls the tool end-to-end and
a new test_channels_with_directory_platform_filter covers the filter
path. Both tests fail against the pre-fix code.
Closes NousResearch#21474
Co-authored-by: chrisworksai <262485129+chrisworksai@users.noreply.github.com>
…udo -u When the installer is run via `sudo -u`, uv resolves config file paths against the process owner's (root) home directory rather than the effective user's, causing a Permission denied error when trying to read /root/uv.toml. Setting UV_NO_CONFIG=1 prevents uv from discovering any config files (uv.toml, pyproject.toml) during installation, which is the correct behavior for a bootstrap script that manages its own environment. Fixes NousResearch#21269
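The actual installer is a shell script, so the following is only an illustrative Python sketch of the environment tweak (`uv_env` is a hypothetical helper, not part of the installer):

```python
import os
import subprocess

def uv_env() -> dict:
    """Environment for invoking uv from a bootstrap script.
    UV_NO_CONFIG=1 stops uv from discovering uv.toml/pyproject.toml,
    so a `sudo -u` invocation never tries to read /root/uv.toml."""
    env = dict(os.environ)
    env["UV_NO_CONFIG"] = "1"
    return env

# Example invocation (commented out: requires uv on PATH):
# subprocess.run(["uv", "venv", ".venv"], env=uv_env(), check=True)
print(uv_env()["UV_NO_CONFIG"])  # 1
```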
…essions-skills-menu feat(tui): add /sessions slash command for browsing and resuming previous sessions
…rsonality fix(tui): preserve session when switching personality
…ch#21541) Makes first-time use of the kanban view self-explanatory. Every control that wasn't already labelled now has a `title` tooltip describing what it does, and a `?` icon next to the board switcher opens the kanban docs page in a new tab.
Coverage:
- BoardSwitcher: board select, + New board button, docs-link icon (both compact and full variants)
- BoardToolbar: Search, Tenant, Assignee, Show archived, Nudge dispatcher, Refresh
- BulkActionBar: → ready, Complete, Archive, reassign group, Apply, Clear
- Column header: hovering the header now surfaces COLUMN_HELP as a tooltip in addition to the visible sub-text; column count also labelled
- Card: task id, priority badge, tenant badge, assignee/unassigned, comment count, link count, age timestamp
- InlineCreate: assignee, priority, parent-task selectors
Addresses the community feedback from @CharlieDePew asking for tooltips and a docs link in the kanban view. Relevant docs page: https://hermes-agent.nousresearch.com/docs/user-guide/features/kanban
Route goal status notices through the platform adapter send API and register post-delivery callbacks so completed-goal notices appear after the final assistant response. Also cancel queued synthetic goal continuations on /goal pause and /goal clear while preserving normal queued user messages.
Weak judge models (e.g. deepseek-v4-flash) return empty strings or prose
when asked for the strict {done, reason} JSON verdict. The old code
failed-open to continue on every such turn, burning the entire turn
budget with log lines like
judge returned empty response
judge reply was not JSON: "Let me analyze whether the goal..."
and /goal clear could not stop it mid-loop without /stop.
After N=3 consecutive *parse* failures (transport/API errors don't
count — those are transient), the loop auto-pauses and prints:
⏸ Goal paused — the judge model (3 turns) isn't returning the
required JSON verdict. Route the judge to a stricter model in
~/.hermes/config.yaml:
auxiliary:
  goal_judge:
    provider: openrouter
    model: google/gemini-3-flash-preview
Then /goal resume to continue.
The counter resets on any usable verdict ("done" or "continue") and on
API errors, and it persists across GoalManager reloads so cross-session
resumes carry the correct state.
Also fixes test_goal_verdict_send.py sharing a hardcoded session_id
across tests — the shared id only worked because the previous
_post_turn_goal_continuation was a never-awaited coroutine. Now that
PR NousResearch#19160 made it properly awaited, the xdist test-leakage bug
surfaced. Each test gets a unique session_id via uuid suffix.
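The auto-pause bookkeeping described above might look like the following sketch. `JudgeFailureTracker`, `record`, and `PAUSE_AFTER` are illustrative names, not the real GoalManager API:

```python
import json
from typing import Optional

PAUSE_AFTER = 3  # consecutive parse failures before auto-pausing the goal

class JudgeFailureTracker:
    """Counts consecutive judge replies that fail to parse as the strict
    {done, reason} JSON verdict. Sketch only; the real implementation
    persists this counter alongside the goal state."""

    def __init__(self, parse_failures: int = 0) -> None:
        self.parse_failures = parse_failures  # restored on reload

    def record(self, reply: Optional[str], api_error: bool = False) -> bool:
        """Feed one judge reply; return True when the loop should pause."""
        if api_error:                      # transient: resets the streak,
            self.parse_failures = 0        # never counts toward the pause
            return False
        try:
            verdict = json.loads(reply or "")
        except ValueError:
            verdict = None
        if not (isinstance(verdict, dict) and "done" in verdict):
            self.parse_failures += 1       # empty string or prose reply
            return self.parse_failures >= PAUSE_AFTER
        self.parse_failures = 0            # usable verdict resets
        return False
```

Three consecutive prose/empty replies trip the pause; a single well-formed verdict or an API error in between restarts the count from zero.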
The squash of PR #26 (a0d03c7, 2026-05-08) collapsed the true merge of upstream/main into a single non-merge commit on this branch's first-parent chain. Result: `git log upstream/main..main` reports 2337 commits as 'not merged', even though their content IS in main via the squash. The next `git merge upstream/main` would attempt to re-apply those commits, regenerating 249+ already-resolved conflicts.
This commit re-registers the upstream ancestry by introducing a synthetic no-op merge with two parents:
- main's existing tip 421bd9d (PR #27 squash; first parent)
- upstream's tip at the time of the Tier 2C squash — faa13e4 (second parent)
The `--strategy=ours` resolution keeps main's tree byte-identical to HEAD~1 (verified: `git diff HEAD~1 HEAD` is empty). The only effect is on parentage and on `git merge-base`. After this commit:
- `git log upstream/main..HEAD` returns ONLY fork-side commits
- The next `git merge upstream/main` operates on the ~388 commits upstream has shipped since faa13e4, NOT on the ~2337 commits that are already merged content-wise.
Refs: parent-repo plan docs/superpowers/plans/2026-05-09-myah-no-fork-vendoring.md Phase 0.
🔎 Lint report:

| Rule | Count |
|---|---|
| unresolved-attribute | 25 |
| invalid-assignment | 6 |

First entries:
run_agent.py:2766: [invalid-assignment] invalid-assignment: Object of type `int` is not assignable to attribute `threshold_tokens` on type `None | Unknown | ContextCompressor`
tests/run_agent/test_switch_model_context.py:49: [unresolved-attribute] unresolved-attribute: Attribute `context_length` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:11008: [unresolved-attribute] unresolved-attribute: Attribute `protect_last_n` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:9550: [unresolved-attribute] unresolved-attribute: Attribute `compression_count` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:12090: [invalid-assignment] invalid-assignment: Object of type `Literal[False]` is not assignable to attribute `_context_probe_persistable` on type `None | Unknown | ContextCompressor`
cli.py:8001: [unresolved-attribute] unresolved-attribute: Attribute `context_length` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:12089: [invalid-assignment] invalid-assignment: Object of type `Literal[False]` is not assignable to attribute `_context_probed` on type `None | Unknown | ContextCompressor`
tests/run_agent/test_switch_model_context.py:60: [unresolved-attribute] unresolved-attribute: Attribute `model` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:9570: [invalid-assignment] invalid-assignment: Object of type `Literal[0]` is not assignable to attribute `last_completion_tokens` on type `None | Unknown | ContextCompressor`
cli.py:8003: [unresolved-attribute] unresolved-attribute: Attribute `compression_count` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compressor_fallback_update.py:71: [unresolved-attribute] unresolved-attribute: Attribute `context_length` is not defined on `None` in union `None | Unknown | ContextCompressor`
gateway/run.py:9990: [unresolved-attribute] unresolved-attribute: Attribute `has_content_to_compress` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:13797: [unresolved-attribute] unresolved-attribute: Attribute `should_compress` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compressor_fallback_update.py:67: [unresolved-attribute] unresolved-attribute: Attribute `model` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compression_feasibility.py:346: [unresolved-attribute] unresolved-attribute: Attribute `threshold_tokens` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:13801: [unresolved-attribute] unresolved-attribute: Attribute `last_prompt_tokens` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:11063: [unresolved-attribute] unresolved-attribute: Attribute `threshold_tokens` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:2772: [invalid-assignment] invalid-assignment: Object of type `int | float` is not assignable to attribute `threshold_percent` on type `None | Unknown | ContextCompressor`
cli.py:8000: [unresolved-attribute] unresolved-attribute: Attribute `last_prompt_tokens` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compressor_fallback_update.py:72: [unresolved-attribute] unresolved-attribute: Attribute `threshold_percent` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compressor_fallback_update.py:68: [unresolved-attribute] unresolved-attribute: Attribute `base_url` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compressor_fallback_update.py:69: [unresolved-attribute] unresolved-attribute: Attribute `api_key` is not defined on `None` in union `None | Unknown | ContextCompressor`
run_agent.py:9569: [invalid-assignment] invalid-assignment: Object of type `int` is not assignable to attribute `last_prompt_tokens` on type `None | Unknown | ContextCompressor`
run_agent.py:10388: [unresolved-attribute] unresolved-attribute: Attribute `handle_tool_call` is not defined on `None` in union `None | Unknown | ContextCompressor`
tests/run_agent/test_compressor_fallback_update.py:70: [unresolved-attribute] unresolved-attribute: Attribute `provider` is not defined on `None` in union `None | Unknown | ContextCompressor`
... and 6 more
Unchanged: 4193 pre-existing issues carried over.
Diagnostics are surfaced as warnings — this check never fails the build.
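All of the diagnostics above stem from attribute access on a value the checker types as `None | Unknown | ContextCompressor`. A minimal sketch of the flagged pattern and the narrowing that would satisfy the checker (`ContextCompressor` here is a bare stand-in for the real class, and `bump_threshold` is a hypothetical helper):

```python
from typing import Optional

class ContextCompressor:
    """Stand-in with only the fields needed for the example."""
    threshold_tokens: int = 0
    compression_count: int = 0

def bump_threshold(c: Optional[ContextCompressor], tokens: int) -> bool:
    """Returns True only when the compressor exists and was updated.
    Writing `c.threshold_tokens = tokens` without the None check is
    exactly what the invalid-assignment diagnostics flag."""
    if c is None:
        return False
    c.threshold_tokens = tokens  # safe: c is narrowed to ContextCompressor
    return True

print(bump_threshold(None, 120_000))                 # False
print(bump_threshold(ContextCompressor(), 120_000))  # True
```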
Summary
Tier 2C PR #26 was merged via squash on 2026-05-08. Squashing collapsed the true 2-parent merge commit (e4824aa33e4ed987..., parents = 28140eaaec + faa13e49f) into a single non-merge commit (a0d03c7668...) on main's first-parent chain. PR #27 then merged a single commit (421bd9d74) on top of that.

Result today (verified 2026-05-11):
```
$ git merge-base origin/main upstream/main
0e23594 # ancient base, ~12 months old
$ git rev-list --count origin/main..upstream/main
2337 # commits upstream has past the fork — BUT most are already in main
# content-wise via the Tier 2C squash. The next `git merge upstream/main`
# would attempt to re-apply all 2,337, regenerating 249+ resolved conflicts.
```
Fix mechanism
A single synthetic 2-parent commit on main:
- `421bd9d749...` — current main tip (PR #27 squash: "feat(plugin): v1.1.0 — OSS bootstrap, F4 secret wiring, F7 MCP disconnect")
- `faa13e49f8...` — upstream's tip at the time of the Tier 2C merge
- `--strategy=ours` — keeps main's tree byte-identical to HEAD~1

Verification (post-fix, on this branch)
```
$ git merge-base HEAD upstream/main
faa13e4 # ← correctly anchored now
$ git rev-list --count HEAD..upstream/main
388 # the real upstream delta since faa13e4
# (matches `git rev-list --count faa13e4..upstream/main` exactly)
$ git diff HEAD~1 HEAD
# ← empty: tree unchanged
```
Why this matters
Future upstream merges become trivial. Without this fix, every `git merge upstream/main` would re-apply ~2,300 phantom commits and re-trigger ~250 conflicts that were already resolved in the Tier 2C merge. The Myah no-fork vendoring plan (Phases 1-9) depends on a clean upstream-merge baseline.
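The mechanism can be reproduced end-to-end in a throwaway repository. This is a hedged sketch driving plain `git` through `subprocess` (requires git on PATH; the local branch names stand in for `main` and `upstream/main`):

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    """Run git with a throwaway identity; return stripped stdout."""
    return subprocess.run(
        ["git", "-c", "user.email=t@t", "-c", "user.name=t", *args],
        cwd=cwd, check=True, capture_output=True, text=True,
    ).stdout.strip()

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
Path(repo, "f.txt").write_text("base")
git("add", "f.txt", cwd=repo)
git("commit", "-qm", "base", cwd=repo)
git("branch", "upstream", cwd=repo)              # stand-in for upstream/main
Path(repo, "f.txt").write_text("fork")
git("commit", "-qam", "fork work", cwd=repo)     # our side diverges
git("checkout", "-q", "upstream", cwd=repo)
Path(repo, "g.txt").write_text("upstream")
git("add", "g.txt", cwd=repo)
git("commit", "-qm", "upstream work", cwd=repo)  # upstream diverges too
git("checkout", "-q", "-", cwd=repo)             # back to the fork branch
# The fix: a 2-parent merge whose tree stays byte-identical to ours
git("merge", "-q", "-s", "ours", "--no-ff",
    "-m", "merge: restore upstream ancestry (-s ours, no-op)",
    "upstream", cwd=repo)
assert git("diff", "HEAD~1", "HEAD", cwd=repo) == ""             # tree unchanged
assert git("rev-list", "--count", "HEAD..upstream", cwd=repo) == "0"
print("ancestry restored, tree byte-identical")
```

After the `-s ours` merge, `rev-list HEAD..upstream` drops to zero because the upstream tip is now an ancestor of HEAD, exactly the property this PR restores for `upstream/main`.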
Refs
If you accidentally squash-merge, the fix is destroyed and we re-do Phase 0 from scratch. Please double-check the dropdown next to the merge button.