feat: Rich Markdown rendering with skin-aware themes and /markdown toggle #5150
lucaspirola wants to merge 9 commits into
Conversation
Substantial feature — streaming block-boundary detection that respects code fences is the right approach for avoiding mid-render chopping, and the fast-path plain-text check + graceful fallback show good defensive thinking. Skin-aware theming is a nice touch. A few concerns before merge:

**1. Fast-path false negative on long responses with late markdown.**

```python
return bool(_MD_SYNTAX_RE.search(text[:500] if len(text) > 500 else text))
```

If markdown syntax doesn't appear in the first 500 chars, the whole response renders as raw text. The author's reasoning ("markdown almost always appears early") holds for short assistant chat but not for tool-heavy agentic outputs — exactly the workload Hermes is built for. Two fixes: raise the scan window to a few KB (as a named constant), or drop the cap entirely — the regex search is cheap. The current behavior will surprise users when some responses render beautifully and others show raw syntax.

**2. Prompt hint change is inconsistent with the toggle.**

```python
"cli": (
    "You are a CLI AI Agent. Your terminal supports full markdown "
    "rendering. Use markdown freely ..."
),
```

A user who turns off markdown rendering via `/markdown off` still gets a prompt telling the model to emit markdown. Make the prompt hint respect the setting:

```python
# Pass markdown_enabled into the prompt context, or swap hints:
CLI_HINT_MARKDOWN_ON = "... Your terminal supports full markdown rendering. Use markdown freely..."
CLI_HINT_MARKDOWN_OFF = "You are a CLI AI Agent. Use plain text rendered inside a terminal..."
```

And select at prompt-build time based on the config.

**3. Exception swallowing without logging.**

```python
except Exception:
    return _rich_text_from_ansi(text)
```

and

```python
try:
    self._stream_md_console.print(_RichMarkdown(chunk, ...))
except Exception:
    self._stream_md_console.print(chunk)
```

If Rich Markdown ever chokes on valid markdown (edge case in the parser, unicode surrogate pair, etc.), users see plain output and there's no trail for debugging. At minimum, log the exception at debug level before falling back.

**4. Skin change during stream isn't reflected.**
```python
if not self._stream_box_opened:
    ...
    self._stream_md_code_theme = _skin.get_color("code_theme", "monokai")
    self._stream_md_text_color = _skin.get_color("banner_text", "#FFF8DC")
```

Running `/skin` mid-stream keeps the old theme; the new palette only applies from the next response. Probably acceptable, but it deserves an explicit in-code comment so the capture-once behaviour reads as intentional.

**5. Streaming state has 7 new instance variables.**

```python
self._stream_md_buf = ""
self._stream_md_rendered = 0
self._stream_md_fence_open = False
self._stream_md_console = None
self._stream_md_iobuf = None
self._stream_md_code_theme = "monokai"
self._stream_md_text_color = ""
```

Extraction into a small `StreamingMarkdownRenderer` dataclass would keep the CLI class tidy.

Minor nits:
Ship it with the scan-window fix (#1) and prompt hint consistency (#2) — those are user-visible regressions. The rest are polish.
All addressed in the latest push — thanks for the thorough review.

- **#1 Scan window** — raised to 8192 chars via a named constant.
- **#2 Prompt hint** — added a `cli_no_markdown` platform hint, used when rendering is disabled.
- **#3 Exception logging** — both handlers now call `logger.debug()`.
- **#4 Skin/stream clarification** — updated the in-code comment to explicitly say the theme is captured once at stream-open and that a `/skin` change takes effect on the next response.
- **Minor nits** — fixed.
- **On #5 (`StreamingMarkdownRenderer` dataclass)** — agreed it would be a nice cleanup; left as a follow-up to keep this PR focused.
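The follow-up extraction mentioned for #5 could look roughly like this — a sketch only; the field names mirror the seven `_stream_md_*` attributes in the diff, everything else is assumed:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class StreamingMarkdownRenderer:
    """Groups the per-stream markdown state that currently lives as
    seven _stream_md_* attributes on the CLI class."""
    buf: str = ""                   # text received but not yet rendered
    rendered: int = 0               # chars already flushed to the console
    fence_open: bool = False        # inside an unclosed ``` code fence?
    console: Optional[Any] = None   # Rich Console captured at stream open
    iobuf: Optional[Any] = None     # backing buffer for that console
    code_theme: str = "monokai"     # captured once from the skin
    text_color: str = ""            # base paragraph colour

    def reset(self) -> None:
        """Re-initialise all state at the start of a new response."""
        self.__dict__.update(StreamingMarkdownRenderer().__dict__)
```

One `reset()` call at stream-open then replaces seven attribute assignments scattered through the CLI class.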
Make the markdown renderer adapt to the active skin's colour palette
instead of hardcoding monokai/white. _render_response() now accepts
code_theme and text_color from the skin — banner_text becomes the base
paragraph colour (Rich Markdown's style= layers underneath element
styles, preserving heading/code/bold colours), and code_theme falls
back to monokai unless a skin overrides it via get_color("code_theme").
Zero changes to skin definitions or SkinConfig — existing skins and
user-defined skins work automatically through get_color() fallbacks.
Also adds docstrings and inline comments to all markdown rendering code
(regex, fast-path, block boundary detection, chunk rendering, streaming
strategy, flush, command handler) for clarity and maintainability.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
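A rough sketch of fence-aware block boundary detection of the kind these comments document — all names and the exact strategy here are illustrative, not the PR's implementation:

```python
def find_block_boundary(buf: str) -> int:
    """Return the index just past the last blank line that sits outside
    any ``` code fence, or -1 if no safe render boundary exists yet.

    Rendering only up to such a boundary avoids chopping a code block
    (or paragraph) mid-stream and re-rendering it incorrectly.
    """
    boundary = -1
    fence_open = False
    pos = 0
    for line in buf.splitlines(keepends=True):
        if line.lstrip().startswith("```"):
            fence_open = not fence_open
        # A blank line outside a fence ends a markdown block.
        if not fence_open and line.strip() == "":
            boundary = pos + len(line)
        pos += len(line)
    return boundary
```

The caller renders `buf[:boundary]` and keeps the tail buffered until the next chunk closes another block.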
The CLI now has a full Rich Markdown renderer, so the platform hint should tell the LLM to use markdown instead of discouraging it. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
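The hint selection the review asked for could be wired up like this (a sketch: the dict and function names are assumptions, and the hint texts paraphrase the review's suggestion):

```python
PLATFORM_HINTS = {
    "cli": (
        "You are a CLI AI Agent. Your terminal supports full markdown "
        "rendering. Use markdown freely for headings, tables, and code blocks."
    ),
    "cli_no_markdown": (
        "You are a CLI AI Agent. Use plain text rendered inside a terminal; "
        "avoid markdown syntax, since it would display as raw characters."
    ),
}

def select_cli_hint(markdown_enabled: bool) -> str:
    """Pick the platform hint at prompt-build time from the display config."""
    return PLATFORM_HINTS["cli" if markdown_enabled else "cli_no_markdown"]
```

Selecting at prompt-build time keeps the hint consistent with whatever `/markdown on|off` last persisted.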
…g, term width cache

- Raise fast-path scan window from 500 to 8192 chars so agentic responses with plain-text preambles before tables/code blocks still get markdown rendering
- Add cli_no_markdown platform hint; pass it when markdown_enabled=False so the LLM doesn't emit markdown that would display as raw syntax
- Add logger.debug() to both renderer exception handlers so render failures leave a diagnostic trail
- Cache terminal width at stream-open time to avoid a syscall per chunk
- Clarify in-code comment: skin theme is captured per response, not mid-stream (/skin takes effect on the next response)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
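The logger.debug() fallback this commit describes could look like the following sketch, where `render_markdown`, the renderer argument, and the logger name are all placeholders:

```python
import logging

logger = logging.getLogger("hermes.cli.markdown")

def render_markdown(text: str, renderer) -> str:
    """Render text with the given markdown renderer, falling back to
    plain text on failure.

    The exception is logged at debug level so render failures leave a
    diagnostic trail instead of being silently swallowed.
    """
    try:
        return renderer(text)
    except Exception:
        logger.debug("markdown render failed; falling back to plain text",
                     exc_info=True)
        return text
```

`exc_info=True` records the full traceback without cluttering normal output, since debug logs are off by default.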
System prompt changes (prompt_builder.py, run_agent.py):

- Remove "unless otherwise directed below" loophole from DEFAULT_AGENT_IDENTITY
- Add TONE_AND_STYLE_GUIDANCE and OUTPUT_EFFICIENCY_GUIDANCE constants with explicit FORBIDDEN/WRONG→RIGHT examples to enforce concise responses
- Inject both early (position 2, after identity) for open-source models that weight earlier instructions more heavily
- Gate injection on non-messaging, non-cron platforms
- Add conciseness hint to cli and cli_no_markdown platform hints

CLI status bar changes (cli.py):

- Remove all decorative Panel borders and response labels (⚕ Hermes)
- Remove separator lines between queries (exchange divider, bg/btw task dividers)
- Status bar hides during inference; shows live counter while agent runs
- After response: status bar switches to ∑ total / ↩ last inference time display
- Timer absent before first message (no Hermes-uptime counter at startup)
- Add _inference_total_seconds and _last_inference_seconds session accumulators
- Add _response_received flag to switch status bar mode after first response
- Per-turn flags (_summary_printed_this_turn, _response_received) reset on each chat()

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
… bad rebase conflict resolutions

Previous rebase conflict resolutions in 866c194 and 48aa22e5 left display.py with only 134 lines (missing all classes, including KawaiiSpinner) and commands.py with line-number prefixes from cat -n output. Restore both files from upstream (3a63514) and re-apply the two small additions our branch actually needed: the /markdown command registration in commands.py and the skin-aware diff color infrastructure already present in the upstream display.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…conflict resolution

The 60687a4e conflict resolution removed build_environment_hints from the prompt_builder import, but run_agent.py still calls it at line 3827.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
force-pushed from 6bb19bb to 8ed6638
Closing in favour of two focused PRs cleanly rebased onto current main:
Summary
Adds full Rich Markdown rendering for CLI responses using Rich's built-in `Markdown` class, with skin integration and user control:

- `code_theme` from the active skin, updates live on `/skin` change
- `/markdown [on|off]` command (alias `/md`) — toggle rendering at runtime, persists to `display.markdown` in config.yaml
- `try/except` wraps all render calls; renderer crashes fall back to plain text

Files changed (3 files)

- `cli.py` — `_render_response()`, `_emit_stream_markdown()`, `_find_block_boundary()`, `_handle_markdown_command()`, skin theme init
- `hermes_cli/commands.py` — `/markdown` command with `/md` alias
- `agent/prompt_builder.py`

Issues addressed
Closes #3621, closes #4236, relates to #684
Test plan
- `_has_markdown_syntax()` (13 cases), config persistence, command registration
- `/markdown off` shows raw markdown; `/markdown on` re-enables
- `/skin ares` updates code theme colours live

🤖 Generated with Claude Code
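The toggle-and-persist behaviour the test plan exercises might be shaped like this minimal sketch — the config layout, handler name, and return strings are all assumptions, not the PR's code:

```python
def handle_markdown_command(arg: str, config: dict) -> str:
    """Handle /markdown [on|off] (alias /md); persist to display.markdown."""
    display = config.setdefault("display", {})
    if arg == "on":
        display["markdown"] = True
    elif arg == "off":
        display["markdown"] = False
    elif arg == "":
        pass  # no argument: just report the current state
    else:
        return f"usage: /markdown [on|off], got {arg!r}"
    state = "enabled" if display.get("markdown", True) else "disabled"
    return f"markdown rendering {state}"
```

Writing into the shared config dict is what lets the setting survive into the next prompt build and, after serialisation, into config.yaml.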