
feat(llm): enable thinking for Ollama via default additional params #2372

Merged

ilblackdragon merged 1 commit into nearai:main from standardtoaster:feat/ollama-think-param on May 2, 2026

Conversation

@standardtoaster
Contributor

Summary

Ollama's native `/api/chat` endpoint requires an explicit `"think": true` to enable extended reasoning. Without it, thinking-capable models (Qwen3, DeepSeek-R1, Gemma 4) run without their reasoning step.

This PR unconditionally sends `think: true` for Ollama. Non-thinking models ignore the parameter harmlessly: Ollama treats it as a no-op. This avoids maintaining a hardcoded list of thinking-capable model names, which would rot as new models ship.
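For reference, a minimal `/api/chat` request body with the parameter set (the model name and prompt are illustrative, not taken from this PR):

```json
{
  "model": "qwen3",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "think": true
}
```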

The implementation adds a generic `with_additional_params()` builder on `RigAdapter` for injecting provider-level defaults into every request. The Ollama factory uses it to set `think: true`. Other providers are unaffected.
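A rough sketch of that builder pattern, with hypothetical field and method names, and string maps standing in for the JSON values the real adapter presumably uses:

```rust
use std::collections::HashMap;

// Sketch only: names are assumptions, not the actual RigAdapter API.
struct RigAdapter {
    model: String,
    // Provider-level defaults merged into every request, e.g. {"think": "true"}.
    additional_params: Option<HashMap<String, String>>,
}

impl RigAdapter {
    fn new(model: &str) -> Self {
        Self {
            model: model.to_string(),
            additional_params: None,
        }
    }

    // Generic builder: any provider factory can inject top-level fields
    // without a feature-specific flag on the adapter.
    fn with_additional_params(mut self, params: HashMap<String, String>) -> Self {
        self.additional_params = Some(params);
        self
    }
}

fn main() {
    // The Ollama factory would do something like this; other providers
    // simply never call with_additional_params().
    let adapter = RigAdapter::new("qwen3")
        .with_additional_params(HashMap::from([("think".to_string(), "true".to_string())]));
    assert!(adapter.additional_params.is_some());
    println!("defaults set for {}", adapter.model);
}
```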

Design decisions

Why not a per-model config flag? IronClaw doesn't have per-model config today. Adding one for a single boolean isn't worth the complexity. If Ollama adds models where think: true causes problems, we can add an opt-out then.

Why not a hardcoded model pattern list? Model lists rot. New thinking models ship regularly (Gemma 4 just launched). Ollama handles the parameter gracefully for non-thinking models, so there's no safety concern.

Why with_additional_params instead of a think-specific flag? The generic approach is reusable -- any provider can inject top-level fields without new RigAdapter API surface per feature.

Test plan

Four unit tests covering `merge_additional_params`:

  • Merge into empty params
  • Merge preserves existing keys (e.g., cache_control)
  • Existing keys win over defaults (no silent overwrite)
  • No-op when defaults are None
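The merge semantics those tests pin down can be sketched as follows; this `merge_additional_params` is a simplified stand-in using string maps rather than the JSON values the real utility would handle:

```rust
use std::collections::HashMap;

// Simplified sketch of the merge semantics described above.
// Defaults fill in missing keys; existing request params always win.
fn merge_additional_params(
    params: &mut HashMap<String, String>,
    defaults: Option<&HashMap<String, String>>,
) {
    // No-op when defaults are None.
    let Some(defaults) = defaults else { return };
    for (k, v) in defaults {
        // Existing keys win over defaults: no silent overwrite.
        params.entry(k.clone()).or_insert_with(|| v.clone());
    }
}

fn main() {
    let defaults = HashMap::from([("think".to_string(), "true".to_string())]);

    // Merge into empty params.
    let mut empty = HashMap::new();
    merge_additional_params(&mut empty, Some(&defaults));
    assert_eq!(empty.get("think").map(String::as_str), Some("true"));

    // Merge preserves existing keys, and existing keys win over defaults.
    let mut existing = HashMap::from([
        ("think".to_string(), "false".to_string()),
        ("cache_control".to_string(), "ephemeral".to_string()),
    ]);
    merge_additional_params(&mut existing, Some(&defaults));
    assert_eq!(existing.get("think").map(String::as_str), Some("false"));
    assert_eq!(existing.get("cache_control").map(String::as_str), Some("ephemeral"));

    // No-op when defaults are None.
    let mut untouched = HashMap::from([("temperature".to_string(), "0.7".to_string())]);
    merge_additional_params(&mut untouched, None);
    assert_eq!(untouched.len(), 1);

    println!("merge semantics hold");
}
```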

Manual wire-level validation: started IronClaw with the Ollama backend pointed at a mock HTTP server and captured the raw `/api/chat` request body:

```
BEFORE (base): "think": false   (Ollama default)
AFTER  (fix):  "think": true    (injected by RigAdapter)
```

@github-actions github-actions bot added labels `scope: llm` (LLM integration), `size: M` (50-199 changed lines), `risk: low` (changes to docs, tests, or low-risk modules), `contributor: regular` (2-5 merged PRs) on Apr 12, 2026
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces the ability to provide default additional parameters to the RigAdapter, which are merged into every completion request. Specifically, it enables the think: true parameter for the Ollama provider to support extended reasoning in thinking models. The implementation includes a new with_additional_params method, a merging utility that preserves existing request parameters, and corresponding unit tests. I have no feedback to provide.

@henrypark133 henrypark133 changed the base branch from staging to main on May 1, 2026 at 06:16
Ollama's native /api/chat requires `think: true` to enable extended
reasoning. Add `with_additional_params()` to RigAdapter for injecting
provider-level defaults, and set `think: true` for the Ollama factory.

Non-thinking models ignore the parameter harmlessly. Other providers
(OpenAI, Anthropic) are unaffected — only the Ollama factory sets it.
@ilblackdragon ilblackdragon added this pull request to the merge queue May 2, 2026
Merged via the queue into nearai:main with commit 7335a27 May 2, 2026
29 checks passed
@ironclaw-ci ironclaw-ci bot mentioned this pull request May 8, 2026
