feat(llm): enable thinking for Ollama via default additional params #2372
Merged
ilblackdragon merged 1 commit on May 2, 2026
Conversation
Contributor
Code Review
This pull request introduces the ability to provide default additional parameters to the `RigAdapter`, which are merged into every completion request. Specifically, it enables the `think: true` parameter for the Ollama provider to support extended reasoning in thinking models. The implementation includes a new `with_additional_params` method, a merging utility that preserves existing request parameters, and corresponding unit tests. I have no feedback to provide.
Ollama's native /api/chat requires `think: true` to enable extended reasoning. Add `with_additional_params()` to RigAdapter for injecting provider-level defaults, and set `think: true` for the Ollama factory. Non-thinking models ignore the parameter harmlessly. Other providers (OpenAI, Anthropic) are unaffected — only the Ollama factory sets it.
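The merging utility described above can be sketched as follows. This is a hypothetical illustration of the semantics the PR describes (provider defaults are filled in, existing request parameters win), not the actual IronClaw code; the real implementation operates on JSON objects, while plain string values stand in here to keep the sketch self-contained.

```rust
use std::collections::BTreeMap;

/// Sketch of the merge described in the PR: provider-level defaults
/// (e.g. `think: true` for Ollama) are copied into a request's
/// additional params, but keys already set on the request take
/// precedence. Names and types here are illustrative.
fn merge_additional_params(
    request: &mut BTreeMap<String, String>,
    defaults: &BTreeMap<String, String>,
) {
    for (key, value) in defaults {
        // Preserve existing request parameters: only fill in missing keys.
        request.entry(key.clone()).or_insert_with(|| value.clone());
    }
}

fn main() {
    let mut request = BTreeMap::from([
        ("temperature".to_string(), "0.7".to_string()),
    ]);
    let defaults = BTreeMap::from([
        ("think".to_string(), "true".to_string()),
        // Deliberately conflicting default: it must NOT override the request.
        ("temperature".to_string(), "1.0".to_string()),
    ]);

    merge_additional_params(&mut request, &defaults);

    assert_eq!(request.get("think").map(String::as_str), Some("true"));
    assert_eq!(request.get("temperature").map(String::as_str), Some("0.7"));
    println!("merged: {:?}", request);
}
```

With these semantics, an Ollama factory that sets `think: true` as a default cannot clobber anything a caller put on the request explicitly, which is what makes injecting provider-level defaults safe.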
8606df7 to 3df2863 (Compare)
ilblackdragon approved these changes on May 2, 2026
Summary

Ollama's native `/api/chat` requires an explicit `"think": true` to enable extended reasoning. Without it, thinking-capable models (Qwen3, DeepSeek-R1, Gemma 4) run without their reasoning step.

This PR always sends `think: true` for Ollama. Non-thinking models ignore the parameter harmlessly -- Ollama treats it as a no-op. This avoids maintaining a hardcoded list of thinking-capable model names that would rot as new models ship.

The implementation adds a generic `with_additional_params()` builder on `RigAdapter` for injecting provider-level defaults into every request. The Ollama factory uses it to set `think: true`. Other providers are unaffected.

Design decisions
Why not a per-model config flag? IronClaw doesn't have per-model config today. Adding one for a single boolean isn't worth the complexity. If Ollama adds models where `think: true` causes problems, we can add an opt-out then.

Why not a hardcoded model pattern list? Model lists rot. New thinking models ship regularly (Gemma 4 just launched). Ollama handles the parameter gracefully for non-thinking models, so there's no safety concern.
Why `with_additional_params` instead of a think-specific flag? The generic approach is reusable -- any provider can inject top-level fields without new `RigAdapter` API surface per feature.

Test plan
4 unit tests covering `merge_additional_params` (including preservation of existing fields such as `cache_control`).

Manual wire-level validation -- started IronClaw with the Ollama backend pointed at a mock HTTP server and captured the raw `/api/chat` request body:
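The captured body is not reproduced here. For orientation only, a minimal Ollama `/api/chat` request with thinking enabled generally takes this shape per Ollama's API docs (the model name and message are illustrative placeholders, not the captured output):

```json
{
  "model": "qwen3",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "think": true,
  "stream": false
}
```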