feat: add OpenRouter preset to setup wizard (#270)
Summary of changes (Gemini Code Assist): This pull request introduces direct support for OpenRouter in the application's setup wizard. By adding OpenRouter as a distinct provider option, users can quickly configure access to its range of models with pre-filled settings and a guided API key input, improving ease of use for a popular LLM aggregation service.
Pull request overview
This PR adds OpenRouter as a top-level provider option in the onboarding wizard, making it easier for users to discover and configure OpenRouter without needing to know it's OpenAI-compatible. The implementation reuses the existing openai_compatible backend with pre-filled configuration values.
Changes:
- Added OpenRouter as the 5th provider option (index 4) in the setup wizard provider menu
- Implemented `setup_openrouter()` method that pre-configures the OpenAI-compatible backend with OpenRouter's base URL
- Updated documentation to describe the OpenRouter setup flow and configuration
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/setup/wizard.rs | Added OpenRouter to provider menu, implemented setup_openrouter() method with pre-configured base URL and API key flow |
| src/setup/README.md | Documented OpenRouter provider in Step 3 specification table and added detailed description of setup flow |
Code Review
This pull request adds OpenRouter as a preset provider in the setup wizard, which is a nice feature enhancement. The implementation correctly reuses the existing openai_compatible backend. The changes are straightforward and well-contained.
I've left a couple of suggestions for minor improvements:
- Improve the alignment of the provider list in the UI (currently done with hardcoded padding) for better readability.
- Replace some hardcoded strings with constants or enum values to improve maintainability.
Overall, great work!
Force-pushed: 8608263 → 4cfa450
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
Add OpenRouter as a top-level provider option in the onboarding wizard (Step 3). Selecting it pre-fills the base URL (https://openrouter.ai/api/v1) and prompts for an API key, avoiding manual URL entry. Under the hood it uses the existing openai_compatible backend.

Inlines the key collection flow (rather than delegating to setup_api_key_provider) so success messages consistently say "OpenRouter" instead of "openai_compatible", including the early-return env-key path.

Closes nearai#178

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed: 827623a → c90ef4f
      2 => self.setup_openai().await?,
      3 => self.setup_ollama()?,
-     4 => self.setup_openai_compatible().await?,
+     4 => self.setup_openrouter().await?,
Medium Severity — "Keep current provider?" re-run path doesn't recognize OpenRouter
When a user re-runs onboarding with an existing OpenRouter config, self.settings.llm_backend is "openai_compatible" and openai_compatible_base_url is the OpenRouter URL. The display name mapping (line 738) shows "OpenAI-compatible endpoint" and the "keep current" path calls setup_openai_compatible() — which prompts for a base URL and a "does this need an API key?" confirm.
This loses the OpenRouter-specific UX (pre-filled URL, direct key prompt, "OpenRouter" branding). There's no way to distinguish an OpenRouter user from a generic openai_compatible user on re-run.
Suggested fix: Either (a) add a settings field like openai_compatible_preset: Option<String> so the wizard can detect "openrouter" on re-entry and route to setup_openrouter(), or (b) check openai_compatible_base_url contains openrouter.ai in the display-name match and "keep current" dispatch to route accordingly.
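Option (b) can be sketched as a small helper. This is illustrative only — the function and the stored-settings shape are assumptions, not the wizard's actual API:

```rust
/// Hypothetical re-run helper: map a stored backend (plus saved base URL)
/// back to a wizard display name. An OpenRouter config is persisted as
/// `openai_compatible`, so the saved base URL is the only distinguishing
/// signal on re-entry.
fn provider_display_name(backend: &str, base_url: Option<&str>) -> &'static str {
    match backend {
        "anthropic" => "Anthropic",
        "openai" => "OpenAI",
        "ollama" => "Ollama",
        "openai_compatible" => {
            // Detect the OpenRouter preset by its stored base URL.
            if base_url.is_some_and(|u| u.contains("openrouter.ai")) {
                "OpenRouter"
            } else {
                "OpenAI-compatible endpoint"
            }
        }
        _ => "Unknown provider",
    }
}
```

The same predicate would drive the "keep current" dispatch, routing to `setup_openrouter()` instead of `setup_openai_compatible()` when it returns "OpenRouter".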
Fixed in bcd738c — re-run path now detects OpenRouter by checking if the stored base URL contains openrouter.ai. Display shows "OpenRouter" and keep-current dispatches to setup_openrouter().
      3 => self.setup_ollama()?,
-     4 => self.setup_openai_compatible().await?,
+     4 => self.setup_openrouter().await?,
+     5 => self.setup_openai_compatible().await?,
Medium Severity — Step 4 model selection for OpenRouter uses raw text input instead of fetching models
Since OpenRouter maps to openai_compatible, Step 4 hits the "openai_compatible" arm which prompts "Model name (e.g., meta-llama/Llama-3-8b-chat-hf)". OpenRouter has 200+ models and supports the standard /v1/models endpoint, so this is a poor UX compared to Anthropic/OpenAI which get a fetched dropdown list.
The README even says "Clears selected_model so Step 4 can fetch models from OpenRouter's API" but the code doesn't actually fetch — it falls through to the generic text input.
Suggested fix: Add an OpenRouter-specific check in Step 4: if backend is openai_compatible and base URL contains openrouter.ai, call the /v1/models API using the cached key and present a selection list. Alternatively, update the README to not claim fetching.
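A minimal sketch of that Step 4 gate, assuming the wizard holds the backend name and base URL as plain strings (names here are hypothetical, and the actual HTTP call is omitted):

```rust
/// Hypothetical Step 4 gate: fetch a model list only when the endpoint is
/// known to serve the OpenAI-style `GET /v1/models` route. Generic
/// openai_compatible endpoints may not implement it, but OpenRouter does.
fn should_fetch_models(backend: &str, base_url: &str) -> bool {
    match backend {
        "anthropic" | "openai" => true,
        "openai_compatible" => base_url.contains("openrouter.ai"),
        _ => false,
    }
}

/// Build the list-models URL from a stored base URL such as
/// `https://openrouter.ai/api/v1`, tolerating a trailing slash.
fn models_url(base_url: &str) -> String {
    format!("{}/models", base_url.trim_end_matches('/'))
}
```

When `should_fetch_models` returns true, the wizard would GET `models_url(...)` with the cached key and present the returned ids as a selection list; otherwise it falls back to the existing text prompt.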
Fixed in bcd738c — updated README to remove the false claim about model fetching. It now correctly states: "Clears selected_model so Step 4 prompts for a model name (manual text input, no API-based model fetching)". Implementing actual model fetching from OpenRouter's API is out of scope for this PR.
+     /// Inlines the API key collection (rather than delegating to
+     /// `setup_api_key_provider`) so the success message says "OpenRouter"
+     /// instead of "openai_compatible".
+     async fn setup_openrouter(&mut self) -> Result<(), SetupError> {
Low Severity — setup_openrouter() is a near-copy of setup_api_key_provider()
The PR acknowledges this in comments ("inlined so the success message says 'OpenRouter'"), but the only differences are: (a) pre-filling the base URL, (b) the success/display message, and (c) not calling setup_api_key_provider with the backend name. The ~50 lines of duplicated secret-handling logic could diverge over time.
Suggested fix: Extend setup_api_key_provider() with optional parameters (base_url: Option<&str>, display_name: Option<&str>) or a config struct, so OpenRouter can reuse it while customizing the branding and pre-filled URL.
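The config-struct variant of that refactor could look roughly like this. The struct and field names are hypothetical, sketching the parameterization rather than the wizard's real signature:

```rust
/// Hypothetical options for a shared `setup_api_key_provider`, letting
/// presets like OpenRouter reuse the key-collection flow while overriding
/// branding and pre-filling the base URL.
struct ApiKeyProviderOptions<'a> {
    /// Backend identifier stored in settings, e.g. "openai_compatible".
    backend: &'a str,
    /// Pre-filled base URL for presets; None means prompt the user.
    base_url: Option<&'a str>,
    /// Branding override so prompts and success messages say "OpenRouter"
    /// rather than the raw backend name.
    display_name: Option<&'a str>,
}

impl<'a> ApiKeyProviderOptions<'a> {
    /// Name shown to the user, falling back to the backend identifier.
    fn effective_display_name(&self) -> &'a str {
        self.display_name.unwrap_or(self.backend)
    }
}
```

With this shape, `setup_openrouter()` becomes a one-line wrapper that passes `Some("https://openrouter.ai/api/v1")` and `Some("OpenRouter")`, and the ~50 lines of secret handling live in exactly one place.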
Fixed in bcd738c — setup_openrouter() now delegates to setup_api_key_provider() with a new override_display_name: Option<&str> parameter (set to Some("OpenRouter")). Eliminates ~40 lines of duplication.
| Anthropic | API key | `anthropic_api_key` | `ANTHROPIC_API_KEY` |
| OpenAI | API key | `openai_api_key` | `OPENAI_API_KEY` |
| Ollama | None | - | - |
| OpenRouter | API key | `llm_compatible_api_key` | `LLM_API_KEY` |
Nit — OpenRouter and OpenAI-compatible share the same secret name and env var
Both rows in the table list llm_compatible_api_key / LLM_API_KEY. A brief note clarifying they share storage (and that switching between them overwrites the same secret) would prevent confusion for contributors reading the spec.
Suggested fix: Add a footnote like: "OpenRouter uses the same secret/env var as OpenAI-compatible — they share the openai_compatible backend."
Fixed in bcd738c — added a footnote after the table explaining that OpenRouter and OpenAI-compatible share the same secret/env var because OpenRouter is stored as openai_compatible under the hood, and switching between them overwrites the same credential slot.
- Re-run path now recognizes OpenRouter: display shows "OpenRouter" and keep-current routes to setup_openrouter() when base URL contains openrouter.ai
- Refactor setup_openrouter() to delegate to setup_api_key_provider() with a display_name override, eliminating ~40 lines of duplication
- Update README: remove false claim about model fetching from OpenRouter API; add footnote explaining shared secret/env var between OpenRouter and OpenAI-compatible
- Fix pre-existing clippy warning in settings.rs (field_reassign_with_default)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated no new comments.
Squashed and merged:

* feat: add OpenRouter preset to setup wizard
* fix: address serrrfirat review comments on OpenRouter wizard preset

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- Add OpenRouter as a provider preset in the setup wizard
- Pre-fills the base URL (https://openrouter.ai/api/v1) and prompts for an API key
- Reuses the existing `openai_compatible` backend — no new backend needed

Changes
- `src/setup/wizard.rs`: Add OpenRouter as choice 4 in provider menu, add `setup_openrouter()` method
- `src/setup/README.md`: Document OpenRouter provider in Step 3 spec

Test plan
- `cargo test --lib` — all 1271 tests pass
- `cargo check` / `cargo check --no-default-features --features libsql` / `cargo check --all-features`
- `cargo fmt --check` / `cargo clippy --all --all-features`
- `cargo run -- onboard` and verify OpenRouter appears as option 5 with correct base URL

Closes #178
🤖 Generated with Claude Code