
feat(llm): add Google Gemini, AWS Bedrock, io.net, Mistral, Yandex, and Cloudflare WS AI providers#676

Merged
ilblackdragon merged 2 commits into nearai:main from Mffff4:feat/add-llm-providers
Mar 7, 2026

Conversation

@Mffff4 (Contributor) commented Mar 7, 2026

  • Added 6 new LLM providers to declarative registry in providers.json
  • Updated LLM_PROVIDERS.md documentation matrix
  • Maintained feature parity in FEATURE_PARITY.md
  • Added support for passing custom headers such as YANDEX_EXTRA_HEADERS for Yandex folder IDs
  • Bypassed default_base_url test logic for Cloudflare and AWS Bedrock as they have dynamic base URLs
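To make the description above concrete, here is a minimal sketch of what a declarative entry and the custom-header plumbing might look like. The field names mirror the providers.json snippets quoted in the review threads below (api_key_env, model_env, extra_headers_env); the KEY=VALUE,KEY=VALUE format for YANDEX_EXTRA_HEADERS is an assumption for illustration, not something the diff confirms.

```python
import json
import os

# Hypothetical providers.json entry for Yandex, using the field names
# that appear in this PR's review threads. The exact schema is assumed.
YANDEX_ENTRY = json.loads("""
{
  "id": "yandex",
  "api_key_env": "YANDEX_API_KEY",
  "api_key_required": true,
  "model_env": "YANDEX_MODEL",
  "extra_headers_env": "YANDEX_EXTRA_HEADERS"
}
""")

def parse_extra_headers(raw: str) -> dict:
    """Parse a comma-separated KEY=VALUE string into a header dict.

    An empty or whitespace-only input yields no headers. The format
    itself is an assumption for this sketch.
    """
    headers = {}
    for pair in filter(None, (p.strip() for p in raw.split(","))):
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

# Example: a Yandex folder ID passed through the environment variable
# named by extra_headers_env.
os.environ["YANDEX_EXTRA_HEADERS"] = "x-folder-id=b1gexample"
raw = os.environ.get(YANDEX_ENTRY["extra_headers_env"], "")
print(parse_extra_headers(raw))  # {'x-folder-id': 'b1gexample'}
```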

@github-actions Bot added the labels scope: docs (Documentation), size: M (50-199 changed lines), and risk: low (Changes to docs, tests, or low-risk modules) on Mar 7, 2026
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the platform's capabilities by integrating six new Large Language Model (LLM) providers: Google Gemini, AWS Bedrock, io.net, Mistral, Yandex AI Studio, and Cloudflare Workers AI. These additions enhance the system's flexibility and choice for users, along with necessary documentation updates and specific handling for dynamic base URLs and custom headers.

Highlights

  • New LLM Providers: Added support for Google Gemini, AWS Bedrock, io.net, Mistral, Yandex AI Studio, and Cloudflare Workers AI as new LLM providers.
  • Documentation Updates: Updated the LLM_PROVIDERS.md documentation to include the newly added providers and ensured feature parity is reflected in FEATURE_PARITY.md.
  • Custom Header Support: Implemented support for passing custom headers, specifically for Yandex folder IDs.
  • Dynamic Base URL Handling: Adjusted base URL handling for Cloudflare and AWS Bedrock due to their dynamic nature, bypassing default base URL test logic.
Changelog
  • FEATURE_PARITY.md
    • Updated the feature parity matrix to reflect the ✅ status for new LLM providers like AWS Bedrock, Google Gemini, io.net, Mistral, Yandex AI Studio, Cloudflare Workers AI, and NVIDIA API.
    • Added notes on the specific adapters or proxies required for some new providers.
  • docs/LLM_PROVIDERS.md
    • Added new rows to the LLM providers table for Google Gemini, AWS Bedrock, io.net, Mistral, Yandex AI Studio, and Cloudflare Workers AI, including their IDs, API key requirements, and model descriptions.
  • providers.json
    • Added new JSON objects defining the configurations for Google Gemini, AWS Bedrock, io.net, Mistral, Yandex AI Studio, and Cloudflare Workers AI.
    • Standardized the formatting of aliases arrays for existing providers by moving them to multiline arrays.
    • Included extra_headers_env for Yandex to support custom headers.
    • Configured base_url_env for Bedrock and Cloudflare, bypassing default_base_url logic.
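The base_url_env bullet above can be sketched as a small resolution routine: an environment override wins, a static default_base_url is the fallback, and providers with dynamic endpoints (Bedrock, Cloudflare) simply have no static default, so the environment variable becomes effectively required. This is an illustrative sketch, not the project's actual loader code; the function name and error handling are assumptions.

```python
# Minimal sketch of base-URL resolution for a declarative provider
# registry, under the fields this PR configures. Not the actual
# nearai implementation.
def resolve_base_url(provider: dict, env: dict) -> str:
    """Return the provider's base URL.

    Precedence: the env var named by base_url_env, then
    default_base_url. Providers with dynamic endpoints define no
    default, so the env var must be set.
    """
    env_name = provider.get("base_url_env")
    if env_name and env.get(env_name):
        return env[env_name]
    default = provider.get("default_base_url")
    if default:
        return default
    raise ValueError(
        f"provider {provider.get('id')!r} has a dynamic base URL; "
        f"set the {env_name} environment variable"
    )

# Cloudflare Workers AI URLs embed an account ID, so there is no
# usable static default; the URL has to come from the environment.
cloudflare = {"id": "cloudflare", "base_url_env": "CLOUDFLARE_BASE_URL"}
url = resolve_base_url(
    cloudflare,
    {"CLOUDFLARE_BASE_URL": "https://example.invalid/accounts/<id>/ai/v1"},
)
print(url)
```

This also explains the PR's test-logic bypass: a default_base_url assertion cannot hold for providers whose URL only exists at runtime.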
Activity
  • No human activity has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions Bot added the contributor: new (First-time contributor) label Mar 7, 2026
@Mffff4 force-pushed the feat/add-llm-providers branch from 22b19c5 to b892abf on March 7, 2026 at 19:40
@github-actions Bot added the scope: llm (LLM integration) label Mar 7, 2026
@gemini-code-assist Bot (Contributor) left a comment


Code Review

This pull request is a great addition, adding support for six new LLM providers and updating the corresponding documentation. The changes are well-structured. I've identified a few potential typos in the default model names for some of the new providers in providers.json. These could cause errors for users relying on the default configurations. My review includes specific suggestions to correct these model names.

Note: Security Review has been skipped due to the limited scope of the PR.

Comment thread on providers.json:
"api_key_env": "GEMINI_API_KEY",
"api_key_required": true,
"model_env": "GEMINI_MODEL",
"default_model": "gemini-2.5-flash",
Severity: high

The default_model for Gemini is set to gemini-2.5-flash. This appears to be a typo, as the current flash model from Google is gemini-1.5-flash. Using an incorrect model name will lead to errors for users.

Suggested change:
- "default_model": "gemini-2.5-flash",
+ "default_model": "gemini-1.5-flash",

Comment thread on providers.json:
"api_key_required": false,
"base_url_env": "BEDROCK_BASE_URL",
"model_env": "BEDROCK_MODEL",
"default_model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
Severity: high

The default_model for AWS Bedrock, anthropic.claude-3-5-sonnet-20241022-v2:0, appears to be incorrect. The date 20241022 is in the future and the version v2:0 is unusual. The official model ID for Claude 3.5 Sonnet on Bedrock is anthropic.claude-3-5-sonnet-20240620-v1:0. Using an invalid model ID will cause requests to fail.

Suggested change:
- "default_model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
+ "default_model": "anthropic.claude-3-5-sonnet-20240620-v1:0",

Comment thread on providers.json:
"api_key_required": true,
"base_url_env": "CLOUDFLARE_BASE_URL",
"model_env": "CLOUDFLARE_MODEL",
"default_model": "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
Severity: high

The default_model for Cloudflare, @cf/meta/llama-3.3-70b-instruct-fp8-fast, seems to contain a typo. Meta has not released llama-3.3; the latest version is Llama 3.1. A valid model on Cloudflare is @cf/meta/llama-3.1-70b-instruct. Using a non-existent model will result in errors.

Suggested change:
- "default_model": "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
+ "default_model": "@cf/meta/llama-3.1-70b-instruct",

@ilblackdragon (Member) left a comment


Looks good. Also, the comments from Gemini seem to stem from its outdated notion of its own and other models.

@Mffff4 (Contributor, Author) commented Mar 7, 2026

@ilblackdragon
Thank you for the review and approval! You're right about Gemini's comments — they do seem to have outdated information about the models.

I plan to add native Google Gemini API support separately (not just through the OpenAI-compatible endpoint) in a future PR, as the native API provides additional capabilities like multimodal inputs, grounding with Google Search, code execution, and access to Google's latest model parameters. The current implementation uses the OpenAI-compatible endpoint for quick integration, but I want to leverage the full feature set of the Gemini API eventually.

Thanks for the feedback!

@ilblackdragon merged commit 12ba79f into nearai:main Mar 7, 2026
22 checks passed
@github-actions Bot mentioned this pull request Mar 7, 2026
@github-actions Bot mentioned this pull request Mar 10, 2026
bkutasi pushed a commit to bkutasi/ironclaw that referenced this pull request Mar 28, 2026
…nd Cloudflare WS AI providers (nearai#676)

* feat(llm): add Google Gemini and AWS Bedrock providers

* feat(llm): add io.net, Mistral, Yandex, and Cloudflare WS AI providers
drchirag1991 pushed a commit to drchirag1991/ironclaw that referenced this pull request Apr 8, 2026
…nd Cloudflare WS AI providers (nearai#676)

* feat(llm): add Google Gemini and AWS Bedrock providers

* feat(llm): add io.net, Mistral, Yandex, and Cloudflare WS AI providers

Labels

contributor: new (First-time contributor) · risk: low (Changes to docs, tests, or low-risk modules) · scope: docs (Documentation) · scope: llm (LLM integration) · size: M (50-199 changed lines)
