feat(llm): add Google Gemini, AWS Bedrock, io.net, Mistral, Yandex, and Cloudflare WS AI providers#676
Conversation
Summary of Changes: Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the platform's capabilities by integrating six new Large Language Model (LLM) providers: Google Gemini, AWS Bedrock, io.net, Mistral, Yandex AI Studio, and Cloudflare Workers AI. These additions give users more flexibility and choice, and are accompanied by the necessary documentation updates and specific handling for dynamic base URLs and custom headers.
22b19c5 to b892abf (compare)
Code Review
This pull request is a great addition: it adds support for six new LLM providers and updates the corresponding documentation. The changes are well-structured. I've identified a few potential typos in the default model names for some of the new providers in providers.json. These could cause errors for users relying on the default configurations. My review includes specific suggestions to correct these model names.
Note: Security Review has been skipped due to the limited scope of the PR.
```json
"api_key_env": "GEMINI_API_KEY",
"api_key_required": true,
"model_env": "GEMINI_MODEL",
"default_model": "gemini-2.5-flash",
```
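Read together with the Bedrock and Cloudflare hunks below, a complete Gemini entry in providers.json presumably looks something like the following sketch. The top-level `"gemini"` key and the `default_base_url` value are assumptions for illustration, not part of the diff:

```json
{
  "gemini": {
    "api_key_env": "GEMINI_API_KEY",
    "api_key_required": true,
    "model_env": "GEMINI_MODEL",
    "default_model": "gemini-2.5-flash",
    "default_base_url": "https://generativelanguage.googleapis.com/v1beta/openai"
  }
}
```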
```json
"api_key_required": false,
"base_url_env": "BEDROCK_BASE_URL",
"model_env": "BEDROCK_MODEL",
"default_model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
```
The default_model for AWS Bedrock, anthropic.claude-3-5-sonnet-20241022-v2:0, appears to be incorrect. The date 20241022 is in the future and the version v2:0 is unusual. The official model ID for Claude 3.5 Sonnet on Bedrock is anthropic.claude-3-5-sonnet-20240620-v1:0. Using an invalid model ID will cause requests to fail.
```diff
- "default_model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
+ "default_model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
```
```json
"api_key_required": true,
"base_url_env": "CLOUDFLARE_BASE_URL",
"model_env": "CLOUDFLARE_MODEL",
"default_model": "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
```
The default_model for Cloudflare, @cf/meta/llama-3.3-70b-instruct-fp8-fast, seems to contain a typo. Meta has not released llama-3.3; the latest version is Llama 3.1. A valid model on Cloudflare is @cf/meta/llama-3.1-70b-instruct. Using a non-existent model will result in errors.
```diff
- "default_model": "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
+ "default_model": "@cf/meta/llama-3.1-70b-instruct",
```
ilblackdragon left a comment
Looks good. Also, the comments from Gemini seem to be because it has an outdated notion of its own and other models.
@ilblackdragon I plan to add native Google Gemini API support separately (not just through the OpenAI-compatible endpoint) in a future PR, as the native API provides additional capabilities like multimodal inputs, grounding with Google Search, code execution, and access to Google's latest model parameters. The current implementation uses the OpenAI-compatible endpoint for quick integration, but I want to leverage the full feature set of the Gemini API eventually. Thanks for the feedback!
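As a rough illustration of the OpenAI-compatible route described above, the provider config could be resolved at runtime like this. This is a minimal sketch, not code from this PR: the env var names match the providers.json hunk, the helper name is hypothetical, and the base URL follows Google's documented OpenAI-compatibility endpoint.

```python
import os

# Google's OpenAI-compatible endpoint for Gemini (assumption for this sketch;
# in the PR the base URL presumably comes from providers.json).
GEMINI_OPENAI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"


def resolve_gemini_config(default_model: str = "gemini-2.5-flash") -> dict:
    """Build the kwargs an OpenAI-compatible client would need for Gemini.

    Mirrors the providers.json fields: GEMINI_API_KEY and GEMINI_MODEL are
    read from the environment, falling back to the configured default model.
    """
    return {
        "base_url": GEMINI_OPENAI_BASE_URL,
        "api_key": os.environ.get("GEMINI_API_KEY", ""),
        "model": os.environ.get("GEMINI_MODEL", default_model),
    }
```

These kwargs could then be passed to any OpenAI-compatible client constructor, which is what makes the quick-integration path cheap compared to a native SDK integration.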
…nd Cloudflare WS AI providers (nearai#676) * feat(llm): add Google Gemini and AWS Bedrock providers * feat(llm): add io.net, Mistral, Yandex, and Cloudflare WS AI providers
Highlights:
- `providers.json`
- `LLM_PROVIDERS.md` documentation matrix
- `FEATURE_PARITY.md`
- `YANDEX_EXTRA_HEADERS` for Yandex folder IDs
- `default_base_url` test logic for Cloudflare and AWS Bedrock, as they have dynamic base URLs
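The dynamic-base-URL handling mentioned above can be sketched as follows: Cloudflare's OpenAI-compatible endpoint embeds the account ID in the path, so a static `default_base_url` cannot work. This is an illustrative sketch, not code from the PR; `CLOUDFLARE_ACCOUNT_ID` is an assumed env var name, while `CLOUDFLARE_BASE_URL` matches the providers.json hunk.

```python
import os

# Cloudflare's OpenAI-compatible Workers AI endpoint is account-scoped,
# so the base URL must be built per user rather than hard-coded.
CLOUDFLARE_URL_TEMPLATE = (
    "https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/v1"
)


def resolve_cloudflare_base_url() -> str:
    # An explicit override (the CLOUDFLARE_BASE_URL env var from
    # providers.json) wins; otherwise build the URL from the account ID.
    override = os.environ.get("CLOUDFLARE_BASE_URL")
    if override:
        return override
    account_id = os.environ.get("CLOUDFLARE_ACCOUNT_ID")
    if not account_id:
        raise ValueError(
            "CLOUDFLARE_ACCOUNT_ID is required when CLOUDFLARE_BASE_URL is unset"
        )
    return CLOUDFLARE_URL_TEMPLATE.format(account_id=account_id)
```

This is also why the tests for Cloudflare and AWS Bedrock cannot simply assert a fixed `default_base_url` string the way the other providers' tests can.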