feat: add multi-provider LLM support with MiniMax M2.7 integration #128
octo-patch wants to merge 2 commits into NirDiamant:main from
Conversation
Add get_langchain_chat_model() factory function to helper_functions.py that enables switching between OpenAI, Groq, Anthropic, Amazon Bedrock, and MiniMax LLM providers. This complements the existing get_langchain_embedding_provider() factory and makes the ModelProvider enum functional. Also update README.md with documentation and usage examples for the multi-provider support.
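For context, here is a minimal sketch of what such a factory might look like. The `MINIMAX` enum value and the factory signature match the diff quoted later in this review; the other enum values, default model strings, and provider branches are assumptions for illustration, not the verbatim implementation in helper_functions.py.

```python
from enum import Enum
from typing import Optional


class ModelProvider(Enum):
    # MINIMAX matches the diff in this PR; the other values are assumed.
    OPENAI = "openai"
    GROQ = "groq"
    ANTHROPIC = "anthropic"
    AMAZON_BEDROCK = "amazon_bedrock"
    MINIMAX = "minimax"


def get_langchain_chat_model(provider: ModelProvider, model: Optional[str] = None,
                             temperature: float = 0, max_tokens: int = 4000):
    """Return a LangChain chat model for the requested provider."""
    if provider == ModelProvider.OPENAI:
        from langchain_openai import ChatOpenAI  # deferred import per provider
        return ChatOpenAI(model=model or "gpt-4o",
                          temperature=temperature, max_tokens=max_tokens)
    if provider == ModelProvider.ANTHROPIC:
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(model=model or "claude-3-5-sonnet-20241022",
                             temperature=temperature, max_tokens=max_tokens)
    # Groq, Bedrock, and MiniMax branches follow the same pattern.
    raise ValueError(f"Unsupported provider: {provider}")
```

Deferred per-branch imports keep the helper usable even when only one provider's SDK is installed, which is the pattern the actual diff below also uses.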
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)

📝 Walkthrough
Added multi-provider LLM support: a new …

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
README.md (1)
563-586: Document MiniMax temperature floor behavior explicitly.
Line 563 introduces the factory well, but the section should mention MiniMax's `temperature=0` incompatibility and that the helper coerces it to `0.1`; otherwise users may be surprised by non-zero outputs when they pass zero.
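For illustration, the behavior being flagged reduces to a floor on the sampling temperature. `minimax_temperature` below is a hypothetical name wrapping the exact expression used in the MiniMax branch quoted later in this review:

```python
def minimax_temperature(temperature: float) -> float:
    # MiniMax's API rejects temperature=0, so the helper floors it at 0.1.
    # Callers passing 0 therefore get slightly non-deterministic output
    # instead of greedy decoding.
    return temperature if temperature > 0 else 0.1

assert minimax_temperature(0) == 0.1
assert minimax_temperature(0.7) == 0.7
```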
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 563-586: Update the README snippet for helper_functions.py to explicitly state that ModelProvider.MINIMAX's client rejects temperature=0, and that get_langchain_chat_model will coerce any temperature of 0 to 0.1 for MiniMax; mention this behavior near the example usage and include the relevant symbols ModelProvider.MINIMAX and get_langchain_chat_model so users know where the coercion occurs and why they may see non-zero outputs when passing temperature=0.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@helper_functions.py`:
- Around lines 382-391: In the ModelProvider.MINIMAX branch in
helper_functions.py adjust the ChatOpenAI construction to fail fast: retrieve
MINIMAX_API_KEY from os.environ and if it's missing raise a clear error (e.g.,
ValueError) rather than passing None, and validate the temperature argument from
the caller (the variable temperature) to reject negative values (raise an error)
instead of silently coercing to 0.1; keep passing model (default "MiniMax-M1")
and max_tokens into ChatOpenAI once the inputs are validated.
- Line 339: The function signature for get_langchain_chat_model uses an implicit
optional (model: str = None); change it to an explicit optional type (e.g.,
model: Optional[str] or model: str | None) and import typing.Optional if needed;
also update the similar signature in get_langchain_embedding_provider to use the
same explicit Optional[str] form so both functions comply with RUF013 and static
type checks.
---
Nitpick comments:
In `@README.md`:
- Around lines 563-586: Update the README snippet for helper_functions.py to
explicitly state that ModelProvider.MINIMAX's client rejects temperature=0, and
that get_langchain_chat_model will coerce any temperature of 0 to 0.1 for
MiniMax; mention this behavior near the example usage and include the relevant
symbols ModelProvider.MINIMAX and get_langchain_chat_model so users know where
the coercion occurs and why they may see non-zero outputs when passing
temperature=0.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 32e54eff-74a6-41ee-9a7c-3851176d8bdd
📒 Files selected for processing (2)
README.md, helper_functions.py
```python
MINIMAX = "minimax"
```

```python
def get_langchain_chat_model(provider: ModelProvider, model: str = None, temperature: float = 0, max_tokens: int = 4000):
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "== Lint/Python config files =="
fd -HI '^(pyproject\.toml|setup\.cfg|tox\.ini|ruff\.toml|\.ruff\.toml)$'
echo
echo "== Search for Python target and RUF013 configuration =="
fd -HI '^(pyproject\.toml|setup\.cfg|tox\.ini|ruff\.toml|\.ruff\.toml)$' \
  | xargs -r rg -n --no-heading 'python_requires|requires-python|target-version|RUF013|implicit[ -]optional'
echo
echo "== Detect implicit Optional annotations in helper_functions.py =="
python - <<'PY'
import ast, pathlib
path = pathlib.Path("helper_functions.py")
tree = ast.parse(path.read_text())
for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    args = fn.args.args
    defaults = fn.args.defaults
    start = len(args) - len(defaults)
    for i, default in enumerate(defaults):
        arg = args[start + i]
        if isinstance(default, ast.Constant) and default.value is None and arg.annotation is not None:
            ann = ast.unparse(arg.annotation)
            if "None" not in ann and "Optional" not in ann and "|" not in ann:
                print(f"{path}:{arg.lineno} -> {fn.name}({arg.arg}: {ann} = None)")
PY
```

Repository: NirDiamant/RAG_Techniques
Length of output: 168
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# Find helper_functions.py
echo "== Locating helper_functions.py =="
fd -t f 'helper_functions\.py'
# Check ruff.toml content
echo
echo "== Content of ruff.toml =="
cat ruff.toml
```

Repository: NirDiamant/RAG_Techniques
Length of output: 1229
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# Read the function definition around line 339
echo "== Function definition around line 339 =="
sed -n '335,345p' helper_functions.py
echo
echo "== Detect implicit Optional annotations in helper_functions.py =="
python - <<'PY'
import ast, pathlib
path = pathlib.Path("helper_functions.py")
tree = ast.parse(path.read_text())
for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    args = fn.args.args
    defaults = fn.args.defaults
    start = len(args) - len(defaults)
    for i, default in enumerate(defaults):
        arg = args[start + i]
        if isinstance(default, ast.Constant) and default.value is None and arg.annotation is not None:
            ann = ast.unparse(arg.annotation)
            if "None" not in ann and "Optional" not in ann and "|" not in ann:
                print(f"{path}:{arg.lineno} -> {fn.name}({arg.arg}: {ann} = None)")
PY
```

Repository: NirDiamant/RAG_Techniques
Length of output: 719
Use explicit Optional typing for model.
Line 339 uses `model: str = None`, which violates RUF013 (implicit optional). Use `str | None` or `Optional[str]` instead. Note: a similar pattern exists on line 396 in `get_langchain_embedding_provider()`.
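For example, the fix is a one-line signature change; a sketch using the `Optional[str]` spelling (the body is elided, and `ModelProvider` is quoted so the fragment stands alone):

```python
from typing import Optional

# Explicit optional satisfies RUF013; on Python 3.10+ the equivalent
# `model: str | None = None` works without the typing import.
def get_langchain_chat_model(provider: "ModelProvider", model: Optional[str] = None,
                             temperature: float = 0, max_tokens: int = 4000):
    ...
```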
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 339-339: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helper_functions.py` at line 339, The function signature for
get_langchain_chat_model uses an implicit optional (model: str = None); change
it to an explicit optional type (e.g., model: Optional[str] or model: str |
None) and import typing.Optional if needed; also update the similar signature in
get_langchain_embedding_provider to use the same explicit Optional[str] form so
both functions comply with RUF013 and static type checks.
```python
elif provider == ModelProvider.MINIMAX:
    from langchain_openai import ChatOpenAI
    import os
    return ChatOpenAI(
        model=model or "MiniMax-M1",
        temperature=temperature if temperature > 0 else 0.1,
        max_tokens=max_tokens,
        openai_api_key=os.environ.get("MINIMAX_API_KEY"),
        openai_api_base="https://api.minimax.io/v1",
    )
```
Fail fast on MiniMax credentials and avoid masking invalid negative temperatures.
At line 389, a missing `MINIMAX_API_KEY` is not validated explicitly, and at line 387 negative temperatures are silently coerced to `0.1`. This makes misconfiguration and invalid input harder to detect.
Suggested fix
```diff
 elif provider == ModelProvider.MINIMAX:
     from langchain_openai import ChatOpenAI
     import os
+    api_key = os.environ.get("MINIMAX_API_KEY")
+    if not api_key:
+        raise ValueError("MINIMAX_API_KEY is required for ModelProvider.MINIMAX")
+    if temperature < 0:
+        raise ValueError("temperature must be >= 0")
+
     return ChatOpenAI(
         model=model or "MiniMax-M1",
-        temperature=temperature if temperature > 0 else 0.1,
+        temperature=0.1 if temperature == 0 else temperature,
         max_tokens=max_tokens,
-        openai_api_key=os.environ.get("MINIMAX_API_KEY"),
+        openai_api_key=api_key,
         openai_api_base="https://api.minimax.io/v1",
     )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
elif provider == ModelProvider.MINIMAX:
    from langchain_openai import ChatOpenAI
    import os
    api_key = os.environ.get("MINIMAX_API_KEY")
    if not api_key:
        raise ValueError("MINIMAX_API_KEY is required for ModelProvider.MINIMAX")
    if temperature < 0:
        raise ValueError("temperature must be >= 0")
    return ChatOpenAI(
        model=model or "MiniMax-M1",
        temperature=0.1 if temperature == 0 else temperature,
        max_tokens=max_tokens,
        openai_api_key=api_key,
        openai_api_base="https://api.minimax.io/v1",
    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helper_functions.py` around lines 382-391: In the ModelProvider.MINIMAX
branch in helper_functions.py adjust the ChatOpenAI construction to fail fast:
retrieve MINIMAX_API_KEY from os.environ and if it's missing raise a clear error
(e.g., ValueError) rather than passing None, and validate the temperature
argument from the caller (the variable temperature) to reject negative values
(raise an error) instead of silently coercing to 0.1; keep passing model
(default "MiniMax-M1") and max_tokens into ChatOpenAI once the inputs are
validated.
- Update default model from MiniMax-M1 to MiniMax-M2.7
- MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding
- Update README documentation to reflect new default model
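The code change this commit message describes is presumably a one-string default swap in the MiniMax branch quoted earlier; a sketch of the branch after the bump, not the verbatim commit:

```python
elif provider == ModelProvider.MINIMAX:
    from langchain_openai import ChatOpenAI
    import os
    return ChatOpenAI(
        model=model or "MiniMax-M2.7",  # previously "MiniMax-M1"
        temperature=temperature if temperature > 0 else 0.1,
        max_tokens=max_tokens,
        openai_api_key=os.environ.get("MINIMAX_API_KEY"),
        openai_api_base="https://api.minimax.io/v1",
    )
```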
The multi-provider factory function in `helper_functions.py` … Could you: …

Happy to merge once refactored.
Summary
- `ModelProvider` enum and `get_langchain_chat_model()` factory
- `langchain_openai.ChatOpenAI`
- `MiniMax-M2.7` (latest flagship model with enhanced reasoning and coding)
- `MiniMax-M2.7-highspeed` or older models via the `model` parameter

Changes
- `helper_functions.py`: Add `MINIMAX` to `ModelProvider` enum, add MiniMax branch in factory function with `MiniMax-M2.7` as default
- `README.md`: Document MiniMax provider usage, API key requirement, and example code

Why
MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities, available via OpenAI-compatible API.
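A quick usage sketch, assuming the `helper_functions.py` from this PR is importable and a valid key is available (the key value below is a placeholder):

```python
import os

from helper_functions import ModelProvider, get_langchain_chat_model

# MINIMAX_API_KEY is required for the MiniMax branch.
os.environ.setdefault("MINIMAX_API_KEY", "<your-key>")

# Defaults to MiniMax-M2.7; pass model="MiniMax-M2.7-highspeed" to override.
llm = get_langchain_chat_model(ModelProvider.MINIMAX, temperature=0.2)
print(llm.invoke("Summarize retrieval-augmented generation in one sentence.").content)
```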
Testing
- Default model is `MiniMax-M2.7`, model override works, API base URL correct

Summary by CodeRabbit
New Features
Documentation