feat: add multi-provider LLM support with MiniMax M2.7 integration #128

Open

octo-patch wants to merge 2 commits into NirDiamant:main from octo-patch:feature/add-minimax-provider

Conversation

octo-patch commented Mar 15, 2026

Summary

  • Add multi-provider LLM support via ModelProvider enum and get_langchain_chat_model() factory
  • Integrate MiniMax as a provider using OpenAI-compatible API via langchain_openai.ChatOpenAI
  • Default model: MiniMax-M2.7 (latest flagship model with enhanced reasoning and coding)
  • Users can also specify MiniMax-M2.7-highspeed or older models via the model parameter
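For example (assuming MINIMAX_API_KEY is set in the environment; `invoke` is the standard LangChain chat-model call):

```python
from helper_functions import ModelProvider, get_langchain_chat_model

# Default pairing described above: provider MINIMAX, model MiniMax-M2.7
llm = get_langchain_chat_model(ModelProvider.MINIMAX)

# Override the default via the `model` parameter
fast_llm = get_langchain_chat_model(ModelProvider.MINIMAX, model="MiniMax-M2.7-highspeed")

print(llm.invoke("Say hello in one word.").content)
```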

Changes

  • helper_functions.py: Add MINIMAX to ModelProvider enum, add MiniMax branch in factory function with MiniMax-M2.7 as default
  • README.md: Document MiniMax provider usage, API key requirement, and example code

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities, available via OpenAI-compatible API.

Testing

  • Unit tested: default model is MiniMax-M2.7, model override works, API base URL correct
  • Integration tested with MiniMax API
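A sketch of those unit checks (hypothetical test code; asserting on ChatOpenAI's model_name and openai_api_base fields is an assumption about how the checks were written):

```python
import os
from helper_functions import ModelProvider, get_langchain_chat_model

os.environ.setdefault("MINIMAX_API_KEY", "test-key")  # dummy key so no real credentials are needed

def test_default_model():
    llm = get_langchain_chat_model(ModelProvider.MINIMAX)
    assert llm.model_name == "MiniMax-M2.7"

def test_model_override():
    llm = get_langchain_chat_model(ModelProvider.MINIMAX, model="MiniMax-M2.7-highspeed")
    assert llm.model_name == "MiniMax-M2.7-highspeed"

def test_api_base():
    llm = get_langchain_chat_model(ModelProvider.MINIMAX)
    assert llm.openai_api_base == "https://api.minimax.io/v1"
```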

Summary by CodeRabbit

  • New Features

    • Added MiniMax provider support and expanded multi-provider LLM switching across OpenAI, Groq, Anthropic, and Amazon Bedrock.
  • Documentation

    • Added Multi-Provider LLM Support docs with configuration details, required environment variables, and usage examples for all supported providers.
    • Note: the new documentation content was added in two locations (duplicate entries).

Add get_langchain_chat_model() factory function to helper_functions.py
that enables switching between OpenAI, Groq, Anthropic, Amazon Bedrock,
and MiniMax LLM providers. This complements the existing
get_langchain_embedding_provider() factory and makes the ModelProvider
enum functional.

Also update README.md with documentation and usage examples for the
multi-provider support.

coderabbitai Bot commented Mar 15, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0f6cf207-12f4-4e24-b6fb-92f07cea54b4

📥 Commits

Reviewing files that changed from the base of the PR and between 5755f4f and c7d4d66.

📒 Files selected for processing (2)
  • README.md
  • helper_functions.py
✅ Files skipped from review due to trivial changes (1)
  • README.md

📝 Walkthrough

Added multi-provider LLM support: a new MINIMAX enum member and a public factory get_langchain_chat_model() that returns provider-specific LangChain chat models. README received duplicated "Multi-Provider LLM Support" documentation and usage examples for several providers.

Changes

Cohort / File(s) — Summary

  • Documentation (duplicate additions) — README.md: Inserted a "Multi-Provider LLM Support" section in two locations (duplicate content). Adds a provider table and usage examples, and references the get_langchain_chat_model() factory.
  • LLM Provider Support (helper + enum + factory) — helper_functions.py: Added MINIMAX to ModelProvider and a public get_langchain_chat_model(provider: ModelProvider, model: str = None, temperature: float = 0, max_tokens: int = 4000) that returns LangChain chat models for OPENAI, GROQ, ANTHROPIC, AMAZON_BEDROCK, and MINIMAX (MINIMAX routes via ChatOpenAI with a custom API base/key). Duplicated definitions appear in-file.
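For orientation, the enum shape the walkthrough implies (a sketch; only the MINIMAX member and its value are confirmed by the diff quoted later on this page, the other string values are assumptions):

```python
from enum import Enum

class ModelProvider(Enum):
    OPENAI = "openai"                  # pre-existing members; string values assumed
    GROQ = "groq"
    ANTHROPIC = "anthropic"
    AMAZON_BEDROCK = "amazon_bedrock"
    MINIMAX = "minimax"                # added by this PR (value confirmed in the diff)
```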

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I nibble docs and code with glee,
A factory sprouts for models three (and more!),
MINIMAX hops in, keys held tight,
Providers meet in one neat sight,
I twitch my nose — new choices, pure delight!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately summarizes the main change: adding multi-provider LLM support with MiniMax M2.7 integration, which directly corresponds to the helper_functions.py and README.md changes.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.


coderabbitai Bot left a comment
Actionable comments posted: 2

🧹 Nitpick comments (1)
README.md (1)

563-586: Document MiniMax temperature floor behavior explicitly.

Line 563 introduces the factory well, but the section should mention MiniMax's temperature=0 incompatibility and that the helper coerces it to 0.1; otherwise users may be surprised by a non-zero effective temperature when they pass zero.
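To make the surprise concrete (a sketch; the 0 → 0.1 coercion is taken from the MiniMax branch quoted below in this review):

```python
llm = get_langchain_chat_model(ModelProvider.MINIMAX, temperature=0)
# The MiniMax branch replaces a requested temperature of 0 with 0.1 before
# constructing the client, so outputs are not fully deterministic.
assert llm.temperature == 0.1
```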

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 563 - 586, Update the README snippet for
helper_functions.py to explicitly state that ModelProvider.MINIMAX's client
rejects temperature=0, and that get_langchain_chat_model will coerce any
temperature of 0 to 0.1 for MiniMax; mention this behavior near the example
usage and include the relevant symbols ModelProvider.MINIMAX and
get_langchain_chat_model so users know where the coercion occurs and why they
may see non-zero outputs when passing temperature=0.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@helper_functions.py`:
- Around line 382-391: In the ModelProvider.MINIMAX branch in
helper_functions.py adjust the ChatOpenAI construction to fail fast: retrieve
MINIMAX_API_KEY from os.environ and if it's missing raise a clear error (e.g.,
ValueError) rather than passing None, and validate the temperature argument from
the caller (the variable temperature) to reject negative values (raise an error)
instead of silently coercing to 0.1; keep passing model (default "MiniMax-M1")
and max_tokens into ChatOpenAI once the inputs are validated.
- Line 339: The function signature for get_langchain_chat_model uses an implicit
optional (model: str = None); change it to an explicit optional type (e.g.,
model: Optional[str] or model: str | None) and import typing.Optional if needed;
also update the similar signature in get_langchain_embedding_provider to use the
same explicit Optional[str] form so both functions comply with RUF013 and static
type checks.

---

Nitpick comments:
In `@README.md`:
- Around line 563-586: Update the README snippet for helper_functions.py to
explicitly state that ModelProvider.MINIMAX's client rejects temperature=0, and
that get_langchain_chat_model will coerce any temperature of 0 to 0.1 for
MiniMax; mention this behavior near the example usage and include the relevant
symbols ModelProvider.MINIMAX and get_langchain_chat_model so users know where
the coercion occurs and why they may see non-zero outputs when passing
temperature=0.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 32e54eff-74a6-41ee-9a7c-3851176d8bdd

📥 Commits

Reviewing files that changed from the base of the PR and between 69a08b0 and 5755f4f.

📒 Files selected for processing (2)
  • README.md
  • helper_functions.py

Comment thread: helper_functions.py

```python
    MINIMAX = "minimax"


def get_langchain_chat_model(provider: ModelProvider, model: str = None, temperature: float = 0, max_tokens: int = 4000):
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Lint/Python config files =="
fd -HI '^(pyproject\.toml|setup\.cfg|tox\.ini|ruff\.toml|\.ruff\.toml)$'

echo
echo "== Search for Python target and RUF013 configuration =="
fd -HI '^(pyproject\.toml|setup\.cfg|tox\.ini|ruff\.toml|\.ruff\.toml)$' \
  | xargs -r rg -n --no-heading 'python_requires|requires-python|target-version|RUF013|implicit[ -]optional'

echo
echo "== Detect implicit Optional annotations in helper_functions.py =="
python - <<'PY'
import ast, pathlib

path = pathlib.Path("helper_functions.py")
tree = ast.parse(path.read_text())

for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    args = fn.args.args
    defaults = fn.args.defaults
    start = len(args) - len(defaults)
    for i, default in enumerate(defaults):
        arg = args[start + i]
        if isinstance(default, ast.Constant) and default.value is None and arg.annotation is not None:
            ann = ast.unparse(arg.annotation)
            if "None" not in ann and "Optional" not in ann and "|" not in ann:
                print(f"{path}:{arg.lineno} -> {fn.name}({arg.arg}: {ann} = None)")
PY

Repository: NirDiamant/RAG_Techniques

Length of output: 168


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find helper_functions.py
echo "== Locating helper_functions.py =="
fd -t f 'helper_functions\.py'

# Check ruff.toml content
echo
echo "== Content of ruff.toml =="
cat ruff.toml

Repository: NirDiamant/RAG_Techniques

Length of output: 1229


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Read the function definition around line 339
echo "== Function definition around line 339 =="
sed -n '335,345p' helper_functions.py

echo
echo "== Detect implicit Optional annotations in helper_functions.py =="
python - <<'PY'
import ast, pathlib

path = pathlib.Path("helper_functions.py")
tree = ast.parse(path.read_text())

for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    args = fn.args.args
    defaults = fn.args.defaults
    start = len(args) - len(defaults)
    for i, default in enumerate(defaults):
        arg = args[start + i]
        if isinstance(default, ast.Constant) and default.value is None and arg.annotation is not None:
            ann = ast.unparse(arg.annotation)
            if "None" not in ann and "Optional" not in ann and "|" not in ann:
                print(f"{path}:{arg.lineno} -> {fn.name}({arg.arg}: {ann} = None)")
PY

Repository: NirDiamant/RAG_Techniques

Length of output: 719


Use explicit Optional typing for model.

Line 339 uses model: str = None, which violates RUF013 (implicit optional). Use str | None or Optional[str] instead. Note: a similar pattern exists on line 396 in get_langchain_embedding_provider().
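A minimal sketch of the suggested signature fix (either spelling satisfies RUF013; the body is elided):

```python
from typing import Optional

def get_langchain_chat_model(provider: ModelProvider, model: Optional[str] = None,
                             temperature: float = 0, max_tokens: int = 4000):
    ...
```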

🧰 Tools
🪛 Ruff (0.15.5)

[warning] 339-339: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helper_functions.py` at line 339, The function signature for
get_langchain_chat_model uses an implicit optional (model: str = None); change
it to an explicit optional type (e.g., model: Optional[str] or model: str |
None) and import typing.Optional if needed; also update the similar signature in
get_langchain_embedding_provider to use the same explicit Optional[str] form so
both functions comply with RUF013 and static type checks.

Comment thread: helper_functions.py, lines +382 to +391

```python
elif provider == ModelProvider.MINIMAX:
    from langchain_openai import ChatOpenAI
    import os
    return ChatOpenAI(
        model=model or "MiniMax-M1",
        temperature=temperature if temperature > 0 else 0.1,
        max_tokens=max_tokens,
        openai_api_key=os.environ.get("MINIMAX_API_KEY"),
        openai_api_base="https://api.minimax.io/v1",
    )
```

⚠️ Potential issue | 🟡 Minor

Fail fast on MiniMax credentials and avoid masking invalid negative temperatures.

At line 389, a missing MINIMAX_API_KEY is not validated explicitly, and at line 387 negative temperatures are silently coerced to 0.1. This makes misconfiguration and invalid input harder to detect.

Suggested fix
 elif provider == ModelProvider.MINIMAX:
     from langchain_openai import ChatOpenAI
     import os
+    api_key = os.environ.get("MINIMAX_API_KEY")
+    if not api_key:
+        raise ValueError("MINIMAX_API_KEY is required for ModelProvider.MINIMAX")
+    if temperature < 0:
+        raise ValueError("temperature must be >= 0")
+
     return ChatOpenAI(
         model=model or "MiniMax-M1",
-        temperature=temperature if temperature > 0 else 0.1,
+        temperature=0.1 if temperature == 0 else temperature,
         max_tokens=max_tokens,
-        openai_api_key=os.environ.get("MINIMAX_API_KEY"),
+        openai_api_key=api_key,
         openai_api_base="https://api.minimax.io/v1",
     )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```python
# Before
elif provider == ModelProvider.MINIMAX:
    from langchain_openai import ChatOpenAI
    import os
    return ChatOpenAI(
        model=model or "MiniMax-M1",
        temperature=temperature if temperature > 0 else 0.1,
        max_tokens=max_tokens,
        openai_api_key=os.environ.get("MINIMAX_API_KEY"),
        openai_api_base="https://api.minimax.io/v1",
    )

# After
elif provider == ModelProvider.MINIMAX:
    from langchain_openai import ChatOpenAI
    import os
    api_key = os.environ.get("MINIMAX_API_KEY")
    if not api_key:
        raise ValueError("MINIMAX_API_KEY is required for ModelProvider.MINIMAX")
    if temperature < 0:
        raise ValueError("temperature must be >= 0")
    return ChatOpenAI(
        model=model or "MiniMax-M1",
        temperature=0.1 if temperature == 0 else temperature,
        max_tokens=max_tokens,
        openai_api_key=api_key,
        openai_api_base="https://api.minimax.io/v1",
    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helper_functions.py` around lines 382 - 391, In the ModelProvider.MINIMAX
branch in helper_functions.py adjust the ChatOpenAI construction to fail fast:
retrieve MINIMAX_API_KEY from os.environ and if it's missing raise a clear error
(e.g., ValueError) rather than passing None, and validate the temperature
argument from the caller (the variable temperature) to reject negative values
(raise an error) instead of silently coercing to 0.1; keep passing model
(default "MiniMax-M1") and max_tokens into ChatOpenAI once the inputs are
validated.

- Update default model from MiniMax-M1 to MiniMax-M2.7
- MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding
- Update README documentation to reflect new default model
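In diff form, the default-model bump described above presumably amounts to this one-line change in the MiniMax branch (sketch, based on the branch quoted earlier in the review):

```diff
-        model=model or "MiniMax-M1",
+        model=model or "MiniMax-M2.7",
```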
octo-patch changed the title from "feat: add multi-provider LLM support with MiniMax integration" to "feat: add multi-provider LLM support with MiniMax M2.7 integration" on Mar 18, 2026

NirDiamant (Owner) commented

The multi-provider factory function get_langchain_chat_model() is useful and something the repo is missing. However, the PR is too tightly coupled to MiniMax promotion.

Could you:

  1. Make the factory function truly provider-agnostic (OpenAI, Anthropic, Groq, Ollama as first-class options)
  2. Remove the MiniMax-specific marketing language from the README additions
  3. Keep MiniMax as one option among several, not the featured provider

Happy to merge once refactored.
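One possible shape for item 1, as a sketch only (not part of this PR; the Ollama branch via langchain_ollama.ChatOllama, the OLLAMA enum member, and all default model names are assumptions):

```python
def get_langchain_chat_model(provider: ModelProvider, model: str | None = None,
                             temperature: float = 0, max_tokens: int = 4000):
    if provider == ModelProvider.OPENAI:
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=model or "gpt-4o", temperature=temperature, max_tokens=max_tokens)
    if provider == ModelProvider.OLLAMA:  # hypothetical new enum member
        from langchain_ollama import ChatOllama
        return ChatOllama(model=model or "llama3", temperature=temperature)
    if provider == ModelProvider.MINIMAX:  # one option among several
        import os
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=model or "MiniMax-M2.7", temperature=temperature,
                          max_tokens=max_tokens,
                          openai_api_key=os.environ["MINIMAX_API_KEY"],
                          openai_api_base="https://api.minimax.io/v1")
    raise ValueError(f"Unsupported provider: {provider}")
```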
