
Commit 5755f4f

feat: add multi-provider LLM support with MiniMax integration
Add get_langchain_chat_model() factory function to helper_functions.py that enables switching between OpenAI, Groq, Anthropic, Amazon Bedrock, and MiniMax LLM providers. This complements the existing get_langchain_embedding_provider() factory and makes the ModelProvider enum functional. Also update README.md with documentation and usage examples for the multi-provider support.
1 parent 69a08b0 commit 5755f4f

2 files changed

Lines changed: 85 additions & 0 deletions

File tree

- README.md
- helper_functions.py

README.md

Lines changed: 27 additions & 0 deletions
@@ -558,6 +558,33 @@ To begin implementing these advanced RAG techniques in your projects:
```

3. Follow the detailed implementation guide in each technique's directory.

## Multi-Provider LLM Support

The `helper_functions.py` module includes a `get_langchain_chat_model()` factory function that lets you easily switch between different LLM providers:

| Provider | Enum Value | Default Model | Required Environment Variable |
|----------|------------|---------------|-------------------------------|
| OpenAI | `ModelProvider.OPENAI` | `gpt-4o` | `OPENAI_API_KEY` |
| Groq | `ModelProvider.GROQ` | `llama-3.3-70b-versatile` | `GROQ_API_KEY` |
| Anthropic | `ModelProvider.ANTHROPIC` | `claude-sonnet-4-20250514` | `ANTHROPIC_API_KEY` |
| Amazon Bedrock | `ModelProvider.AMAZON_BEDROCK` | `anthropic.claude-3-sonnet-20240229-v1:0` | AWS credentials |
| MiniMax | `ModelProvider.MINIMAX` | `MiniMax-M1` | `MINIMAX_API_KEY` |

**Example usage:**

```python
from helper_functions import ModelProvider, get_langchain_chat_model

# Use OpenAI (default)
llm = get_langchain_chat_model(ModelProvider.OPENAI)

# Use MiniMax
llm = get_langchain_chat_model(ModelProvider.MINIMAX, model="MiniMax-M1")

# Use Anthropic with custom settings
llm = get_langchain_chat_model(ModelProvider.ANTHROPIC, temperature=0.7, max_tokens=2000)
```
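
The returned object is a standard LangChain chat model, so the usual `invoke()` API applies. Below is a minimal end-to-end sketch, assuming `MINIMAX_API_KEY` is exported (per the table above):

```python
import os

from helper_functions import ModelProvider, get_langchain_chat_model

# Fail fast if the provider's key is missing
# (environment variable names per the table above).
if not os.environ.get("MINIMAX_API_KEY"):
    raise EnvironmentError("Set MINIMAX_API_KEY before using the MiniMax provider.")

llm = get_langchain_chat_model(ModelProvider.MINIMAX)
response = llm.invoke("Summarize retrieval-augmented generation in one sentence.")
print(response.content)
```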

## Contributing

We welcome contributions from the community! If you have a new technique or improvement to suggest:

helper_functions.py

Lines changed: 58 additions & 0 deletions
@@ -333,6 +333,64 @@ class ModelProvider(Enum):
```python
    GROQ = "groq"
    ANTHROPIC = "anthropic"
    AMAZON_BEDROCK = "bedrock"
    MINIMAX = "minimax"


def get_langchain_chat_model(provider: ModelProvider, model: str = None, temperature: float = 0, max_tokens: int = 4000):
    """
    Returns a LangChain chat model based on the specified provider.

    Args:
        provider (ModelProvider): The model provider to use.
        model (str): Optional - The specific model ID to use.
        temperature (float): The temperature for generation (default: 0).
        max_tokens (int): Maximum tokens for generation (default: 4000).

    Returns:
        A LangChain chat model instance.

    Raises:
        ValueError: If the specified provider is not supported.
    """
    if provider == ModelProvider.OPENAI:
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(
            model=model or "gpt-4o",
            temperature=temperature,
            max_tokens=max_tokens,
        )
    elif provider == ModelProvider.GROQ:
        from langchain_groq import ChatGroq
        return ChatGroq(
            model=model or "llama-3.3-70b-versatile",
            temperature=temperature,
            max_tokens=max_tokens,
        )
    elif provider == ModelProvider.ANTHROPIC:
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(
            model=model or "claude-sonnet-4-20250514",
            temperature=temperature,
            max_tokens=max_tokens,
        )
    elif provider == ModelProvider.AMAZON_BEDROCK:
        from langchain_aws import ChatBedrock
        return ChatBedrock(
            model_id=model or "anthropic.claude-3-sonnet-20240229-v1:0",
            model_kwargs={"temperature": temperature, "max_tokens": max_tokens},
        )
    elif provider == ModelProvider.MINIMAX:
        # MiniMax is served through an OpenAI-compatible endpoint,
        # so the ChatOpenAI client is reused with a custom base URL.
        from langchain_openai import ChatOpenAI
        import os
        return ChatOpenAI(
            model=model or "MiniMax-M1",
            temperature=temperature if temperature > 0 else 0.1,  # floor at 0.1
            max_tokens=max_tokens,
            openai_api_key=os.environ.get("MINIMAX_API_KEY"),
            openai_api_base="https://api.minimax.io/v1",
        )
    else:
        raise ValueError(f"Unsupported model provider: {provider}")


def get_langchain_embedding_provider(provider: EmbeddingProvider, model_id: str = None):
```
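
Each branch imports its provider package lazily, so only the package for the selected provider needs to be installed. A quick smoke test for the factory, as a sketch (it assumes the corresponding API keys are exported; some clients validate credentials when the model object is constructed):

```python
from helper_functions import ModelProvider, get_langchain_chat_model

# Try a few providers; a branch fails only when its package or
# credentials are missing, thanks to the lazy per-branch imports.
for provider in (ModelProvider.OPENAI, ModelProvider.GROQ, ModelProvider.MINIMAX):
    try:
        llm = get_langchain_chat_model(provider)
        print(f"{provider.value}: {type(llm).__name__}")
    except Exception as exc:
        print(f"{provider.value}: skipped ({exc})")
```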
