Add MiniMax as LLM provider with M2.7 and M2.5 models #60

Open
octo-patch wants to merge 1 commit into stanford-oval:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax as an LLM provider in WikiChat via the OpenAI-compatible API
  • Add MiniMax-M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models to llm_config.yaml using LiteLLM's openai/ prefix for routing
  • Add MiniMax models to the Chainlit web UI model selector dropdown
  • Update README.md to mention MiniMax in supported providers and API key setup

Details

MiniMax provides an OpenAI-compatible API at https://api.minimax.io/v1, making integration seamless through WikiChat's existing ChainLite/LiteLLM stack. No code changes are needed beyond configuration — once the MINIMAX_API_KEY environment variable is set, all pipeline stages (query, generation, filtering, drafting, refining, reranking) automatically work with MiniMax models.
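A configuration entry along these lines would be enough (a sketch only: the key names below follow common LiteLLM router conventions and may differ from the exact schema ChainLite expects in `llm_config.yaml`; the model identifiers are taken from this PR's table):

```yaml
# Hypothetical llm_config.yaml fragment. The openai/ prefix routes the
# request through LiteLLM's OpenAI-compatible client, pointed at MiniMax.
- model_name: minimax-m27
  litellm_params:
    model: openai/MiniMax-M2.7
    api_base: https://api.minimax.io/v1
    api_key: os.environ/MINIMAX_API_KEY
```

Once such an entry exists, every pipeline stage that takes an engine name can select the MiniMax model without further code changes.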

Models added:

| Engine name | Model | Context window |
|---|---|---|
| minimax-m27 | MiniMax-M2.7 | 1M tokens |
| minimax-m27-highspeed | MiniMax-M2.7-highspeed | 1M tokens |
| minimax-m25 | MiniMax-M2.5 | 204K tokens |
| minimax-m25-highspeed | MiniMax-M2.5-highspeed | 204K tokens |

Usage:

```shell
# Set API key
echo "MINIMAX_API_KEY=your-key" >> API_KEYS

# Run WikiChat with MiniMax
inv demo --engine minimax-m27
```
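The engine-name resolution behind `--engine minimax-m27` can be sketched as follows (illustrative only: the real lookup happens inside ChainLite/LiteLLM from `llm_config.yaml`, and the dictionary below is an assumption built from the table above):

```python
# Hypothetical mapping from WikiChat engine names to LiteLLM model strings.
# The "openai/" prefix tells LiteLLM to use its OpenAI-compatible client,
# which is then pointed at MiniMax's api_base.
MINIMAX_ENGINES = {
    "minimax-m27": "openai/MiniMax-M2.7",
    "minimax-m27-highspeed": "openai/MiniMax-M2.7-highspeed",
    "minimax-m25": "openai/MiniMax-M2.5",
    "minimax-m25-highspeed": "openai/MiniMax-M2.5-highspeed",
}


def resolve_engine(engine: str) -> str:
    """Return the LiteLLM model string for a WikiChat engine name."""
    try:
        return MINIMAX_ENGINES[engine]
    except KeyError:
        raise ValueError(f"Unknown MiniMax engine: {engine}") from None
```

For example, `resolve_engine("minimax-m27")` yields `"openai/MiniMax-M2.7"`, which LiteLLM dispatches to the MiniMax endpoint.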

Test plan

  • 18 unit tests validating config structure, engine name uniqueness, LiteLLM prefix, UI dropdown, README mentions, engine resolution, and temperature handling
  • 3 integration tests (MiniMax-M2.7 completion, M2.5-highspeed completion, streaming) — all passing
  • All 21 tests pass: `pytest tests/test_minimax.py -v`
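The uniqueness and prefix checks described above could look roughly like this (a sketch; the actual tests in `tests/test_minimax.py` presumably parse the real `llm_config.yaml` rather than an inline list):

```python
# Illustrative unit tests mirroring the engine names from the table above.
ENGINE_TO_MODEL = {
    "minimax-m27": "openai/MiniMax-M2.7",
    "minimax-m27-highspeed": "openai/MiniMax-M2.7-highspeed",
    "minimax-m25": "openai/MiniMax-M2.5",
    "minimax-m25-highspeed": "openai/MiniMax-M2.5-highspeed",
}


def test_engine_names_unique():
    # No two MiniMax engines may resolve to the same model string.
    models = list(ENGINE_TO_MODEL.values())
    assert len(models) == len(set(models))


def test_litellm_openai_prefix():
    # Every model string must carry the openai/ prefix for LiteLLM routing.
    assert all(m.startswith("openai/") for m in ENGINE_TO_MODEL.values())
```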

Add MiniMax AI (https://platform.minimaxi.com/) as an LLM provider option,
enabling WikiChat to use MiniMax's OpenAI-compatible API via LiteLLM.

Changes:
- llm_config.yaml: Add MiniMax endpoint with M2.7, M2.7-highspeed, M2.5,
  and M2.5-highspeed model mappings using openai/ prefix for LiteLLM routing
- backend_server.py: Add minimax-m27 and minimax-m27-highspeed to the
  Chainlit web UI model selector dropdown
- README.md: Mention MiniMax in supported providers list and API_KEYS example
- tests/test_minimax.py: 18 unit tests (config validation, engine resolution,
  temperature handling, UI/README checks) + 3 integration tests (completion,
  streaming via LiteLLM)