Kimi k2.5 MLA support #23803

Draft

rohitharkhani wants to merge 2 commits into sgl-project:main from rohitharkhani:kimi-k2.5-mls-support


Conversation


@rohitharkhani commented on Apr 27, 2026

Motivation

Add support for speculative decoding with MLA-based Eagle3 draft models, e.g. https://huggingface.co/lightseekorg/kimi-k2.5-eagle3-mla

Modifications

  1. Added DeepSeek MLA Eagle3 draft-model support (a launch sketch follows this list)
  2. Matched the speculative draft model's context length to the base model's
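
For reviewers who want to try this, a minimal launch sketch. The base-model path and tuning values below are illustrative assumptions, not tested settings, and it presumes the offline `sgl.Engine` accepts the existing speculative server arguments as keyword arguments, as the CLI flags of the same names do:

```python
# Minimal sketch: serving a base model with the Eagle3 MLA draft model.
import sglang as sgl

llm = sgl.Engine(
    model_path="moonshotai/Kimi-K2.5",  # hypothetical base-model path
    speculative_algorithm="EAGLE3",
    speculative_draft_model_path="lightseekorg/kimi-k2.5-eagle3-mla",
    speculative_num_steps=3,         # draft forward passes per round
    speculative_eagle_topk=1,        # branching factor of the draft tree
    speculative_num_draft_tokens=4,  # tokens verified per round
)

out = llm.generate("The capital of France is", {"temperature": 0, "max_new_tokens": 16})
print(out["text"])
llm.shutdown()
```

With EAGLE3 the draft model proposes tokens that the base model verifies in a single forward pass, so the acceptance rate of the draft tokens drives the end-to-end speedup.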

Accuracy Tests

TODO

Speed Tests and Profiling

TODO

Checklist

Review and Merge Process

  1. Ping Merge Oncalls to start the process. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • Common commands include /tag-and-rerun-ci, /tag-run-ci-label, /rerun-failed-ci
  4. After green CI and required approvals, ask Merge Oncalls or people with Write permission to merge the PR.


@rohitharkhani changed the title from "Kimi k2.5 mls support" to "Kimi k2.5 MLA support" on Apr 27, 2026
@rohitharkhani force-pushed the kimi-k2.5-mls-support branch from c7a156f to fe81bdc on Apr 27, 2026 at 07:35
rohitharkhani and others added 2 commits April 27, 2026 13:07
Add deepseek_eagle3.py implementing Eagle3DeepseekV2ForCausalLM for
speculative decoding with DeepSeek Multi-head Latent Attention (MLA).

This enables SGLang to load Eagle3 draft models that use MLA attention and
declare the architecture Eagle3DeepseekV2ForCausalLM (e.g.,
lightseekorg/kimi-k2.5-eagle3-mla). Previously, only Llama-based Eagle3
models were supported, via llama_eagle3.py.

Key implementation details:
- DeepseekEagle3DecoderLayer: Single decoder layer using
  DeepseekV2AttentionMLA with overridden fused_qkv_a_proj_with_mqa
  to accept 2*hidden_size input (concat of embeddings + hidden states)
- DeepseekEagle3Model: Backbone with embed_tokens, fc projection
  (3*target_hidden_size -> hidden_size), single midlayer, and norm
- Eagle3DeepseekV2ForCausalLM: Top-level class with lm_head,
  logits_processor, hot_token_id (d2t mapping), and custom weight
  loading that handles fused q_a_proj/kv_a_proj_with_mqa concatenation
- Eagle3DeepseekV3ForCausalLM: Alias for V3 compatibility
- Auto-registered via EntryClass for model registry discovery

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
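
To make the shapes above concrete, here is a heavily simplified, runnable sketch of the structure the commit describes. Everything is a plain-PyTorch stand-in: the real file uses DeepseekV2AttentionMLA, SGLang's parallel linear layers, and forward-batch plumbing, none of which appear here, and all dimensions are placeholders.

```python
# Illustrative skeleton only -- not the real deepseek_eagle3.py.
import torch
from torch import nn


class DeepseekEagle3DecoderLayer(nn.Module):
    """Single decoder layer whose attention input is 2*hidden_size wide:
    the concat of token embeddings and the previous hidden states."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.input_layernorm = nn.RMSNorm(hidden_size)
        self.hidden_norm = nn.RMSNorm(hidden_size)
        # Stand-in for DeepseekV2AttentionMLA whose fused_qkv_a_proj_with_mqa
        # is overridden to accept a 2*hidden_size input.
        self.self_attn = nn.Linear(2 * hidden_size, hidden_size, bias=False)
        self.post_attention_layernorm = nn.RMSNorm(hidden_size)
        self.mlp = nn.Linear(hidden_size, hidden_size, bias=False)  # stand-in MLP

    def forward(self, embeds: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.input_layernorm(embeds), self.hidden_norm(hidden)], dim=-1)
        h = self.self_attn(x)
        return h + self.mlp(self.post_attention_layernorm(h))


class DeepseekEagle3Model(nn.Module):
    """Backbone: embed_tokens, fc fusing the 3 captured target hidden states,
    a single midlayer, and a final norm."""

    def __init__(self, vocab_size: int, hidden_size: int, target_hidden_size: int):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden_size)
        self.fc = nn.Linear(3 * target_hidden_size, hidden_size, bias=False)
        self.midlayer = DeepseekEagle3DecoderLayer(hidden_size)
        self.norm = nn.RMSNorm(hidden_size)

    def forward(self, input_ids: torch.Tensor, target_hiddens: torch.Tensor) -> torch.Tensor:
        hidden = self.fc(target_hiddens)  # (..., 3*target_hidden) -> (..., hidden)
        return self.norm(self.midlayer(self.embed_tokens(input_ids), hidden))


def fuse_qkv_a(q_a_proj_w: torch.Tensor, kv_a_proj_with_mqa_w: torch.Tensor) -> torch.Tensor:
    """Checkpoints ship q_a_proj and kv_a_proj_with_mqa separately; the custom
    loader concatenates them along the output dim into fused_qkv_a_proj_with_mqa."""
    return torch.cat([q_a_proj_w, kv_a_proj_with_mqa_w], dim=0)


# Toy end-to-end pass, including the d2t mapping (hot_token_id) that translates
# draft-vocab ids back into target-vocab ids.
draft_vocab, hidden, target_hidden = 1000, 64, 128
model = DeepseekEagle3Model(draft_vocab, hidden, target_hidden)
hot_token_id = torch.arange(draft_vocab)  # identity d2t mapping for the sketch
ids = torch.randint(0, draft_vocab, (1, 4))
feats = torch.randn(1, 4, 3 * target_hidden)
logits = model(ids, feats) @ model.embed_tokens.weight.T  # tied lm_head stand-in
print(hot_token_id[logits[:, -1].argmax(-1)])  # proposed target-vocab token id
```

In the real module, per the commit message, Eagle3DeepseekV2ForCausalLM wraps this backbone with lm_head and a logits processor, Eagle3DeepseekV3ForCausalLM aliases it for V3 configs, and EntryClass registers both for model-registry discovery.
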
@rohitharkhani force-pushed the kimi-k2.5-mls-support branch from fe81bdc to 317b8cd on Apr 27, 2026 at 07:37
@rohitharkhani (Author) commented:

/tag-and-rerun-ci
