
Correctly abort the failed grammar requests & Improve the handling of abort #6803


Merged
merrymercy merged 3 commits into main from lianmin/fix-grammar on Jun 2, 2025

Conversation

@merrymercy (Contributor) commented on Jun 2, 2025

  • Abort the requests in the grammar queue correctly
  • Return errors immediately for invalid grammar requests instead of silently ignoring the grammar
  • Simplify the redundant code for aborted requests

@gemini-code-assist (bot) left a comment

Hello @merrymercy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! Gemini or gemini-code-assist here, providing a summary of this pull request. This PR primarily focuses on improving the handling of failed grammar requests and standardizing the process for aborting requests within the system. It introduces a dedicated object (INVALID_GRAMMAR_OBJ) to represent invalid grammar states, ensuring these failures are explicitly tracked and cached. Requests encountering invalid grammars or other validation errors are now consistently marked for abortion using a new helper method (set_finish_with_abort). Furthermore, the PR adds the capability to abort requests that are currently waiting in the grammar processing queue and includes metrics to track aborted requests. Minor adjustments are also made to CUDA graph capture batch sizes, memory pool size, logging, and test utilities.

Highlights

  • Standardized Request Abortion: Introduced a new method set_finish_with_abort on the Req class to provide a consistent way to mark requests for abortion with a specific error message and status code. This replaces scattered logic for handling various validation failures. (A hedged sketch of such a helper follows this list.)
  • Improved Invalid Grammar Handling: Created a dedicated INVALID_GRAMMAR_OBJ to represent grammar compilation failures. Grammar backends now return this object instead of None on failure. The scheduler checks for this object (including in the cache) and aborts the corresponding request.
  • Grammar Queue Abort Support: Implemented the ability to abort requests that are currently waiting in the grammar processing queue, ensuring that AbortReq signals are handled promptly for these requests.
  • Metrics for Aborted Requests: Added a new Prometheus counter (sglang:num_aborted_requests) to track the total number of requests that are aborted.
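
To make the new abort path concrete, here is a minimal, hedged sketch of what a set_finish_with_abort-style helper can look like. It is not the PR's exact code: the FinishReason dataclass and the Req fields shown here are simplified stand-ins, and only behaviour visible in this PR (logging the error, recording a finish reason with a status code, clearing multimodal/grammar state, and shrinking the input so prefill does minimal work) is reproduced.

    from dataclasses import dataclass
    from http import HTTPStatus
    import logging

    logger = logging.getLogger(__name__)


    @dataclass
    class FinishReason:
        """Simplified stand-in for the scheduler's abort finish-reason object."""
        message: str
        status_code: HTTPStatus


    class Req:
        """Stripped-down request; only the fields touched by the helper are shown."""

        def __init__(self, rid, origin_input_ids):
            self.rid = rid
            self.origin_input_ids = origin_input_ids
            self.multimodal_inputs = None
            self.grammar = None
            self.finished_reason = None

        def set_finish_with_abort(self, error_msg, status_code=HTTPStatus.BAD_REQUEST):
            # Centralized abort path: record why the request failed ...
            logger.error(f"{error_msg}, {self.rid=}")
            self.finished_reason = FinishReason(error_msg, status_code)
            # ... and drop state that would otherwise keep being processed.
            self.multimodal_inputs = None
            self.grammar = None
            self.origin_input_ids = [0]  # one token, so prefill does minimal work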

Changelog

Click here to see the changelog
  • python/sglang/srt/constrained/base_grammar_backend.py
    • Added a default copy implementation returning self (L63).
    • Introduced INVALID_GRAMMAR_OBJ (L102).
    • Changed CacheEntry.value type hint from Optional[BaseGrammarObject] to BaseGrammarObject (L107).
  • python/sglang/srt/constrained/llguidance_backend.py
    • Imported INVALID_GRAMMAR_OBJ (L31).
    • Changed error handling in _from_serialized, dispatch_json, dispatch_ebnf, and dispatch_structural_tag to return INVALID_GRAMMAR_OBJ instead of None on failure (L131, L143, L156, L174).
    • Changed logging level from warning to error for grammar compilation failures (L130, L142, L155, L173).
  • python/sglang/srt/constrained/outlines_backend.py
    • Imported INVALID_GRAMMAR_OBJ (L27).
    • Changed error handling in _compile_regex and dispatch_json to return INVALID_GRAMMAR_OBJ instead of None on failure (L156, L175).
    • Changed logging level from warning to error for JSON schema failures (L174).
  • python/sglang/srt/constrained/xgrammar_backend.py
    • Imported INVALID_GRAMMAR_OBJ (L31).
    • Wrapped tokenizer info initialization in if True (L156).
    • Changed error handling in dispatch_json, dispatch_ebnf, dispatch_regex, and dispatch_structural_tag to return INVALID_GRAMMAR_OBJ instead of None on failure (L186, L194, L202, L221).
    • Changed logging level from warning to error for grammar compilation failures (L185, L193, L201, L220).
    • Removed conditional check for self.grammar_compiler in reset (L225).
  • python/sglang/srt/entrypoints/http_server.py
    • Added [http_server] prefix to error logs in stream_results and generate (L259, L277).
  • python/sglang/srt/managers/schedule_batch.py
    • Imported HTTPStatus (L40) and get_tensor_model_parallel_rank (L55).
    • Removed import of get_compiler_backend (L65).
    • Added set_finish_with_abort method to the Req class for standardized abortion (L776-L784).
  • python/sglang/srt/managers/scheduler.py
    • Imported INVALID_GRAMMAR_OBJ (L39).
    • Updated handle_generate_request to use set_finish_with_abort for disaggregation errors (L960), multimodal prompt length errors (L991), input length errors (L1007), and logprob_start_len errors (L1021).
    • Added check in handle_generate_request to abort requests hitting a cached INVALID_GRAMMAR_OBJ (L1059-L1061).
    • Updated handle_embedding_request to use set_finish_with_abort for multimodal prompt length errors (L1109).
    • Updated move_ready_grammar_requests to handle requests already finished (aborted) (L1783-L1785).
    • Updated move_ready_grammar_requests to handle INVALID_GRAMMAR_OBJ from grammar backend results and abort requests (L1788-L1791, L1822-L1825).
    • Renamed num_abort_reqs to num_timeout_reqs in move_ready_grammar_requests (L1780, L1810, L1814).
    • Updated timeout handling in move_ready_grammar_requests to use set_finish_with_abort and cache INVALID_GRAMMAR_OBJ (L1834-L1835).
    • Removed TODO comment about aborting grammar queue requests (L2027).
    • Added logic to abort_request to cancel and abort requests found in the grammar_queue (L2056-L2061); a simplified sketch of this flow appears after the changelog.
  • python/sglang/srt/managers/tokenizer_manager.py
    • Initialized mm_processor to None if multimodal is not enabled (L224).
    • Updated _tokenize_one_request to only process multimodal data if mm_processor is available (L465).
    • Ensured image_inputs is None if multimodal processing is skipped (L475).
    • Modified _send_one_request to return the created ReqState (L635).
    • Modified _wait_one_response to accept ReqState as an argument instead of looking it up (L640).
    • Updated calls to _wait_one_response in _handle_batch_request to pass the ReqState (L714, L722, L748, L757).
    • Added metrics observation for aborted requests in abort_request (L793-L794).
  • python/sglang/srt/managers/utils.py
    • Removed manual setting of finished_reason and logging in validate_input_length (L38-L41), as this is now handled by set_finish_with_abort in the scheduler.
  • python/sglang/srt/metrics/collector.py
    • Added num_aborted_requests_total counter (L405-L408).
    • Added observe_one_aborted_request method to increment the aborted requests counter (L543-L544).
  • python/sglang/srt/model_executor/cuda_graph_runner.py
    • Removed unused import two_batch_overlap (L27).
    • Adjusted default CUDA graph capture batch sizes, especially for speculative decoding and larger GPUs (L135, L144-L145, L150).
    • Removed logic adding size - 1 to capture sizes (L155-L157).
    • Removed filtering of batch sizes based on enable_two_batch_overlap (L159-L160).
    • Added assertion check for capture_bs (L165).
  • python/sglang/srt/model_executor/model_runner.py
    • Changed ReqToTokenPool size from max_num_reqs + 1 to max_num_reqs (L921).
  • python/sglang/srt/utils.py
    • Added is_blackwell helper function (L2058-L2061).
  • python/sglang/test/send_one.py
    • Added check for non-200 HTTP status code and print response before returning (L130-L132).
  • python/sglang/test/test_utils.py
    • Changed STDERR_FILENAME and STDOUT_FILENAME to use /tmp/ directory (L884-L885).
    • Reduced sleep time in read_output from 1 second to 0.01 seconds (L891).
    • Added try-except block for FileNotFoundError in read_output (L897-L901).
  • scripts/killall_sglang.sh
    • Added conditional logic to skip nvidia-smi calls and GPU process killing if the first argument is 'rocm' (L3-L8, L10-L33).
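
As a rough illustration of the scheduler-side flow listed above (surfacing a cached INVALID_GRAMMAR_OBJ immediately, and cancelling requests still waiting in the grammar queue when an abort arrives), here is a simplified sketch. The MiniScheduler name and the queue and cache structures are placeholders rather than the actual scheduler code; it reuses the Req sketch shown earlier.

    import asyncio
    from http import HTTPStatus

    INVALID_GRAMMAR_OBJ = object()  # sentinel; the real one is a BaseGrammarObject


    class MiniScheduler:
        """Toy scheduler holding only the pieces needed to show the abort flow."""

        def __init__(self):
            self.grammar_cache = {}   # grammar key -> compiled grammar or INVALID_GRAMMAR_OBJ
            self.grammar_queue = []   # list of (req, future producing the compiled grammar)

        def handle_generate_request(self, req, grammar_key):
            # A cached compilation failure is reported immediately instead of
            # silently ignoring the grammar.
            if self.grammar_cache.get(grammar_key) is INVALID_GRAMMAR_OBJ:
                req.set_finish_with_abort(
                    f"invalid grammar (cached): {grammar_key}", HTTPStatus.BAD_REQUEST
                )

        def abort_request(self, rid):
            # Also look for the request in the grammar queue, cancel its pending
            # compilation, and mark it aborted so it never reaches prefill.
            for req, fut in self.grammar_queue:
                if req.rid == rid and req.finished_reason is None:
                    fut.cancel()
                    req.set_finish_with_abort("request aborted by user")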


Grammar fails, a syntax error,
No longer None, but a clear terror.
Requests now abort,
A cleaner report,
No more hidden, silent bearer.


@gemini-code-assist (bot) left a comment

Code Review

This pull request significantly improves the handling of failed and aborted grammar requests. Key improvements include:

  • Correct Abort Handling: Requests in the grammar queue are now correctly aborted, and a new set_finish_with_abort method in the Req class centralizes the logic for aborting requests, which simplifies the codebase.
  • Clearer Error Signaling: Invalid grammar requests now consistently return an INVALID_GRAMMAR_OBJ and log errors instead of warnings, making issues more visible and easier to debug.
  • Code Simplification: Redundant code for handling aborted requests has been refactored using the new set_finish_with_abort method.

The changes are well-aligned with the PR's objectives and enhance the robustness and clarity of the error handling mechanisms. There are a few minor areas for improvement noted below.

Summary of Findings

  • Improved Error Handling for Grammar Requests: The PR successfully standardizes error handling for invalid grammar requests by introducing INVALID_GRAMMAR_OBJ and consistently using logger.error.
  • Centralized Request Abort Logic: The new Req.set_finish_with_abort() method effectively centralizes and simplifies the logic for aborting requests, improving code maintainability.
  • Correct Abort of Grammar Queue Requests: The abort_request method in the scheduler now correctly handles aborting requests that are pending in the grammar queue.
  • Redundant Code Removal: A redundant if True: block was identified in xgrammar_backend.py.
  • Logging Consistency: A minor inconsistency in logging levels (logger.warning vs logger.error) was noted in outlines_backend.py for similar error conditions.

Merge Readiness

This pull request introduces valuable improvements to error handling and request abortion logic, significantly enhancing the robustness and maintainability of the grammar processing components. The changes are well-structured and address the stated objectives effectively.

There are a few minor points (a redundant if True:, a logging consistency question, and a query about enable_two_batch_overlap's CUDA graph implications) that would be good to address or clarify. Once these are considered, the PR should be in excellent shape for merging.

I am unable to approve pull requests, so please ensure other reviewers take a look and approve before merging.

Comment on lines 154 to 156
      except interegular.patterns.InvalidSyntax as e:
          logger.warning(f"skip invalid regex schema: {regex=}, {e=}")
-         return None
+         return INVALID_GRAMMAR_OBJ

medium

In _compile_regex, a logger.warning is used for invalid regex schemas, while other similar error handling spots in this PR (e.g., dispatch_json in this file, and other backend files) have been updated to use logger.error and return INVALID_GRAMMAR_OBJ.

Is there a specific reason to keep logger.warning here? For consistency with the goal of returning errors immediately for invalid grammar requests, perhaps logger.error would be more appropriate, even if this is an internal helper method.
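
For reference, the aligned version the comment is hinting at would look roughly like the following (illustrative only; whether this internal helper should change is for the authors to decide):

        except interegular.patterns.InvalidSyntax as e:
            logger.error(f"skip invalid regex schema: {regex=}, {e=}")
            return INVALID_GRAMMAR_OBJ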

Comment on lines +156 to +160
+       if True:
+           tokenizer_info = TokenizerInfo.from_huggingface(
+               tokenizer, vocab_size=vocab_size
+           )
+           override_stop_tokens = None

medium

This if True: block appears to be redundant. Could it be removed to simplify the code?

        tokenizer_info = TokenizerInfo.from_huggingface(
            tokenizer, vocab_size=vocab_size
        )
        override_stop_tokens = None

Comment on lines 224 to +225
      def reset(self):
-         if self.grammar_compiler:
-             self.grammar_compiler.clear_cache()
+         self.grammar_compiler.clear_cache()

medium

The if self.grammar_compiler: check was removed from the reset method. This implies self.grammar_compiler is guaranteed to be initialized. While this seems to be the case from the constructor, could you confirm if there are any scenarios where self.grammar_compiler might not be set before reset is called? If it's always set, this change is fine.

        logger.error(f"{error_msg}, {self.rid=}")
        self.multimodal_inputs = None
        self.grammar = None
        self.origin_input_ids = [0]  # set it to one token to skip the long prefill

medium

The line self.origin_input_ids = [0] with the comment "set it to one token to skip the long prefill" is a bit of an implicit contract. While the comment explains the intent, it might be worth considering if there's a more explicit way to signal to the prefill logic that the request is aborted and should be skipped or handled minimally, rather than relying on a specific input ID pattern. However, if this is a well-established pattern in the codebase, it's acceptable.
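
One illustrative alternative along the lines of this comment would be an explicit flag that the prefill path checks instead of the one-token sentinel; the names below (abort_requested, should_skip_prefill) are hypothetical and not part of this PR:

    class AbortAwareReq:
        """Hypothetical excerpt: make the 'skip prefill after abort' contract explicit."""

        def __init__(self, rid, origin_input_ids):
            self.rid = rid
            self.origin_input_ids = origin_input_ids
            self.abort_requested = False  # explicit signal, not origin_input_ids == [0]

        def set_finish_with_abort(self, error_msg):
            self.abort_requested = True
            # The input can still be truncated as an optimization, but downstream
            # code keys on the flag rather than on a magic input pattern.
            self.origin_input_ids = self.origin_input_ids[:1] or [0]


    def should_skip_prefill(req):
        return req.abort_requested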

Comment on lines 152 to 155
      if max(capture_bs) > model_runner.req_to_token_pool.size:
-         # In some case (e.g., with a small GPU or --max-running-requests), the #max-running-requests
+         # In some cases (e.g., with a small GPU or --max-running-requests), the #max-running-requests
          # is very small. We add more values here to make sure we capture the maximum bs.
-         capture_bs += [model_runner.req_to_token_pool.size - 1] + [
-             model_runner.req_to_token_pool.size
-         ]
-
-     if server_args.enable_two_batch_overlap:
-         capture_bs = [bs for bs in capture_bs if bs >= 2]
+         capture_bs += [model_runner.req_to_token_pool.size]

medium

The logic for enable_two_batch_overlap (previously filtering capture_bs for bs >= 2) has been removed. Does this mean enable_two_batch_overlap is no longer supported, or is its handling for CUDA graph capture managed elsewhere? If the feature is still intended to work, it might be good to clarify how this change affects it.

@merrymercy merged commit 20fd53b into main on Jun 2, 2025
34 of 41 checks passed
@merrymercy deleted the lianmin/fix-grammar branch on June 2, 2025 02:00
Layssy pushed a commit to Layssy/sglang-iaas that referenced this pull request Jun 9, 2025
xwu-intel pushed a commit to xwu-intel/sglang that referenced this pull request Jun 17, 2025
walker-ai pushed a commit to walker-ai/sglang that referenced this pull request Jul 8, 2025
Merge branch 'sgl_20250610_sync_tag047 of [email protected]:Theta/SGLang.git into main

https://code.alipay.com/Theta/SGLang/pull_requests/52


Reviewed-by: 剑川 <[email protected]>


* [Bugfix] Fix slice operation when chunk size mismatch (sgl-project#6697)
* [Bugfix] Fix ChatCompletion endpoint of mini_lb when stream is set (sgl-project#6703)
* [CI] Fix setup of disaggregation with different tp (sgl-project#6706)
* [PD] Remove Unnecessary Exception Handling for FastQueue.get() (sgl-project#6712)
* Fuse routed_scaling_factor in DeepSeek (sgl-project#6710)
* Overlap two kernels in DeepSeek with communication (sgl-project#6711)
* Minor refactor two-batch overlap (sgl-project#6682)
* Speed up when having padding tokens two-batch overlap (sgl-project#6668)
* [Feature] Support Flashinfer fp8 blockwise GEMM kernel on Blackwell (sgl-project#6479)
* Fix LoRA bench (sgl-project#6719)
* temp
* Fix PP for Qwen3 MoE (sgl-project#6709)
* [feat] triton kernel for get_last_loc (sgl-project#6676)
* [fix] more mem for draft_extend cuda_graph (sgl-project#6726)
* [PD] bug fix:  Update status if nixl receiver send a a dummy req. (sgl-project#6720)
* Tune memory arguments on B200 (sgl-project#6718)
* Add DeepSeek-R1-0528 function call chat template (sgl-project#6725)
* refactor(tool call): Fix BaseFormatDetector tool_index issue and refactor `parse_streaming_increment` (sgl-project#6715)
* Add draft extend CUDA graph for Triton backend (sgl-project#6705)
* refactor apply_w8a8_block_fp8_linear in fp (sgl-project#6545)
* [PD] Support completion endpoint (sgl-project#6729)
* PD Rust LB (PO2) (sgl-project#6437)
* Super tiny enable sole usage of expert distribution metrics and update doc (sgl-project#6680)
* Support picking variants of EPLB algorithms (sgl-project#6728)
* Support tuning DeepEP configs (sgl-project#6742)
* [test] add ut and bm for get_last_loc (sgl-project#6746)
* Fix mem_fraction_static for AMD CI (sgl-project#6748)
* [fix][RL] Fix DeepSeekV3ForCausalLM.post_load_weights for multiple update weight (sgl-project#6265)
* Improve EPLB logical to physical dispatch map (sgl-project#6727)
* Update DeepSeek-R1-0528 function call chat template (sgl-project#6765)
* [PD] Optimize time out logic and add env var doc for mooncake (sgl-project#6761)
* Fix aiohttp 'Chunk too big' in bench_serving (sgl-project#6737)
* Support sliding window in triton backend (sgl-project#6509)
* Fix shared experts fusion error (sgl-project#6289)
* Fix one bug in the grouped-gemm triton kernel (sgl-project#6772)
* update llama4 chat template and pythonic parser (sgl-project#6679)
* feat(tool call): Enhance Llama32Detector for improved JSON parsing in non-stream (sgl-project#6784)
* Support token-level quantization for EP MoE (sgl-project#6782)
* Temporarily lower mmlu threshold for triton sliding window backend (sgl-project#6785)
* ci: relax test_function_call_required (sgl-project#6786)
* Add intel_amx backend for Radix Attention for CPU (sgl-project#6408)
* Fix incorrect LoRA weight loading for fused gate_up_proj (sgl-project#6734)
* fix(PD-disaggregation): Can not get local ip (sgl-project#6792)
* [FIX] mmmu bench serving result display error (sgl-project#6525) (sgl-project#6791)
* Bump torch to 2.7.0 (sgl-project#6788)
* chore: bump sgl-kernel v0.1.5 (sgl-project#6794)
* Improve profiler and integrate profiler in bench_one_batch_server (sgl-project#6787)
* chore: upgrade sgl-kernel v0.1.5 (sgl-project#6795)
* [Minor] Always append newline after image token when parsing chat message (sgl-project#6797)
* Update CI tests for Llama4 models (sgl-project#6421)
* [Feat] Enable PDL automatically on Hopper architecture (sgl-project#5981)
* chore: update blackwell docker (sgl-project#6800)
* misc: cache is_hopper_arch (sgl-project#6799)
* Remove contiguous before Flashinfer groupwise fp8 gemm (sgl-project#6804)
* Correctly abort the failed grammar requests & Improve the handling of abort (sgl-project#6803)
* [EP] Add cuda kernel for moe_ep_pre_reorder (sgl-project#6699)
* Add draft extend CUDA graph for flashinfer backend  (sgl-project#6805)
* Refactor CustomOp to avoid confusing bugs (sgl-project#5382)
* Tiny log prefill time (sgl-project#6780)
* Tiny fix EPLB assertion about rebalancing period and recorder window size (sgl-project#6813)
* Add simple utility to dump tensors for debugging (sgl-project#6815)
* Fix profiles do not have consistent names (sgl-project#6811)
* Speed up rebalancing when using non-static dispatch algorithms (sgl-project#6812)
* [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (sgl-project#6093)
* [Router] Fix k8s Service Discovery (sgl-project#6766)
* Add CPU optimized kernels for topk and rope fusions  (sgl-project#6456)
* fix new_page_count_next_decode (sgl-project#6671)
* Fix wrong weight reference in dynamic EPLB (sgl-project#6818)
* Minor add metrics to expert location updater (sgl-project#6816)
* [Refactor] Rename `n_share_experts_fusion` as `num_fused_shared_experts` (sgl-project#6735)
* [FEAT] Add transformers backend support  (sgl-project#5929)
* [fix] recover auto-dispatch for rmsnorm and rope (sgl-project#6745)
* fix ep_moe_reorder kernel bugs (sgl-project#6858)
* [Refactor] Multimodal data processing for VLM (sgl-project#6659)
* Decoder-only Scoring API (sgl-project#6460)
* feat: add dp-rank to KV events (sgl-project#6852)
* Set `num_fused_shared_experts` as `num_shared_experts` when shared_experts fusion is not disabled (sgl-project#6736)
* Fix one missing arg in DeepEP (sgl-project#6878)
* Support LoRA in TestOpenAIVisionServer and fix fused kv_proj loading bug. (sgl-project#6861)
* support 1 shot allreduce  in 1-node and 2-node using mscclpp (sgl-project#6277)
* Fix Qwen3MoE missing token padding optimization (sgl-project#6820)
* Tiny update error hints (sgl-project#6846)
* Support layerwise rebalancing experts (sgl-project#6851)
* Tiny allow profiler API to auto create directory (sgl-project#6865)
* Support Blackwell DeepEP docker images (sgl-project#6868)
* [EP] Add cuda kernel for moe_ep_post_reorder (sgl-project#6837)
* [theta]merge 0605
* oai: fix openAI client error with single request via batch api (sgl-project#6170)
* [PD] Fix potential perf spike caused by tracker gc and optimize doc (sgl-project#6764)
* Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (sgl-project#6890)
* [CUTLASS-FP4-MOE]  Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (sgl-project#6887)
* bugfix(OAI): Fix image_data processing for jinja chat templates (sgl-project#6877)
* [CPU] enable CI for PRs, add Dockerfile and auto build task (sgl-project#6458)
* AITER backend extension and workload optimizations (sgl-project#6838)
* [theta]merge
* [theta]merge
* [Feature] Support Flashinfer fmha on Blackwell (sgl-project#6930)
* Fix a bug in abort & Improve docstrings for abort (sgl-project#6931)
* Tiny support customize DeepEP max dispatch tokens per rank (sgl-project#6934)
* Sync the changes on cuda graph runners (sgl-project#6932)
* [PD] Optimize transfer queue forward logic for dummy rank (sgl-project#6922)
* [Refactor] image data process in bench_serving (sgl-project#6879)
* [fix] logical_to_all_physical_map index 256 is out of bounds in EP parallel. (sgl-project#6767)
* Add triton fused moe kernel config for E=257 on B200 (sgl-project#6939)
* [sgl-kernel] update deepgemm (sgl-project#6942)
* chore: bump sgl-kernel v0.1.6 (sgl-project#6943)
* Minor compile fused topk (sgl-project#6944)
* [Bugfix] pipeline parallelism and Eagle Qwen2 (sgl-project#6910)
* Tiny re-introduce profile id logging (sgl-project#6912)
* Add triton version as a fused_moe_triton config search key to avoid performace decrease in different Triton version (sgl-project#5955)
* reduce torch.zeros overhead in moe align block size kernel (sgl-project#6369)
* chore: upgrade sgl-kernel v0.1.6 (sgl-project#6945)
* add fbgemm moe grouped gemm kernel benchmark (sgl-project#6924)
* [Docker] Add docker file for SGL Router (sgl-project#6915)
* Disabling mixed chunked prefill when eagle is enabled (sgl-project#6874)
* Add canary for EPLB rebalancing (sgl-project#6895)
* Refactor global_server_args_dict (sgl-project#6866)
* Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)
* Update server timeout time in AMD CI. (sgl-project#6953)
* [misc] add is_cpu() (sgl-project#6950)
* Add H20 fused MoE kernel tuning configs for DeepSeek-R1/V3 (sgl-project#6885)
* Add a CUDA kernel for fusing mapping and weighted sum for MoE. (sgl-project#6916)
* chore: bump sgl-kernel v0.1.6.post1 (sgl-project#6955)
* chore: upgrade sgl-kernel v0.1.6.post1 (sgl-project#6957)
* [DeepseekR1-FP4] Add Support for nvidia/DeepSeekR1-FP4 model (sgl-project#6853)
* Revert "Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)" (sgl-project#6968)
* [AMD] Add more tests to per-commit-amd (sgl-project#6926)
* chore: bump sgl-kernel v0.1.7 (sgl-project#6963)
* Slightly improve the sampler to skip unnecessary steps (sgl-project#6956)
* rebase h20 fused_moe config (sgl-project#6966)
* Fix CI and triton moe Configs (sgl-project#6974)
* Remove unnecessary kernels of num_token_non_padded (sgl-project#6965)
* Extend cuda graph capture bs for B200 (sgl-project#6937)
* Fuse routed scaling factor in deepseek (sgl-project#6970)
* Sync cuda graph runners (sgl-project#6976)
* Fix draft extend ut stability with flush cache (sgl-project#6979)
* Fix triton sliding window test case (sgl-project#6981)
* Fix expert distribution dumping causes OOM (sgl-project#6967)
* Minor remove one kernel for DeepSeek (sgl-project#6977)
* [perf][sgl-kernel] extend cutlass_mla_decode to support num_head < 128 (sgl-project#6929)
* Enable more unit tests for AMD CI. (sgl-project#6983)
* Use torch.compile to fuse flash attention decode metadata preparation (sgl-project#6973)
* Eliminate stream sync to speed up LoRA batch init  (sgl-project#6960)
* support qwen3 emebedding (sgl-project#6990)
* Fix torch profiler bugs for bench_offline_throughput.py (sgl-project#6557)
* chore: upgrade flashinfer v0.2.6.post1 jit (sgl-project#6958)
* cleanup tmp dir (sgl-project#7007)
* chore: update pr test xeon (sgl-project#7008)
* Fix cutlass MLA gets almost zero accuracy (sgl-project#6998)
* Update amd nightly models CI. (sgl-project#6992)
* feat: add direct routing strategy to DP worker (sgl-project#6884)
* Fallback to lower triton version for unfound fused moe configs (sgl-project#7013)
* Fix torchvision version for Blackwell (sgl-project#7015)
* Simplify prepare_extend_after_decode (sgl-project#6987)
* Migrate to assertEqual (sgl-project#6741)
* Fix torch version in blackwell dockerfile (sgl-project#7017)
* chore: update pr test xeon (sgl-project#7018)
* Update default settings for blackwell (sgl-project#7023)
* Support both approximate and exact expert distribution collection (sgl-project#6964)
* Add decode req pool (sgl-project#6980)
* [theta]merge 0610
* [theta]merge 0610
* [CI] Add CI workflow for sgl-router docker build (sgl-project#7027)
* Fix fused_moe triton configs (sgl-project#7029)
* CPU: map changes from developing branch in sgl-kernel (sgl-project#6833)
* chore: bump v0.4.7 (sgl-project#7038)
* Update README.md (sgl-project#7040)