support 1 shot allreduce in 1-node and 2-node using mscclpp #6277
Conversation
```python
    self.allreduce_algo = MscclppAllReduce1Shot1NodeLL(mscclpp_group, self.MAX_BYTES, device)
elif self.world_size == torch.cuda.device_count() * 2:
    proxy_service = ProxyService()
    self.allreduce_algo = MscclppAllReduce1Shot2Nodes(mscclpp_group, self.MAX_BYTES, device, torch.cuda.device_count(), proxy_service)
```
Just wondering if they can support standalone one-shot reduce-scatter, all-gather, and a2a as well? This just involves choosing whether to send the entire buffer or a slice thereof, and whether to sum over the buffers or concat them, from which one gets a 2 x 2 matrix. E.g.
- Allreduce: full, sum
- Reduce-scatter: slice, sum
- A2A: slice, cat
- Allgather: full, cat

I wonder how the a2a would compare with DeepSeek's DeepEP and ByteDance's M2N (not open-sourced) libraries.
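A minimal single-process sketch of that 2 x 2 design space (plain PyTorch on fake per-rank buffers, not the mscclpp API): each rank either reads everyone's full buffer or only its own slice of each buffer, and then either sums or concatenates what it gathered.

```python
import torch

def one_shot_collective(bufs, rank, send="full", combine="sum"):
    # bufs: fake per-rank buffers living in one process, standing in for remote memory
    world = len(bufs)
    if send == "full":
        recvd = bufs                                   # read every rank's whole buffer
    else:  # "slice": rank r only reads everyone's r-th slice
        recvd = [b.chunk(world)[rank] for b in bufs]
    if combine == "sum":
        return torch.stack(recvd).sum(dim=0)
    return torch.cat(recvd)                            # "cat"

bufs = [torch.full((4,), float(r)) for r in range(4)]
allreduce      = one_shot_collective(bufs, rank=1, send="full",  combine="sum")
reduce_scatter = one_shot_collective(bufs, rank=1, send="slice", combine="sum")  # rank 1's shard
alltoall       = one_shot_collective(bufs, rank=1, send="slice", combine="cat")  # rank 1's recv buffer
allgather      = one_shot_collective(bufs, rank=1, send="full",  combine="cat")
```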
Based on the provided mscclpp API, mscclpp can be used to implement all of these kernels. As long as allreduce is supported in sglang, the only thing left is to change the kernel implementation.
However, the mscclpp contributors do not provide demo code, so extra work will be needed to debug the CUDA implementation.
In addition, the DeepEP API is not a plain alltoall, since it fuses some permutation kernels. I think we need to carefully compare the performance of mscclpp and nvshmem.
The DeepEP low-latency kernel uses IBGDA to reduce latency. For now I don't think mscclpp uses IBGDA; I will check this as well.
MSCCLPP can support other kernels, as implemented in Nanoflow. However, I think its key advantage is customizability rather than raw performance? In my tests with large messages, it performs ~20% better when we manually control the block size for compute-comm overlap.
I agree that customizability is the key advantage. In the future we can implement allreduce/layernorm fusion based on this.
I found that we may have to support two algorithms to cover large-batch-size decode.
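A hedged sketch of the kind of dispatch this implies (the cutoff value and the function names are made up for illustration, not what this PR ships): route small messages to the mscclpp one-shot kernel and let the larger messages produced by large-batch decode fall back to a second algorithm.

```python
import torch

ONE_SHOT_MAX_BYTES = 1 << 20  # hypothetical cutoff, not the value used in this PR

def all_reduce(tensor: torch.Tensor, mscclpp_algo, fallback_all_reduce):
    # Small messages (low-concurrency decode) -> mscclpp one-shot kernel;
    # large messages (large-batch decode) -> second algorithm.
    msg_bytes = tensor.numel() * tensor.element_size()
    if msg_bytes <= ONE_SHOT_MAX_BYTES:
        return mscclpp_algo(tensor)
    return fallback_all_reduce(tensor)
```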
Btw, by two-shot do you mean intra-node all-reduce + inter-node all-reduce? I think NCCL doesn't do hierarchical all-reduce yet?
The 2-node allreduce is actually intra-node reduce-scatter + inter-node send-recv + intra-node broadcast. I am not sure how NCCL does allreduce, but I think this is a different way.
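A rough single-process sketch of that three-step scheme (fake per-GPU buffers in plain PyTorch, not the actual CUDA kernel): node-local reduce-scatter, a peer-to-peer exchange of the reduced shards between nodes, then reassembling the full result on every GPU.

```python
import torch

def two_node_allreduce(bufs, gpus_per_node=8):
    nodes = [bufs[:gpus_per_node], bufs[gpus_per_node:]]
    # 1) intra-node reduce-scatter: GPU i on each node owns shard i of the node-local sum
    shards = [torch.stack(node).sum(0).chunk(gpus_per_node) for node in nodes]
    # 2) inter-node send-recv: peer GPUs exchange their shard and add
    reduced = [shards[0][i] + shards[1][i] for i in range(gpus_per_node)]
    # 3) intra-node broadcast: every GPU reassembles the full result
    full = torch.cat(reduced)
    return [full.clone() for _ in bufs]

bufs = [torch.full((16,), float(r)) for r in range(16)]  # fake TP16 buffers
out = two_node_allreduce(bufs)
assert torch.equal(out[0], torch.stack(bufs).sum(0))
```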
I think NCCL just uses either ring or tree; I don't know if that counts as one-shot.
Please fix the compilation error in https://github.com/sgl-project/sglang/actions/runs/15343913091/job/43175858908?pr=6277
@Fridge003 Thank you! I have fixed this compile error.
Motivation
Currently we can only use 1-shot allreduce within a single node. However, there are times when we need TP16 and 2-node allreduce. Recently mscclpp added some examples of 1-shot allreduce across 2 nodes, which can improve the latency of 2-node allreduce for small messages.
Modifications
kernel benchmark
This is tested on A100 with 8 GPUs and 4 NICs per node.
This is tested on H100 with 8 GPUs and 2 NICs per node. We can see that for a 1MB message, MSCCLPP is slower than NCCL. This is because the current MSCCLPP implementation uses LLPacket, which wastes half of the bandwidth. The test machine has only two NICs for 8 GPUs, so bandwidth more easily becomes the bottleneck.
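For reference, a small sketch of why an LL-style packet format costs roughly half the wire bandwidth; the 8-bytes-of-payload-per-16-byte-packet split below is an assumption based on the usual LL protocol layout, and the exact mscclpp LLPacket layout may differ.

```python
# Assumed LL-style layout: each 16-byte packet = data1(4) + flag1(4) + data2(4) + flag2(4),
# so only 8 of every 16 bytes on the wire are payload.
PACKET_BYTES = 16
PAYLOAD_BYTES = 8

def ll_wire_bytes(message_bytes: int) -> int:
    """Bytes actually moved to transfer `message_bytes` of payload with LL packets."""
    packets = -(-message_bytes // PAYLOAD_BYTES)  # ceiling division
    return packets * PACKET_BYTES

print(ll_wire_bytes(1 << 20) // (1 << 20))  # 2 -> a 1MB message moves ~2MB on the wire
```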
end-to-end accuracy test
This is tested on A100 and QWen-72B with TP16

Baseline
MSCCLPP

This is tested on H100 on DeepSeek-V3 with TP16
MMLU result
GSM8K result
end-to-end performance test
This is tested on H100 and DeepSeek-V3 with TP16. We can see that the improvement is larger at low concurrency, where the message size is smaller. This aligns with the benchmark results mentioned above.
existing issues on mscclpp
There are some issues that need to be solved in the future.
Adaptation
This part records some issues I ran into while developing this feature.
C++ MSCCLPP API
mscclpp only provides a naive demo, where the input is fixed and the kernel is compiled at runtime each time. I rewrote the code in sgl-kernel to support different inputs and CUDA graph. The correctness of this code is verified by the unit tests and the benchmark, but for an unknown reason, the performance of the C++ code is slower than the demo. I have created an issue in the mscclpp repo.
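A hedged sketch of the CUDA graph requirement (the `mscclpp_allreduce` wrapper below is a hypothetical stand-in for the sgl-kernel entry point, not its real signature): the kernel must be pre-built and launched on a fixed, pre-registered buffer, so that capture and replay involve no per-call host-side work such as runtime compilation.

```python
import torch

def mscclpp_allreduce(buf: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the sgl-kernel mscclpp allreduce entry point;
    # the real kernel is pre-built and operates on a buffer registered with mscclpp.
    return buf * 2  # dummy compute so the graph has something to capture

static_inp = torch.zeros(4096, dtype=torch.bfloat16, device="cuda")

# Warm up on a side stream, as recommended before CUDA graph capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    mscclpp_allreduce(static_inp)
torch.cuda.current_stream().wait_stream(s)

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    static_out = mscclpp_allreduce(static_inp)

# At run time only the contents of the static buffer change; replay re-launches
# the captured kernels with no host-side setup.
static_inp.copy_(torch.randn_like(static_inp))
graph.replay()
```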
_GLIBCXX_USE_CXX11_ABI issue
Bug: while loading the pybind11 extension, `undefined symbol: _ZN7mscclpp9CudaErrorC1ERKSsi`. To find the root cause, `c++filt <symbol>` is used to recover the original definition, and `nm -D <path-to-so> | grep CudaError` is used to list the symbols defined in the .so file. By doing so we can find the root cause: libtorch.so is compiled with `-D_GLIBCXX_USE_CXX11_ABI=0` while libmscclpp.so is compiled with `-D_GLIBCXX_USE_CXX11_ABI=1`, which causes the conflict.

PS: using `python3 -c "import torch; print(int(torch._C._GLIBCXX_USE_CXX11_ABI))"` to select the value of `_GLIBCXX_USE_CXX11_ABI` is more robust, since this value might differ in the future.
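A sketch of how the extension build can pick up torch's ABI setting automatically; the setup.py below uses hypothetical package and file names, not the actual sgl-kernel build script.

```python
import torch
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

# Build the mscclpp-dependent extension with the same ABI flag as libtorch.
abi_flag = f"-D_GLIBCXX_USE_CXX11_ABI={int(torch._C._GLIBCXX_USE_CXX11_ABI)}"

setup(
    name="mscclpp_allreduce_ext",  # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="mscclpp_allreduce_ext",
            sources=["mscclpp_allreduce.cu"],  # hypothetical source file
            extra_compile_args={"cxx": [abi_flag], "nvcc": [abi_flag]},
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```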
Checklist