support 1 shot allreduce in 1-node and 2-node using mscclpp #6277


Merged
26 commits merged on Jun 5, 2025

Conversation

@zyksir (Collaborator) commented May 14, 2025

Motivation

Currently we can only use 1-shot allreduce within a single node. However, there are cases where we need TP16 and therefore a 2-node allreduce. mscclpp recently added examples of 1-shot allreduce across 2 nodes, which can improve the latency of 2-node allreduce for small messages.

Modifications

  • Update on 5.21: mscclpp only provides a Python-based demo for allreduce, which needs to compile the kernel at runtime. Here I rewrite the Python API in C++ so that we can call mscclpp via pybind11. For now the code compiles and runs correctly. "Autotune for best nBlocks and nThreads" is left unfinished, which means the performance might not be optimal yet.
  • Update on 5.22: the mscclpp allreduce passes all the unit tests. The original mscclpp demo has bugs with CUDA graphs due to how device memory handles are captured. The benchmark results on A100 are listed below. "Autotune for best nBlocks and nThreads" still needs to be added for better performance on large message sizes. In addition, there is something wrong with my test machine and I failed to run any code with Ray across 2 nodes, so both test_mscclpp.py files are not tested in the TP16 case. Accuracy is tested with benchmark_mscclpp.py launched by torchrun.

kernel benchmark

This is tested on A100 with 8 GPUs and 4 NICs per node.

| msg_size | torch eager time | msccl eager time | msccl graph time | pynccl graph time |
|---|---|---|---|---|
| 2.0 KiB | 61.44 | 33.5872 | 22.8557 | 38.2976 |
| 4.0 KiB | 44.6464 | 33.9968 | 22.0979 | 42.3834 |
| 8.0 KiB | 44.6464 | 31.5392 | 22.487 | 43.9296 |
| 16.0 KiB | 57.9584 | 31.3344 | 23.4598 | 44.1344 |
| 32.0 KiB | 55.5008 | 32.4608 | 24.0947 | 45.8035 |
| 64.0 KiB | 55.6032 | 33.28 | 26.4704 | 53.3606 |
| 128.0 KiB | 65.4336 | 39.936 | 32.6144 | 61.0406 |
| 256.0 KiB | 81.6128 | 49.7664 | 43.9603 | 81.0394 |
| 512.0 KiB | 93.3888 | 71.3728 | 66.2221 | 95.744 |
| 1.0 MiB | 126.157 | 110.797 | 110.469 | 125.471 |

This is tested on H100 with 8 GPUs and 2 NICs per node. We can see that for the 1 MiB message, mscclpp is slower than NCCL. This is because the current implementation of mscclpp uses LLPacket, which wastes half of the bandwidth, and the test machine only has two NICs for 8 GPUs, so bandwidth becomes the bottleneck more easily (a rough back-of-the-envelope on the LL overhead follows the table).

| msg_size | torch eager time | msccl eager time | msccl graph time | pynccl graph time |
|---|---|---|---|---|
| 2.0 KiB | 82.04 | 38.86 | 21.38 | 35.09 |
| 4.0 KiB | 73.45 | 37.56 | 22.10 | 36.59 |
| 8.0 KiB | 65.30 | 43.71 | 26.77 | 37.98 |
| 16.0 KiB | 62.65 | 37.97 | 23.35 | 45.06 |
| 32.0 KiB | 80.16 | 45.32 | 24.69 | 40.19 |
| 64.0 KiB | 70.45 | 38.46 | 28.43 | 40.25 |
| 128.0 KiB | 70.25 | 38.56 | 33.61 | 45.55 |
| 256.0 KiB | 67.01 | 46.38 | 45.70 | 59.32 |
| 512.0 KiB | 85.04 | 69.51 | 68.76 | 74.79 |
| 1.0 MiB | 91.49 | 129.99 | 114.86 | 76.35 |
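A minimal sketch of why LLPacket costs bandwidth, assuming the packet format pairs every data word with a same-sized flag word (as in NCCL's LL protocol; mscclpp's LLPacket is analogous), so only half of each packet on the wire is payload:

    # Back-of-the-envelope for LL-packet overhead. Assumption: every data word is
    # shipped together with a same-sized flag word, so wire traffic is ~2x payload.
    def ll_wire_bytes(payload_bytes: int) -> int:
        return 2 * payload_bytes

    for kib in (64, 256, 1024):
        payload = kib * 1024
        print(f"{kib:5d} KiB payload -> {ll_wire_bytes(payload) // 1024} KiB on the wire")
    # With only 2 NICs serving 8 GPUs, this doubled traffic is why the 1 MiB point
    # falls behind NCCL in the H100 table above.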

end-to-end accuracy test

This is tested on A100 with Qwen-72B and TP16.

Baseline: (screenshot)

MSCCLPP: (screenshot)

This is tested on H100 with DeepSeek-V3 and TP16.

mmlu result

| Parallel | Baseline | Mscclpp |
|---|---|---|
| 64 | 17.08 s / acc 87.1% | 32.87 s / acc 87.4% |
| 32 | 22.23 s / acc 87.1% | 21.51 s / acc 87.3% |
| 16 | 39.03 s / acc 87.4% | 38.29 s / acc 87.3% |
| 8 | 76.29 s / acc 87.1% | 67.05 s / acc 87.4% |
| 4 | 118.86 s / acc 87.2% | 119.66 s / acc 87.1% |
| 2 | 134.81 s / acc 87.4% | 129.57 s / acc 87.4% |
| 1 | 243.15 s / acc 87.3% | 238.17 s / acc 87.3% |

gsm8k result

| Parallel | Baseline | Mscclpp |
|---|---|---|
| 64 | 30.19 s / acc 94.0% | 26.41 s / acc 94.5% |
| 32 | 42.63 s / acc 94.5% | 38.29 s / acc 93.0% |
| 16 | 61.18 s / acc 94.5% | 54.19 s / acc 93.5% |
| 8 | 88.85 s / acc 95.0% | 79.52 s / acc 93.5% |
| 4 | 140.89 s / acc 95.5% | 128.08 s / acc 94.5% |
| 2 | 247.56 s / acc 96.0% | 222.89 s / acc 95.0% |
| 1 | 448.45 s / acc 94.0% | 407.65 s / acc 93.0% |

end-to-end performance test

This is tested on H100 with DeepSeek-V3 and TP16. We can see that the improvement is larger at low concurrency, where the message size is smaller. This is aligned with the kernel benchmark results above (a rough message-size estimate follows the table).

| concurrency | Baseline | Mscclpp |
|---|---|---|
| 4 | 22.77 | 20.47 |
| 16 | 29.30 | 26.39 |
| 32 | 32.48 | 31.81 |
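As a rough sanity check of that claim, here is a back-of-the-envelope estimate of the per-layer TP allreduce payload during decode (a sketch; it assumes DeepSeek-V3's hidden size of 7168, bf16 activations, and one token in flight per request, and ignores any quantized-communication tricks):

    # Hypothetical estimate: decode-time TP allreduce payload per layer.
    HIDDEN_SIZE = 7168   # DeepSeek-V3 hidden dimension (assumption)
    BYTES_PER_ELEM = 2   # bf16

    def decode_allreduce_kib(concurrency: int) -> float:
        return concurrency * HIDDEN_SIZE * BYTES_PER_ELEM / 1024

    for concurrency in (4, 16, 32):
        print(f"concurrency={concurrency:2d} -> ~{decode_allreduce_kib(concurrency):.0f} KiB per allreduce")
    # ~56 KiB, ~224 KiB and ~448 KiB respectively: all within the range where the
    # kernel benchmark above shows mscclpp ahead of NCCL, with the gap shrinking
    # as the message grows.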

existing issues on mscclpp

There are some issues that need to be solved in the future

  1. The graph launch time of mscclpp is longer than that of NCCL.
  2. mscclpp is unstable for large message sizes.

Adaptation

This part records some issues I ran into while developing this feature.

C++ MSCCLPP API

mscclpp only provides a naive demo, where the input is fixed and the kernel is compiled at runtime each time. I rewrote the code in sgl-kernel to support different inputs and CUDA graph capture. The correctness of this code is verified by the unit tests and the benchmark, but for an unknown reason the C++ code is slower than the demo. I have created an issue in the mscclpp repo.
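For reference, the usual way to make such a kernel CUDA-graph-safe is to register one fixed scratch buffer and copy in/out around the in-place allreduce, so the device pointers baked into the captured graph never change. A minimal sketch of that pattern (the function and class names below are illustrative, not the actual sgl-kernel API):

    import torch

    def _mscclpp_allreduce_inplace(buf: torch.Tensor) -> None:
        """Stand-in for the pybind11-exported one-shot kernel (hypothetical name)."""
        raise NotImplementedError

    class GraphSafeAllReduce:
        """Sketch: keep one registered scratch buffer so captured pointers stay stable."""

        def __init__(self, max_bytes: int, device: torch.device):
            # This buffer would be registered with mscclpp exactly once.
            self.scratch = torch.empty(max_bytes, dtype=torch.uint8, device=device)

        def all_reduce(self, x: torch.Tensor) -> torch.Tensor:
            nbytes = x.numel() * x.element_size()
            view = self.scratch[:nbytes].view(x.dtype).view_as(x)
            view.copy_(x)                      # copy-in, also captured by the CUDA graph
            _mscclpp_allreduce_inplace(view)   # kernel only ever sees the registered buffer
            return view.clone()                # copy-out so callers get an independent tensor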

_GLIBCXX_USE_CXX11_ABI issue

Bug: while loading the pybind11 extension, "undefined symbol: _ZN7mscclpp9CudaErrorC1ERKSsi". To find the root cause, the command c++filt <symbol> is used to demangle the symbol, and nm -D <path-to-so> | grep CudaError is used to list the symbols actually defined in the .so file. Doing so reveals the root cause: libtorch.so is compiled with -D_GLIBCXX_USE_CXX11_ABI=0 while libmscclpp.so is compiled with -D_GLIBCXX_USE_CXX11_ABI=1, which causes the conflict.

PS: using python3 -c "import torch; print(int(torch._C._GLIBCXX_USE_CXX11_ABI))" to select the value of _GLIBCXX_USE_CXX11_ABI is more robust, since this value might differ across torch builds in the future.
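For torch extensions built with setuptools, the same check can drive the build flag directly. A minimal sketch (illustrative setup.py with hypothetical extension/source names; mscclpp itself must also be built with the matching flag):

    import torch
    from setuptools import setup
    from torch.utils.cpp_extension import BuildExtension, CUDAExtension

    # Match libtorch's ABI so symbols such as mscclpp::CudaError resolve consistently.
    abi_flag = f"-D_GLIBCXX_USE_CXX11_ABI={int(torch._C._GLIBCXX_USE_CXX11_ABI)}"

    setup(
        name="mscclpp_allreduce_ext",                  # hypothetical name
        ext_modules=[
            CUDAExtension(
                name="mscclpp_allreduce_ext",
                sources=["csrc/mscclpp_allreduce.cu"], # hypothetical path
                extra_compile_args={"cxx": [abi_flag], "nvcc": [abi_flag]},
            )
        ],
        cmdclass={"build_ext": BuildExtension},
    )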


Checklist

        self.allreduce_algo = MscclppAllReduce1Shot1NodeLL(mscclpp_group, self.MAX_BYTES, device)
    elif self.world_size == torch.cuda.device_count() * 2:
        proxy_service = ProxyService()
        self.allreduce_algo = MscclppAllReduce1Shot2Nodes(mscclpp_group, self.MAX_BYTES, device, torch.cuda.device_count(), proxy_service)
@jon-chuang commented May 14, 2025
Just wondering if they can support standalone one-shot reduce-scatter, all-gather, and a2a as well? This just involves choosing whether to send the entire buffer or a slice thereof, and choosing whether to sum over the buffers or concat them, which gives a 2 x 2 matrix. E.g.

Allreduce - full, sum
Reduce scatter - slice, sum
A2A - slice, cat
Allgather - full, cat

I wonder how the a2a would compare with DeepSeek's DeepEP and ByteDance's M2N (not open-sourced) libs.
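For what it's worth, that 2 x 2 view can be pinned down with a tiny reference-semantics sketch (plain torch on CPU; this only illustrates the semantics and is not an mscclpp implementation):

    import torch

    def collective(buffers, send_full: bool, reduce: bool):
        """Reference semantics: `buffers` holds one equally-shaped tensor per rank."""
        world = len(buffers)
        out = []
        for rank in range(world):
            # "full" sends every peer the whole buffer; "slice" sends peer r chunk r.
            recv = [b if send_full else b.chunk(world)[rank] for b in buffers]
            out.append(torch.stack(recv).sum(0) if reduce else torch.cat(recv))
        return out

    bufs = [torch.arange(4, dtype=torch.float32) + 10 * r for r in range(2)]
    allreduce      = collective(bufs, send_full=True,  reduce=True)    # full, sum
    reduce_scatter = collective(bufs, send_full=False, reduce=True)    # slice, sum
    all_to_all     = collective(bufs, send_full=False, reduce=False)   # slice, cat
    allgather      = collective(bufs, send_full=True,  reduce=False)   # full, cat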

@zyksir (Collaborator, Author) commented May 19, 2025

Based on the provided mscclpp API, mscclpp can be used to implement all of these kernels. As long as allreduce is supported in sglang, the only thing left is to change the kernel implementation.

However, the mscclpp contributors do not provide demo code for them, so extra work will be needed to debug the CUDA implementation.

In addition, the DeepEP API is not a plain all-to-all, since it fuses some permutation kernels. I think we need to carefully compare the performance of mscclpp and NVSHMEM.

@zyksir (Collaborator, Author)

The DeepEP low-latency kernel uses IBGDA to reduce latency. For now I don't think mscclpp uses IBGDA. I will check this as well.

Contributor

MSCCLPP can support other kernels, as implemented in Nanoflow. However, I think its key advantage is customizability rather than performance? In my tests for large messages, it performs ~20% better when we manually control the block size for compute-communication overlap.

@zyksir (Collaborator, Author)

I agree that customizability is the key advantage. In the future we can implement allreduce/layernorm fusion based on this.

@zyksir force-pushed the add_msccl_allreduce_2node branch 2 times, most recently from 40a3d8f to 3b3e257 on May 21, 2025 12:22
@zyksir force-pushed the add_msccl_allreduce_2node branch from b08428d to 6d9f363 on May 21, 2025 12:47
@Alcanderian (Collaborator) commented May 22, 2025

I found that we may have to support two algos to cover large-batch-size decode.
https://github.com/microsoft/mscclpp/blob/main/python/mscclpp_benchmark/allreduce_bench.py#L182

        if memory.nbytes < 2**22:
            mscclpp_algos = [MscclppAllReduce5(mscclpp_group, memory, memory_out, N_GPUS_PER_NODE, proxy_service)]
        else:
            mscclpp_algos = [MscclppAllReduce4(mscclpp_group, memory, N_GPUS_PER_NODE, proxy_service)]
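A sketch of what such a size-based dispatch could look like on the sglang side, reusing the 2**22-byte cutoff from the benchmark above (illustrative only, not the code in this PR; the one-shot call signature is assumed):

    import torch
    import torch.distributed as dist

    MSCCLPP_1SHOT_MAX_BYTES = 2**22  # 4 MiB cutoff used in mscclpp's allreduce_bench.py

    def dispatch_all_reduce(inp: torch.Tensor, one_shot_algo, max_registered_bytes: int) -> torch.Tensor:
        """Latency-bound small messages go to the one-shot LL kernel; large,
        bandwidth-bound messages fall back to NCCL (or a bandwidth-oriented
        mscclpp algorithm such as MscclppAllReduce4)."""
        nbytes = inp.numel() * inp.element_size()
        if nbytes <= max_registered_bytes and nbytes < MSCCLPP_1SHOT_MAX_BYTES:
            return one_shot_algo(inp)   # call signature is illustrative
        dist.all_reduce(inp)            # in-place NCCL fallback
        return inp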

@zyksir (Collaborator, Author) commented May 23, 2025

(benchmark output screenshot)
I found that even with the benchmark script provided by mscclpp, the performance on large messages is slower than NCCL. So for now I am focusing on improving small message sizes.

@zyksir force-pushed the add_msccl_allreduce_2node branch from ea90e8a to 258e7b0 on May 24, 2025 12:34
@Edenzzzz (Contributor)
Btw, by two-shot do you mean intra-node all-reduce + inter-node all-reduce? I think NCCL doesn't do hierarchical all-reduce yet?

@zyksir (Collaborator, Author) commented May 29, 2025

The 2-node allreduce is actually intra-node reduce-scatter + inter-node send-recv + intra-node broadcast. I am not sure how NCCL does allreduce, but I think this is a different approach.
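A toy simulation of those three phases (pure torch on CPU, just to show the data movement composes into a full allreduce; not the actual kernel):

    import torch

    NODES, GPUS_PER_NODE, CHUNK = 2, 8, 4
    # One input buffer per global rank, node-major order.
    bufs = [torch.randn(GPUS_PER_NODE * CHUNK) for _ in range(NODES * GPUS_PER_NODE)]
    expected = torch.stack(bufs).sum(0)

    # Phase 1: intra-node reduce-scatter. Local rank i on node n ends up with
    # chunk i of the sum over node n's GPUs.
    node_sums = [torch.stack(bufs[n * GPUS_PER_NODE:(n + 1) * GPUS_PER_NODE]).sum(0)
                 for n in range(NODES)]
    partial = [[s.chunk(GPUS_PER_NODE)[i] for i in range(GPUS_PER_NODE)] for s in node_sums]

    # Phase 2: inter-node send-recv between peer local ranks, then add.
    full_chunks = [partial[0][i] + partial[1][i] for i in range(GPUS_PER_NODE)]

    # Phase 3: intra-node broadcast of the finished chunks; every rank now holds
    # the complete reduced vector.
    result = torch.cat(full_chunks)
    assert torch.allclose(result, expected, atol=1e-5)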

@Edenzzzz (Contributor)
I think NCCL just uses either ring or tree; I don't know if that counts as one-shot. See NVIDIA/nccl#530.

@Fridge003 (Collaborator)

@zyksir (Collaborator, Author) commented May 31, 2025

@Fridge003 Thank you! I have fixed this compile error

@zhyncs zhyncs merged commit 8e3797b into sgl-project:main Jun 5, 2025
65 of 69 checks passed
jianan-gu pushed a commit to jianan-gu/sglang that referenced this pull request Jun 12, 2025
xwu-intel pushed a commit to xwu-intel/sglang that referenced this pull request Jun 17, 2025
walker-ai pushed a commit to walker-ai/sglang that referenced this pull request Jul 8, 2025