
[Bug] Documentation does not match source for --reasoning-parser values #6023

@engchina

Description


Checklist

  • 1. I have searched related issues but could not get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submit lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve it, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please open a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English; otherwise the issue will be closed.

Describe the bug

The documentation does not match the source code for the supported --reasoning-parser values.

The documentation at https://docs.sglang.ai/backend/separate_reasoning.html states:

Currently, SGLang supports the following reasoning models:

  • [DeepSeek R1 series](https://huggingface.co/collections/deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d): The reasoning content is wrapped with <think> and </think> tags.
  • [QwQ](https://huggingface.co/Qwen/QwQ-32B): The reasoning content is wrapped with <think> and </think> tags.
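
For context, that docs page describes the reasoning span being returned separately from the final answer. A minimal sketch of that flow, assuming a server already launched with a reasoning parser and the OpenAI-compatible endpoint on port 30000 (the model name and port here are illustrative, not taken from this issue):

import openai

# Assumes a server launched with something like:
#   python -m sglang.launch_server --model <a reasoning model> \
#       --reasoning-parser deepseek-r1 --port 30000
client = openai.OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="default",
    messages=[{"role": "user", "content": "What is 1+1?"}],
)

msg = resp.choices[0].message
# With a reasoning parser enabled, the <think>...</think> content is
# split into a separate field (per the docs page above).
print(msg.reasoning_content)  # the model's chain of thought
print(msg.content)            # the final answer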

In the source code, however, running

python -m sglang.launch_server --help

prints only two supported parsers:

  --reasoning-parser {deepseek-r1,qwen3}
                        Specify the parser for reasoning models, supported parsers are: ['deepseek-r1', 'qwen3'].

So the docs list QwQ as a supported model, but the CLI exposes no matching parser name (only deepseek-r1 and qwen3).
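
To double-check the advertised list programmatically, here is a small sketch (assuming the help text keeps the --reasoning-parser {...} format shown above; the regex is illustrative):

import re
import subprocess

# Run the same command as in the reproduction below and capture the help text.
help_text = subprocess.run(
    ["python", "-m", "sglang.launch_server", "--help"],
    capture_output=True,
    text=True,
).stdout

# Pull out the choices advertised for --reasoning-parser.
match = re.search(r"--reasoning-parser \{([^}]*)\}", help_text)
print(match.group(1) if match else "option not found")
# On sglang 0.4.6.post2 this prints: deepseek-r1,qwen3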

Reproduction

python -m sglang.launch_server --help

Environment

Python: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2: NVIDIA GeForce RTX 4090
GPU 0,1,2 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda-12.6
NVCC: Cuda compilation tools, release 12.6, V12.6.85
CUDA Driver Version: 560.94
PyTorch: 2.6.0+cu124
sglang: 0.4.6.post2
sgl_kernel: 0.1.1
flashinfer_python: 0.2.5
triton: 3.2.0
transformers: 4.51.1
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.11
hf_transfer: 0.1.9
huggingface_hub: 0.30.1
interegular: 0.3.3
modelscope: 1.23.2
orjson: 3.10.15
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
python-multipart: 0.0.20
pyzmq: 26.3.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.8.4
xgrammar: 0.1.17
openai: 1.66.3
tiktoken: 0.9.0
anthropic: 0.49.0
litellm: 1.63.8
decord: 0.6.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS                             N/A
GPU1    SYS      X      SYS     SYS                             N/A
GPU2    SYS     SYS      X      SYS                             N/A
GPU3    SYS     SYS     SYS      X                              N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Hypervisor vendor: Microsoft
ulimit soft: 1024
