[Bugfix] Fix incorrect updates to num_computed_tokens in multi-step scheduling #9038
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
LiuXiaoxuanPKU
left a comment
LGTM!
```python
assert seq.data.get_num_computed_tokens(
) == prompt_len + num_prompt_steps - 1

prompt_num_computed_tokens = seq.data.get_num_computed_tokens()
```
nit: merge this call with line 58 so we only call get_num_computed_tokens once.
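A minimal sketch of that suggestion, reusing the names from the test diff above (`prompt_len` and `num_prompt_steps` come from the surrounding, elided test setup):

```python
# Call get_num_computed_tokens() once and reuse the result for both
# the assertion and the later decode-phase bookkeeping.
prompt_num_computed_tokens = seq.data.get_num_computed_tokens()
assert prompt_num_computed_tokens == prompt_len + num_prompt_steps - 1
```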
```python
while not seq.is_finished():
    assert seq.data.get_num_computed_tokens(
    ) == prompt_num_computed_tokens + decode_step_counter
    for _ in range(num_scheduler_steps):
```
QQ: why do we need the for loop here?
When num_scheduler_steps=n, engine.step() doesn't run all n steps; it still runs only one step, so this for-loop is required for multi-step.
Also, multi-step doesn't provide any guarantee that output processing will happen on every step. The only guarantee is that after num_scheduler_steps complete, all data structures will be correct. So we finish all the steps of a multi-step before asserting.
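Putting that together, a sketch of the decode-phase check (names follow the test diff above; the `decode_step_counter` bookkeeping is assumed and may differ slightly from the actual test):

```python
# Run all num_scheduler_steps engine steps first, then assert, since
# multi-step only guarantees consistent bookkeeping at multi-step
# boundaries.
decode_step_counter = 0
while not seq.is_finished():
    assert seq.data.get_num_computed_tokens() == \
        prompt_num_computed_tokens + decode_step_counter
    for _ in range(num_scheduler_steps):
        # engine.step() advances only a single step even when
        # num_scheduler_steps = n, hence the explicit loop.
        engine.step()
        decode_step_counter += 1
```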
```python
if do_update:
    seq_group.update_num_computed_tokens(
        seq_group_meta.token_chunk_size)
```
Thanks for the cleanup, the logic is much cleaner now!
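As a minimal, hypothetical sketch of the pattern the diff above introduces (in vLLM the guard lives inline in the output-processing path; the wrapper name here is invented for illustration):

```python
def maybe_update_num_computed_tokens(seq_group, seq_group_meta,
                                     do_update: bool) -> None:
    """Hypothetical helper mirroring the guarded update shown above."""
    if do_update:
        # Advance the counter only by the chunk this step actually
        # computed, so multi-step inner steps cannot over-count.
        seq_group.update_num_computed_tokens(
            seq_group_meta.token_chunk_size)
```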
I guess we should skip the test when using the ROCm backend: https://buildkite.com/vllm/ci-aws/builds/9509#0192580b-67a9-4a0b-810a-7bdfc48b509a
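One way that skip could look (a hedged sketch; `torch.version.hip` is a standard PyTorch attribute that is `None` on non-ROCm builds, but the exact guard used in vLLM's test suite may differ, and the test name here is illustrative):

```python
import pytest
import torch

# Skip this multi-step test on ROCm, where the backend currently fails
# (see the Buildkite link above).
@pytest.mark.skipif(torch.version.hip is not None,
                    reason="Multi-step scheduling test unsupported on ROCm")
def test_multi_step_num_computed_tokens():
    ...
```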
`num_computed_tokens` tracks the number of tokens in a sequence that have a computed KV-cache slot. This semantic is not respected in `main` when multi-step scheduling is used. Thanks @LiuXiaoxuanPKU for identifying this bug.
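As a toy sketch of that invariant (illustrative only, not vLLM's actual classes): after every engine step, the counter must equal the number of tokens whose KV cache exists, not the number of tokens merely scheduled.

```python
# Toy model of the invariant (not vLLM code): num_computed_tokens counts
# tokens whose KV-cache slots are filled.
class ToySequenceData:
    def __init__(self) -> None:
        self._num_computed_tokens = 0

    def update_num_computed_tokens(self, token_chunk_size: int) -> None:
        # Must be called exactly once per chunk whose KV cache was
        # actually computed.
        self._num_computed_tokens += token_chunk_size

    def get_num_computed_tokens(self) -> int:
        return self._num_computed_tokens
```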
Added unit tests that verify the updates to `num_computed_tokens`.