Conversation

@sroy745 (Collaborator) commented Sep 25, 2024

In this PR we address an issue with the logic in BlockManager V2 when computing whether a sequence group can be swapped in.

In the current implementation we don't consider the unseen tokens when deciding whether a sequence group can be swapped in. Later, when we try to allocate blocks for the sequence group, we may hit a NoFreeBlocksAvailable exception depending on how many unseen tokens there are.

In this PR we add logic to account for these unseen tokens when determining whether a sequence group is eligible for swap-in. Unit tests covering these cases have been added for block_manager_v2, naive_block, and prefix_caching_block.
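
Below is a minimal, hedged sketch of the idea for the naive (non-prefix-cached) case; the function names and signatures are illustrative assumptions for this description, not the actual vLLM implementation.

# Hedged sketch only: shows why unseen tokens must be part of the swap-in
# budget. Names and signatures are assumptions, not the vLLM code.
from math import ceil

def blocks_needed_for_swap_in(num_computed_tokens: int,
                              num_unseen_tokens: int,
                              num_lookahead_slots: int,
                              block_size: int) -> int:
    # Total GPU blocks the sequence occupies after swap-in plus the next
    # append of unseen + lookahead tokens (cdiv in the vLLM codebase).
    total_tokens = (num_computed_tokens + num_unseen_tokens +
                    num_lookahead_slots)
    return ceil(total_tokens / block_size)

def can_swap_in(num_computed_tokens: int, num_unseen_tokens: int,
                num_lookahead_slots: int, block_size: int,
                num_free_gpu_blocks: int) -> bool:
    # Before this fix the unseen tokens were left out of the budget, so the
    # later block allocation could raise NoFreeBlocksAvailable.
    needed = blocks_needed_for_swap_in(num_computed_tokens, num_unseen_tokens,
                                       num_lookahead_slots, block_size)
    return needed <= num_free_gpu_blocks

For example, with block_size = 16, 30 computed tokens, 5 unseen tokens, and 1 lookahead slot, ignoring the unseen tokens budgets ceil(31 / 16) = 2 blocks, while the sequence actually needs ceil(36 / 16) = 3.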

sroy745 added 30 commits May 28, 2024 20:39
@sroy745 changed the title from "[WIP][Bugfix] Block manager v2 with preemption and lookahead slots" to "[Bugfix] Block manager v2 with preemption and lookahead slots" Sep 27, 2024
@sroy745 marked this pull request as ready for review September 27, 2024 06:13
@sroy745 (Collaborator, Author) commented Sep 27, 2024

Thanks for the review. Addressed your comments. PTAL

Comment on lines 602 to 633
# NOTE: for naive block, we use set to eliminate common blocks among
# seqs, also we compare the empty slots in the mutable blocks with
# lookahead slots to get the number of unique new block that are
# needed.
num_touched_blocks = 0
for block in blocks:
if not block.is_full:
num_touched_blocks += 1
if num_lookahead_slots > block.num_empty_slots:
num_touched_blocks += cdiv(
num_lookahead_slots - block.num_empty_slots,
self._block_size)
else:
# If the block has a match in the cache and the cached block
# is not referenced, then we still count it as a touched block
if not self.is_block_cached(block) or \
(block.content_hash is not None and \
self._cached_blocks[block.content_hash] in self.evictor):
seq_id_empty_slots: Dict[int, int] = dict()
# TODO(cade): make sure the logic is correct and clean it up.
for seq_id, blocks in seq_id_blocks.items():
seq_id_empty_slots[seq_id] = 0
for block in blocks:
if block.is_full:
# If the block has a match in the cache and the cached
# block is not referenced, then we still count it as a
# touched block
if not self.is_block_cached(block) or \
(block.content_hash is not None and \
self._cached_blocks[block.content_hash] in \
self.evictor):
num_touched_blocks += 1
else:
seq_id_empty_slots[seq_id] = block.num_empty_slots

for seq_id, _ in seq_id_blocks.items():
num_tokens_to_append = num_lookahead_slots
if (seq_id_num_unseen_tokens is not None
and seq_id in seq_id_num_unseen_tokens):
num_tokens_to_append += seq_id_num_unseen_tokens[seq_id]
if num_tokens_to_append > 0:
num_touched_blocks += get_num_blocks_touched_by_append_slots(
num_tokens_to_append, seq_id_empty_slots[seq_id],
self._block_size)
return num_touched_blocks
Collaborator

Should not be here?
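
For context, the helper used in the snippet above, get_num_blocks_touched_by_append_slots(num_tokens_to_append, num_empty_slots, block_size), can be pictured roughly as follows. This is a hedged sketch of the arithmetic only, not the actual vLLM function, and it counts just the blocks touched by the append itself (full existing blocks are counted separately in the quoted code).

# Hedged sketch of the append-side block accounting; the three-argument form
# mirrors the usage in the quoted diff, but this is not the vLLM helper itself.
def get_num_blocks_touched_by_append_slots(num_tokens_to_append: int,
                                           num_empty_slots: int,
                                           block_size: int) -> int:
    if num_tokens_to_append <= 0:
        return 0
    if num_tokens_to_append <= num_empty_slots:
        # Everything fits into the empty slots of the last partial block.
        return 1
    # Tokens beyond the empty slots spill into ceil(overflow / block_size)
    # new blocks, plus the partial block that gets filled (if any).
    overflow = num_tokens_to_append - num_empty_slots
    new_blocks = -(-overflow // block_size)  # ceiling division
    return new_blocks + (1 if num_empty_slots > 0 else 0)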

Comment on lines 313 to 332
seq_id_empty_slots: Dict[int, int] = dict()
# TODO(cade): make sure the logic is correct and clean it up.
for block in blocks:
if not block.is_full and num_lookahead_slots != 0:
new_block_count += 1
if num_lookahead_slots > block.num_empty_slots:
new_block_count += cdiv(
num_lookahead_slots - block.num_empty_slots,
self._block_size)
else:
old_block_set.add(block.block_id)
for seq_id, blocks in seq_id_blocks.items():
seq_id_empty_slots[seq_id] = 0
for block in blocks:
if block.is_full:
old_block_set.add(block.block_id)
else:
seq_id_empty_slots[seq_id] = block.num_empty_slots

new_block_count = 0
for seq_id, _ in seq_id_blocks.items():
num_tokens_to_append = num_lookahead_slots
if (seq_id_num_unseen_tokens is not None
and seq_id in seq_id_num_unseen_tokens):
num_tokens_to_append += seq_id_num_unseen_tokens[seq_id]
if num_tokens_to_append > 0:
new_block_count += get_num_blocks_touched_by_append_slots(
num_tokens_to_append, seq_id_empty_slots[seq_id],
self._block_size)
@comaniac (Collaborator) commented Sep 27, 2024

Rewrite the above "NOTE" to explain the new logic (considering unseen tokens).

I feel we could simplify the logic, because we only need the number of touched blocks instead of their IDs. The idea is:

  1. We first count the number of full blocks, because neither lookahead slots nor unseen tokens affect them.
  2. Then use the existing block_table.get_num_blocks_touched_by_append_slots to calculate the unfilled blocks. This function already considers unseen tokens, so it should be usable (if not, could we add support to this function?).
# In _can_swap
block_table_and_unseen_ids: List[Tuple[BlockTable, int]] = []
for seq in seq_group.get_seqs(status=status):
    block_table = self.block_tables[seq.seq_id]
    if block_table.blocks is not None:
        block_table_and_unseen_ids.append((
            block_table,
            len(block_table.get_unseen_token_ids(seq.get_token_ids())),
        ))
num_blocks_touched = self.block_allocator.get_num_blocks_touched(
    block_table_and_unseen_ids, device,
    num_lookahead_slots=num_lookahead_slots
)

# In get_num_blocks_touched
old_block_set = set()
new_block_count = 0

for block_table, unseen_tokens in block_table_and_unseen_ids:
    blocks = block_table.blocks
    num_full_blocks = len(blocks)
    # 1. I suppose only the last block can be unfilled, but please double check.
    # 2. I suppose this can also handle prefix caching as the same block IDs
    #    will be de-duplicated by the set.
    if not blocks[-1].is_full:
        num_full_blocks -= 1
    old_block_set.update(b.block_id for b in blocks[:num_full_blocks])

    # This includes full blocks.
    num_touched_blocks = block_table.get_num_blocks_touched_by_append_slots(...)
    # New block count should be the number of touched blocks - full blocks
    new_block_count += num_touched_blocks - num_full_blocks

return new_block_count + len(old_block_set)

@sroy745 (Collaborator, Author) commented Sep 28, 2024

Thanks for the review. Addressed comments. PTAL

@comaniac (Collaborator) left a review

LGTM! Pretty clean now.

@comaniac added the `ready` label (ONLY add when PR is ready to merge / full CI is needed) Sep 28, 2024
@sroy745 (Collaborator, Author) commented Sep 29, 2024

@comaniac
Thanks for the review. Synced to head, all tests passing and ready to merge.

@DarkLight1337 merged commit 5bf8789 into vllm-project:main Sep 29, 2024
@DarkLight1337 removed the `bug` label (Something isn't working) Sep 29, 2024
shajrawi pushed a commit to ROCm/vllm that referenced this pull request Oct 9, 2024
* [Build/CI] Upgrade to gcc 10 in the base build Docker image (vllm-project#8814)

* [Docs] Add README to the build docker image (vllm-project#8825)

* [CI/Build] Fix missing ci dependencies (vllm-project#8834)

* [misc][installation] build from source without compilation (vllm-project#8818)

* [ci] Soft fail Entrypoints, Samplers, LoRA, Decoder-only VLM (vllm-project#8872)

Signed-off-by: kevin <[email protected]>

* [Bugfix] Include encoder prompts len to non-stream api usage response (vllm-project#8861)

* [Misc] Change dummy profiling and BOS fallback warns to log once (vllm-project#8820)

* [Bugfix] Fix print_warning_once's line info (vllm-project#8867)

* fix validation: Only set tool_choice `auto` if at least one tool is provided (vllm-project#8568)

* [Bugfix] Fixup advance_step.cu warning (vllm-project#8815)

* [BugFix] Fix test breakages from transformers 4.45 upgrade (vllm-project#8829)

* [Installation] Allow lower versions of FastAPI to maintain Ray 2.9 compatibility (vllm-project#8764)

* [Feature] Add support for Llama 3.1 and 3.2 tool use (vllm-project#8343)

Signed-off-by: Max de Bayser <[email protected]>

* [Core] rename`PromptInputs` and `inputs` (vllm-project#8876)

* [misc] fix collect env (vllm-project#8894)

* [MISC] Fix invalid escape sequence '\' (vllm-project#8830)

Signed-off-by: Peter Pan <[email protected]>

* [Bugfix][VLM] Fix Fuyu batching inference with `max_num_seqs>1` (vllm-project#8892)

* [TPU] Update pallas.py to support trillium (vllm-project#8871)

* [torch.compile] use empty tensor instead of None for profiling (vllm-project#8875)

* [Kernel] AQ AZP 4/4: Integrate asymmetric quantization to linear method (vllm-project#7271)

* [Bugfix] fix for deepseek w4a16 (vllm-project#8906)

Co-authored-by: mgoin <[email protected]>

* [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (vllm-project#8378)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (vllm-project#8911)

* [Core] Priority-based scheduling in async engine (vllm-project#8850)

* [misc] fix wheel name (vllm-project#8919)

* [Bugfix][Intel] Fix XPU Dockerfile Build (vllm-project#7824)

Signed-off-by: tylertitsworth <[email protected]>
Co-authored-by: youkaichao <[email protected]>

* [Misc] Remove vLLM patch of `BaichuanTokenizer` (vllm-project#8921)

* [Bugfix] Fix code for downloading models from modelscope (vllm-project#8443)

* [Bugfix] Fix PP for Multi-Step (vllm-project#8887)

* [CI/Build] Update models tests & examples (vllm-project#8874)

Co-authored-by: Roger Wang <[email protected]>

* [Frontend] Make beam search emulator temperature modifiable (vllm-project#8928)

Co-authored-by: Eduard Balzin <[email protected]>

* [Bugfix] Support testing prefill throughput with benchmark_serving.py --hf-output-len 1 (vllm-project#8891)

* [doc] organize installation doc and expose per-commit docker (vllm-project#8931)

* [Core] Improve choice of Python multiprocessing method (vllm-project#8823)

Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: youkaichao <[email protected]>

* [Bugfix] Block manager v2 with preemption and lookahead slots (vllm-project#8824)

* [Bugfix] Fix Marlin MoE act order when is_k_full == False (vllm-project#8741)

Co-authored-by: Tyler Michael Smith <[email protected]>

* [CI/Build] Add test decorator for minimum GPU memory (vllm-project#8925)

* [Build/CI] Set FETCHCONTENT_BASE_DIR to one location for better caching (vllm-project#8930)

* [Model] Support Qwen2.5-Math-RM-72B (vllm-project#8896)

* [Model][LoRA]LoRA support added for MiniCPMV2.5 (vllm-project#7199)

* [BugFix] Fix seeded random sampling with encoder-decoder models (vllm-project#8870)

Co-authored-by: Roger Wang <[email protected]>

* [Misc] Fix typo in BlockSpaceManagerV1 (vllm-project#8944)

* [Frontend] Added support for HF's new `continue_final_message` parameter (vllm-project#8942)

* [Kernel][Model] Varlen prefill + Prefill chunking support for mamba kernels and Jamba model (vllm-project#8533)

* [Model] support input embeddings for qwen2vl (vllm-project#8856)

* [Misc][CI/Build] Include `cv2` via `mistral_common[opencv]`  (vllm-project#8951)

* [Model][LoRA]LoRA support added for MiniCPMV2.6 (vllm-project#8943)

Co-authored-by: DarkLight1337 <[email protected]>

* [Model] Expose InternVL2 max_dynamic_patch as a mm_processor_kwarg (vllm-project#8946)

* [Core] Make scheduling policy settable via EngineArgs (vllm-project#8956)

* [Misc] Adjust max_position_embeddings for LoRA compatibility (vllm-project#8957)

* [ci] Add CODEOWNERS for test directories  (vllm-project#8795)

Signed-off-by: kevin <[email protected]>

* [CI][SpecDecode] Fix spec decode tests, use flash attention backend for spec decode CI tests. (vllm-project#8975)

* [Frontend][Core] Move guided decoding params into sampling params (vllm-project#8252)

Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Nick Hill <[email protected]>

* [CI/Build] Fix machete generated kernel files ordering (vllm-project#8976)

Signed-off-by: kevin <[email protected]>
Co-authored-by: Cody Yu <[email protected]>

* [torch.compile] fix tensor alias (vllm-project#8982)

* [Misc] add process_weights_after_loading for DummyLoader (vllm-project#8969)

* [Bugfix] Fix Fuyu tensor parallel inference (vllm-project#8986)

* [Bugfix] Fix Token IDs Reference for MiniCPM-V When Images are Provided With No Placeholders (vllm-project#8991)

Signed-off-by: Alex-Brooks <[email protected]>

* [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (vllm-project#8965)

* [Doc] Update list of supported models (vllm-project#8987)

* Update benchmark_serving.py to read and write json-datasets, results in UTF8, for better compatibility with Windows (vllm-project#8997)

* [Spec Decode] (1/2) Remove batch expansion (vllm-project#8839)

* [Core] Combined support for multi-step scheduling, chunked prefill & prefix caching (vllm-project#8804)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Andrew Feldman <[email protected]>

* [Misc] Update Default Image Mapper Error Log (vllm-project#8977)

Signed-off-by: Alex-Brooks <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [Core] CUDA Graphs for Multi-Step + Chunked-Prefill (vllm-project#8645)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [OpenVINO] Enable GPU support for OpenVINO vLLM backend (vllm-project#8192)

* [Model]  Adding Granite MoE. (vllm-project#8206)

Co-authored-by: Nick Hill <[email protected]>

* [Doc] Update Granite model docs (vllm-project#9025)

* [Bugfix] example template should not add parallel_tool_prompt if tools is none (vllm-project#9007)

* [Misc] log when using default MoE config (vllm-project#8971)

* [BugFix] Enforce Mistral ToolCall id constraint when using the Mistral tool call parser (vllm-project#9020)

* [Core] Make BlockSpaceManagerV2 the default BlockManager to use. (vllm-project#8678)

* [Frontend] [Neuron] Parse literals out of override-neuron-config (vllm-project#8959)

Co-authored-by: Jerzy Zagorski <[email protected]>

* [misc] add forward context for attention (vllm-project#9029)

* Fix failing spec decode test (vllm-project#9054)

* [Bugfix] Weight loading fix for OPT model (vllm-project#9042)

Co-authored-by: dvres <[email protected]>

* [Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model (vllm-project#8405)

* [CI/Build] Per file CUDA Archs (improve wheel size and dev build times) (vllm-project#8845)

* [Misc] Enable multi-step output streaming by default (vllm-project#9047)

* [Models] Add remaining model PP support (vllm-project#7168)

Signed-off-by: Muralidhar Andoorveedu <[email protected]>
Signed-off-by: Murali Andoorveedu <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>

* [Misc] Move registry to its own file (vllm-project#9064)

* [Bugfix] Reshape the dimensions of the input image embeddings in Qwen2VL (vllm-project#9071)

* [Bugfix] Flash attention arches not getting set properly (vllm-project#9062)

* [Model] add a bunch of supported lora modules for mixtral (vllm-project#9008)

Signed-off-by: Prashant Gupta <[email protected]>

* Remove AMD Ray Summit Banner (vllm-project#9075)

* [Hardware][PowerPC] Make oneDNN dependency optional for Power (vllm-project#9039)

Signed-off-by: Varad Ahirwadkar <[email protected]>

* [Core][VLM] Test registration for OOT multimodal models (vllm-project#8717)

Co-authored-by: DarkLight1337 <[email protected]>

* Adds truncate_prompt_tokens param for embeddings creation (vllm-project#8999)

Signed-off-by: Flavia Beo <[email protected]>

* [Kernel] Zero point support in fused MarlinMoE kernel + AWQ Fused MoE (vllm-project#8973)

Co-authored-by: Dipika <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>

* [CI] Update performance benchmark: upgrade trt-llm to r24.07, and add SGLang (vllm-project#7412)

* [Misc] Improved prefix cache example (vllm-project#9077)

* [Misc] Add random seed for prefix cache benchmark (vllm-project#9081)

* [Misc] Fix CI lint (vllm-project#9085)

* [Hardware][Neuron] Add on-device sampling support for Neuron (vllm-project#8746)

Co-authored-by: Ashraf Mahgoub <[email protected]>

* [torch.compile] improve allreduce registration (vllm-project#9061)

* [Doc] Update README.md with Ray summit slides (vllm-project#9088)

* [Bugfix] use blockmanagerv1 for encoder-decoder (vllm-project#9084)

Co-authored-by: Roger Wang <[email protected]>

* [Bugfix] Fixes Phi3v & Ultravox Multimodal EmbeddingInputs (vllm-project#8979)

* [Model] Support Gemma2 embedding model (vllm-project#9004)

* [Bugfix] Deprecate registration of custom configs to huggingface (vllm-project#9083)

* [Bugfix] Fix order of arguments matters in config.yaml (vllm-project#8960)

* [core] use forward context for flash infer (vllm-project#9097)

* [Bugfix] Fix try-catch conditions to import correct Flash Attention Backend in Draft Model (vllm-project#9101)

* [Frontend] API support for beam search (vllm-project#9087)

Co-authored-by: youkaichao <[email protected]>

* [Misc] Remove user-facing error for removed VLM args (vllm-project#9104)

* [Model] PP support for embedding models and update docs (vllm-project#9090)

Co-authored-by: Roger Wang <[email protected]>

* [Bugfix] fix tool_parser error handling when serve a model not support it (vllm-project#8709)

* [Bugfix] Fix incorrect updates to num_computed_tokens in multi-step scheduling (vllm-project#9038)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [Bugfix][Hardware][CPU] Fix CPU model input for decode (vllm-project#9044)

* [BugFix][Core] Fix BlockManagerV2 when Encoder Input is None (vllm-project#9103)

* [core] remove beam search from the core (vllm-project#9105)

* [Model] Explicit interface for vLLM models and support OOT embedding models (vllm-project#9108)

* [Hardware][CPU] Cross-attention and Encoder-Decoder models support on CPU backend (vllm-project#9089)

* [Core] Refactor GGUF parameters packing and forwarding (vllm-project#8859)

* [Model] Support NVLM-D and fix QK Norm in InternViT (vllm-project#9045)

Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [Doc]: Add deploying_with_k8s guide (vllm-project#8451)

* [CI/Build] Add linting for github actions workflows (vllm-project#7876)

Signed-off-by: Russell Bryant <[email protected]>

* [Doc] Include performance benchmark in README (vllm-project#9135)

* [misc] fix comment and variable name (vllm-project#9139)

* Add Slack to README (vllm-project#9137)

* [misc] update utils to support comparing multiple settings (vllm-project#9140)

* [Intel GPU] Fix xpu decode input  (vllm-project#9145)

* [misc] improve ux on readme (vllm-project#9147)

* [Frontend] API support for beam search for MQLLMEngine (vllm-project#9117)

* [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (vllm-project#9131)

Signed-off-by: Alex-Brooks <[email protected]>

* Factor out common weight loading code

* Fix EAGLE model loading

* [Frontend] Add Early Validation For Chat Template / Tool Call Parser (vllm-project#9151)

Signed-off-by: Alex-Brooks <[email protected]>

* Improve efficiency

* Rename

* Update LLaVA-NeXT-Video

* [CI/Build] Add examples folder into Docker image so that we can leverage the templates*.jinja when serving models (vllm-project#8758)

Signed-off-by: Peter Pan <[email protected]>

* [Bugfix] fix OpenAI API server startup with --disable-frontend-multiprocessing (vllm-project#8537)

* Automatic loading and save memory

* Rename

* Update docstring

* Simplify

* Cleanup

* Fully enable recursive loading

* Clarify

* [Doc] Update vlm.rst to include an example on videos (vllm-project#9155)

Co-authored-by: Cyrus Leung <[email protected]>

* Fix incorrect semantics

* Move function

* Update error message

* Fix Ultravox loading

* spacing

* [Doc] Improve contributing and installation documentation (vllm-project#9132)

Signed-off-by: Rafael Vasquez <[email protected]>

* Fix server

* [Bugfix] Try to handle older versions of pytorch (vllm-project#9086)

---------

Signed-off-by: kevin <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Peter Pan <[email protected]>
Signed-off-by: tylertitsworth <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Muralidhar Andoorveedu <[email protected]>
Signed-off-by: Murali Andoorveedu <[email protected]>
Signed-off-by: Prashant Gupta <[email protected]>
Signed-off-by: Varad Ahirwadkar <[email protected]>
Signed-off-by: Flavia Beo <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: fyuan1316 <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Pernekhan Utemuratov <[email protected]>
Co-authored-by: Chirag Jain <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Peter Pan <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Brittany <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Tyler Titsworth <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: tastelikefeet <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Edouard B. <[email protected]>
Co-authored-by: Eduard Balzin <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Zilin Zhu <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: juncheoll <[email protected]>
Co-authored-by: danieljannai21 <[email protected]>
Co-authored-by: Mor Zusman <[email protected]>
Co-authored-by: whyiug <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: vlsav <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Andrew Feldman <[email protected]>
Co-authored-by: Sergey Shlyapnikov <[email protected]>
Co-authored-by: Shawn Tan <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Guillaume Calmettes <[email protected]>
Co-authored-by: xendo <[email protected]>
Co-authored-by: Jerzy Zagorski <[email protected]>
Co-authored-by: Domen Vreš <[email protected]>
Co-authored-by: dvres <[email protected]>
Co-authored-by: 代君 <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Prashant Gupta <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Varad Ahirwadkar <[email protected]>
Co-authored-by: Flávia Béo <[email protected]>
Co-authored-by: Dipika <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Andy Dai <[email protected]>
Co-authored-by: Chongming Ni <[email protected]>
Co-authored-by: Ashraf Mahgoub <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: hhzhang16 <[email protected]>
Co-authored-by: Xin Yang <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: Brendan Wong <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: TimWang <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Daniele <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
sumitd2 pushed a commit to sumitd2/vllm that referenced this pull request Nov 14, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025