Conversation

@njhill (Member) commented Sep 26, 2024

No description provided.

@njhill requested a review from ywang96 on September 26, 2024 00:12
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small but essential subset of tests to catch errors quickly. You can run other CI tests on top of these by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

do_sample=True)
warpers = generation_model._get_logits_warper(generation_config, device)
assert len(warpers) == 2 # top_p and top_k
processors = generation_model._get_logits_processor(generation_config,
@njhill (Member Author):

_get_logits_warper was rolled into _get_logits_processor
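
In other words, the check now only needs the combined list; roughly something like this (a sketch, with abbreviated/assumed keyword arguments rather than the exact vLLM test code):

    # Sketch: transformers >= 4.45 builds the top-p/top-k warpers inside
    # _get_logits_processor, so the test inspects that single list instead of
    # calling the removed _get_logits_warper.
    from transformers.generation.logits_process import (TopKLogitsWarper,
                                                         TopPLogitsWarper)

    processors = generation_model._get_logits_processor(
        generation_config,
        input_ids_seq_length=1,
        encoder_input_ids=None,
        prefix_allowed_tokens_fn=None,
        logits_processor=[],
    )
    # The top-p and top-k warpers are now part of this one processor list.
    assert any(isinstance(p, TopPLogitsWarper) for p in processors)
    assert any(isinstance(p, TopKLogitsWarper) for p in processors)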

vllm/config.py (Outdated)

      "investigate.")

- if rope_type == "mrope":
+ if rope_type in ("mrope", "default"):
@njhill (Member Author):

"mrope" gets renamed to "default" in the Qwen2-VL config class

@DarkLight1337 (Member) commented Sep 26, 2024:

Qwen2-VL cannot be run in transformers>=4.45 even with this change.

[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/cyrusleung/vllm/examples/offline_inference_vision_language_multi_image.py", line 286, in <module>
[rank0]:     main(args)
[rank0]:   File "/home/cyrusleung/vllm/examples/offline_inference_vision_language_multi_image.py", line 262, in main
[rank0]:     run_generate(model, QUESTION, IMAGE_URLS)
[rank0]:   File "/home/cyrusleung/vllm/examples/offline_inference_vision_language_multi_image.py", line 205, in run_generate
[rank0]:     req_data = model_example_map[model](question, image_urls)
[rank0]:   File "/home/cyrusleung/vllm/examples/offline_inference_vision_language_multi_image.py", line 151, in load_qwen2_vl
[rank0]:     llm = LLM(
[rank0]:   File "/home/cyrusleung/vllm/vllm/entrypoints/llm.py", line 214, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/home/cyrusleung/vllm/vllm/engine/llm_engine.py", line 564, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/home/cyrusleung/vllm/vllm/engine/llm_engine.py", line 325, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/home/cyrusleung/vllm/vllm/executor/executor_base.py", line 47, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/cyrusleung/vllm/vllm/executor/gpu_executor.py", line 40, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/home/cyrusleung/vllm/vllm/worker/worker.py", line 183, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/cyrusleung/vllm/vllm/worker/model_runner.py", line 1016, in load_model
[rank0]:     self.model = get_model(model_config=self.model_config,
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/model_loader/loader.py", line 399, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/model_loader/loader.py", line 176, in _initialize_model
[rank0]:     return build_model(
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/model_loader/loader.py", line 161, in build_model
[rank0]:     return model_class(config=hf_config,
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/qwen2_vl.py", line 876, in __init__
[rank0]:     self.model = Qwen2Model(config, cache_config, quant_config)
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/qwen2.py", line 248, in __init__
[rank0]:     self.start_layer, self.end_layer, self.layers = make_layers(
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/utils.py", line 282, in make_layers
[rank0]:     [PPMissingLayer() for _ in range(start_layer)] + [
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/utils.py", line 283, in <listcomp>
[rank0]:     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/qwen2.py", line 250, in <lambda>
[rank0]:     lambda prefix: Qwen2DecoderLayer(config=config,
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/qwen2.py", line 175, in __init__
[rank0]:     self.self_attn = Qwen2Attention(
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/models/qwen2.py", line 133, in __init__
[rank0]:     self.rotary_emb = get_rope(
[rank0]:   File "/home/cyrusleung/vllm/vllm/model_executor/layers/rotary_embedding.py", line 1003, in get_rope
[rank0]:     raise ValueError(f"Unknown RoPE scaling type {scaling_type}")
[rank0]: ValueError: Unknown RoPE scaling type default

But if we install the older version of transformers mentioned in the docs (git+https://github.com/huggingface/transformers.git@21fac7abba2a37fae86106f87fcf9974fd1e3830), then vLLM cannot be run because it imports MLlamaConfig from the top level. We'll open a separate PR to patch in Qwen2-VL support for transformers v4.45.

@njhill (Member Author):

Thanks @DarkLight1337, yes, I was just making a change to update the scaling type in the config back to "mrope" if it's "default". @ywang96 found this open issue: huggingface/transformers#33401
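
Roughly, the idea is something like the following (a sketch with assumed field names, not the actual diff):

    # Sketch: restore the rope type that the Qwen2-VL config class renames,
    # so vLLM's get_rope() still sees "mrope". Using the presence of
    # "mrope_section" as the marker is an assumption for illustration.
    rope_scaling = getattr(hf_config, "rope_scaling", None)
    if rope_scaling and rope_scaling.get("type") == "default" \
            and "mrope_section" in rope_scaling:
        rope_scaling["type"] = "mrope"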

@njhill (Member Author):

Change made in ce1d477

  try:
      tokenizer = get_tokenizer(lora_request.lora_path, *args, **kwargs)
- except OSError as e:
+ except (OSError, ValueError) as e:
@njhill (Member Author):

A ValueError rather than an OSError is now thrown when there's no config.json present.
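
So the fallback now looks roughly like this (a sketch; the surrounding function and logger are assumed for illustration):

    try:
        tokenizer = get_tokenizer(lora_request.lora_path, *args, **kwargs)
    except (OSError, ValueError) as e:
        # Newer transformers raises ValueError instead of OSError when the
        # adapter directory has no config.json; treat both as "no LoRA
        # tokenizer" and fall back to the base model's tokenizer.
        logger.warning("No tokenizer found in %s, using base model tokenizer "
                       "instead. (Exception: %s)", lora_request.lora_path, e)
        tokenizer = None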

@ywang96 (Member) left a comment:

Thanks for fixing! I didn't know we had an except check for a particular error when loading the LoRA tokenizer.

Let's see if this can make CI go all green!

@ywang96 added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Sep 26, 2024
@mgoin (Member) left a comment:

These seem like reasonable changes. LGTM if it makes the build green, thanks!

@njhill (Member Author) commented Sep 26, 2024

Pushed another commit to hopefully fix some additional failures.

@jeejeelee (Collaborator):

It seems that this failure is caused by transformers modifying the related function args. This has been reported to the official chatglm3 team.

@DarkLight1337 (Member) commented Sep 26, 2024

I can confirm that the following tests pass locally:

  • tests/distributed/test_pipeline_parallel.py
  • tests/entrypoints/openai/test_serving_chat.py
  • tests/models/decoder_only/language/test_granite.py
  • tests/models/decoder_only/vision_language/test_llava_next_video.py
  • tests/models/decoder_only/vision_language/test_llava_onevision.py
  • tests/models/test_registry.py

The following tests still fail locally (but pass in CI):

  • tests/samplers/test_sampler.py
_____________________________________________________________________________________ test_sampler_top_k_top_p[cuda:1-71] _____________________________________________________________________________________

seed = 71, device = 'cuda:1'

        hf_probs = processors(torch.zeros_like(fake_logits), fake_logits.clone())
        hf_probs = torch.softmax(hf_probs, dim=-1, dtype=torch.float)
>       torch.testing.assert_close(hf_probs, sample_probs, rtol=0.0, atol=1e-5)
E       AssertionError: Tensor-likes are not close!
E       
E       Mismatched elements: 2 / 5312000 (0.0%)
E       Greatest absolute difference: 0.4882833957672119 at index (61, 9638) (up to 1e-05 allowed)
E       Greatest relative difference: 1.0 at index (61, 9638) (up to 0.0 allowed)

FAILED tests/samplers/test_sampler.py::test_sampler_top_k_top_p[cuda:0-11] - AssertionError: Tensor-likes are not close!
FAILED tests/samplers/test_sampler.py::test_sampler_top_k_top_p[cuda:0-36] - AssertionError: Tensor-likes are not close!
FAILED tests/samplers/test_sampler.py::test_sampler_top_k_top_p[cuda:0-71] - AssertionError: Tensor-likes are not close!
FAILED tests/samplers/test_sampler.py::test_sampler_top_k_top_p[cuda:1-11] - AssertionError: Tensor-likes are not close!
FAILED tests/samplers/test_sampler.py::test_sampler_top_k_top_p[cuda:1-36] - AssertionError: Tensor-likes are not close!
FAILED tests/samplers/test_sampler.py::test_sampler_top_k_top_p[cuda:1-71] - AssertionError: Tensor-likes are not close!

@DarkLight1337 (Member) commented Sep 26, 2024

@hongxiayang can you look into the failing AMD tests and fix them in a separate PR?

Update: I suspect it has something to do with entrypoints/openai/test_oot_registration.py. Let's see whether running that separately can fix the problem.

@DarkLight1337 (Member):

I have unblocked additional tests to check for regressions.

@DarkLight1337 (Member) commented Sep 26, 2024

Re: LoRA 1 and LoRA 3 failures, I think we need to regenerate https://huggingface.co/yard1/llama-2-7b-sql-lora-test to have model_type in the config? seems that the error still fails to get caught somehow...

Update: We need (?) to fix the error message matching in

or "requires you to execute the tokenizer file" in str(e)):

Update 2: This is odd. The test passes locally without having to update the error message matching...

Update 3: Never mind, I was looking at the wrong logs.
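
(For reference, the error-message matching mentioned above has roughly this shape; the strings and surrounding code are paraphrased, not copied verbatim:)

    from transformers import AutoTokenizer

    try:
        tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, **kwargs)
    except ValueError as e:
        # If loading fails only because the repo ships custom code, surface a
        # clearer hint instead of the raw transformers error.
        if not trust_remote_code and (
                "requires you to execute the configuration file" in str(e)
                or "requires you to execute the tokenizer file" in str(e)):
            raise RuntimeError(
                "Failed to load the tokenizer; consider passing "
                "trust_remote_code=True if the repo requires custom code."
            ) from e
        raise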

@njhill (Member Author) commented Sep 26, 2024

@DarkLight1337 looks like test_basic_distributed_correctness.py is still called in the tests but was removed in #7820

@DarkLight1337 (Member):

> @DarkLight1337 looks like test_basic_distributed_correctness.py is still called in the tests but was removed in #7820

I'll update this PR once the other tests finish.

@njhill (Member Author) commented Sep 26, 2024

> Re: LoRA 1 and LoRA 3 failures, I think we need to regenerate https://huggingface.co/yard1/llama-2-7b-sql-lora-test to have model_type in the config? seems that the error still fails to get caught somehow...
>
> Update: We need (?) to fix the error message matching in
>
> or "requires you to execute the tokenizer file" in str(e)):
>
> Update 2: This is odd. The test passes locally without having to update the error message matching...

The CI failure looks potentially like an incompatibility of the custom chatglm3-6b tokenizer and the latest transformers version.

@DarkLight1337 (Member):

> Re: LoRA 1 and LoRA 3 failures, I think we need to regenerate https://huggingface.co/yard1/llama-2-7b-sql-lora-test to have model_type in the config? seems that the error still fails to get caught somehow...
>
> Update: We need (?) to fix the error message matching in
>
> or "requires you to execute the tokenizer file" in str(e)):
>
> Update 2: This is odd. The test passes locally without having to update the error message matching...

Oops, I just realized I was looking at the old logs before your fix. Only LoRA 4 fails now.

@ywang96 (Member) commented Sep 26, 2024

> The CI failure looks potentially like an incompatibility of the custom chatglm3-6b tokenizer and the latest transformers version.

IMO we should just skip this test since @jeejeelee mentioned it's also reported to the chatglm team

@DarkLight1337 (Member):

The PR should be good to go now; hope the tests pass this time. (LoRA Long Context has been failing for a while, so it is probably unrelated to the transformers upgrade. We can fix that later.)

@njhill (Member Author) commented Sep 26, 2024

Everything has passed apart from llava OOM still :(

@ywang96 (Member) commented Sep 26, 2024

> Everything has passed apart from llava OOM still :(

Let's see if this final try works - if not, I'll just remove this test because it is pretty memory intensive (a 7B model with 4 images on an L4 GPU).

@simon-mo merged commit 4b377d6 into vllm-project:main on Sep 26, 2024
@njhill deleted the transformers-fixes branch on September 26, 2024 23:49
@vrdn-23 (Contributor) commented Oct 1, 2024

> Re: LoRA 1 and LoRA 3 failures, I think we need to regenerate https://huggingface.co/yard1/llama-2-7b-sql-lora-test to have model_type in the config? seems that the error still fails to get caught somehow...
>
> Update: We need (?) to fix the error message matching in
>
> or "requires you to execute the tokenizer file" in str(e)):
>
> Update 2: This is odd. The test passes locally without having to update the error message matching...
>
> Update 3: Never mind, I was looking at the wrong logs.

@DarkLight1337 @njhill I'm seeing the same error while loading LoRA adapters on the latest version of 0.6.2. It seems that the changes Nick made have not been reflected in the latest pip version release? Is there a post patch fix incoming with these changes soon?

@DarkLight1337 (Member):

> > Re: LoRA 1 and LoRA 3 failures, I think we need to regenerate https://huggingface.co/yard1/llama-2-7b-sql-lora-test to have model_type in the config? seems that the error still fails to get caught somehow...
> > Update: We need (?) to fix the error message matching in
> > or "requires you to execute the tokenizer file" in str(e)):
> > Update 2: This is odd. The test passes locally without having to update the error message matching...
> > Update 3: Never mind, I was looking at the wrong logs.
>
> @DarkLight1337 @njhill I'm seeing the same error while loading LoRA adapters on the latest version of 0.6.2. It seems that the changes Nick made have not been reflected in the latest pip version release? Is there a post patch fix incoming with these changes soon?

Yes, this PR was merged after v0.6.2. I'm not sure when the next patch will be dropped.

shajrawi pushed a commit to ROCm/vllm that referenced this pull request Oct 9, 2024
* [Build/CI] Upgrade to gcc 10 in the base build Docker image (vllm-project#8814)

* [Docs] Add README to the build docker image (vllm-project#8825)

* [CI/Build] Fix missing ci dependencies (vllm-project#8834)

* [misc][installation] build from source without compilation (vllm-project#8818)

* [ci] Soft fail Entrypoints, Samplers, LoRA, Decoder-only VLM (vllm-project#8872)

Signed-off-by: kevin <[email protected]>

* [Bugfix] Include encoder prompts len to non-stream api usage response (vllm-project#8861)

* [Misc] Change dummy profiling and BOS fallback warns to log once (vllm-project#8820)

* [Bugfix] Fix print_warning_once's line info (vllm-project#8867)

* fix validation: Only set tool_choice `auto` if at least one tool is provided (vllm-project#8568)

* [Bugfix] Fixup advance_step.cu warning (vllm-project#8815)

* [BugFix] Fix test breakages from transformers 4.45 upgrade (vllm-project#8829)

* [Installation] Allow lower versions of FastAPI to maintain Ray 2.9 compatibility (vllm-project#8764)

* [Feature] Add support for Llama 3.1 and 3.2 tool use (vllm-project#8343)

Signed-off-by: Max de Bayser <[email protected]>

* [Core] rename`PromptInputs` and `inputs` (vllm-project#8876)

* [misc] fix collect env (vllm-project#8894)

* [MISC] Fix invalid escape sequence '\' (vllm-project#8830)

Signed-off-by: Peter Pan <[email protected]>

* [Bugfix][VLM] Fix Fuyu batching inference with `max_num_seqs>1` (vllm-project#8892)

* [TPU] Update pallas.py to support trillium (vllm-project#8871)

* [torch.compile] use empty tensor instead of None for profiling (vllm-project#8875)

* [Kernel] AQ AZP 4/4: Integrate asymmetric quantization to linear method (vllm-project#7271)

* [Bugfix] fix for deepseek w4a16 (vllm-project#8906)

Co-authored-by: mgoin <[email protected]>

* [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (vllm-project#8378)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (vllm-project#8911)

* [Core] Priority-based scheduling in async engine (vllm-project#8850)

* [misc] fix wheel name (vllm-project#8919)

* [Bugfix][Intel] Fix XPU Dockerfile Build (vllm-project#7824)

Signed-off-by: tylertitsworth <[email protected]>
Co-authored-by: youkaichao <[email protected]>

* [Misc] Remove vLLM patch of `BaichuanTokenizer` (vllm-project#8921)

* [Bugfix] Fix code for downloading models from modelscope (vllm-project#8443)

* [Bugfix] Fix PP for Multi-Step (vllm-project#8887)

* [CI/Build] Update models tests & examples (vllm-project#8874)

Co-authored-by: Roger Wang <[email protected]>

* [Frontend] Make beam search emulator temperature modifiable (vllm-project#8928)

Co-authored-by: Eduard Balzin <[email protected]>

* [Bugfix] Support testing prefill throughput with benchmark_serving.py --hf-output-len 1 (vllm-project#8891)

* [doc] organize installation doc and expose per-commit docker (vllm-project#8931)

* [Core] Improve choice of Python multiprocessing method (vllm-project#8823)

Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: youkaichao <[email protected]>

* [Bugfix] Block manager v2 with preemption and lookahead slots (vllm-project#8824)

* [Bugfix] Fix Marlin MoE act order when is_k_full == False (vllm-project#8741)

Co-authored-by: Tyler Michael Smith <[email protected]>

* [CI/Build] Add test decorator for minimum GPU memory (vllm-project#8925)

* [Build/CI] Set FETCHCONTENT_BASE_DIR to one location for better caching (vllm-project#8930)

* [Model] Support Qwen2.5-Math-RM-72B (vllm-project#8896)

* [Model][LoRA]LoRA support added for MiniCPMV2.5 (vllm-project#7199)

* [BugFix] Fix seeded random sampling with encoder-decoder models (vllm-project#8870)

Co-authored-by: Roger Wang <[email protected]>

* [Misc] Fix typo in BlockSpaceManagerV1 (vllm-project#8944)

* [Frontend] Added support for HF's new `continue_final_message` parameter (vllm-project#8942)

* [Kernel][Model] Varlen prefill + Prefill chunking support for mamba kernels and Jamba model (vllm-project#8533)

* [Model] support input embeddings for qwen2vl (vllm-project#8856)

* [Misc][CI/Build] Include `cv2` via `mistral_common[opencv]`  (vllm-project#8951)

* [Model][LoRA]LoRA support added for MiniCPMV2.6 (vllm-project#8943)

Co-authored-by: DarkLight1337 <[email protected]>

* [Model] Expose InternVL2 max_dynamic_patch as a mm_processor_kwarg (vllm-project#8946)

* [Core] Make scheduling policy settable via EngineArgs (vllm-project#8956)

* [Misc] Adjust max_position_embeddings for LoRA compatibility (vllm-project#8957)

* [ci] Add CODEOWNERS for test directories  (vllm-project#8795)

Signed-off-by: kevin <[email protected]>

* [CI][SpecDecode] Fix spec decode tests, use flash attention backend for spec decode CI tests. (vllm-project#8975)

* [Frontend][Core] Move guided decoding params into sampling params (vllm-project#8252)

Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Nick Hill <[email protected]>

* [CI/Build] Fix machete generated kernel files ordering (vllm-project#8976)

Signed-off-by: kevin <[email protected]>
Co-authored-by: Cody Yu <[email protected]>

* [torch.compile] fix tensor alias (vllm-project#8982)

* [Misc] add process_weights_after_loading for DummyLoader (vllm-project#8969)

* [Bugfix] Fix Fuyu tensor parallel inference (vllm-project#8986)

* [Bugfix] Fix Token IDs Reference for MiniCPM-V When Images are Provided With No Placeholders (vllm-project#8991)

Signed-off-by: Alex-Brooks <[email protected]>

* [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (vllm-project#8965)

* [Doc] Update list of supported models (vllm-project#8987)

* Update benchmark_serving.py to read and write json-datasets, results in UTF8, for better compatibility with Windows (vllm-project#8997)

* [Spec Decode] (1/2) Remove batch expansion (vllm-project#8839)

* [Core] Combined support for multi-step scheduling, chunked prefill & prefix caching (vllm-project#8804)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Andrew Feldman <[email protected]>

* [Misc] Update Default Image Mapper Error Log (vllm-project#8977)

Signed-off-by: Alex-Brooks <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [Core] CUDA Graphs for Multi-Step + Chunked-Prefill (vllm-project#8645)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [OpenVINO] Enable GPU support for OpenVINO vLLM backend (vllm-project#8192)

* [Model]  Adding Granite MoE. (vllm-project#8206)

Co-authored-by: Nick Hill <[email protected]>

* [Doc] Update Granite model docs (vllm-project#9025)

* [Bugfix] example template should not add parallel_tool_prompt if tools is none (vllm-project#9007)

* [Misc] log when using default MoE config (vllm-project#8971)

* [BugFix] Enforce Mistral ToolCall id constraint when using the Mistral tool call parser (vllm-project#9020)

* [Core] Make BlockSpaceManagerV2 the default BlockManager to use. (vllm-project#8678)

* [Frontend] [Neuron] Parse literals out of override-neuron-config (vllm-project#8959)

Co-authored-by: Jerzy Zagorski <[email protected]>

* [misc] add forward context for attention (vllm-project#9029)

* Fix failing spec decode test (vllm-project#9054)

* [Bugfix] Weight loading fix for OPT model (vllm-project#9042)

Co-authored-by: dvres <[email protected]>

* [Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model (vllm-project#8405)

* [CI/Build] Per file CUDA Archs (improve wheel size and dev build times) (vllm-project#8845)

* [Misc] Enable multi-step output streaming by default (vllm-project#9047)

* [Models] Add remaining model PP support (vllm-project#7168)

Signed-off-by: Muralidhar Andoorveedu <[email protected]>
Signed-off-by: Murali Andoorveedu <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>

* [Misc] Move registry to its own file (vllm-project#9064)

* [Bugfix] Reshape the dimensions of the input image embeddings in Qwen2VL (vllm-project#9071)

* [Bugfix] Flash attention arches not getting set properly (vllm-project#9062)

* [Model] add a bunch of supported lora modules for mixtral (vllm-project#9008)

Signed-off-by: Prashant Gupta <[email protected]>

* Remove AMD Ray Summit Banner (vllm-project#9075)

* [Hardware][PowerPC] Make oneDNN dependency optional for Power (vllm-project#9039)

Signed-off-by: Varad Ahirwadkar <[email protected]>

* [Core][VLM] Test registration for OOT multimodal models (vllm-project#8717)

Co-authored-by: DarkLight1337 <[email protected]>

* Adds truncate_prompt_tokens param for embeddings creation (vllm-project#8999)

Signed-off-by: Flavia Beo <[email protected]>

* [Kernel] Zero point support in fused MarlinMoE kernel + AWQ Fused MoE (vllm-project#8973)

Co-authored-by: Dipika <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>

* [CI] Update performance benchmark: upgrade trt-llm to r24.07, and add SGLang (vllm-project#7412)

* [Misc] Improved prefix cache example (vllm-project#9077)

* [Misc] Add random seed for prefix cache benchmark (vllm-project#9081)

* [Misc] Fix CI lint (vllm-project#9085)

* [Hardware][Neuron] Add on-device sampling support for Neuron (vllm-project#8746)

Co-authored-by: Ashraf Mahgoub <[email protected]>

* [torch.compile] improve allreduce registration (vllm-project#9061)

* [Doc] Update README.md with Ray summit slides (vllm-project#9088)

* [Bugfix] use blockmanagerv1 for encoder-decoder (vllm-project#9084)

Co-authored-by: Roger Wang <[email protected]>

* [Bugfix] Fixes Phi3v & Ultravox Multimodal EmbeddingInputs (vllm-project#8979)

* [Model] Support Gemma2 embedding model (vllm-project#9004)

* [Bugfix] Deprecate registration of custom configs to huggingface (vllm-project#9083)

* [Bugfix] Fix order of arguments matters in config.yaml (vllm-project#8960)

* [core] use forward context for flash infer (vllm-project#9097)

* [Bugfix] Fix try-catch conditions to import correct Flash Attention Backend in Draft Model (vllm-project#9101)

* [Frontend] API support for beam search (vllm-project#9087)

Co-authored-by: youkaichao <[email protected]>

* [Misc] Remove user-facing error for removed VLM args (vllm-project#9104)

* [Model] PP support for embedding models and update docs (vllm-project#9090)

Co-authored-by: Roger Wang <[email protected]>

* [Bugfix] fix tool_parser error handling when serve a model not support it (vllm-project#8709)

* [Bugfix] Fix incorrect updates to num_computed_tokens in multi-step scheduling (vllm-project#9038)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [Bugfix][Hardware][CPU] Fix CPU model input for decode (vllm-project#9044)

* [BugFix][Core] Fix BlockManagerV2 when Encoder Input is None (vllm-project#9103)

* [core] remove beam search from the core (vllm-project#9105)

* [Model] Explicit interface for vLLM models and support OOT embedding models (vllm-project#9108)

* [Hardware][CPU] Cross-attention and Encoder-Decoder models support on CPU backend (vllm-project#9089)

* [Core] Refactor GGUF parameters packing and forwarding (vllm-project#8859)

* [Model] Support NVLM-D and fix QK Norm in InternViT (vllm-project#9045)

Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [Doc]: Add deploying_with_k8s guide (vllm-project#8451)

* [CI/Build] Add linting for github actions workflows (vllm-project#7876)

Signed-off-by: Russell Bryant <[email protected]>

* [Doc] Include performance benchmark in README (vllm-project#9135)

* [misc] fix comment and variable name (vllm-project#9139)

* Add Slack to README (vllm-project#9137)

* [misc] update utils to support comparing multiple settings (vllm-project#9140)

* [Intel GPU] Fix xpu decode input  (vllm-project#9145)

* [misc] improve ux on readme (vllm-project#9147)

* [Frontend] API support for beam search for MQLLMEngine (vllm-project#9117)

* [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (vllm-project#9131)

Signed-off-by: Alex-Brooks <[email protected]>

* Factor out common weight loading code

* Fix EAGLE model loading

* [Frontend] Add Early Validation For Chat Template / Tool Call Parser (vllm-project#9151)

Signed-off-by: Alex-Brooks <[email protected]>

* Improve efficiency

* Rename

* Update LLaVA-NeXT-Video

* [CI/Build] Add examples folder into Docker image so that we can leverage the templates*.jinja when serving models (vllm-project#8758)

Signed-off-by: Peter Pan <[email protected]>

* [Bugfix] fix OpenAI API server startup with --disable-frontend-multiprocessing (vllm-project#8537)

* Automatic loading and save memory

* Rename

* Update docstring

* Simplify

* Cleanup

* Fully enable recursive loading

* Clarify

* [Doc] Update vlm.rst to include an example on videos (vllm-project#9155)

Co-authored-by: Cyrus Leung <[email protected]>

* Fix incorrect semantics

* Move function

* Update error message

* Fix Ultravox loading

* spacing

* [Doc] Improve contributing and installation documentation (vllm-project#9132)

Signed-off-by: Rafael Vasquez <[email protected]>

* Fix server

* [Bugfix] Try to handle older versions of pytorch (vllm-project#9086)

---------

Signed-off-by: kevin <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Peter Pan <[email protected]>
Signed-off-by: tylertitsworth <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Muralidhar Andoorveedu <[email protected]>
Signed-off-by: Murali Andoorveedu <[email protected]>
Signed-off-by: Prashant Gupta <[email protected]>
Signed-off-by: Varad Ahirwadkar <[email protected]>
Signed-off-by: Flavia Beo <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: fyuan1316 <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Pernekhan Utemuratov <[email protected]>
Co-authored-by: Chirag Jain <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Peter Pan <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Brittany <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Tyler Titsworth <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: tastelikefeet <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Edouard B. <[email protected]>
Co-authored-by: Eduard Balzin <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Zilin Zhu <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: juncheoll <[email protected]>
Co-authored-by: danieljannai21 <[email protected]>
Co-authored-by: Mor Zusman <[email protected]>
Co-authored-by: whyiug <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: vlsav <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Andrew Feldman <[email protected]>
Co-authored-by: Sergey Shlyapnikov <[email protected]>
Co-authored-by: Shawn Tan <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Guillaume Calmettes <[email protected]>
Co-authored-by: xendo <[email protected]>
Co-authored-by: Jerzy Zagorski <[email protected]>
Co-authored-by: Domen Vreš <[email protected]>
Co-authored-by: dvres <[email protected]>
Co-authored-by: 代君 <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Prashant Gupta <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Varad Ahirwadkar <[email protected]>
Co-authored-by: Flávia Béo <[email protected]>
Co-authored-by: Dipika <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Andy Dai <[email protected]>
Co-authored-by: Chongming Ni <[email protected]>
Co-authored-by: Ashraf Mahgoub <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: hhzhang16 <[email protected]>
Co-authored-by: Xin Yang <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: Brendan Wong <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: TimWang <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Daniele <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
sumitd2 pushed a commit to sumitd2/vllm that referenced this pull request Nov 14, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025
