[Version: 4.49.0] Qwen2.5-VL is not supported in vLLM because of transformers #36292
Comments
me too. |
me too |
cc @zucchini-nlp, but it would help a lot if someone could post the logs/error to help us figure out what's going on here! |
Hey all! The issue is being fixed on vllm side with vllm-project/vllm#13592 afaik, the team will check compatibility with v4.49 release cc @Isotr0py |
That's because the qwen2.5-vl implementation in vllm 0.7.2 (the latest release, not the latest commit) still relies on an import that no longer works with transformers 4.49. The vLLM team is planning to make a new release including the corresponding fix (vllm-project/vllm#13286), perhaps today or tomorrow. |
Hello! We have released vLLM 0.7.3. |
Unfortunately, I can't... my GPU cluster does not have access to huggingface... but I have the latest version of the model stored in this path... |
Update: I managed to solve it with the "weird" method mentioned above: pip install --force-reinstall git+https://github.com/huggingface/transformers.git@9985d06add07a4cc691dc54a7e34f54205c04d40 Do you know why this works? The 4.49.0.dev0 version of transformers supports Qwen2.5-VL-7B, but "pip install transformers --upgrade" to get the released 4.49.0 does not help. |
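If you want to check whether an installed transformers build is actually new enough, here is a minimal sketch (it only confirms that the Qwen2.5-VL classes are importable):

```python
# Sanity check: does this transformers build expose the Qwen2.5-VL classes
# that vLLM imports when it inspects the model architecture? On builds without
# Qwen2.5-VL support, the import below raises an ImportError, which is what
# ends up surfacing as vLLM's "failed to be inspected" error.
import transformers
from transformers import Qwen2_5_VLForConditionalGeneration

print("transformers:", transformers.__version__)  # expect 4.49.0 or a 4.49.0.dev0 build
print("model class:", Qwen2_5_VLForConditionalGeneration.__name__)
```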
GOOD NEWS |
@usun1997, hello, it still doesn't work. Could you share your version? |
Check your config.json file, and change the processor name in preprocessor_config.json to "image_processor_type": "Qwen2VLImageProcessor". This should work for most cases if you have the latest transformers. |
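For a local checkpoint, that edit can be scripted; a minimal sketch (the model directory path is a placeholder):

```python
# Point the image processor of a local Qwen2.5-VL checkpoint at the
# Qwen2VLImageProcessor class that exists in transformers 4.49.0.
import json
from pathlib import Path

model_dir = Path("/path/to/Qwen2.5-VL-7B-Instruct")  # placeholder: your local model directory
cfg_path = model_dir / "preprocessor_config.json"

cfg = json.loads(cfg_path.read_text())
cfg["image_processor_type"] = "Qwen2VLImageProcessor"
cfg_path.write_text(json.dumps(cfg, indent=2))
print("updated", cfg_path)
```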
still not working for me.. Same error: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details. |
Has it been fixed? I still have the error. |
@gouqi666 we meet again :) , have you found a fix for this issue? |
Still not working for me... |
haha, change Qwen2_5_VL to Qwen2_VL in the config.json |
I think the huggingface model has already been modified. For example, for Qwen2.5-vl-7b the preprocessor_config.json is now:
{
  "min_pixels": 3136,
  "max_pixels": 12845056,
  "patch_size": 14,
  "temporal_patch_size": 2,
  "merge_size": 2,
  "image_mean": [0.48145466, 0.4578275, 0.40821073],
  "image_std": [0.26862954, 0.26130258, 0.27577711],
  "image_processor_type": "Qwen2VLImageProcessor",
  "processor_class": "Qwen2_5_VLProcessor"
}
but still I am facing this issue |
have you updated your local directory? |
Yes... same error... even after updating the model locally. |
change the transformers version to 4.49 and vllm to 0.7.3 |
same error with vllm==0.7.3 and transformers==4.49.0 |
Try using python 3.12. I had the same issue with 3.11 but now everything works fine. |
I have solved this problem. It was an error in my own source code, but vLLM sent a puzzling message. |
@philipwan what message is it? I can take it back to see if we can update on the vllm side so that we don't confuse users! |
"ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details." |
I'm having the same problem with:
the latest model downloaded from HF today
|
for me
it works, but I don't know the reason, damn
|
System Info
error: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
right now people say that they are using methods like:
I mean, this is not supposed to be like this. I can't connect to your GitHub without a VPN, and with a VPN, I can't connect to my workspace. Could the transformers team just fix the problem instead of letting people solve it with some weird method? Thanks!
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Use Xinference and download the newest vLLM so that you get transformers 4.49.0. Download Qwen2.5-VL, deploy it in vLLM, and you get the error: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
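A minimal reproduction sketch without Xinference, using vLLM's offline LLM API (the model ID and length limit are just examples):

```python
# With vllm 0.7.2 + transformers 4.49.0 this fails at startup with
# "Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected";
# with vllm >= 0.7.3 the model is expected to load.
from vllm import LLM

llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct", max_model_len=8192)
out = llm.generate("Describe Qwen2.5-VL in one sentence.")
print(out[0].outputs[0].text)
```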
Expected behavior
NO MORE ERROR DURING DEPLOYMENT OF QWEN2.5 VL 7B IN VLLM