[Bugfix] Temporarily disable gptq_bitblas on ROCm #17411


Merged: 1 commit merged into vllm-project:main on May 1, 2025

Conversation

@nlzy (Contributor) commented Apr 29, 2025

In v0.8.5, these lines of code replace gptq with gptq_bitblas:

vllm/vllm/config.py, lines 795 to 810 at ba41cc9:

```python
quantization_override = method.override_quantization_method(
    quant_cfg, self.quantization)
if quantization_override is not None:
    # Raise error if the override is not custom (custom would
    # be in QUANTIZATION_METHODS but not QuantizationMethods)
    # and hasn't been added to the overrides list.
    if (name in get_args(QuantizationMethods)
            and name not in overrides):
        raise ValueError(
            f"Quantization method {name} is an override but "
            "is has not been added to the `overrides` list "
            "above. This is necessary to ensure that the "
            "overrides are checked in order of preference.")
    quant_method = quantization_override
    self.quantization = quantization_override
    break
```
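
To make the replacement concrete, here is a self-contained sketch of the override flow (all names below are illustrative stand-ins, not vLLM's actual classes, and the checkpoint-matching condition is simplified):

```python
# Illustrative sketch of the override mechanism quoted above.
# Class names and the matching condition are simplified stand-ins.

class GPTQBitBLASConfig:
    @classmethod
    def override_quantization_method(cls, quant_cfg, user_quant):
        # In v0.8.5 the BitBLAS method claims GPTQ checkpoints, which is
        # how "gptq" is silently rewritten to "gptq_bitblas".
        if quant_cfg.get("quant_method") == "gptq":
            return "gptq_bitblas"
        return None

class GPTQConfig:
    @classmethod
    def override_quantization_method(cls, quant_cfg, user_quant):
        return None  # plain gptq never overrides anything

def resolve_quantization(quant_cfg, quantization):
    # Mirrors the loop around the excerpt: the first method whose hook
    # returns a non-None name wins and replaces the user-facing method.
    for method in (GPTQBitBLASConfig, GPTQConfig):
        override = method.override_quantization_method(quant_cfg, quantization)
        if override is not None:
            return override
    return quantization

print(resolve_quantization({"quant_method": "gptq"}, "gptq"))  # gptq_bitblas
```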

After the replacement, a check is introduced to verify whether the current platform supports the specified quantization method:

```python
current_platform.verify_quantization(self.quantization)
```

However, gptq_bitblas is not included in the list of supported methods for ROCm, causing vLLM to throw an exception and exit:

vllm/vllm/platforms/rocm.py, lines 132 to 135 at ba41cc9:

```python
supported_quantization: list[str] = [
    "awq", "gptq", "fp8", "compressed-tensors", "fbgemm_fp8", "gguf",
    "quark", "ptpc_fp8"
]
```
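
For context, verify_quantization amounts to a membership test against this list. The sketch below paraphrases the base Platform hook from memory; it is not the exact vLLM source, but it shows why startup fails:

```python
# Paraphrased sketch of the platform-side check (not the exact source).
class Platform:
    device_name: str = "rocm"
    supported_quantization: list[str] = [
        "awq", "gptq", "fp8", "compressed-tensors", "fbgemm_fp8", "gguf",
        "quark", "ptpc_fp8"
    ]

    @classmethod
    def verify_quantization(cls, quantization: str) -> None:
        # "gptq_bitblas" is absent from the ROCm allow-list, so this
        # raises and vLLM exits during startup.
        if quantization not in cls.supported_quantization:
            raise ValueError(
                f"{quantization} quantization is currently not supported "
                f"in {cls.device_name}.")

try:
    Platform.verify_quantization("gptq_bitblas")
except ValueError as e:
    print(e)
```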

Because of this check, gptq_bitblas cannot be functional on the ROCm platform in this version of vLLM: any user attempting to use gptq_bitblas on ROCm will hit this error, and the program will exit.

In other words, it appears that no one has actually tested or successfully run gptq_bitblas on the ROCm platform. In this situation, I believe we should temporarily disable gptq_bitblas for ROCm users. The feature can be re-enabled once a developer successfully tests and runs gptq_bitblas on ROCm.

This PR is primarily aimed at users on the ROCm platform who rely on GPTQ quantization; for them, vLLM fails to start after updating from v0.8.4 to v0.8.5.
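
For illustration, the disable can be as small as an early return in the BitBLAS override hook. This is a minimal sketch, not the exact diff in this PR, and it assumes the hook lives on the gptq_bitblas config class:

```python
# Minimal sketch of the kind of guard this PR adds (not the exact diff):
# refuse to override "gptq" with "gptq_bitblas" when running on ROCm.
from vllm.platforms import current_platform

class GPTQBitBLASConfig:  # simplified stand-in for the real config class

    @classmethod
    def override_quantization_method(cls, quant_cfg, user_quant):
        # BitBLAS kernels are untested on ROCm, so leave the original
        # quantization method (plain "gptq") in place there.
        if current_platform.is_rocm():
            return None
        ...  # existing override logic continues here
```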

FIX #17410


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, only fastcheck CI will run, which starts with a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation (Improvements or additions to documentation) label on Apr 29, 2025
@nlzy nlzy force-pushed the disable-gptq-bitblas-on-rocm branch from c76c4b1 to ea73247 on April 29, 2025 at 20:27
@mgoin (Member) commented Apr 29, 2025

cc @LeiWang1999 this seems reasonable, is this okay with you?

@LeiWang1999 (Contributor) commented

Sorry for the trouble, LGTM. Currently, BitBLAS support for ROCm requires building from source; we will submit another PR to enable the ROCm backend once our ROCm solution is ready.

@vllm-bot vllm-bot merged commit 1144a8e into vllm-project:main May 1, 2025
21 checks passed
@nlzy nlzy deleted the disable-gptq-bitblas-on-rocm branch May 2, 2025 08:10
radeksm pushed a commit to radeksm/vllm that referenced this pull request May 2, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
Labels
documentation Improvements or additions to documentation
Projects
None yet
Development

Successfully merging this pull request may close these issues:

[Bug]: GPTQ cannot be used on ROCm after upgraded vLLM to v0.8.5 (#17410)
4 participants