Update test requirements to CUDA 12.8 #17576
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Signed-off-by: 22quinn <[email protected]>
@huydhn could you take a look?
Also lint?
I don't think … However, the bigger issue here is that while vLLM CI uses CUDA 12.8, the PyTorch default is still 12.6, causing this discrepancy. I have been working around this by using --extra-index-url.
@huydhn I tried using extra-index-url, but always got torch 2.7 cu126 installed. There were also a bunch of CUDA dependencies in test.txt not matching CUDA 12.8 (e.g. nvidia-cuda-runtime-cu12). I think we need to modify the hook by adding …
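For context, here is a minimal sketch of what recompiling the pinned test requirements against the CUDA 12.8 index could look like. The file names `requirements/test.in` / `requirements/test.txt` and the exact `uv` invocation are assumptions for illustration, not necessarily the hook this PR modifies:

```bash
# Hypothetical sketch: recompile the test requirements against the PyTorch
# cu128 index so that torch and the nvidia-*-cu12 wheels all resolve to
# matching CUDA 12.8 builds instead of the default cu126 ones.
uv pip compile requirements/test.in \
    --extra-index-url https://download.pytorch.org/whl/cu128 \
    --index-strategy unsafe-best-match \
    -o requirements/test.txt
```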
From what I read in their doc https://docs.astral.sh/uv/guides/integration/pytorch/#automatic-backend-selection, does …
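A rough sketch of the backend selection that doc describes, assuming a recent uv release where the (preview) --torch-backend option is available:

```bash
# Let uv pick the torch index automatically from the detected CUDA driver,
# or force the CUDA 12.8 variant explicitly.
uv pip install torch --torch-backend=auto
uv pip install torch --torch-backend=cu128
```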
Discussed with Huy offline: this PR is needed as it unifies the versions of other CUDA dependencies like nvidia-cuda-runtime-cu12.
Signed-off-by: 22quinn <[email protected]> Signed-off-by: Mu Huai <[email protected]>
Signed-off-by: 22quinn <[email protected]> Signed-off-by: Yuqi Zhang <[email protected]>
This is a follow-up fix to #16859.
By default, the test requirements would install torch built for CUDA 12.6.
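For illustration, a hedged sketch of the mismatch being addressed, based on the discussion above:

```bash
# Without pinning an index, torch resolves against PyPI and currently comes
# back as a cu126 build (PyTorch's default CUDA version), while vLLM CI runs
# on CUDA 12.8.
pip install torch
# Forcing the cu128 index gives a torch build that matches the CI toolkit.
pip install torch --index-url https://download.pytorch.org/whl/cu128
```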