migrate to linux_job_v2 and manylinux 2_28 #1302
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1302
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ No Failures as of commit 6b5f811 with merge base d4ca98f.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
lgtm
* Update float8_test.yml to use linux_job_v2
* Update nightly_smoke_test.yml
* Update float8_test.yml no binutils
* Update post_build_script.sh
* Update post_build_script.sh
* Update regression_test.yml
* Update regression_test.yml
This handles the migration of dev-infra from manylinux_2_26 to 2_28: pytorch/pytorch#123649
This required us to update to linux_job_v2 for everything using nightly pytorch; a sketch of the swap is below.
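The core change in each affected workflow is pointing the job at the v2 reusable workflow in pytorch/test-infra. A minimal sketch of what that swap looks like, with the job name, runner label, and inputs chosen for illustration rather than copied from the actual files:

```yaml
jobs:
  test:
    # before: the v1 reusable workflow
    # uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    # after: the v2 reusable workflow, which moves to the 2_28-based toolchain
    uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
    with:
      runner: linux.g5.12xlarge.nvidia.gpu   # illustrative runner label
      gpu-arch-type: cuda                    # illustrative inputs; the real
      gpu-arch-version: "12.1"               # workflows pass their own values
      script: |
        pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
        pytest test
```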
float8 tests: change to linux_job_v2 and remove binutils (they don't seem to be needed anymore)
nightly smoke tests: change to linux_job_v2
regression tests: only the nightly jobs need to use the new linux_job_v2; the rest need to remain unchanged. We are currently pinning the pytorch version due to the bad_alloc tests, but when that gets unpinned in #1283, the same changes made for the float8 tests will need to be made to the nightly regression tests (a rough sketch of the split is below).
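A rough sketch, assuming the nightly job installs nightly torch while the other jobs stay on the pinned release; job names and scripts here are illustrative, not taken from regression_test.yml:

```yaml
jobs:
  test-nightly:
    # nightly PyTorch needs the 2_28-based toolchain, so it moves to v2
    uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
    with:
      script: |
        pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
        pytest test
  test-pinned:
    # jobs on the pinned/released PyTorch stay on the original workflow for now
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      script: |
        pip install torch   # the workflow currently pins a specific version; see #1283
        pytest test
```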
If CI is passing here, these changes are working.
wheel builds: we update the manylinux version to 2_28. This was tested with https://github.com/pytorch/ao/actions/runs/11901019589, where all the cpu/cuda jobs pass. We are still seeing failures on the rocm jobs, but that is a larger issue unrelated to this PR; fixes for it are ongoing in pytorch/pytorch#140631. (A minimal sketch of what targeting manylinux 2_28 means is below.)
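For context, a minimal, self-contained sketch of building a wheel against manylinux 2_28. The actual builds in this repo go through pytorch/test-infra's reusable wheel-build workflows rather than a hand-rolled job like this, so treat the image, Python path, and step layout as illustrative only:

```yaml
jobs:
  build-wheel:
    runs-on: ubuntu-latest
    container: quay.io/pypa/manylinux_2_28_x86_64   # glibc 2.28 baseline (AlmaLinux 8)
    steps:
      - uses: actions/checkout@v4
      - name: Build and repair the wheel
        run: |
          # build with one of the interpreters shipped in the manylinux image,
          # then retag/repair the wheel for the manylinux_2_28 platform
          /opt/python/cp311-cp311/bin/pip wheel . -w dist/
          auditwheel repair dist/*.whl --plat manylinux_2_28_x86_64 -w wheelhouse/
```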