Issue description
Once we update `python3Packages.torch` to 2.X.X, its `torchWithRocm` variation will depend on `cudaPackages.cuda_nvcc` (through `python3Packages.openai-triton`), which is unfree. This means Hydra won't be building and caching `torchWithRocm`.
Upstream enforces that there be a copy of `bin/ptxas` (from `cuda_nvcc`). However, they do not actually use it except for CUDA support (obviously).
We should patch around this to re-enable caching, and we should work with upstream to make the dependency optional in the first place.
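One possible shape for such a patch is an overlay that strips the `cuda_nvcc` input from `openai-triton` for the ROCm variant. This is only a sketch under assumptions: the exact attribute names (`openai-triton`, the input list the `ptxas` copy comes from) would need to be checked against the actual derivation.

```nix
# Hypothetical overlay sketch: drop the unfree cuda_nvcc input from
# openai-triton so torchWithRocm no longer closes over it.
# Attribute and input names are assumptions, not verified against nixpkgs.
final: prev: {
  python3Packages = prev.python3Packages.overrideScope (pyFinal: pyPrev: {
    openai-triton = pyPrev.openai-triton.overridePythonAttrs (old: {
      # Filter out cuda_nvcc; upstream only needs its ptxas for CUDA targets,
      # which the ROCm build never exercises.
      buildInputs = builtins.filter
        (d: (d.pname or "") != "cuda_nvcc")
        (old.buildInputs or [ ]);
    });
  });
}
```

A real fix would likely also have to patch whatever build-time check enforces the presence of `bin/ptxas`, since removing the input alone may just make that check fail.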
CC @NixOS/rocm-maintainers