Convert nvcomp-feedstock to Python API feedstock #23
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/16381732656. Examine the logs at this URL for more detail.
```diff
   - arm-variant * {{ arm_variant_type }}  # [aarch64]
-  - cf-nvidia-tools 1  # [linux]
+  - cf-nvidia-tools 1.*  # [linux]
   - cross-python_{{ target_platform }}  # [build_platform != target_platform]
```
I am curious where `cross-python_{{ target_platform }}` comes into play?
This recipe uses the cross-target conda-forge infrastructure to create the aarch64 package. It doesn't matter that we aren't actually compiling anything; we have to pretend that we are building from source because the cross-python package could have side effects.
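As a sketch of what that setup looks like, a cross-compiled recipe's build requirements typically include the `cross-python_{{ target_platform }}` line quoted above (the surrounding structure here is a simplified, illustrative example, not this feedstock's actual recipe):

```yaml
requirements:
  build:
    # Provides a python interpreter that runs on the build platform but
    # targets the host platform; only pulled in when cross-compiling.
    - cross-python_{{ target_platform }}  # [build_platform != target_platform]
    - python                              # [build_platform != target_platform]
  host:
    - python
```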
CUDA 12.8 added support for architectures `sm_100`, `sm_101`, and `sm_120`, while CUDA 12.9 further added `sm_103` and `sm_121`. To build for these, maintainers will need to modify the list of architectures their package specifies (e.g. `CMAKE_CUDA_ARCHITECTURES`, `TORCH_CUDA_ARCH_LIST`, etc.). A good balance between broad support and storage footprint (resp. compilation time) is to add `sm_100` and `sm_120`.

Since CUDA 12.8, the conda-forge nvcc package sets `CUDAARCHS` and `TORCH_CUDA_ARCH_LIST` in its activation script to a string containing all of the supported real architectures plus the virtual architecture of the latest one. Recipes for packages that use these variables to control their build but do not want to build for all supported architectures will need to override these variables in their build script.

ref: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#new-features
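A minimal sketch of such an override in a recipe's `build.sh`, assuming the activation script has already set both variables; the architecture values below are illustrative examples, not a recommendation for any particular package:

```shell
# Override the architecture lists exported by the nvcc activation script.
# CUDAARCHS is read by CMake as the default for CMAKE_CUDA_ARCHITECTURES;
# TORCH_CUDA_ARCH_LIST is read by PyTorch-based builds.
export CUDAARCHS="80;90;100;120"
export TORCH_CUDA_ARCH_LIST="8.0;9.0;10.0;12.0+PTX"
echo "Building for CUDAARCHS=${CUDAARCHS}"
```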
…nda-forge-pinning 2025.07.18.21.26.56
This PR is ready, but it's Friday, and we should wait until RAPIDS is ready. @bdice, could you please post here or merge once RAPIDS has migrated/repodata-patched its packages to use libnvcomp where appropriate? I can also make backports of libnvcomp if you want older versions of that package available.
Here are PRs to RAPIDS packages that use
@carterbox The RAPIDS changes are now merged. It should be fine to update
Checklist
- Reset the build number to `0` (if the version changed)
- Re-rendered with conda-smithy (use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)

This feedstock now ships the Python API of nvcomp by revending the Python modules from PyPI wheels. The C API is stripped from the wheels, and instead we depend on `libnvcomp-dev` from the new libnvcomp-feedstock. The libnvcomp-feedstock repackages the binaries and CMake files from the redist tarball.

Closes #22
Closes #24
Closes #21
Closes #20