Eagerly use all vended numba.cuda modules, Bump supported Numba-CUDA version to 0.21+
#239
base: main
Conversation
Walkthrough

This PR raises numba-cuda constraints to `>=0.21.0,<0.23.0` across manifests and replaces many imports with their vended `numba.cuda.*` equivalents.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
numbast/src/numbast/static/tests/test_static_demo.py (1)
45-56: Test assertion is almost certainly wrong and will not validate the behavior

`arr` is allocated with shape `(1,)` and written in the kernel, but the test asserts `assert all(arr.copy_to_host() == [])`. Comparing a non-empty array to `[]` is logically incorrect and is likely to raise a broadcasting error rather than perform a meaningful assertion. Consider asserting on the actual numeric value instead, e.g.:

```diff
+import numpy as np
@@
-    kernel[1, 1](arr)
-    assert all(arr.copy_to_host() == [])
+    kernel[1, 1](arr)
+    host = arr.copy_to_host()
+    # Expect conversion from __myfloat16(3.14) to float64
+    assert np.allclose(host, [3.14], rtol=1e-3, atol=1e-3)
```

This makes the intent of the test clear and avoids shape/broadcasting issues.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (26)
- `ci/test_conda_python.sh` (1 hunks)
- `conda/environment-cu12.yaml` (1 hunks)
- `conda/environment-cu13.yaml` (1 hunks)
- `conda/environment_template.yaml` (1 hunks)
- `conda/recipes/numbast/meta.yaml` (1 hunks)
- `conda/recipes/numbast_extensions/meta.yaml` (1 hunks)
- `docs/source/faq.rst` (1 hunks)
- `numbast/benchmarks/test_arithmetic.py` (1 hunks)
- `numbast/pyproject.toml` (2 hunks)
- `numbast/src/numbast/function.py` (1 hunks)
- `numbast/src/numbast/functor.py` (1 hunks)
- `numbast/src/numbast/static/function.py` (2 hunks)
- `numbast/src/numbast/static/renderer.py` (3 hunks)
- `numbast/src/numbast/static/struct.py` (9 hunks)
- `numbast/src/numbast/static/tests/test_conflict_shim_names.py` (1 hunks)
- `numbast/src/numbast/static/tests/test_function_static_bindings.py` (1 hunks)
- `numbast/src/numbast/static/tests/test_link_two_files.py` (1 hunks)
- `numbast/src/numbast/static/tests/test_operator_bindings.py` (1 hunks)
- `numbast/src/numbast/static/tests/test_static_demo.py` (1 hunks)
- `numbast/src/numbast/static/tests/test_struct_static_bindings.py` (1 hunks)
- `numbast/src/numbast/struct.py` (1 hunks)
- `numbast/src/numbast/types.py` (1 hunks)
- `numbast/tests/data/sample_function.cuh` (1 hunks)
- `numbast/tests/demo/demo.py` (1 hunks)
- `numbast/tests/test_struct.py` (1 hunks)
- `numbast_extensions/pyproject.toml` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
numbast/src/numbast/static/struct.py (1)
ast_canopy/ast_canopy/pylibastcanopy.pyi (1)
`Type` (84-94)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Build Conda Python / build (arm64, 3.13, 12.9.1, rockylinux8)
- GitHub Check: Build Conda Python / build (arm64, 3.12, 12.9.1, rockylinux8)
- GitHub Check: Build Conda Python / build (arm64, 3.10, 12.9.1, rockylinux8)
- GitHub Check: Build Conda Python / build (arm64, 3.11, 12.9.1, rockylinux8)
- GitHub Check: Build Conda Python / build (amd64, 3.11, 12.9.1, rockylinux8)
- GitHub Check: Build Conda Python / build (amd64, 3.10, 12.9.1, rockylinux8)
🔇 Additional comments (26)
conda/recipes/numbast/meta.yaml (1)
50-56: Confirm numba-cuda constraint matches supported and tested range

Bumping the run dependency to `numba-cuda>=0.21.0,<0.23.0` looks consistent with the PR goal; just make sure this window is (a) the minimum that still exposes all the `numba.cuda.*` entry points you now rely on, and (b) aligned with CI configs and any version matrix in the docs (or tightened to a specific release if you decide to pin more strictly before merge).

numbast/benchmarks/test_arithmetic.py (1)

1-3: Importing float32 from numba.cuda.types is consistent with the CUDA-focused migration

Using `numba.cuda.types.float32` for the casts inside the kernel matches the rest of the PR's shift to CUDA-specific modules and shouldn't change behavior here; please just confirm there are no remaining `numba.float32` imports in this benchmark suite that would make type handling inconsistent.
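As an illustration only, a minimal sketch of the cast pattern this comment describes, assuming `numba.cuda.types.float32` behaves as a callable cast inside kernels the same way `numba.float32` does; the kernel name and body here are hypothetical:

```python
from numba import cuda
from numba.cuda.types import float32  # vended CUDA type, not numba.float32

@cuda.jit
def scale(arr):
    i = cuda.grid(1)
    if i < arr.size:
        # Cast through the CUDA-vended float32 type before arithmetic
        arr[i] = float32(arr[i]) * float32(2.0)
```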
numbast/tests/demo/demo.py (1)

5-7: PrimitiveModel import moved to CUDA datamodel — verify availability across supported versions

Using `PrimitiveModel` from `numba.cuda.datamodel.models` aligns this demo with the CUDA datamodel and the rest of the migration; please verify that this symbol is present and stable across your declared `numba-cuda` range and that the demo still runs end-to-end on a GPU with both the minimum and maximum supported versions.
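Not the demo's actual code, but a minimal registration sketch under the assumption (recorded in the learnings below) that numba-cuda >= 0.21.0 re-exports `PrimitiveModel` from `numba.cuda.datamodel.models`; the `MyFloat16` type is hypothetical:

```python
from llvmlite import ir
from numba.cuda.types import Type
from numba.cuda.extending import register_model
from numba.cuda.datamodel.models import PrimitiveModel

class MyFloat16(Type):
    """Hypothetical 16-bit scalar type, for illustration only."""
    def __init__(self):
        super().__init__(name="MyFloat16")

@register_model(MyFloat16)
class MyFloat16Model(PrimitiveModel):
    def __init__(self, dmm, fe_type):
        # Back the type with a 16-bit integer at the LLVM level
        super().__init__(dmm, fe_type, ir.IntType(16))
```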
numbast_extensions/pyproject.toml (1)

18-18: Keep numba-cuda constraint in sync with core package and docs

The `numba-cuda>=0.21.0,<0.23.0` window for `numbast_extensions` looks right for this migration; please double-check that (1) the main `numbast` package exposes a compatible requirement, (2) your docs/FAQ show the same range, and (3) `numbast>=0.2.0` is indeed compatible with this newer `numba-cuda` span or, if not, that the lower bound on `numbast` is raised accordingly.

conda/recipes/numbast_extensions/meta.yaml (1)
42-47: Verify conda numba-cuda bounds match pyproject and CI

The conda `run` requirement for `numba-cuda>=0.21.0,<0.23.0` correctly mirrors the pyproject change; please verify that all other packaging/CI entry points (pyproject, environment YAMLs, test scripts) now use this same window so users don't hit solver conflicts between wheels and conda installs.

numbast/src/numbast/static/function.py (1)
229-235: Switch to numba.cuda.typing/numba.cuda.types — validate across supported versions

Importing `signature`, `CPointer`, and `int32` from `numba.cuda.typing`/`numba.cuda.types` keeps the generated static bindings on the CUDA-specific type system and matches the broader refactor; please confirm that these symbols are available and stable across the full `numba-cuda` range you declare (esp. the lowest and highest supported versions) and that the generated bindings still compile and run correctly in both cases.

Also applies to: 276-277
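For concreteness, a hedged sketch of what these imports enable, assuming both vended paths resolve across the declared range; the example signature is illustrative, not copied from the generated bindings:

```python
from numba.cuda.typing import signature
from numba.cuda.types import CPointer, int32

# A C-ABI-style shim signature: int32 shim(int32* return_slot, int32 arg)
sig = signature(int32, CPointer(int32), int32)
```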
numbast/src/numbast/functor.py (1)
4-8: CUDA-specific typeof_impl/lower_constant look appropriate — confirm API stability and CPU needs

Registering `Functor` via `numba.cuda.extending.typeof_impl` and `numba.cuda.core.imputils.lower_constant` correctly targets the CUDA compilation pipeline; please verify that these CUDA entry points exist and are stable over your supported `numba-cuda` range, and that you don't rely on `Functor` being recognized in non-CUDA (CPU) typing/constant-lowering contexts anymore—otherwise you may also want to keep the core `numba.core.*` registrations in parallel.
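A hedged sketch of the registration shape referenced above, assuming these vended entry points exist; `Functor` and `FunctorType` here are stand-ins, not the real numbast classes:

```python
from numba.cuda.types import Type
from numba.cuda.extending import typeof_impl
from numba.cuda.core.imputils import lower_constant

class Functor:  # stand-in for numbast's Functor
    pass

class FunctorType(Type):
    def __init__(self):
        super().__init__(name="Functor")

functor_type = FunctorType()

@typeof_impl.register(Functor)
def typeof_functor(val, c):
    # Map Python Functor instances to the Numba type
    return functor_type

@lower_constant(FunctorType)
def lower_functor_constant(context, builder, ty, pyval):
    # Functors carry no runtime payload, so lower to a dummy value
    return context.get_dummy_value()
```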
conda/environment-cu12.yaml (1)

12-12: Version constraint updated consistently for CUDA 12 environment.

The `numba-cuda[cu12]` constraint has been updated to match the project-wide version range `>=0.21.0,<0.23.0`.

ci/test_conda_python.sh (1)
26-26: CI test environment updated to use new version range.

The test environment creation has been updated to use `numba-cuda>=0.21.0,<0.23.0`, ensuring CI tests run against the correct version range.

numbast/pyproject.toml (1)
21-21: Version constraints updated consistently across all dependency specifications.

The numba-cuda version constraint has been updated to `>=0.21.0,<0.23.0` in:
- Main project dependencies (line 21)
- Optional test dependencies for both cu12 (line 44) and cu13 (line 49)
This ensures consistent versioning across all installation scenarios.
Also applies to: 44-44, 49-49
conda/environment-cu13.yaml (1)
12-12: Version constraint updated consistently for CUDA 13 environment.

The `numba-cuda[cu13]` constraint has been updated to match the project-wide version range `>=0.21.0,<0.23.0`, completing the migration across all CUDA variants (cu12, cu13).

numbast/tests/test_struct.py (1)
9-9: Import path is incorrect.
`StructModel` should be imported from `numba.core.datamodel.models` (internal) or `numba.extending.models` (public API), not from `numba.cuda.datamodel`. The `numba.cuda.datamodel` path does not provide this export. This import will fail at runtime.

Likely an incorrect or invalid review comment.
numbast/src/numbast/static/tests/test_function_static_bindings.py (1)
9-9: Reconsider import source for standard scalar types.

Import `int32` and `float32` from `numba.types` rather than `numba.cuda.types`. These are standard numeric types documented in Numba's core types API, not CUDA-specific types. The `numba.cuda.types` module is documented for CUDA-specific vector types (e.g., `float32x2`, `float32x3`), not scalar types. Using the correct import path maintains clarity and follows Numba's documented conventions.

Likely an incorrect or invalid review comment.
conda/environment_template.yaml (1)
12-12: Verify that the version range matches the tested commit.

The version constraint has been updated to `>=0.21.0,<0.23.0`. The original comment references commit `9a5252803843fd73644c7b1ba54c84823209b161`, but this commit could not be verified against publicly available numba-cuda release information. Confirm that this commit corresponds to a release within the specified version range and that all features used are available in numba-cuda 0.21.0+.

numbast/src/numbast/struct.py (1)
9-20: Systematic migration to CUDA-specific vended modules.

The imports have been correctly updated to use numba-cuda's vended CUDA target modules:

- `numba.cuda.extending` for extension mechanisms (register_model, lower_cast, make_attribute_wrapper)
- `numba.cuda.typing` for typing constructs (signature, templates)
- `numba.cuda.datamodel.models` for data models (StructModel, PrimitiveModel)
- `numba.cuda` for CUDA-specific declarations and lowering utilities

All imported symbols are available in numba-cuda >=0.21.0 and this approach aligns with the PR's goal to eagerly use vended `numba.cuda` modules rather than upstream Numba internals.
numbast/src/numbast/static/tests/test_struct_static_bindings.py (1)

9-10: CUDA-specific Type/Number/StructModel/PrimitiveModel imports align with static struct specs

Switching the struct specs over to `numba.cuda.types.Type`/`Number` and `numba.cuda.datamodel.StructModel`/`PrimitiveModel` is consistent with the rest of the CUDA-centric refactor and should keep the generated bindings on the CUDA surface.

Please just double-check against the numba-cuda versions you intend to support that all of these symbols are present and stable (especially `Number` and `PrimitiveModel`), and that there are no remaining struct specs elsewhere still importing from `numba.types`/`numba.core.datamodel` for consistency.

numbast/src/numbast/static/tests/test_operator_bindings.py (1)
7-8: Operator-binding specs correctly switched to CUDA types/datamodel

Using `numba.cuda.types.Type` and `numba.cuda.datamodel.StructModel` for the operator bindings test is consistent with the rest of the static binding migration and should keep the generated APIs on the CUDA path.

Please confirm these CUDA-side symbols exist and behave as expected in all numba-cuda versions covered by your new compatibility ranges so that this test doesn't become version-fragile.
numbast/src/numbast/function.py (1)
8-8: Signature/typing move to numba.cuda.typing matches CUDA-specific lowering, but verify API compatibility

Importing `nb_signature` and `Signature` from `numba.cuda.typing` is consistent with using CUDA `ConcreteTemplate`, `register_global`, and `lower` from the CUDA registries, and keeps all typing constructs on the CUDA surface.

Given this is fairly low-level, please verify against your target numba/numba-cuda versions that:

- `numba.cuda.typing.signature`/`Signature` are the intended counterparts to the previously used core versions, and
- there are no remaining call sites elsewhere still constructing `Signature` from `numba.core.typing`, which could lead to mixed or incompatible signature types.

If everything is on `numba.cuda.typing` now, this change looks conceptually tight.

numbast/src/numbast/static/tests/test_conflict_shim_names.py (1)
6-7: Conflict-shim-name test now correctly uses CUDA struct spec types

Switching the struct spec imports here to `numba.cuda.types.Type` and `numba.cuda.datamodel.StructModel` keeps this test in sync with the rest of the CUDA static binding pipeline; the shim-name checks themselves remain unchanged.

Please confirm that `make_binding` is not used anywhere with mixed core vs CUDA datamodel/types, and that `numba.cuda.datamodel.StructModel` is the expected class for your targeted numba-cuda range.

numbast/src/numbast/types.py (1)
9-9: Switching typeof to numba.cuda.typing.typeof is consistent, but enum support should be confirmed

Using `typeof` from `numba.cuda.typing.typeof` in `register_enum_type` lines this module up with the rest of the CUDA-typing migration and avoids relying on `numba.misc.special`.

Given this is now responsible for mapping C++ enums to Numba types, please verify under your supported numba/numba-cuda versions that:

- `numba.cuda.typing.typeof.typeof` accepts `IntEnum` instances and returns the expected Numba enum/integral type, and
- there are no remaining imports of `typeof` from `numba.misc.special` elsewhere to avoid inconsistencies.

If those checks pass, the change here looks appropriate.
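Assuming the vended `typeof` mirrors the core `typeof(value)` behavior, an `IntEnum` spot-check could look like this; the enum is a hypothetical stand-in for a C++ enum mapped by `register_enum_type`:

```python
from enum import IntEnum
from numba.cuda.typing.typeof import typeof

class Color(IntEnum):  # hypothetical enum for illustration
    RED = 0
    GREEN = 1

# Expect a Numba IntEnumMember-style type if IntEnum support holds
print(typeof(Color.RED))
```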
numbast/src/numbast/static/renderer.py (1)
351-352: Registry and lower_cast imports now fully CUDA-specific—confirm API surface in supported versions

Updating `registry_setup` to import:

- `TargetRegistry` from `numba.cuda.core.imputils`, and
- `lower_cast` from `numba.cuda.extending`

keeps the generated registry setup and lowering hooks on the CUDA side, matching the rest of the PR.

Please sanity-check in the numba-cuda versions you intend to support that:

- `numba.cuda.core.imputils.Registry` exposes the methods referenced in `BaseRenderer.SeparateRegistrySetup` (including `lower_cast`), and
- `numba.cuda.extending.lower_cast` exists and is the appropriate lowering helper for CUDA.

If those APIs are present and tested in your CI matrix, this wiring looks correct.
Also applies to: 367-368
docs/source/faq.rst (1)
127-169: New numba-cuda compatibility FAQ is clear; keep version ranges in sync with packaging metadata

The added "Generated bindings and Numba-CUDA version requirements" section reads well and clearly documents the required `numba-cuda` ranges for each Numbast series, plus guidance for dynamic vs static bindings.

Given these ranges are now duplicated across docs, `pyproject.toml`, and conda recipes, please ensure:

- The table (`0.6.0 → >=0.21.0,<0.23.0`, `0.5.x → >=0.20.1,<0.21.0`) exactly matches the pins enforced in your packaging/CI configs, and
- You update this FAQ in lockstep when either Numbast or numba-cuda version constraints change.

With that caveat, this documentation addition looks solid.
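For example, a hedged runtime guard mirroring the documented `0.6.0` row could catch mismatches early; this is an illustration, not part of the PR, and assumes the `packaging` library is installed:

```python
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("numba-cuda"))
assert Version("0.21.0") <= installed < Version("0.23.0"), (
    f"numba-cuda {installed} is outside the supported >=0.21.0,<0.23.0 window"
)
```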
numbast/src/numbast/static/tests/test_link_two_files.py (1)
7-8: Two-file static-link test correctly migrated to CUDA types/datamodel

Using `numba.cuda.types.Type`/`Number` and `numba.cuda.datamodel.StructModel`/`PrimitiveModel` in the `specs` for `Foo`, `Bar`, and `MyInt` aligns this test with the rest of the CUDA-first static binding path and should keep the generated code compatible with the CUDA registries.

Please confirm that all other static tests using the same headers/specs have also been updated to the CUDA variants and that these symbols are available across the full numba-cuda version range advertised in your new FAQ.
numbast/src/numbast/static/tests/test_static_demo.py (1)
7-8: Verify the import paths, particularly the PrimitiveModel source

The `float64` import from `numba.cuda.types` appears sound, but documentation indicates `PrimitiveModel` typically resides in `numba.core.datamodel`, not `numba.cuda.datamodel`. Confirm these imports work against your pinned numba-cuda version, or check whether `numba.cuda.datamodel` re-exports `PrimitiveModel` from core.

numbast/src/numbast/static/struct.py (2)
218-224: Remove mentions of `Function` and `Conversion` — they are not imported or used in this code

The review's request to verify the CUDA API surface is reasonable, but contains inaccuracies.

The code correctly imports:

- `from numba.cuda.typing import signature` ✓
- `from numba.cuda.types import CPointer, int32` ✓
- `from numba.cuda.extending import as_numba_type, register_model, make_attribute_wrapper` ✓

However, the review incorrectly claims the code imports:

- `Function` from `numba.cuda.types` — not imported or used anywhere in struct.py
- `Conversion` from `numba.cuda.typeconv` — not imported or used anywhere in struct.py

Verify availability of the actual imports (signature, CPointer, int32, as_numba_type, register_model, make_attribute_wrapper) against numba-cuda >=0.21,<0.23, as these are the symbols that need compatibility assurance.
Likely an incorrect or invalid review comment.
866-869: The suggested import path `numba.cuda.typing.signature` does not exist and will cause import failures.

The web search confirms that `signature` is not exported from `numba.cuda.typing`. The canonical import paths are:

- `from numba.typing.templates import signature` (modern Numba)
- `from numba.core.typing.templates import signature` (also available)

The current code using `from numba.core.typing import signature` is closer to correct than the suggested change to `numba.cuda.typing.signature`.

The suggestion for `numba.cuda.types.CPointer` is valid—this is properly vended by numba.cuda—but this accounts for only half of the proposed changes.

The review's premise that both imports should align under `numba.cuda.*` paths cannot be achieved for `signature`, as it is not part of the CUDA-vended API. The consistency issue identified is valid, but the proposed solution is incorrect.

Likely an incorrect or invalid review comment.
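Given the disagreement above over where `signature` lives, one defensive option is an import fallback that stays safe across versions; this is a sketch, not the PR's code:

```python
try:
    # Preferred if the vended path exists in the installed numba-cuda
    from numba.cuda.typing import signature
except ImportError:
    # Core fallback, which this comment argues is the safer path today
    from numba.core.typing import signature
```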
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- `numbast/src/numbast/static/renderer.py` (3 hunks)
- `numbast/tests/data/sample_function.cuh` (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/struct.py:10-11
Timestamp: 2025-12-04T23:04:08.148Z
Learning: numba-cuda version 0.21.0 and later re-exports StructModel and PrimitiveModel from numba.cuda.datamodel.models, so importing from numba.cuda.datamodel.models is correct for projects using numba-cuda>=0.21.0.
📚 Learning: 2025-12-04T23:04:08.148Z
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/struct.py:10-11
Timestamp: 2025-12-04T23:04:08.148Z
Learning: numba-cuda version 0.21.0 and later re-exports StructModel and PrimitiveModel from numba.cuda.datamodel.models, so importing from numba.cuda.datamodel.models is correct for projects using numba-cuda>=0.21.0.
Applied to files:
numbast/src/numbast/static/renderer.py
🔇 Additional comments (3)
numbast/tests/data/sample_function.cuh (1)
1-8: Device stub and header boilerplate look good

License/SPDX, `#pragma once`, and the `__device__` function are all well-formed. Using `(void)ptr;` in the body cleanly avoids unused-parameter warnings while keeping the function a no-op.

numbast/src/numbast/static/renderer.py (2)
125-127: LGTM! Addresses previous review feedback.

The check and import are now aligned to `numba.cuda.types`, resolving the concern from the previous review where checking `numba.types.__dict__` but importing from `numba.cuda.types` could cause failures.

Based on past review comments, this change was confirmed as the correct approach.
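Roughly, the aligned pattern tests membership and imports from the same vended module; a sketch with a hypothetical type name, not the renderer's exact code:

```python
import numba.cuda.types

type_name = "float64"  # hypothetical name looked up by the renderer
if type_name in numba.cuda.types.__dict__:
    # Membership check and attribute lookup now target the same module
    numba_type = getattr(numba.cuda.types, type_name)
```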
351-351: Verify import path `numba.cuda.core.imputils.Registry` exists in numba-cuda 0.21.0+.

The migration from `numba.core.imputils.Registry` to `numba.cuda.core.imputils.Registry` aligns with the PR's CUDA-specific module usage. Confirm this import path is available in the target numba-cuda version to prevent runtime import errors.
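One quick way to confirm is an import smoke test against the pinned versions; a hedged sketch, assuming the vended `Registry` mirrors the core class, including its `lower` decorator method:

```python
# Fails fast if the vended path is missing in the installed numba-cuda
from numba.cuda.core.imputils import Registry

registry = Registry()
lower = registry.lower  # decorator the generated registry setup relies on
```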
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
- `numbast/src/numbast/static/tests/test_static_demo.py` (2 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/renderer.py:367-367
Timestamp: 2025-12-04T23:19:43.544Z
Learning: In the NVIDIA/numbast repository, prefer importing from `numba.cuda.*` modules over `numba.*` modules. For example, use `from numba.cuda.extending import lower_cast` instead of `from numba.extending import lower_cast` or `from numba.core.extending import lower_cast`.
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/struct.py:10-11
Timestamp: 2025-12-04T23:04:08.148Z
Learning: numba-cuda version 0.21.0 and later re-exports StructModel and PrimitiveModel from numba.cuda.datamodel.models, so importing from numba.cuda.datamodel.models is correct for projects using numba-cuda>=0.21.0.
📚 Learning: 2025-12-04T23:19:43.544Z
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/renderer.py:367-367
Timestamp: 2025-12-04T23:19:43.544Z
Learning: In the NVIDIA/numbast repository, prefer importing from `numba.cuda.*` modules over `numba.*` modules. For example, use `from numba.cuda.extending import lower_cast` instead of `from numba.extending import lower_cast` or `from numba.core.extending import lower_cast`.
Applied to files:
numbast/src/numbast/static/tests/test_static_demo.py
📚 Learning: 2025-12-04T23:04:08.148Z
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/struct.py:10-11
Timestamp: 2025-12-04T23:04:08.148Z
Learning: numba-cuda version 0.21.0 and later re-exports StructModel and PrimitiveModel from numba.cuda.datamodel.models, so importing from numba.cuda.datamodel.models is correct for projects using numba-cuda>=0.21.0.
Applied to files:
numbast/src/numbast/static/tests/test_static_demo.py
🔇 Additional comments (3)
numbast/src/numbast/static/tests/test_static_demo.py (3)
6-7: LGTM! Numpy import supports the improved test assertion.

The numpy import is necessary for the floating-point comparison using `np.allclose` on line 59.
9-9: LGTM! Import aligns with numba.cuda module migration.

The import correctly uses `numba.cuda.types` instead of `numba.types`, consistent with the PR's objective to migrate to numba-cuda 0.21.0+ modules.

Based on learnings, prefer importing from `numba.cuda.*` modules over `numba.*` modules.
58-59: LGTM! More robust floating-point comparison.

The test now properly copies data from device to host and uses `np.allclose` with appropriate tolerances for comparing floating-point values. The tolerance values (rtol=1e-3, atol=1e-3) are reasonable for validating a custom `__myfloat16` to `float64` conversion.
This PR bumps Numbast's compatible Numba-CUDA version to 0.21.0+ and properly documents each version's Numba-CUDA pinning in the FAQ section.
Summary by CodeRabbit

- Documentation
- Chores
- Tests
- New Files