
Conversation

@isVoid (Collaborator) commented Nov 6, 2025

This PR bumps the Numba-CUDA version compatible with Numbast to 0.21.0+ and documents each Numbast version's Numba-CUDA pinning in the FAQ section.

Summary by CodeRabbit

  • Documentation

    • Added guidance on numba-cuda compatibility and binding-generation requirements.
  • Chores

    • Raised and broadened numba-cuda version constraints to >=0.21.0,<0.23.0 across environment and manifest files.
    • Aligned project imports and build surfaces to CUDA-targeted APIs consistent with the updated constraint.
  • Tests

    • Updated test suites to import and use CUDA-specific types and datamodels.
  • New Files

    • Added a small CUDA header used by tests.


@isVoid isVoid marked this pull request as draft November 6, 2025 21:49
@coderabbitai bot (Contributor) commented Dec 4, 2025

Walkthrough

This PR raises numba-cuda constraints to >=0.21.0,<0.23.0 across manifests, replaces many imports with numba.cuda.* variants, adds a FAQ on compatibility, and adds a small CUDA header test file.

Changes

Cohort / File(s) — Summary

  • Dependency configuration
    ci/test_conda_python.sh, conda/environment-cu12.yaml, conda/environment-cu13.yaml, conda/environment_template.yaml, conda/recipes/numbast/meta.yaml, conda/recipes/numbast_extensions/meta.yaml, numbast/pyproject.toml, numbast_extensions/pyproject.toml
    Updated the numba-cuda version constraint from >=0.20.1,<0.21.0 to >=0.21.0,<0.23.0 in CI, conda envs, recipes, and pyproject specs.
  • Core import updates
    numbast/src/numbast/function.py, numbast/src/numbast/functor.py, numbast/src/numbast/types.py, numbast/src/numbast/struct.py, numbast/benchmarks/test_arithmetic.py, numbast/src/numbast/class_template.py
    Switched imports from core Numba modules to CUDA-specific equivalents (e.g., numba.core.typing → numba.cuda.typing, numba.types → numba.cuda.types, numba.misc.special.typeof → numba.cuda.typing.typeof); a minimal sketch follows this table.
  • Static renderer & codegen
    numbast/src/numbast/static/function.py, numbast/src/numbast/static/renderer.py, numbast/src/numbast/static/struct.py
    Updated codegen/rendering imports and registry/lowering usages to numba.cuda.* (typing, types, datamodel, extending, imputils, Registry), and adjusted emitted public-import lines to CUDA namespaces.
  • Tests — import updates
    numbast/src/numbast/static/tests/*.py, numbast/src/numbast/static/tests/test_conflict_shim_names.py, numbast/src/numbast/static/tests/test_function_static_bindings.py, numbast/src/numbast/static/tests/test_link_two_files.py, numbast/src/numbast/static/tests/test_operator_bindings.py, numbast/src/numbast/static/tests/test_static_demo.py, numbast/src/numbast/static/tests/test_struct_static_bindings.py, numbast/tests/demo/demo.py, numbast/tests/test_struct.py
    Updated test imports to use numba.cuda.* for types and datamodel (e.g., numba.cuda.types, numba.cuda.datamodel); one test adjusted an assertion to check host array contents.
  • Documentation
    docs/source/faq.rst
    Added FAQ section "Generated bindings and Numba-CUDA version requirements" with a Numbast → numba-cuda compatibility table and guidance.
  • Test data
    numbast/tests/data/sample_function.cuh
    Added a CUDA header declaring __device__ void func_with_void_ptr_arg(void *ptr) {} with SPDX and include-guard directives.
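
For reference, the core-import migration summarized above follows a pattern like this hypothetical minimal sketch, assuming numba-cuda >= 0.21.0 vends these modules (the exact symbols vary per file, and whether every numba.cuda.* re-export exists across versions is questioned in review comments below):

```python
# Before the migration (core Numba namespaces):
# from numba import types
# from numba.misc.special import typeof

# After the migration (CUDA-vended namespaces; availability depends on the
# installed numba-cuda version):
from numba.cuda import types
from numba.cuda.typing.typeof import typeof

ty = typeof(3.0)        # expected to resolve to a float64 Numba type
f32 = types.float32     # scalar type taken from the CUDA-vended namespace
```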

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify all manifests and CI scripts consistently updated numba-cuda constraints.
  • Review codegen/rendering files (numbast/src/numbast/static/*, numbast/src/numbast/struct.py) to ensure correct CUDA import paths and that emitted public import lines match intended runtime APIs.
  • Run or inspect tests that now import numba.cuda.* to confirm no remaining CPU-only imports and that the changed assertion in test_static_demo.py is intentional.

🐇 I nibble bytes and hop through code,
CUDA paths now trace my road.
Versions nudged, imports aligned,
A tiny header tucked inside.
Hooray — the bindings hop with pride!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name — Status — Explanation

  • Description Check — ✅ Passed — Skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed — The title accurately describes the main changes: updating imports to use numba.cuda modules and bumping the supported Numba-CUDA version to 0.21+, which aligns with the extensive import-path updates and version-constraint changes across the codebase.
  • Docstring Coverage — ✅ Passed — Docstring coverage is 83.33%, above the required threshold of 80.00%.


@isVoid isVoid marked this pull request as ready for review December 4, 2025 21:36
@isVoid isVoid changed the title from "Eagerly use all vended numba.cuda modules" to "Eagerly use all vended numba.cuda modules, Bump supported Numba-CUDA version to 0.21+" on Dec 4, 2025
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
numbast/src/numbast/static/tests/test_static_demo.py (1)

45-56: Test assertion is almost certainly wrong and will not validate the behavior

arr is allocated with shape (1,) and written in the kernel, but the test asserts:

assert all(arr.copy_to_host() == [])

Comparing a non-empty array to [] is logically incorrect: broadcasting the shape-(1,) array against an empty sequence yields an empty result, so the all(...) check passes vacuously and validates nothing.

Consider asserting on the actual numeric value instead, e.g.:

+import numpy as np
@@
-    kernel[1, 1](arr)
-    assert all(arr.copy_to_host() == [])
+    kernel[1, 1](arr)
+    host = arr.copy_to_host()
+    # Expect conversion from __myfloat16(3.14) to float64
+    assert np.allclose(host, [3.14], rtol=1e-3, atol=1e-3)

This makes the intent of the test clear and avoids shape/broadcasting issues.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e501c89 and 5b10afb.

📒 Files selected for processing (26)
  • ci/test_conda_python.sh (1 hunks)
  • conda/environment-cu12.yaml (1 hunks)
  • conda/environment-cu13.yaml (1 hunks)
  • conda/environment_template.yaml (1 hunks)
  • conda/recipes/numbast/meta.yaml (1 hunks)
  • conda/recipes/numbast_extensions/meta.yaml (1 hunks)
  • docs/source/faq.rst (1 hunks)
  • numbast/benchmarks/test_arithmetic.py (1 hunks)
  • numbast/pyproject.toml (2 hunks)
  • numbast/src/numbast/function.py (1 hunks)
  • numbast/src/numbast/functor.py (1 hunks)
  • numbast/src/numbast/static/function.py (2 hunks)
  • numbast/src/numbast/static/renderer.py (3 hunks)
  • numbast/src/numbast/static/struct.py (9 hunks)
  • numbast/src/numbast/static/tests/test_conflict_shim_names.py (1 hunks)
  • numbast/src/numbast/static/tests/test_function_static_bindings.py (1 hunks)
  • numbast/src/numbast/static/tests/test_link_two_files.py (1 hunks)
  • numbast/src/numbast/static/tests/test_operator_bindings.py (1 hunks)
  • numbast/src/numbast/static/tests/test_static_demo.py (1 hunks)
  • numbast/src/numbast/static/tests/test_struct_static_bindings.py (1 hunks)
  • numbast/src/numbast/struct.py (1 hunks)
  • numbast/src/numbast/types.py (1 hunks)
  • numbast/tests/data/sample_function.cuh (1 hunks)
  • numbast/tests/demo/demo.py (1 hunks)
  • numbast/tests/test_struct.py (1 hunks)
  • numbast_extensions/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
numbast/src/numbast/static/struct.py (1)
ast_canopy/ast_canopy/pylibastcanopy.pyi (1)
  • Type (84-94)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Build Conda Python / build (arm64, 3.13, 12.9.1, rockylinux8)
  • GitHub Check: Build Conda Python / build (arm64, 3.12, 12.9.1, rockylinux8)
  • GitHub Check: Build Conda Python / build (arm64, 3.10, 12.9.1, rockylinux8)
  • GitHub Check: Build Conda Python / build (arm64, 3.11, 12.9.1, rockylinux8)
  • GitHub Check: Build Conda Python / build (amd64, 3.11, 12.9.1, rockylinux8)
  • GitHub Check: Build Conda Python / build (amd64, 3.10, 12.9.1, rockylinux8)
🔇 Additional comments (26)
conda/recipes/numbast/meta.yaml (1)

50-56: Confirm numba-cuda constraint matches supported and tested range

Bumping the run dependency to numba-cuda>=0.21.0,<0.23.0 looks consistent with the PR goal; just make sure this window is (a) the minimum that still exposes all the numba.cuda.* entry points you now rely on, and (b) aligned with CI configs and any version matrix in the docs (or tightened to a specific release if you decide to pin more strictly before merge).

numbast/benchmarks/test_arithmetic.py (1)

1-3: Importing float32 from numba.cuda.types is consistent with the CUDA-focused migration

Using numba.cuda.types.float32 for the casts inside the kernel matches the rest of the PR’s shift to CUDA-specific modules and shouldn’t change behavior here; please just confirm there are no remaining numba.float32 imports in this benchmark suite that would make type handling inconsistent.

numbast/tests/demo/demo.py (1)

5-7: PrimitiveModel import moved to CUDA datamodel — verify availability across supported versions

Using PrimitiveModel from numba.cuda.datamodel.models aligns this demo with the CUDA datamodel and the rest of the migration; please verify that this symbol is present and stable across your declared numba-cuda range and that the demo still runs end‑to‑end on a GPU with both the minimum and maximum supported versions.
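
For context, a PrimitiveModel registration typically looks like the sketch below — assuming, per the learnings recorded later in this review, that numba-cuda >= 0.21.0 re-exports these symbols; MyFloat16 and the 16-bit backing type are hypothetical:

```python
from llvmlite import ir
from numba.cuda import types
from numba.cuda.extending import register_model
from numba.cuda.datamodel.models import PrimitiveModel

class MyFloat16(types.Type):
    """Hypothetical frontend type for a custom half-precision value."""
    def __init__(self):
        super().__init__(name="MyFloat16")

@register_model(MyFloat16)
class MyFloat16Model(PrimitiveModel):
    def __init__(self, dmm, fe_type):
        # Back the frontend type with a raw 16-bit value at the LLVM level.
        super().__init__(dmm, fe_type, ir.IntType(16))
```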

numbast_extensions/pyproject.toml (1)

18-18: Keep numba-cuda constraint in sync with core package and docs

The numba-cuda>=0.21.0,<0.23.0 window for numbast_extensions looks right for this migration; please double‑check that (1) the main numbast package exposes a compatible requirement, (2) your docs/FAQ show the same range, and (3) numbast>=0.2.0 is indeed compatible with this newer numba-cuda span or, if not, that the lower bound on numbast is raised accordingly.

conda/recipes/numbast_extensions/meta.yaml (1)

42-47: Verify conda numba-cuda bounds match pyproject and CI

The conda run requirement for numba-cuda>=0.21.0,<0.23.0 correctly mirrors the pyproject change; please verify that all other packaging/CI entry points (pyproject, environment YAMLs, test scripts) now use this same window so users don’t hit solver conflicts between wheels and conda installs.

numbast/src/numbast/static/function.py (1)

229-235: Switch to numba.cuda.typing/numba.cuda.types — validate across supported versions

Importing signature, CPointer, and int32 from numba.cuda.typing / numba.cuda.types keeps the generated static bindings on the CUDA-specific type system and matches the broader refactor; please confirm that these symbols are available and stable across the full numba-cuda range you declare (esp. the lowest and highest supported versions) and that the generated bindings still compile and run correctly in both cases.

Also applies to: 276-277

numbast/src/numbast/functor.py (1)

4-8: CUDA-specific typeof_impl/lower_constant look appropriate — confirm API stability and CPU needs

Registering Functor via numba.cuda.extending.typeof_impl and numba.cuda.core.imputils.lower_constant correctly targets the CUDA compilation pipeline; please verify that these CUDA entry points exist and are stable over your supported numba-cuda range, and that you don’t rely on Functor being recognized in non‑CUDA (CPU) typing/constant‑lowering contexts anymore—otherwise you may also want to keep the core numba.core.* registrations in parallel.
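
The registration pattern in question looks roughly like this sketch, under the assumption that both CUDA entry points exist across the supported range (which is exactly what the comment asks to verify); Functor, FunctorType, and the constant stub are hypothetical stand-ins:

```python
from numba.cuda import types
from numba.cuda.extending import typeof_impl
from numba.cuda.core.imputils import lower_constant

class Functor:
    """Hypothetical Python-side object to be recognized by the typer."""

class FunctorType(types.Type):
    def __init__(self):
        super().__init__(name="Functor")

functor_type = FunctorType()

@typeof_impl.register(Functor)
def typeof_functor(val, c):
    # Map any Functor instance to the singleton Numba type.
    return functor_type

@lower_constant(FunctorType)
def constant_functor(context, builder, ty, pyval):
    # A real implementation would materialize the constant's payload;
    # an undef placeholder keeps the sketch self-contained.
    return context.get_constant_undef(ty)
```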

conda/environment-cu12.yaml (1)

12-12: Version constraint updated consistently for CUDA 12 environment.

The numba-cuda[cu12] constraint has been updated to match the project-wide version range >=0.21.0,<0.23.0.

ci/test_conda_python.sh (1)

26-26: CI test environment updated to use new version range.

The test environment creation has been updated to use numba-cuda>=0.21.0,<0.23.0, ensuring CI tests run against the correct version range.

numbast/pyproject.toml (1)

21-21: Version constraints updated consistently across all dependency specifications.

The numba-cuda version constraint has been updated to >=0.21.0,<0.23.0 in:

  • Main project dependencies (line 21)
  • Optional test dependencies for both cu12 (line 44) and cu13 (line 49)

This ensures consistent versioning across all installation scenarios.

Also applies to: 44-44, 49-49

conda/environment-cu13.yaml (1)

12-12: Version constraint updated consistently for CUDA 13 environment.

The numba-cuda[cu13] constraint has been updated to match the project-wide version range >=0.21.0,<0.23.0, completing the migration across all CUDA variants (cu12, cu13).

numbast/tests/test_struct.py (1)

9-9: Import path is incorrect.

StructModel should be imported from numba.core.datamodel.models (internal) or numba.extending.models (public API), not from numba.cuda.datamodel. The numba.cuda.datamodel path does not provide this export. This import will fail at runtime.

Likely an incorrect or invalid review comment.

numbast/src/numbast/static/tests/test_function_static_bindings.py (1)

9-9: Reconsider import source for standard scalar types.

Import int32 and float32 from numba.types rather than numba.cuda.types. These are standard numeric types documented in Numba's core types API, not CUDA-specific types. The numba.cuda.types module is documented for CUDA-specific vector types (e.g., float32x2, float32x3), not scalar types. Using the correct import path maintains clarity and follows Numba's documented conventions.

Likely an incorrect or invalid review comment.

conda/environment_template.yaml (1)

12-12: Verify that the version range matches the tested commit.

The version constraint has been updated to >=0.21.0,<0.23.0. The original comment references commit 9a5252803843fd73644c7b1ba54c84823209b161, but this commit could not be verified against publicly available numba-cuda release information. Confirm that this commit corresponds to a release within the specified version range and that all features used are available in numba-cuda 0.21.0+.

numbast/src/numbast/struct.py (1)

9-20: Systematic migration to CUDA-specific vended modules.

The imports have been correctly updated to use numba-cuda's vended CUDA target modules:

  • numba.cuda.extending for extension mechanisms (register_model, lower_cast, make_attribute_wrapper)
  • numba.cuda.typing for typing constructs (signature, templates)
  • numba.cuda.datamodel.models for data models (StructModel, PrimitiveModel)
  • numba.cuda for CUDA-specific declarations and lowering utilities

All imported symbols are available in numba-cuda >=0.21.0 and this approach aligns with the PR's goal to eagerly use vended numba.cuda modules rather than upstream Numba internals.
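
As an illustration of this import surface, a struct binding built on these modules typically follows Numba's standard StructModel extension recipe, sketched here with a hypothetical Vec2 type:

```python
from numba.cuda import types
from numba.cuda.extending import register_model, make_attribute_wrapper
from numba.cuda.datamodel.models import StructModel

class Vec2Type(types.Type):
    def __init__(self):
        super().__init__(name="Vec2")

vec2_type = Vec2Type()

@register_model(Vec2Type)
class Vec2Model(StructModel):
    def __init__(self, dmm, fe_type):
        # Lay the struct out as two float32 members.
        members = [("x", types.float32), ("y", types.float32)]
        super().__init__(dmm, fe_type, members)

# Expose the struct fields as attributes inside jitted code (v.x, v.y).
make_attribute_wrapper(Vec2Type, "x", "x")
make_attribute_wrapper(Vec2Type, "y", "y")
```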

numbast/src/numbast/static/tests/test_struct_static_bindings.py (1)

9-10: CUDA-specific Type/Number/StructModel/PrimitiveModel imports align with static struct specs

Switching the struct specs over to numba.cuda.types.Type / Number and numba.cuda.datamodel.StructModel / PrimitiveModel is consistent with the rest of the CUDA-centric refactor and should keep the generated bindings on the CUDA surface.

Please just double-check against the numba-cuda versions you intend to support that all of these symbols are present and stable (especially Number and PrimitiveModel), and that there are no remaining struct specs elsewhere still importing from numba.types / numba.core.datamodel for consistency.

numbast/src/numbast/static/tests/test_operator_bindings.py (1)

7-8: Operator-binding specs correctly switched to CUDA types/datamodel

Using numba.cuda.types.Type and numba.cuda.datamodel.StructModel for the operator bindings test is consistent with the rest of the static binding migration and should keep the generated APIs on the CUDA path.

Please confirm these CUDA-side symbols exist and behave as expected in all numba-cuda versions covered by your new compatibility ranges so that this test doesn’t become version-fragile.

numbast/src/numbast/function.py (1)

8-8: Signature/typing move to numba.cuda.typing matches CUDA-specific lowering, but verify API compatibility

Importing nb_signature and Signature from numba.cuda.typing is consistent with using CUDA ConcreteTemplate, register_global, and lower from the CUDA registries, and keeps all typing constructs on the CUDA surface.

Given this is fairly low-level, please verify against your target numba/numba-cuda versions that:

  • numba.cuda.typing.signature / Signature are the intended counterparts to the previously used core versions, and
  • there are no remaining call sites elsewhere still constructing Signature from numba.core.typing, which could lead to mixed or incompatible signature types.

If everything is on numba.cuda.typing now, this change looks conceptually tight.

numbast/src/numbast/static/tests/test_conflict_shim_names.py (1)

6-7: Conflict-shim-name test now correctly uses CUDA struct spec types

Switching the struct spec imports here to numba.cuda.types.Type and numba.cuda.datamodel.StructModel keeps this test in sync with the rest of the CUDA static binding pipeline; the shim-name checks themselves remain unchanged.

Please confirm that make_binding is not used anywhere with mixed core vs CUDA datamodel/types, and that numba.cuda.datamodel.StructModel is the expected class for your targeted numba-cuda range.

numbast/src/numbast/types.py (1)

9-9: Switching typeof to numba.cuda.typing.typeof is consistent, but enum support should be confirmed

Using typeof from numba.cuda.typing.typeof in register_enum_type lines this module up with the rest of the CUDA-typing migration and avoids relying on numba.misc.special.

Given this is now responsible for mapping C++ enums to Numba types, please verify under your supported numba/numba-cuda versions that:

  • numba.cuda.typing.typeof.typeof accepts IntEnum instances and returns the expected Numba enum/integral type, and
  • there are no remaining imports of typeof from numba.misc.special elsewhere to avoid inconsistencies.

If those checks pass, the change here looks appropriate.
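
A quick check of the first point could look like the following sketch, assuming numba.cuda.typing.typeof vends the same typeof as core Numba; Color is a hypothetical enum:

```python
from enum import IntEnum
from numba.cuda.typing.typeof import typeof

class Color(IntEnum):
    RED = 0
    GREEN = 1

ty = typeof(Color.RED)
# Core Numba resolves IntEnum members to an IntEnumMember type backed by
# an integer; the CUDA-vended typeof is expected to behave the same way.
print(ty)
```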

numbast/src/numbast/static/renderer.py (1)

351-352: Registry and lower_cast imports now fully CUDA-specific—confirm API surface in supported versions

Updating registry_setup to import:

  • TargetRegistry from numba.cuda.core.imputils, and
  • lower_cast from numba.cuda.extending

keeps the generated registry setup and lowering hooks on the CUDA side, matching the rest of the PR.

Please sanity-check in the numba-cuda versions you intend to support that:

  • numba.cuda.core.imputils.Registry exposes the methods referenced in BaseRenderer.SeparateRegistrySetup (including lower_cast), and
  • numba.cuda.extending.lower_cast exists and is the appropriate lowering helper for CUDA.

If those APIs are present and tested in your CI matrix, this wiring looks correct.

Also applies to: 367-368
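
The wiring under review amounts to something like this sketch, assuming both CUDA paths resolve in the supported versions; the commented-out cast registration is hypothetical:

```python
from numba.cuda.core.imputils import Registry
from numba.cuda.extending import lower_cast

registry = Registry()
lower = registry.lower            # per-registry lowering decorator
registry_lower_cast = registry.lower_cast  # core Numba's Registry exposes this;
                                           # the question is whether the
                                           # CUDA-vended one does too

# The global lower_cast registers a cast lowering for a (fromty, toty) pair:
#
# @lower_cast(MyType, types.float64)
# def mytype_to_float64(context, builder, fromty, toty, val):
#     return convert(builder, val)  # hypothetical conversion helper
```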

docs/source/faq.rst (1)

127-169: New numba-cuda compatibility FAQ is clear; keep version ranges in sync with packaging metadata

The added “Generated bindings and Numba-CUDA version requirements” section reads well and clearly documents the required numba-cuda ranges for each Numbast series, plus guidance for dynamic vs static bindings.

Given these ranges are now duplicated across docs, pyproject.toml, and conda recipes, please ensure:

  • The table (0.6.0 → >=0.21.0,<0.23.0, 0.5.x → >=0.20.1,<0.21.0) exactly matches the pins enforced in your packaging/CI configs, and
  • You update this FAQ in lockstep when either Numbast or numba-cuda version constraints change.

With that caveat, this documentation addition looks solid.

numbast/src/numbast/static/tests/test_link_two_files.py (1)

7-8: Two-file static-link test correctly migrated to CUDA types/datamodel

Using numba.cuda.types.Type / Number and numba.cuda.datamodel.StructModel / PrimitiveModel in the specs for Foo, Bar, and MyInt aligns this test with the rest of the CUDA-first static binding path and should keep the generated code compatible with the CUDA registries.

Please confirm that all other static tests using the same headers/specs have also been updated to the CUDA variants and that these symbols are available across the full numba-cuda version range advertised in your new FAQ.

numbast/src/numbast/static/tests/test_static_demo.py (1)

7-8: Verify the import paths, particularly PrimitiveModel source

The float64 import from numba.cuda.types appears sound, but documentation indicates PrimitiveModel typically resides in numba.core.datamodel, not numba.cuda.datamodel. Confirm these imports work against your pinned numba-cuda version, or check whether numba.cuda.datamodel re-exports PrimitiveModel from core.

numbast/src/numbast/static/struct.py (2)

218-224: Remove mentions of Function and Conversion — they are not imported or used in this code

The review's request to verify the CUDA API surface is reasonable, but contains inaccuracies:

The code correctly imports:

  • from numba.cuda.typing import signature
  • from numba.cuda.types import CPointer, int32
  • from numba.cuda.extending import as_numba_type, register_model, make_attribute_wrapper

However, the review incorrectly claims the code imports:

  • Function from numba.cuda.types — not imported or used anywhere in struct.py
  • Conversion from numba.cuda.typeconv — not imported or used anywhere in struct.py

Verify availability of the actual imports (signature, CPointer, int32, as_numba_type, register_model, make_attribute_wrapper) against numba-cuda >=0.21,<0.23, as these are the symbols that need compatibility assurance.

Likely an incorrect or invalid review comment.


866-869: The suggested import path numba.cuda.typing.signature does not exist and will cause import failures.

The web search confirms that signature is not exported from numba.cuda.typing. The canonical import paths are:

  • from numba.core.typing.templates import signature (modern Numba)
  • from numba.typing.templates import signature (legacy pre-0.47 layout)

The current code using from numba.core.typing import signature is closer to correct than the suggested change to numba.cuda.typing.signature.

The suggestion for numba.cuda.types.CPointer is valid—this is properly vended by numba.cuda—but this accounts for only half of the proposed changes.

The review's premise that both imports should align under numba.cuda.* paths cannot be achieved for signature, as it is not part of the CUDA-vended API. The consistency issue identified is valid, but the proposed solution is incorrect.

Likely an incorrect or invalid review comment.
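
For comparison, the canonical core import named above, with a minimal usage sketch (the types are chosen arbitrarily):

```python
from numba.core import types
from numba.core.typing.templates import signature

# Build a Signature describing int32 func(int32, int32).
sig = signature(types.int32, types.int32, types.int32)
print(sig)  # (int32, int32) -> int32
```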

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5b10afb and fa5f7af.

📒 Files selected for processing (2)
  • numbast/src/numbast/static/renderer.py (3 hunks)
  • numbast/tests/data/sample_function.cuh (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/struct.py:10-11
Timestamp: 2025-12-04T23:04:08.148Z
Learning: numba-cuda version 0.21.0 and later re-exports StructModel and PrimitiveModel from numba.cuda.datamodel.models, so importing from numba.cuda.datamodel.models is correct for projects using numba-cuda>=0.21.0.
📚 Learning: 2025-12-04T23:04:08.148Z — same learning as above.

Applied to files:

  • numbast/src/numbast/static/renderer.py
🔇 Additional comments (3)
numbast/tests/data/sample_function.cuh (1)

1-8: Device stub and header boilerplate look good

License/SPDX, #pragma once, and the __device__ function are all well-formed. Using (void)ptr; in the body cleanly avoids unused-parameter warnings while keeping the function a no‑op.

numbast/src/numbast/static/renderer.py (2)

125-127: LGTM! Addresses previous review feedback.

The check and import are now both aligned to numba.cuda.types, resolving the earlier concern that checking numba.types.__dict__ while importing from numba.cuda.types could cause failures.

Based on past review comments, this change was confirmed as the correct approach.


351-351: Verify import path numba.cuda.core.imputils.Registry exists in numba-cuda 0.21.0+.

The migration from numba.core.imputils.Registry to numba.cuda.core.imputils.Registry aligns with the PR's CUDA-specific module usage. Confirm this import path is available in the target numba-cuda version to prevent runtime import errors.

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0375b48 and 56f62d6.

📒 Files selected for processing (1)
  • numbast/src/numbast/static/tests/test_static_demo.py (2 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/renderer.py:367-367
Timestamp: 2025-12-04T23:19:43.544Z
Learning: In the NVIDIA/numbast repository, prefer importing from `numba.cuda.*` modules over `numba.*` modules. For example, use `from numba.cuda.extending import lower_cast` instead of `from numba.extending import lower_cast` or `from numba.core.extending import lower_cast`.
Learnt from: isVoid
Repo: NVIDIA/numbast PR: 239
File: numbast/src/numbast/static/struct.py:10-11
Timestamp: 2025-12-04T23:04:08.148Z
Learning: numba-cuda version 0.21.0 and later re-exports StructModel and PrimitiveModel from numba.cuda.datamodel.models, so importing from numba.cuda.datamodel.models is correct for projects using numba-cuda>=0.21.0.
📚 Learning: 2025-12-04T23:19:43.544Z — same numba.cuda.* import-preference learning as above.

Applied to files:

  • numbast/src/numbast/static/tests/test_static_demo.py

📚 Learning: 2025-12-04T23:04:08.148Z — same StructModel/PrimitiveModel re-export learning as above.

Applied to files:

  • numbast/src/numbast/static/tests/test_static_demo.py
🔇 Additional comments (3)
numbast/src/numbast/static/tests/test_static_demo.py (3)

6-7: LGTM! Numpy import supports the improved test assertion.

The numpy import is necessary for the floating-point comparison using np.allclose on line 59.


9-9: LGTM! Import aligns with numba.cuda module migration.

The import correctly uses numba.cuda.types instead of numba.types, consistent with the PR's objective to migrate to numba-cuda 0.21.0+ modules.

Based on learnings, prefer importing from numba.cuda.* modules over numba.* modules.


58-59: LGTM! More robust floating-point comparison.

The test now properly copies data from device to host and uses np.allclose with appropriate tolerances for comparing floating-point values. The tolerance values (rtol=1e-3, atol=1e-3) are reasonable for validating a custom __myfloat16 to float64 conversion.
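
To see why those tolerances fit a half-precision round trip: float16 stores 3.14 as approximately 3.140625, which an exact float64 comparison rejects but np.allclose at 1e-3 accepts. A standalone sketch with a hypothetical kernel output:

```python
import numpy as np

host = np.float16(3.14).astype(np.float64)  # ~3.140625 after the round trip
assert host != np.float64(3.14)             # exact comparison fails
assert np.allclose([host], [3.14], rtol=1e-3, atol=1e-3)  # tolerant check passes
```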
