Support Functions Typed with __nv_bfloat16_raw
#262
Conversation
Walkthrough
Adds support for a new Numba CUDA type, `__nv_bfloat16_raw`, to Numbast's type system.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
- numbast/src/numbast/static/renderer.py (1 hunks)
- numbast/src/numbast/static/tests/data/bf16.cuh (1 hunks)
- numbast/src/numbast/static/tests/test_bf16_support.py (1 hunks)
- numbast/src/numbast/static/types.py (1 hunks)
- numbast/src/numbast/types.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
numbast/src/numbast/static/types.py (1)
numbast/src/numbast/static/renderer.py (2)
- BaseRenderer (15-137)
- _try_import_numba_type (104-129)
numbast/src/numbast/static/tests/test_bf16_support.py (1)
numbast/src/numbast/static/tests/conftest.py (1)
- make_binding (25-58)
🔇 Additional comments (6)
numbast/src/numbast/static/types.py (1)
62-64: LGTM - Consistent with existing bfloat16 handling.
The implementation correctly mirrors the pattern used for `__nv_bfloat16` at lines 58-60 and properly registers the type import.
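For context, a minimal sketch of the registration pattern being described; the dictionary name and the Numba-side alias below are assumptions for illustration, and only `_try_import_numba_type` comes from the code graph above:

```python
# Hypothetical sketch; actual names in numbast/src/numbast/static/types.py
# may differ. Only BaseRenderer._try_import_numba_type is known to exist.
CTYPE_TO_NBTYPE = {"__nv_bfloat16": "bfloat16"}  # assumed existing mapping

def register_bfloat16_raw(renderer):
    # Mirror the __nv_bfloat16 entry: map the C++ type name to a Numba type
    # name and record the import for the generated binding module.
    CTYPE_TO_NBTYPE["__nv_bfloat16_raw"] = "bfloat16_raw"
    renderer._try_import_numba_type("bfloat16_raw")
```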
numbast/src/numbast/types.py (2)
12-12: Import from internal module may be fragile.
Same concern as in renderer.py: importing `_type_unnamed1405307` from `numba.cuda._internal.cuda_bf16` relies on an internal API with an auto-generated name. Ensure this is stable across the target numba versions.
88-89: LGTM - Early return pattern is correct.
The early return for `__nv_bfloat16_raw` follows the established pattern and correctly returns the imported type.
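Roughly, the shape such an early return takes (the surrounding function name and alias are illustrative, not the file's literal contents):

```python
# Illustrative sketch of the early-return pattern; to_numba_type and the
# alias bfloat16_raw are assumed names, not the PR's literal code.
from numba.cuda._internal.cuda_bf16 import _type_unnamed1405307 as bfloat16_raw

def to_numba_type(cuda_type_name: str):
    if cuda_type_name == "__nv_bfloat16_raw":
        return bfloat16_raw  # early return with the pre-imported type
    # ... general type lookup continues for all other names ...
    raise NotImplementedError(cuda_type_name)
```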
numbast/src/numbast/static/tests/test_bf16_support.py (1)
28-49: LGTM - Well-structured test for raw bfloat16 conversion.
The test appropriately exercises the round-trip conversion between bfloat16 and its raw representation, verifying the bindings work correctly.
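As a rough illustration of such a round trip (not the PR's actual test: the binding names `to_raw`/`from_raw`, the `make_binding` return shape, and the `nv_bfloat16` import path are all assumptions):

```python
# Sketch only; every name except the make_binding fixture (conftest.py) is
# an assumption about the generated bindings and numba-cuda's bf16 support.
import numpy as np
from numba import cuda, float32
from numba.cuda.bf16 import nv_bfloat16  # assumed public bf16 type

def test_bfloat16_raw_round_trip(make_binding):
    binding = make_binding("bf16.cuh")         # compile the test header
    to_raw = binding["to_raw"]                 # __nv_bfloat16 -> raw (assumed)
    from_raw = binding["from_raw"]             # raw -> __nv_bfloat16 (assumed)

    @cuda.jit
    def kernel(out):
        h = nv_bfloat16(1.5)                   # 1.5 is exact in bfloat16
        out[0] = float32(from_raw(to_raw(h)))  # assumed bf16 -> float32 cast

    out = np.zeros(1, dtype=np.float32)
    kernel[1, 1](out)
    assert out[0] == 1.5
```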
numbast/src/numbast/static/tests/data/bf16.cuh (1)
13-19: LGTM - Conversion functions use CUDA BF16 API correctly.
The implementation correctly uses CUDA constructors for BF16 conversions. The `__nv_bfloat16_raw` constructor accepts the `nv_bfloat16` value, and the `__nv_bfloat16` constructor accepts the raw representation via its constexpr constructor signature. Both inline device function declarations are correct.
Note: `__nv_bfloat16_raw` may be deprecated in future CUDA releases; consider this for long-term maintenance.
numbast/src/numbast/static/renderer.py (1)
113-115: `_type_unnamed1405307` is an auto-generated internal type name that is fragile across numba/numba-cuda versions.
The import relies on `_type_unnamed1405307` from `numba.cuda._internal.cuda_bf16`, which is not a standard numba export but rather an auto-generated internal type from the Numbast bfloat16 bindings. The auto-generated naming convention (`_type_unnamed{id}`) makes this fragile:
- Different numba-cuda or Numbast builds may generate different type IDs
- No guarantee this type name remains stable across patch or minor version updates
- Not a documented public API, so changes are not tracked in release notes
Consider documenting the required numba/numba-cuda version constraints or adding a runtime check that validates this type's availability at import time with a clear error message if the expected type cannot be found.
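A minimal sketch of such a runtime check, assuming only the import path already used by this PR (the error wording is illustrative):

```python
# Validate at import time that the auto-generated __nv_bfloat16_raw type is
# present in this numba-cuda build, and fail with an actionable message if not.
try:
    from numba.cuda._internal.cuda_bf16 import (
        _type_unnamed1405307 as bfloat16_raw_type,
    )
except ImportError as exc:
    raise ImportError(
        "numba.cuda._internal.cuda_bf16 does not expose _type_unnamed1405307 "
        "(the auto-generated __nv_bfloat16_raw type). This numba-cuda build "
        "is not supported by Numbast's __nv_bfloat16_raw bindings; check the "
        "documented numba/numba-cuda version constraints."
    ) from exc
```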
Currently, Numbast's type system does not recognize the `__nv_bfloat16_raw` data type. Certain CUDA C++ functions take the raw type as input, so we also add support for it here.
Summary by CodeRabbit
Release Notes
New Features
- Added support for the `__nv_bfloat16_raw` CUDA type in Numbast's type system.
Tests
- Added a test covering round-trip conversion between bfloat16 and its raw representation.