[Rewriter]: fuse successive Relu/Clip nodes #2410
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@           Coverage Diff           @@
##             main    #2410    +/-  ##
========================================
+ Coverage   69.32%   69.53%   +0.20%
========================================
  Files         204      206        +2
  Lines       25854    26034      +180
  Branches     2696     2715       +19
========================================
+ Hits        17923    18102      +179
+ Misses       7001     7000        -1
- Partials      930      932        +2

☔ View full report in Codecov by Sentry.
Pull Request Overview
This PR adds graph rewrite rules to fuse consecutive Relu and Clip operations, updates the test harness to control ONNX Runtime’s optimization level, and provides unit tests to validate the new transformations.
- Introduce four fusion rules (`Relu(Relu)`, `Relu(Clip)`, `Clip(Relu)`, `Clip(Clip)`) in `fuse_relus_clips.py` (see the sketch below)
- Extend `assert_numerically_equal` in `testing.py` to accept an `ort_optimization_level` argument
- Add comprehensive tests in `fuse_relus_clips_test.py` to cover valid and invalid fusion scenarios
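For illustration, here is a minimal sketch of what one such fusion rule could look like with onnxscript's function-based pattern-rewrite API; the PR itself uses class-based rules (e.g. FuseSuccessiveReluClip), so the names and structure below are assumptions, not the PR's code.

```python
# Hedged sketch: a Relu(Relu(x)) -> Relu(x) rule written with the function-based
# pattern API. This is not the PR's implementation, only an outline of the idea.
from onnxscript.rewriter import pattern


def _successive_relus(op, x):
    # Target pattern: two Relu nodes applied back to back.
    return op.Relu(op.Relu(x))


def _single_relu(op, x):
    # Replacement: one Relu is equivalent because Relu is idempotent.
    return op.Relu(x)


fuse_successive_relus_rule = pattern.RewriteRule(_successive_relus, _single_relu)
```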
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| onnxscript/rewriter/fuse_relus_clips.py | Implement new RewriteRule classes and assemble them into a set. |
| onnxscript/rewriter/testing.py | Update test helper to pass through ONNX Runtime optimization level. |
| onnxscript/rewriter/fuse_relus_clips_test.py | Add unit tests for each fusion pattern and edge-case validations. |
Comments suppressed due to low confidence (2)
onnxscript/rewriter/fuse_relus_clips.py:161
- The variable name `fuse_sucessive_relu_clip_rule` has a typo (`sucessive` vs. `successive`). Rename it to `fuse_successive_relu_clip_rule` for consistency with the other rules, and update any references.

  fuse_sucessive_relu_clip_rule = FuseSuccessiveReluClip().rule()
onnxscript/rewriter/testing.py:27
- [nitpick] The `Args:` section in the docstring does not match the parameter order of the function signature. Consider reordering the entries so they follow `(original_model_proto, rewritten_model_proto, args, ort_optimization_level, rtol, atol)`.

  ort_optimization_level: Onnxruntime optimization level.
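For context, the `ort_optimization_level` argument maps onto ONNX Runtime's graph-optimization setting; the snippet below shows only the underlying ORT API, not the helper's implementation.

```python
# Illustration of the ONNX Runtime knob involved: disabling ORT's own graph
# optimizations keeps it from re-fusing nodes while the test compares outputs.
import onnxruntime as ort

session_options = ort.SessionOptions()
session_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
# An InferenceSession created with these options runs the graph as-is:
# session = ort.InferenceSession(model_bytes, session_options)  # model_bytes assumed
```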
Thank you - I think this can be part of the default rewrite rules
cc @gramalingam
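As a rough sketch of what "part of the default rewrite rules" means in practice, rules like these can be handed to the rewriter when transforming a model. The rule variable below refers to the earlier sketch, and the exact registration mechanism used by the PR may differ.

```python
# Hedged sketch (builds on fuse_successive_relus_rule from the sketch above):
# apply a custom list of pattern-rewrite rules to a loaded model.
import onnx
from onnxscript import rewriter

model = onnx.load("model.onnx")  # placeholder path, assumed to exist
rewritten_model = rewriter.rewrite(
    model, pattern_rewrite_rules=[fuse_successive_relus_rule]
)
```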
Force-pushed from ab90aaf to e0f6332 (compare)
Please fix optional lint (it's mainly spelling).
- Relu(Relu(X)) -> Relu
- Relu(Clip(X)) -> Clip
- Clip(Relu(X)) -> Clip
- Clip(Clip(X)) -> Clip
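These rewrites are numerically safe because Relu is idempotent and nested clips compose into a single clip with tightened bounds; the small NumPy check below (not part of the PR) illustrates the identities.

```python
# Illustration only (not from the PR): verify the fusion identities with NumPy.
import numpy as np

x = np.linspace(-5.0, 5.0, 11, dtype=np.float32)

# Relu is idempotent: Relu(Relu(x)) == Relu(x).
assert np.array_equal(np.maximum(np.maximum(x, 0.0), 0.0), np.maximum(x, 0.0))
# Clip(Relu(x), lo, hi) == Clip(x, max(lo, 0), hi).
assert np.array_equal(np.clip(np.maximum(x, 0.0), -1.0, 3.0), np.clip(x, 0.0, 3.0))
# Nested clips merge: bounds tighten to [max(lo1, lo2), min(hi1, hi2)].
assert np.array_equal(np.clip(np.clip(x, -2.0, 4.0), -1.0, 3.0), np.clip(x, -1.0, 3.0))
```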
Force-pushed from e0f6332 to b05b6c0 (compare)
Force-pushed from b05b6c0 to e30950c (compare)
@justinchuby Is the error "AttributeError: module 'onnx_ir.convenience' has no attribute 'get_const_tensor'" suggesting a newer version of onnx-ir?
Try merging from main?
@titaiwangms, turns out we also want to update the nox file. It needs to stay pinned because we want to test with the lowest supported version of onnx-ir.
@AyoubMDL Looks like there is a case failing.
Strange - could you help ensure the tensor min_clip has a proper ir.Shape defined? If not, there may be a bug in ir.tensor() or in the logic of this rewrite rule. @AyoubMDL
I'll check.
It was from my side. Sometimes when I
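Regarding the min_clip question above, a quick way to check whether a constant built with ir.tensor() carries an explicit shape is shown below; this is an illustrative snippet, not code from the PR.

```python
# Illustrative check (not from the PR): a scalar constant created via onnx_ir
# should report a rank-0 shape rather than shape=None.
import numpy as np
import onnx_ir as ir

min_clip = ir.tensor(np.array(0.0, dtype=np.float32))
print(min_clip.shape)  # expected: an empty (rank-0) shape
assert min_clip.shape is not None
```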
- Clip(Relu(X)) -> Clip
- Clip(Clip(X)) -> Clip
"""
`from __future__ import annotations` would be helpful
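For context on why that import helps: it defers evaluation of annotations, so newer-style type hints parse even on the older Python versions the package still supports. A small self-contained illustration (not the PR's code):

```python
# Illustration only: with this import, annotations are stored as strings and
# not evaluated at definition time, so "X | None" and builtin generics work
# as hints even on Python versions that predate them.
from __future__ import annotations


def clip_bounds(min_value: float | None, max_value: float | None) -> tuple[float, float]:
    """Normalize optional clip bounds, defaulting to an unbounded range."""
    lo = float("-inf") if min_value is None else min_value
    hi = float("inf") if max_value is None else max_value
    return lo, hi


print(clip_bounds(0.0, None))  # (0.0, inf)
```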
This PR adds the following transformations: