
[Rewriter] Add optimizer to fold Pad operators into Conv #2363


Open · wants to merge 6 commits into main from 2301-fold-pad-into-conv

Conversation

Johansmm
Contributor

@Johansmm Johansmm commented Jun 4, 2025

Following #2301, the fuse_pad_into_conv rule set is introduced to fuse the following operator compositions:

  • Conv ∘ Pad -> Conv
  • ConvInteger ∘ Pad -> ConvInteger

Additionally, NormalizePadFormat is introduced to convert the Conv auto_pad attribute into its explicit pads list (ref: https://onnx.ai/onnx/operators/onnx__Conv.html).
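As a rough illustration of the fusion arithmetic (a hypothetical sketch, not the PR's actual implementation; `fuse_pads` and its layout assumptions are mine):

```python
# Hypothetical sketch of the Pad -> Conv fusion arithmetic.
# ONNX Pad uses the layout [x1_begin, ..., xn_begin, x1_end, ..., xn_end]
# over all axes, while Conv pads cover only the spatial axes. The fusion is
# only valid when the batch and channel axes are unpadded.

def fuse_pads(pad_pads: list[int], conv_pads: list[int]) -> list[int]:
    """Merge a Pad node's pads into a Conv node's pads attribute."""
    rank = len(pad_pads) // 2
    for axis in (0, 1, rank, rank + 1):  # batch/channel begin and end entries
        if pad_pads[axis] != 0:
            raise ValueError("cannot fold padding applied to batch/channel axes")
    spatial = pad_pads[2:rank] + pad_pads[rank + 2 :]
    return [p + c for p, c in zip(spatial, conv_pads)]
```

For a 4-D (NCHW) input, `fuse_pads([0, 0, 1, 1, 0, 0, 2, 2], [1, 1, 1, 1])` yields `[2, 2, 3, 3]`.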

@Johansmm Johansmm force-pushed the 2301-fold-pad-into-conv branch from 4b9b69b to 19b0418 Compare June 4, 2025 20:35
@Johansmm Johansmm requested a review from justinchuby June 4, 2025 22:09

codecov bot commented Jun 4, 2025

Codecov Report

Attention: Patch coverage is 95.48611% with 13 lines in your changes missing coverage. Please review.

Project coverage is 69.83%. Comparing base (e63a16b) to head (7f7d17e).

Files with missing lines                         Patch %   Lines
onnxscript/rewriter/fuse_pad_into_conv_test.py   94.44%    4 Missing and 4 partials ⚠️
onnxscript/rewriter/fuse_pad_into_conv.py        96.52%    3 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2363      +/-   ##
==========================================
+ Coverage   69.55%   69.83%   +0.28%     
==========================================
  Files         210      212       +2     
  Lines       26120    26408     +288     
  Branches     2721     2764      +43     
==========================================
+ Hits        18168    18443     +275     
- Misses       7017     7024       +7     
- Partials      935      941       +6     


@Johansmm Johansmm force-pushed the 2301-fold-pad-into-conv branch from 19b0418 to 1604446 Compare June 24, 2025 18:14
@Johansmm
Contributor Author

Force-pushed to rebase on main and fix conflicts.

@Johansmm Johansmm marked this pull request as draft June 24, 2025 19:48
@Johansmm Johansmm requested a review from justinchuby June 24, 2025 21:52
Contributor Author

@Johansmm Johansmm left a comment


@justinchuby I forgot to mention in the previous message that the changes were not ready yet (I was just fixing the rebase with main).
In the last commits I updated the code with all the suggestions.
This work is now ready for review.

@Johansmm Johansmm marked this pull request as ready for review June 24, 2025 21:55
@justinchuby
Collaborator

Could you update the PR title and description? Thanks

@Johansmm Johansmm changed the title 2301 fold pad into conv [Rewriter] Add optimizer to fold Pad operators into Conv Jun 26, 2025
@Johansmm
Contributor Author

@justinchuby are the new description and title OK?

@justinchuby
Collaborator

Thanks - maybe @gramalingam or @titaiwangms for another review?

Contributor

@titaiwangms titaiwangms left a comment


Please run "lintrunner -a"

return check_result


class NormalizePadFormatConv(_NormalizePadFormatBase):
Contributor


I suppose this does not fall back when the rewritten conv still does not match the FusePad rules? Do we still benefit from converting the auto_pad attribute into 'NOTSET' if there is still no match?

Contributor Author


You are right: NormalizePadFormat is always applied, even if FusePad is not.

On the other hand, I consider having a single pad format coherent, since it makes models easier to interpret by different accelerators (see this bug example in onnxruntime due to the auto_pad type).

Let me know if you still disagree. What do you think @justinchuby?
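For context, the explicit pads that auto_pad=SAME_UPPER/SAME_LOWER resolves to can be computed per spatial axis as below (a sketch following the ONNX Conv spec; the function name and signature are hypothetical, not the PR's code):

```python
import math

def same_pads(input_size: int, kernel: int, stride: int = 1,
              dilation: int = 1, upper: bool = True) -> tuple[int, int]:
    """(begin, end) pads equivalent to auto_pad=SAME_UPPER/SAME_LOWER for one axis."""
    eff_kernel = (kernel - 1) * dilation + 1
    output_size = math.ceil(input_size / stride)
    total = max((output_size - 1) * stride + eff_kernel - input_size, 0)
    small, big = total // 2, total - total // 2
    # SAME_UPPER puts the extra pad at the end; SAME_LOWER at the beginning.
    return (small, big) if upper else (big, small)
```

For example, `same_pads(5, 4)` returns `(1, 2)`, while `same_pads(5, 4, upper=False)` returns `(2, 1)`.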


Contributor Author

@Johansmm Johansmm left a comment


The last commit applies @titaiwangms's suggestions.


Contributor

@Copilot Copilot AI left a comment


Pull Request Overview

Adds rewrite rules that fold preceding Pad operators into Conv/ConvInteger nodes and normalize Conv auto_pad attributes to explicit pads.

  • Define fuse_pad_into_conv and fuse_pad_into_conv_integer rules to merge Pad into Conv/ConvInteger
  • Introduce normalize_pad_format_conv/normalize_pad_format_conv_integer to convert auto_pad into explicit pads
  • Add and register comprehensive unit tests for fusion and normalization, and update __init__.py to expose the new rules

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File                                             Description
onnxscript/rewriter/fuse_pad_into_conv.py        New rewrite rules for fusing Pad into Conv/ConvInteger and normalizing pad formats
onnxscript/rewriter/fuse_pad_into_conv_test.py   Unit tests for both fusion and normalization behaviors
onnxscript/rewriter/__init__.py                  Imported and included the new rules in the global rule set
Comments suppressed due to low confidence (2)

onnxscript/rewriter/fuse_pad_into_conv.py:202

  • [nitpick] The error message in NotImplementedError has a grammar issue; consider updating it to 'Subclasses must implement this function' for clarity.
        raise NotImplementedError("Child have to implement this function")

onnxscript/rewriter/fuse_pad_into_conv_test.py:405

  • There are tests for NormalizePadFormatConv but none for NormalizePadFormatConvInteger; consider adding similar tests to cover the ConvInteger normalization rule.
if __name__ == "__main__":

@Johansmm Johansmm force-pushed the 2301-fold-pad-into-conv branch from 0b166c1 to 7f7d17e Compare July 5, 2025 16:55
@Johansmm
Contributor Author

Johansmm commented Jul 5, 2025

The last force push rebases on main and adds @justinchuby's suggestions.

@Johansmm Johansmm requested a review from justinchuby July 5, 2025 16:55

def check(self, context, x: ir.Value, pad: ir.Value, conv: ir.Value) -> orp.MatchResult:
check_result = super().check(context, x, pad, conv)
if check_result.reason:
Collaborator


nit: I suggest `if not check_result: ...`; a MatchResult can be treated as a boolean, with True meaning success and False meaning failure.
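The truthiness idiom can be illustrated with a minimal stand-in (the real class lives in onnxscript's rewriter; this mock only mimics its boolean behavior):

```python
class MatchResult:
    """Minimal stand-in mimicking the boolean protocol of onnxscript's MatchResult."""

    def __init__(self, reason: str = ""):
        self.reason = reason  # empty string means the match succeeded

    def __bool__(self) -> bool:
        return not self.reason

ok, failed = MatchResult(), MatchResult("pads is not a constant")
assert ok
assert not failed
# So `if check_result.reason: ...` can be written as `if not check_result: ...`.
```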

conv_node = conv.producer()
if (
apad := conv_node.attributes.get("auto_pad", None)
) and apad.as_string() != "NOTSET":
Collaborator


minor nit: I think this condition can be simplified to `conv_node.attributes.get_string("auto_pad", "NOTSET") != "NOTSET"`.


# Retrieve the padding and axes
x_rank = len(x.shape)
pad_pads = pad_node.inputs[1].const_value.numpy().tolist()
Collaborator

@gramalingam gramalingam Jul 7, 2025


The above line needs to handle various special conditions and error situations. I suggest using `onnxscript.rewriter._ir_utils.get_numpy_value(pad_node.inputs[1])`, which handles the cases where the input is None or is not a constant, and then checking that the value is not None before using it.
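The defensive pattern being suggested looks roughly like this (a sketch using a stand-in `Value` class; the real helper is `onnxscript.rewriter._ir_utils.get_numpy_value`):

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class Value:
    """Stand-in for ir.Value: const_value is None for non-constant inputs."""
    const_value: Optional[np.ndarray] = None

def get_numpy_value(value: Optional[Value]) -> Optional[np.ndarray]:
    """Return the constant tensor, or None when the input is absent or non-constant."""
    if value is None or value.const_value is None:
        return None
    return value.const_value

pads = get_numpy_value(Value(np.array([0, 0, 1, 1])))
if pads is not None:  # callers must check before using the value
    pad_list = pads.tolist()
```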

Collaborator


Oh, I see, the check is being done in the check method below.

axes_list = list(range(x_rank))

# Pad constraints: values
pads_list = fill_pads_with_axes(pads.const_value.numpy(), axes_list, x_rank)
Collaborator


If this is the same as the pad_pads used in the rewrite method, you can save it as self._pads_list and use it in the rewrite method to avoid redoing the entire computation. It involves implicitly passed state, but works fine.
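The implicit-state pattern being suggested can be sketched like this (hypothetical class and pad-expansion logic, not the PR's code):

```python
class FusePadIntoConv:
    """Sketch: compute the normalized pads once in check(), reuse in rewrite()."""

    def check(self, pads: list[int], axes: list[int], rank: int) -> bool:
        # Expand per-axis (begin, end) pads to a full-rank pads list,
        # filling unlisted axes with zero. Done once, then cached.
        full = [0] * (2 * rank)
        n = len(axes)
        for i, axis in enumerate(axes):
            full[axis] = pads[i]             # begin pad for this axis
            full[axis + rank] = pads[i + n]  # end pad for this axis
        self._pads_list = full  # implicitly passed state, read by rewrite()
        return True

    def rewrite(self) -> list[int]:
        return self._pads_list  # no recomputation needed
```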


Successfully merging this pull request may close these issues.

4 participants