[torchlib] Fix reflection pad #2037
Conversation
CI report: ❌ 27 tests failed.
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (2)
onnxscript/function_libs/torch_lib/ops/nn.py:1511
The new padding logic introduced by _process_padding should be covered by test cases to ensure it works as expected.
> pad: Sequence[INT64],
onnxscript/function_libs/torch_lib/ops/nn.py:1515
The parameter pad is described as SymInt[], but it should be Sequence[INT64].
> """pad(Tensor self, SymInt[] pad, str mode=\"constant\", float? value=None) -> Tensor"""
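For background on what padding processing in this area has to handle: torch.nn.functional.pad takes a flat pad list ordered from the last dimension backwards, while ONNX Pad expects all begin values followed by all end values in forward dimension order. The following is an illustrative sketch of that layout conversion, not the actual _process_padding implementation from this PR:

```python
def torch_pad_to_onnx_pads(pad, rank):
    """Convert a torch-style flat pad list to ONNX Pad's [begins..., ends...] layout.

    torch.nn.functional.pad lists pads starting from the LAST dimension:
        pad = [last_begin, last_end, second_last_begin, second_last_end, ...]
    and may cover fewer dimensions than the tensor has; uncovered dims get 0.
    """
    begins = [0] * rank
    ends = [0] * rank
    # Each pair in `pad` walks backwards from the last dimension.
    for i in range(len(pad) // 2):
        dim = rank - 1 - i
        begins[dim] = pad[2 * i]
        ends[dim] = pad[2 * i + 1]
    return begins + ends

# Example: pad only the last dim of a rank-3 tensor by (1, 2)
print(torch_pad_to_onnx_pads([1, 2], 3))  # [0, 0, 1, 0, 0, 2]
```

Note the partial-coverage case (a pad list shorter than 2 * rank) is exactly the kind of input that padding helpers need dedicated tests for.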
Do we need a new reflection_pad test to guard the missing corner case?
Could you remind me which was the missing case? (Will follow up if needed.)
Aren't we refactoring because it has this corner case?
Let me test.
Thanks for the reminder. It's fixed. It would be helpful to create some test cases; I will do that.
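For readers unfamiliar with the semantics under discussion: reflection padding mirrors interior values around the border without repeating the edge element itself. NumPy's "reflect" mode matches torch's reflection_pad behavior here, so a minimal sketch of the expected output (not a test from this PR) is:

```python
import numpy as np

x = np.array([1, 2, 3, 4])
# Reflect 2 elements on the left and 1 on the right; the edge
# values (1 and 4) are not repeated, only mirrored around.
padded = np.pad(x, (2, 1), mode="reflect")
print(padded)  # [3 2 1 2 3 4 3]
```

Because the reflected region is drawn from interior elements, each pad amount must be strictly smaller than the corresponding input dimension, which is one reason reflection pad has more corner cases than constant pad.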
Fixes pytorch/pytorch#144382