Add op(unfold) | feat(torchlib) #893
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main     #893      +/-   ##
==========================================
+ Coverage   76.70%   76.73%   +0.03%
==========================================
  Files         112      112
  Lines       13524    13549      +25
  Branches     1366     1369       +3
==========================================
+ Hits        10373    10397      +24
  Misses       2810     2810
- Partials      341      342       +1
…oft/onnxscript into xiaowu/addOp(aten_unfold)
LGTM. Would be good to get another review from @BowenBao
self_rank = self_rank + 1
# perm needs to be a list[int], so it has to be generated in trace_only mode
perm = list(range(self_rank))
perm.append(perm.pop(dimension + 1))
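For context, here is a minimal standalone sketch in plain PyTorch (not the PR's onnxscript implementation; values picked for illustration) of what the permutation accomplishes: stacking the sliding windows along `dimension` leaves the window-size axis at `dimension + 1`, while `aten::unfold` expects it as the last axis.

```python
import torch

x = torch.arange(20).reshape(4, 5)
dimension, size, step = 0, 2, 2  # illustrative values

# Collect the sliding windows along `dimension`.
windows = [
    x.narrow(dimension, start, size)
    for start in range(0, x.shape[dimension] - size + 1, step)
]

# Stacking at `dimension` inserts the window-count axis there, pushing the
# window-size axis to `dimension + 1` in the rank-(r + 1) result.
stacked = torch.stack(windows, dim=dimension)  # shape (2, 2, 5)

# unfold wants the window-size axis LAST, which is exactly what
# perm = list(range(rank)); perm.append(perm.pop(dimension + 1)) encodes.
perm = list(range(stacked.dim()))
perm.append(perm.pop(dimension + 1))
moved = stacked.permute(perm)  # shape (2, 5, 2)

assert torch.equal(moved, x.unfold(dimension, size, step))
```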
Would be great to add a comment to explain this logic, thanks!
Added a comment.
Why do we do this though? What does it mean? Is there a reference logic?
- similar but different in detail: https://github.com/pytorch/pytorch/blob/a8f40b39ce4f9fa9ffd90400b7d10ea4051d623a/torch/onnx/symbolic_opset12.py#L390
- the same logic: `perm.insert(dim, perm.pop(-1))`
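As a quick self-contained check (the `rank` and `dim` values here are picked for illustration), this is what each of the two idioms evaluates to; both relocate the window-size axis, and which one applies depends on where that axis sits in the intermediate tensor being transposed.

```python
rank, dim = 4, 1  # illustrative values

# This PR: move the axis at `dim + 1` to the last position.
perm_a = list(range(rank))
perm_a.append(perm_a.pop(dim + 1))
print(perm_a)  # [0, 1, 3, 2]

# The idiom quoted above: move the last axis to position `dim`.
perm_b = list(range(rank))
perm_b.insert(dim, perm_b.pop(-1))
print(perm_b)  # [0, 3, 1, 2]
```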
…oft/onnxscript into xiaowu/addOp(aten_unfold)
from: #534