Make aten::contiguous and device_put no-op | fix(torchlib) #835
Conversation
Make contiguous a no-op because memory format is not a notion in ONNX.
Codecov Report
@@            Coverage Diff             @@
##             main     #835      +/-   ##
==========================================
+ Coverage   76.45%   76.47%   +0.01%
==========================================
  Files         112      112
  Lines       13371    13371
  Branches     1342     1341       -1
==========================================
+ Hits        10223    10225       +2
+ Misses       2816     2815       -1
+ Partials      332      331       -1
"memory_format value supports 'contiguous_format' or 'preserve_format' only." | ||
) | ||
# ONNX does not have the notion of memory_format. It is always treated as a no-op. | ||
return op.Identity(self) |
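The fragment above is the tail of the updated torchlib function: the quoted message presumably guards unsupported memory_format values, and supported ones fall through to an Identity. A minimal self-contained sketch of the same idea in plain onnxscript (the function name contiguous_as_noop is illustrative, not the PR's actual registration):

from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def contiguous_as_noop(self: FLOAT[...]) -> FLOAT[...]:
    # ONNX has no notion of memory_format, so the tensor passes through unchanged.
    return op.Identity(self)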
I'm wondering whether, if we do this, users calling this function will be misled into thinking that all memory formats were handled successfully, which is not the case.
If the exporter can handle this op in an earlier phase, that would be fine, and we should leave a comment here explaining it.
My understanding is that the memory format is an internal representation detail and does not affect the computed result. I gathered that from https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html. Please feel free to correct me.
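For illustration, a quick PyTorch check of that reading (not part of this PR): changing the memory format changes the strides, not the values.

import torch

x = torch.randn(2, 3, 4, 4)
y = x.contiguous(memory_format=torch.channels_last)

# Same values, different physical layout.
assert torch.equal(x, y)
print(x.stride())  # (48, 16, 4, 1)
print(y.stride())  # (48, 1, 12, 3)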
@BowenBao do you have more info on this op? Do you think we should filter it out in an fx pass?
Chatted offline. Handling this in an fx pass offers a more fundamental solution that guarantees correctness, but it requires much larger effort and targets only edge cases, so it does not make the cut in terms of priorities. Hence the approach in this PR is preferred.
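For reference, the fx-pass alternative discussed here could look roughly like the sketch below, which rewires aten.contiguous calls to their input before export. This is a hypothetical illustration, not code from this PR.

import torch

def drop_contiguous_nodes(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target is torch.ops.aten.contiguous.default:
            # Rewire consumers to the node's input, then remove the node.
            node.replace_all_uses_with(node.args[0])
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm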
@BowenBao PTAL. Thanks!
"memory_format value supports 'contiguous_format' or 'preserve_format' only." | ||
) | ||
# ONNX does not have the notion of memory_format. It is always treated as a no-op. | ||
return op.Identity(self) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Chatted offline. Handling in fx pass offers a more fundamental solution that asserts correctness, yet it requires much larger effort and targets only edge cases, which does not cut it in terms of priorities. Hence the approach in this PR is preferred.
Make contiguous & device_put a no-op because memory format and device are not notions in ONNX.
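The same reasoning applies to device_put, since ONNX likewise has no notion of device placement. A minimal sketch with an assumed function name, plus an eager-mode check that the output equals the input:

import numpy as np
from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def device_put_as_noop(self: FLOAT[...]) -> FLOAT[...]:
    # Device placement is not representable in ONNX; return the value unchanged.
    return op.Identity(self)

x = np.random.rand(2, 3).astype(np.float32)
np.testing.assert_array_equal(device_put_as_noop(x), x)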