
Make aten::contiguous and device_put no-op | fix(torchlib) #835


Merged: 7 commits merged into main from justinchu/cleanup-traceonly on Jul 12, 2023

Conversation


justinchuby (Collaborator) commented on Jul 6, 2023

Make contiguous and device_put no-ops, because memory format and device are not notions in ONNX.
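
In rough terms, both ops now simply forward their input through ONNX Identity. A minimal sketch of the idea (the import, function signatures, and error type below are illustrative, not the exact torchlib code):

from onnxscript import opset18 as op  # illustrative; torchlib wires up its own opset handle

def aten_contiguous(self, memory_format: str = "contiguous_format"):
    if memory_format not in ("contiguous_format", "preserve_format"):
        raise NotImplementedError(
            "memory_format value supports 'contiguous_format' or 'preserve_format' only."
        )
    # ONNX has no notion of memory_format, so contiguous is a pass-through.
    return op.Identity(self)

def prims_device_put(self, device):
    # ONNX also has no notion of device placement, so device_put is a pass-through.
    return op.Identity(self)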

Make contiguous an no-op because memory format is not a notion in ONNX
justinchuby added the "module: torchlib" label (Related to the torch/aten function lib in development) on Jul 6, 2023
justinchuby changed the title from "Update aten::contiguous | fix(torchlib)" to "Make aten::contiguous no-op | fix(torchlib)" on Jul 6, 2023

codecov bot commented Jul 6, 2023

Codecov Report

Merging #835 (2325f94) into main (5812a4f) will increase coverage by 0.01%.
The diff coverage is 85.71%.

@@            Coverage Diff             @@
##             main     #835      +/-   ##
==========================================
+ Coverage   76.45%   76.47%   +0.01%     
==========================================
  Files         112      112              
  Lines       13371    13371              
  Branches     1342     1341       -1     
==========================================
+ Hits        10223    10225       +2     
+ Misses       2816     2815       -1     
+ Partials      332      331       -1     
Impacted Files                                           Coverage Δ
...ipt/tests/function_libs/torch_lib/ops_test_data.py   96.72% <ø> (ø)
onnxscript/function_libs/torch_lib/ops/prims.py         51.98% <75.00%> (+0.38%) ⬆️
onnxscript/function_libs/torch_lib/ops/core.py          76.91% <100.00%> (+0.06%) ⬆️

"memory_format value supports 'contiguous_format' or 'preserve_format' only."
)
# ONNX does not have the notion of memory_format. It is always treated as a no-op.
return op.Identity(self)
Contributor

I'm wondering whether, if we do this, users calling this function will get the wrong impression that all memory formats were handled successfully, which is not the case.
If the exporter can handle this op in an earlier phase, this should be fine, and we had better leave a comment here.

justinchuby (Collaborator, Author) commented on Jul 6, 2023

My understanding is that the memory format is an internal representation and does not affect the computed result. I gathered that from https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html. Please feel free to correct me.
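
For what it's worth, that behavior is easy to check in PyTorch: converting a tensor to channels_last changes its strides but not its values (quick illustration, not part of this PR):

import torch

x = torch.randn(2, 3, 4, 4)
y = x.contiguous(memory_format=torch.channels_last)
print(x.stride(), y.stride())  # strides differ: the layout changed
print(torch.equal(x, y))       # True: element values are unchanged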

justinchuby (Collaborator, Author)

@BowenBao do you have more info on this op? Do you think we should filter it out in an FX pass?

Contributor

Chatted offline. Handling this in an FX pass offers a more fundamental solution that asserts correctness, but it requires a much larger effort and targets only edge cases, so it does not make the cut in terms of priorities. Hence the approach in this PR is preferred.

justinchuby changed the title from "Make aten::contiguous no-op | fix(torchlib)" to "Make aten::contiguous and device_put no-op | fix(torchlib)" on Jul 11, 2023
justinchuby mentioned this pull request on Jul 11, 2023
justinchuby (Collaborator, Author)

@BowenBao PTAL. Thanks!

"memory_format value supports 'contiguous_format' or 'preserve_format' only."
)
# ONNX does not have the notion of memory_format. It is always treated as a no-op.
return op.Identity(self)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Chatted offline. Handling in fx pass offers a more fundamental solution that asserts correctness, yet it requires much larger effort and targets only edge cases, which does not cut it in terms of priorities. Hence the approach in this PR is preferred.

justinchuby merged commit 147ad9e into main on Jul 12, 2023
justinchuby deleted the justinchu/cleanup-traceonly branch on July 12, 2023 at 18:00
Labels
module: torchlib Related to the torch/aten function lib in development

3 participants