feat(atenlib): create tests with OpInfo #208
Conversation
Codecov Report

@@              Coverage Diff               @@
##   gh/justinchuby/3/base    #208    +/-  ##
==============================================
- Coverage        75.54%   71.55%   -3.99%
==============================================
  Files               89       93       +4
  Lines             7216     8779    +1563
==============================================
+ Hits              5451     6282     +831
- Misses            1765     2497     +732
Typo in folder name "function_libs"
    # selu(Tensor self) -> Tensor
-    raise NotImplementedError()
+    return op.Selu(self)
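For context, below is a minimal sketch of what such a function-library entry could look like, assuming the module-level `op` is an onnxscript opset; the exact imports and opset version used in the PR may differ.

# Hedged sketch, not the PR's exact file contents.
from onnxscript import opset18 as op

def aten_selu(self):
    # selu(Tensor self) -> Tensor
    # aten::selu maps one-to-one onto ONNX Selu with its default alpha
    # and gamma attributes, so the body is a single op call.
    return op.Selu(self)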
Why do we put logic like this within ONNX Script? As we discussed before, logic specific to PyTorch should be left on the PyTorch exporter side, right?
A typical symbolic function in the current PyTorch exporter mainly does two things:
- Transform the aten inputs into ONNX inputs and attributes.
- Find a suitable ONNX op for the aten op, or combine several ONNX ops to implement the same behavior as the given aten op.
By doing this, we would leave only part 1 in the PyTorch exporter. Is this expected?
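To make the split concrete, here is a hedged sketch of the two parts; all names (aten_leaky_relu, export_leaky_relu, the attrs dict) are illustrative, not this PR's actual API.

# Hedged sketch with hypothetical names, assuming onnxscript's eager-mode opsets.
from onnxscript import opset18 as op

# Part 2 - function library: choose/combine ONNX ops for one aten op.
def aten_leaky_relu(self, negative_slope: float = 0.01):
    # aten::leaky_relu maps onto a single ONNX op plus one attribute rename.
    return op.LeakyRelu(self, alpha=negative_slope)

# Part 1 - exporter glue: adapt aten inputs/attributes, then delegate.
def export_leaky_relu(aten_inputs: list, aten_attrs: dict):
    negative_slope = float(aten_attrs.get("negative_slope", 0.01))
    return aten_leaky_relu(aten_inputs[0], negative_slope=negative_slope)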
More of (2) should be in the function lib, so that only the glue logic (anything not expressible as functions) lives in the exporter. We should aim for minimal op logic in the exporter, while still allowing full control so changes can be made independently.
About "minimal op logic on the exporter", may I know the reason why we are doing this? What's the difference between calling these troch_aten ops and onnx scripts ops from PyTorch exporter?
# device is provided by instantiate_device_type_tests, but we only want to run on cpu.
assert device == "cpu"

samples = op.sample_inputs(
Why isn't there "shape" information for a sample input?
The shape is not explicitly used in this test because onnxscript can handle it in its evaluator. Any concerns?
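To illustrate, below is a hedged sketch of the OpInfo-driven loop, simplified from what ops_correctness_test.py does; onnx_function stands in for the looked-up onnxscript implementation.

# Hedged sketch; simplified, not the test file's exact code.
import torch
from torch.testing._internal.common_methods_invocations import OpInfo

def check_op(opinfo: OpInfo, onnx_function, device: str = "cpu"):
    # OpInfo generates representative inputs, so shapes never need to be
    # hard-coded here; onnxscript's evaluator handles them at call time.
    samples = opinfo.sample_inputs(device, torch.float32, requires_grad=False)
    for sample in samples:
        torch_output = opinfo.op(sample.input, *sample.args, **sample.kwargs)
        function_output = onnx_function(
            sample.input.numpy(), *sample.args, **sample.kwargs
        )
        # Use torch testing to ensure dtypes and shapes match.
        torch.testing.assert_close(torch_output, torch.tensor(function_output))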
requirements-dev.txt (outdated)
@@ -12,6 +12,8 @@ sphinx-gallery
 pydata_sphinx_theme
 
+# ATen lib
+typing_extensions
Could you re-sort them in alphabetical order?
Done
)

# Use torch testing to ensure dtypes and shapes match
torch.testing.assert_close(
Do we need to consider the case of multiple outputs?
How about this:
assert all(torch.allclose(o, torch.tensor(o_ort)) for o, o_ort in zip(torch_output, function_output))
Good catch. I am inclined to keep it as is for now and expand to multiple outputs when needed.
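When that expansion happens, one hedged option (not part of this PR) would be to normalize both results to tuples and reuse the same assert_close per output:

# Hedged sketch of a possible future multi-output comparison.
import torch

def assert_outputs_close(torch_output, function_output):
    expected = torch_output if isinstance(torch_output, tuple) else (torch_output,)
    actual = function_output if isinstance(function_output, tuple) else (function_output,)
    assert len(expected) == len(actual), "output arity mismatch"
    for e, a in zip(expected, actual):
        # Reuse assert_close so dtype and shape mismatches still fail loudly.
        torch.testing.assert_close(e, torch.tensor(a))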
LGTM, thanks!
Accidentally merged this into #223. I think that's ok and I will close this.
ghstack-source-id: 23faf0d
Pull Request resolved: microsoft/onnxscript#208