feat(atenlib): create tests with OpInfo #208


Closed
wants to merge 20 commits into gh/justinchuby/3/base from gh/justinchuby/3/head

Conversation

justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: 7dcc305
Pull Request resolved: #208
@justinchuby justinchuby changed the title feat(atenlib): Create sample functions and tests feat(atenlib): Create sample functions and tests with OpInfo Nov 23, 2022
justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: 89a1377
Pull Request resolved: #208
@justinchuby justinchuby changed the title feat(atenlib): Create sample functions and tests with OpInfo feat(atenlib): Create tests with OpInfo Nov 23, 2022
@justinchuby justinchuby changed the title feat(atenlib): Create tests with OpInfo feat(atenlib): create tests with OpInfo Nov 23, 2022
justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: 8ce7f0b
Pull Request resolved: #208
@codecov

codecov bot commented Nov 23, 2022

Codecov Report

Merging #208 (2beb25e) into gh/justinchuby/3/base (9a565a8) will decrease coverage by 3.98%.
The diff coverage is 94.18%.

@@                    Coverage Diff                    @@
##           gh/justinchuby/3/base     #208      +/-   ##
=========================================================
- Coverage                  75.54%   71.55%   -3.99%     
=========================================================
  Files                         89       93       +4     
  Lines                       7216     8779    +1563     
=========================================================
+ Hits                        5451     6282     +831     
- Misses                      1765     2497     +732     
Impacted Files Coverage Δ
...t/function_libs/torch_aten/ops_correctness_test.py 93.33% <93.33%> (ø)
onnxscript/function_libs/torch_aten/ops/core.py 50.32% <100.00%> (ø)
onnxscript/function_libs/torch_aten/ops/nn.py 52.36% <100.00%> (ø)
onnxscript/function_libs/torch_aten/typing.py 100.00% <0.00%> (ø)
onnxscript/evaluator.py 92.81% <0.00%> (+1.19%) ⬆️
onnxscript/utils.py 60.31% <0.00%> (+1.58%) ⬆️


justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: ef240eb
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: 580fb86
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: 22d5615
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Nov 23, 2022
ghstack-source-id: 7fa27d3
Pull Request resolved: #208
@gramalingam
Collaborator

Typo in folder name "function_libs"

@justinchuby justinchuby added the module: torchlib Related to the torch/aten function lib in development label Nov 29, 2022
@justinchuby justinchuby requested a review from xiaowuhu November 29, 2022 04:12
justinchuby added a commit that referenced this pull request Nov 30, 2022
ghstack-source-id: 202e9d8
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Nov 30, 2022
ghstack-source-id: bf23e43
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Dec 5, 2022
ghstack-source-id: e18c71e
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Dec 5, 2022
ghstack-source-id: 16d5b08
Pull Request resolved: #208
justinchuby added a commit that referenced this pull request Dec 6, 2022
ghstack-source-id: 988716f
Pull Request resolved: #208
# selu(Tensor self) -> Tensor

- raise NotImplementedError()
+ return op.Selu(self)
Contributor


Why do we put logic like this within ONNX Script? As we discussed before, logic specific to PyTorch should be left on the PyTorch exporter side, right?

Contributor


A general symbolic function in the current PyTorch exporter mainly does two things:

  1. Transform the ATen inputs into ONNX inputs and attributes.
  2. Find a proper ONNX op for the current ATen op, or combine several ONNX ops to implement the same function as the given ATen op.

By doing this, we will leave only part 1 in the PyTorch exporter. Is this expected?
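The two-step split described above can be sketched in plain Python. Note this is a hedged illustration: `FakeGraph` and `symbolic_leaky_relu` are hypothetical stand-ins, not the actual exporter API.

```python
# Hypothetical sketch of a symbolic function's two responsibilities.
# FakeGraph is a minimal stand-in for the exporter's graph builder.

class FakeGraph:
    """Records ONNX nodes instead of building a real graph."""
    def op(self, op_name, *inputs, **attributes):
        return (op_name, inputs, attributes)

def symbolic_leaky_relu(g, input_value, negative_slope):
    # Step 1: adapt ATen inputs to ONNX inputs/attributes --
    # the ATen scalar argument becomes an ONNX attribute.
    alpha = float(negative_slope)
    # Step 2: pick a matching ONNX op; LeakyRelu maps directly.
    return g.op("LeakyRelu", input_value, alpha=alpha)

node = symbolic_leaky_relu(FakeGraph(), "x", 0.01)
```

Under the proposal discussed here, step 2 would move into the function lib, leaving only step 1 (the input/attribute adaptation) in the exporter.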

Collaborator Author


More of (2) should be in the function lib, so only the glue logic (anything not expressible as functions) lives in the exporter. We should aim for minimal op logic in the exporter, while still allowing full control so changes can be made independently.

Contributor


About "minimal op logic in the exporter": may I know the reason why we are doing this? What's the difference between calling these torch_aten ops versus ONNX Script ops from the PyTorch exporter?

# device is provided by instantiate_device_type_tests, but we only want to run on CPU.
assert device == "cpu"

samples = op.sample_inputs(
Contributor


Why isn't there "shape" information for a sample input?

Collaborator Author


The shape is not explicitly used in this test because onnxscript handles it in its evaluator. Any concerns?

@@ -12,6 +12,8 @@ sphinx-gallery
 pydata_sphinx_theme
 
+# ATen lib
+typing_extensions
Contributor


Could you re-sort them in alphabetical order?

Collaborator Author


Done

)

# Use torch testing to ensure dtypes and shapes match
torch.testing.assert_close(
Contributor


Do we need to consider the case of multiple outputs?

How about this:

assert [torch.allclose(o, torch.tensor(o_ort)) for o, o_ort in zip(torch_output, function_output)]

Collaborator Author


Good catch. I am inclined to keep it as is for now and expand to multiple outputs when needed.
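One caveat about the suggested `assert [torch.allclose(...) for ...]` pattern: a non-empty list is truthy even when every element is `False`, so that assert can never fail. A minimal pure-Python sketch of the pitfall and the fix (illustrative only, no torch required):

```python
# A non-empty list is truthy even if every element is False, so
# `assert [allclose(o, o_ort) for ...]` cannot catch mismatches.
results = [False, False]        # pretend both output pairs mismatched

assert bool(results) is True    # the buggy pattern still "passes"
assert all(results) is False    # all() reports the real outcome

# Robust pattern: reduce with all(), or assert each pair individually.
assert not all(results)
```

If multi-output support is added later, wrapping the comprehension in `all(...)`, or looping with a per-pair `torch.testing.assert_close`, avoids this trap.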

Contributor

@fatcat-z fatcat-z left a comment

LGTM, thanks!

justinchuby added a commit that referenced this pull request Dec 6, 2022
@justinchuby
Collaborator Author

Accidentally merged this into #223. I think that's ok and I will close this.

@justinchuby justinchuby closed this Dec 7, 2022
@justinchuby justinchuby deleted the gh/justinchuby/3/head branch January 16, 2023 03:52
Indie365 pushed a commit to Indie365/onnxscript that referenced this pull request Oct 26, 2023