
feat(atenlib): ops 3/n #255

Merged: justinchuby merged 40 commits into main from gh/justinchuby/13/head, Dec 15, 2022
Conversation

@justinchuby (Collaborator) commented Dec 14, 2022

Stack from ghstack (oldest at bottom):

A bunch of matmul and math ops

Annotate the decorators so that mypy can analyze typing for the decorated functions; otherwise it complains: Untyped decorator makes function "ones_like" untyped [misc].
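
The fix can be sketched with a decorator annotated via a TypeVar bound to Callable, which tells mypy the decorated function keeps its original signature. This is an illustrative sketch (the `traced` decorator name and the toy `ones_like` body are hypothetical, not the library's actual API):

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


def traced(func: F) -> F:
    """Annotated as F -> F so mypy sees the wrapped function as typed,
    avoiding: Untyped decorator makes function "..." untyped [misc]."""
    # A real decorator would register or compile func; this sketch
    # returns it unchanged.
    return func


@traced
def ones_like(n: int) -> int:
    # Toy stand-in for the real op, only to show the signature survives.
    return 1


print(ones_like(5))  # 1
```

With an unannotated decorator, mypy would instead erase `ones_like`'s parameter and return types.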

Implemented ops

- lt
- gt
- round
- clamp_min
- clamp_max
- clamp
- repeat
- ones_like
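
For reference, the clamp family reduces to element-wise min/max (in ONNX terms, Max, Min, and Clip). A scalar sketch of the intended semantics, not the PR's actual onnxscript code:

```python
from typing import Optional


def clamp_min(x: float, bound: float) -> float:
    # Lower-bound x; corresponds to ONNX Max.
    return max(x, bound)


def clamp_max(x: float, bound: float) -> float:
    # Upper-bound x; corresponds to ONNX Min.
    return min(x, bound)


def clamp(x: float, lo: Optional[float] = None, hi: Optional[float] = None) -> float:
    # clamp chains both bounds; with both set it corresponds to ONNX Clip.
    if lo is not None:
        x = clamp_min(x, lo)
    if hi is not None:
        x = clamp_max(x, hi)
    return x


print(clamp(3.0, lo=-1.0, hi=1.0))  # 1.0
print(clamp(-2.5, lo=-1.0))         # -1.0
```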


Test:
- Add the ability to skip particular sub-tests
- Add a helper function to convert torch inputs to numpy so onnxscript can run on them
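
The two test helpers can be sketched as follows. This is a hedged sketch: the names and the duck-typed `.numpy()` check are illustrative so it runs without torch installed; the real helper would special-case torch.Tensor directly.

```python
from typing import Any, Callable, Dict


def to_numpy(value: Any) -> Any:
    """Recursively convert tensor-like inputs so onnxscript can run on
    numpy arrays. CPU torch.Tensor exposes .numpy(), which the duck-typed
    check below relies on."""
    if hasattr(value, "numpy"):
        return value.numpy()
    if isinstance(value, (list, tuple)):
        return type(value)(to_numpy(v) for v in value)
    if isinstance(value, dict):
        return {k: to_numpy(v) for k, v in value.items()}
    return value


def should_skip(op_name: str, sample_repr: str,
                skips: Dict[str, Callable[[str], bool]]) -> bool:
    """Skip a particular sub-test when a matcher registered for the op
    matches this sample (illustrative shape for a skip table)."""
    matcher = skips.get(op_name)
    return matcher is not None and matcher(sample_repr)


skips = {"addmm": lambda s: "int8" in s}
print(should_skip("addmm", "dtype=int8", skips))  # True
print(should_skip("round", "dtype=int8", skips))  # False
```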

justinchuby added a commit that referenced this pull request Dec 14, 2022
A bunch of matmul ops

ghstack-source-id: da6585a
Pull Request resolved: #255
justinchuby requested a review from fatcat-z, December 14, 2022 00:25
justinchuby added the labels module: torchlib (Related to the torch/aten function lib in development) and change base before merge (Remember to change the merge base to main when the PR is ready to merge), Dec 14, 2022
justinchuby added a commit that referenced this pull request Dec 14, 2022
A bunch of matmul ops

ghstack-source-id: ad9d34b
Pull Request resolved: #255
@codecov (bot) commented Dec 14, 2022

Codecov Report

Merging #255 (5c7fa63) into main (b2d3d27) will increase coverage by 0.05%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #255      +/-   ##
==========================================
+ Coverage   71.93%   71.98%   +0.05%     
==========================================
  Files          93       93              
  Lines        8917     8918       +1     
==========================================
+ Hits         6414     6420       +6     
+ Misses       2503     2498       -5     
Impacted Files Coverage Δ
...t/function_libs/torch_aten/ops_correctness_test.py 95.93% <ø> (ø)
onnxscript/function_libs/torch_aten/ops/core.py 52.25% <100.00%> (+0.51%) ⬆️
onnxscript/test/converter_test.py 87.07% <0.00%> (-0.23%) ⬇️


justinchuby changed the title from feat(atenlib): ops 3/n to feat(atenlib): ops 4/n, Dec 14, 2022
justinchuby changed the title back to feat(atenlib): ops 3/n, Dec 14, 2022
Quoted diff context:

dtypes=[torch.uint8, torch.int8, torch.int16],
    reason="MatMul is not defined on int16/int8/uint8 tensors",
),
xfail(
@fatcat-z (Contributor) commented Dec 14, 2022:
What's the difference between these two "addmm" entries here?

@justinchuby (Collaborator, Author) replied:
There is a “variant” param which represents a different test. OpInfos are identified by name and variant. The variant is usually empty. When there is a variant OpInfo we need to specify it so that they match.
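
The matching rule can be illustrated with a standalone sketch (hypothetical data; in torch's actual OpInfo entries the variant field is `variant_test_name`, and the "decomposed" variant name here is only an example):

```python
# OpInfo-style tests are keyed by (name, variant); two "addmm" entries
# with different variants are distinct tests, so a skip/xfail table must
# name the variant to match the intended one.
op_db = [
    {"name": "addmm", "variant": ""},            # default variant
    {"name": "addmm", "variant": "decomposed"},  # hypothetical named variant
]


def find(name: str, variant: str = ""):
    """Return the OpInfo entries matching both name and variant."""
    return [op for op in op_db
            if op["name"] == name and op["variant"] == variant]


print(len(find("addmm")))                # 1 (only the default variant)
print(len(find("addmm", "decomposed")))  # 1
```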

@fatcat-z (Contributor) approved these changes:
LGTM, thanks!

justinchuby added a commit that referenced this pull request Dec 15, 2022
justinchuby changed the base branch from gh/justinchuby/13/base to main, December 15, 2022 05:58
justinchuby merged commit f7ac851 into main, Dec 15, 2022
justinchuby deleted the gh/justinchuby/13/head branch, December 15, 2022 06:06
justinchuby added a commit that referenced this pull request Dec 15, 2022
Stack from ghstack (https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #256
* #255
* #252

Some more math and matrix ops
justinchuby mentioned this pull request Dec 15, 2022
justinchuby added a commit that referenced this pull request Jan 16, 2023
A bunch of matmul ops

ghstack-source-id: c65fda6
Pull Request resolved: #255
Labels

change base before merge (Remember to change the merge base to main when the PR is ready to merge); module: torchlib (Related to the torch/aten function lib in development)
