
Conversation

@justinchuby (Collaborator) commented Dec 13, 2022

This allows mypy to analyze typing for annotated functions. Otherwise it complains: Untyped decorator makes function "ones_like" untyped [misc].
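For context, a minimal sketch of the pattern (the decorator name here is hypothetical, not the actual onnxscript API): annotating a decorator with ParamSpec/TypeVar preserves the wrapped function's signature, so mypy can see through it.

```python
# Hypothetical sketch: a typed pass-through decorator. The ParamSpec/TypeVar
# annotations preserve the wrapped function's signature for mypy, silencing
# "Untyped decorator makes function ... untyped [misc]".
from typing import Callable, TypeVar

from typing_extensions import ParamSpec  # typing.ParamSpec on Python 3.10+

P = ParamSpec("P")
R = TypeVar("R")


def script_like(func: Callable[P, R]) -> Callable[P, R]:  # hypothetical name
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return func(*args, **kwargs)

    return wrapper


@script_like  # mypy can now type-check calls to ones_like
def ones_like(x: list[float]) -> list[float]:
    return [1.0 for _ in x]
```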

Implemented ops

- lt
- gt
- round
- clamp_min
- clamp_max
- clamp
- repeat
- ones_like

Test:
- Create the ability to skip particular sub tests
- Create a helper function to transform torch inputs into numpy for onnxscript to run on (see the sketch below)
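A minimal sketch of what such a torch-to-numpy helper could look like (the helper name and exact handling are assumptions, not necessarily the code in this PR):

```python
# Hypothetical sketch of a helper that converts torch inputs to numpy so the
# onnxscript eager evaluator can run on them. Name and scope are assumptions.
from typing import Any

import torch


def _convert_tensor_to_numpy(value: Any) -> Any:
    if isinstance(value, torch.Tensor):
        # detach() drops autograd history; cpu() handles CUDA tensors
        return value.detach().cpu().numpy()
    if isinstance(value, (list, tuple)):
        return type(value)(_convert_tensor_to_numpy(v) for v in value)
    # Scalars and other Python values pass through unchanged
    return value
```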
justinchuby added a commit that referenced this pull request Dec 13, 2022
ghstack-source-id: e22613a
Pull Request resolved: #252
justinchuby added a commit that referenced this pull request Dec 13, 2022
ghstack-source-id: 8a19993
Pull Request resolved: #252
codecov bot commented Dec 13, 2022

Codecov Report

Merging #252 (3967967) into main (d99e4b8) will increase coverage by 0.10%.
The diff coverage is 82.75%.

@@            Coverage Diff             @@
##             main     #252      +/-   ##
==========================================
+ Coverage   71.82%   71.93%   +0.10%     
==========================================
  Files          93       93              
  Lines        8899     8917      +18     
==========================================
+ Hits         6392     6414      +22     
+ Misses       2507     2503       -4     
| Impacted Files | Coverage Δ |
| --- | --- |
| onnxscript/function_libs/torch_aten/ops/nn.py | 51.56% <33.33%> (-0.43%) ⬇️ |
| onnxscript/function_libs/torch_aten/ops/core.py | 51.74% <93.75%> (+0.58%) ⬆️ |
| ...t/function_libs/torch_aten/ops_correctness_test.py | 95.93% <100.00%> (+0.24%) ⬆️ |
| onnxscript/test/converter_test.py | 87.30% <0.00%> (+0.22%) ⬆️ |
| onnxscript/converter.py | 91.22% <0.00%> (+0.26%) ⬆️ |


justinchuby added a commit that referenced this pull request Dec 13, 2022
ghstack-source-id: 23e8ab8
Pull Request resolved: #252
justinchuby added a commit that referenced this pull request Dec 13, 2022
ghstack-source-id: 41eef31
Pull Request resolved: #252
@justinchuby added the labels module: torchlib (Related to the torch/aten function lib in development) and change base before merge (Remember to change the merge base to main when the PR is ready to merge) Dec 13, 2022
@justinchuby requested a review from fatcat-z December 13, 2022 23:45
justinchuby added a commit that referenced this pull request Dec 14, 2022
ghstack-source-id: 5def494
Pull Request resolved: #252
justinchuby added a commit that referenced this pull request Dec 14, 2022
ghstack-source-id: ddd2c6a
Pull Request resolved: #252
```diff
@@ -4461,7 +4459,13 @@ def aten_symeig(
 def aten_t(self: TensorType) -> TensorType:
     # t(Tensor(a) self) -> Tensor(a)
 
-    raise NotImplementedError()
+    # TODO(justinchuby): Make rank a function
+    rank = op.Shape(op.Shape(self))  # type: ignore[arg-type]
```
Contributor:

Should it be: rank = op.Size(op.Shape(self)) ?

Collaborator (Author):

Good call! I didn't know there was a Size op. Will update.

Collaborator (Author):

Done
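For reference, a hedged sketch of what the resolved aten_t could look like with the suggested Size call; the exact control flow in the merged code may differ, and the Python `if` is illustrative of the trace-time branch.

```python
# Sketch only: Size(Shape(x)) yields the rank as a scalar. 2-D inputs are
# transposed; 0-D and 1-D inputs are returned unchanged, matching torch.t.
def aten_t(self: TensorType) -> TensorType:
    # t(Tensor(a) self) -> Tensor(a)
    rank = op.Size(op.Shape(self))  # rank as a scalar INT64
    if rank == 2:
        result = op.Transpose(self, perm=[1, 0])
    else:
        result = op.Identity(self)
    return result
```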

```diff
     # linear(Tensor input, Tensor weight, Tensor? bias=None) -> Tensor
 
-    raise NotImplementedError()
+    # FIXME(justinchuby): Enable the test
```
Contributor:
In the current PyTorch exporter, if the rank of the input is 2 and bias is not None, it returns the result of addmm(). Code is here.

Why do we remove that logic here?

Collaborator (Author):

I think we can leave this to downstream optimization to keep the graph as simple and expressive as possible. What do you think?

Also, addmm is an ATen op. I think we should avoid calling other ATen ops and instead extract logic into common functions only when needed (which for now is not well supported by onnxscript, so we don't call other functions at all yet).

Contributor:

For optimization purposes, I think we should add a comment here to mark this point so that we won't forget it when we start the optimization work.

Even though these functions are named with an aten_ prefix, they are still onnxscript functions, essentially the same as other ONNX functions. I don't think we need to avoid calling each other within this ATen function lib. Do you have any examples/ideas to describe the cons?

@justinchuby (Collaborator, Author), Dec 14, 2022:

(1) Since they have ATen signatures, we have seen many cases where additional params need to be provided to make the call correct. (2) Conceptually, the ATen functions interface with ATen ops (external facing), so calling them internally overloads their responsibility; I want to maintain a clear conceptual model/boundary.

This also helps with the representation of the exported graph. When users see an ATen function, they will know there is a corresponding op in their model.

Collaborator (Author):
Added a comment
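To illustrate the design point, a hedged sketch of linear expressed directly with ONNX primitives, leaving any addmm/Gemm fusion to downstream optimization (signatures simplified; not the exact code from this PR):

```python
# Sketch: linear(input, weight, bias) = input @ weight.T + bias, built from
# ONNX primitives instead of calling another ATen function such as addmm.
def aten_linear(input, weight, bias=None):
    # linear(Tensor input, Tensor weight, Tensor? bias=None) -> Tensor
    result = op.MatMul(input, op.Transpose(weight, perm=[1, 0]))
    if bias is not None:  # Python-level check at trace time
        result = op.Add(result, bias)
    return result
```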

@justinchuby changed the base branch from gh/justinchuby/12/base to main December 15, 2022 04:29
@fatcat-z (Contributor) left a comment:

LGTM, thanks!

@justinchuby merged commit b2d3d27 into main Dec 15, 2022
@justinchuby deleted the gh/justinchuby/12/head branch December 15, 2022 05:58
justinchuby added a commit that referenced this pull request Dec 15, 2022
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #256
* __->__ #255
* #252

A bunch of matmul and math ops
justinchuby added a commit that referenced this pull request Dec 15, 2022
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* __->__ #256
* #255
* #252

Some more math and matrix ops
@justinchuby mentioned this pull request Dec 15, 2022
justinchuby added a commit that referenced this pull request Jan 16, 2023
ghstack-source-id: 798ac41
Pull Request resolved: #252