feat(atenlib): add, sub, mul #235


Merged
merged 48 commits into main on Dec 7, 2022

Conversation

justinchuby (Collaborator) commented Dec 6, 2022

Stack from ghstack (oldest at bottom):

justinchuby added a commit that referenced this pull request Dec 6, 2022
ghstack-source-id: 9b83578
Pull Request resolved: #235
@justinchuby justinchuby added the module: torchlib Related to the torch/aten function lib in development label Dec 6, 2022
return op.Mul(self, other)


def aten_mul_bool(self: TensorType, other: TensorType) -> TensorType:
justinchuby (Collaborator, Author) commented:

Need to test this as well
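The test the author asks for has a simple reference semantics: for booleans, elementwise multiplication coincides with logical AND, which is what a bool overload of aten_mul needs to preserve. A minimal stand-in check (hypothetical, plain Python rather than the ONNX ops used in the PR):

```python
# Hypothetical reference check, not the onnxscript implementation:
# boolean "multiplication" should agree with integer multiplication
# restricted to {False, True}, i.e. logical AND.

def mul_bool(a: bool, b: bool) -> bool:
    """Boolean 'multiplication': True only when both inputs are True."""
    return a and b

# Exhaustive check over the whole bool domain.
for a in (False, True):
    for b in (False, True):
        assert mul_bool(a, b) == bool(int(a) * int(b))
```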


codecov bot commented Dec 6, 2022

Codecov Report

Merging #235 (1184f6b) into main (c9b3ac6) will increase coverage by 0.04%.
The diff coverage is 92.30%.

@@            Coverage Diff             @@
##             main     #235      +/-   ##
==========================================
+ Coverage   71.65%   71.69%   +0.04%     
==========================================
  Files          93       93              
  Lines        8827     8833       +6     
==========================================
+ Hits         6325     6333       +8     
+ Misses       2502     2500       -2     
Impacted Files Coverage Δ
...t/function_libs/torch_aten/ops_correctness_test.py 93.33% <ø> (ø)
onnxscript/function_libs/torch_aten/ops/core.py 50.73% <92.30%> (+0.40%) ⬆️
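The reported percentages can be reproduced from the hit/line counts in the table above. A small sketch, assuming Codecov truncates (rather than rounds) to two decimal places, which is the convention that matches the 71.65%/71.69% figures:

```python
# Reproduce the Codecov percentages from the table's raw counts.
# Assumption: Codecov truncates to two decimals (matches 71.65/71.69).

def coverage_bp(hits: int, lines: int) -> int:
    """Coverage in basis points (hundredths of a percent), truncated."""
    return hits * 10000 // lines

main_bp = coverage_bp(6325, 8827)  # main @ c9b3ac6: 6325 hits / 8827 lines
head_bp = coverage_bp(6333, 8833)  # PR   @ 1184f6b: 6333 hits / 8833 lines

assert main_bp == 7165            # 71.65%
assert head_bp == 7169            # 71.69%
assert head_bp - main_bp == 4     # +0.04%, as reported
```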


github-advanced-security bot left a comment:

lintrunner found more than 10 potential problems in the proposed changes. Check the Files changed tab for more details.

justinchuby added a commit that referenced this pull request Dec 6, 2022
ghstack-source-id: 89663f7
Pull Request resolved: #235
justinchuby added a commit that referenced this pull request Dec 6, 2022
ghstack-source-id: f80f02b
Pull Request resolved: #235
justinchuby added a commit that referenced this pull request Dec 6, 2022
return op.Mul(self, other)


def aten_mul_bool(self: BOOL, other: BOOL) -> BOOL:
A collaborator commented:

BOOL[...] instead of BOOL ? Unless the type-annotation-design is going to change the meaning of BOOL, which is currently a tensor of rank-0

justinchuby (Collaborator, Author) replied:

yeah but mypy complains about BOOL[...]

justinchuby (Collaborator, Author):

I’ll see if I can turn it off

justinchuby (Collaborator, Author):

Couldn't make it work. I will leave it as is for now and update when things fit together. Currently the types for the ops are also BOOL etc., I think?
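The mypy complaint discussed above is typical when a class used in an annotation does not advertise subscription support. A minimal standalone illustration with a hypothetical BOOL class, not onnxscript's actual type machinery: subscripting works at runtime (and can be made acceptable to type checkers) once the class defines `__class_getitem__` (PEP 560).

```python
# Hypothetical illustration, not onnxscript's real BOOL type.
# Without __class_getitem__, `BOOL[...]` raises TypeError at runtime
# and mypy reports the type as not subscriptable in annotations.

class BOOL:
    def __class_getitem__(cls, item):
        # Degenerate sketch: accept any shape subscript and discard it.
        return cls

def aten_mul_bool(self: BOOL[...], other: BOOL[...]) -> BOOL[...]:
    """Signature sketch using the subscripted form from the review."""
    return self and other

# Runtime subscription now works, so the annotations above evaluate.
assert BOOL[...] is BOOL
```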

@@ -157,14 +157,19 @@ def wrapped(fn):

# Ops to be tested for numerical consistency between onnx and pytorch
OPINFO_FUNCTION_MAPPING = {
"add": core_ops.aten_add,
A collaborator commented:

For my own knowledge: what is core_ops a reference to? I see core.py above.

The same collaborator replied:

Oh, I see — it's just an alias for the same module.
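The mapping being discussed ties OpInfo names to the functions under test. A schematic version of how such a table can drive a numerical-consistency check (hypothetical: plain Python arithmetic stands in for the ONNX ops and the PyTorch reference used by the real test suite):

```python
import operator

# Stand-ins for the functions under test (core_ops.aten_* in the PR).
def aten_add(a, b): return a + b
def aten_sub(a, b): return a - b
def aten_mul(a, b): return a * b

# Schematic OPINFO_FUNCTION_MAPPING: op name -> implementation under test.
OPINFO_FUNCTION_MAPPING = {
    "add": aten_add,
    "sub": aten_sub,
    "mul": aten_mul,
}

# Reference semantics to compare against (PyTorch in the real suite).
REFERENCE = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

# Consistency check over a handful of sample inputs.
for name, fn in OPINFO_FUNCTION_MAPPING.items():
    for a, b in [(2, 3), (-1.5, 4.0), (0, 7)]:
        assert fn(a, b) == REFERENCE[name](a, b)
```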

fatcat-z (Contributor) approved:

LGTM, thanks!

justinchuby added a commit that referenced this pull request Dec 7, 2022
ghstack-source-id: f3fc07a
Pull Request resolved: #235
justinchuby added a commit that referenced this pull request Dec 7, 2022
@justinchuby justinchuby changed the base branch from gh/justinchuby/7/base to main December 7, 2022 18:24
justinchuby added a commit that referenced this pull request Dec 7, 2022
ghstack-source-id: ac01a5b
Pull Request resolved: #235
justinchuby added a commit that referenced this pull request Dec 7, 2022
@justinchuby justinchuby merged commit f802167 into main Dec 7, 2022
@justinchuby justinchuby deleted the gh/justinchuby/7/head branch December 7, 2022 19:11
@justinchuby justinchuby mentioned this pull request Dec 15, 2022