feat(atenlib): add ops (any, expand_as) #463
Conversation
A couple of questions: why not use ReduceMax? Also: does ReduceSum not work for scalar inputs? I am wondering where exactly the logic breaks (with ONNX/ORT) if a zero-dimensional tensor is used.
Codecov Report
@@ Coverage Diff @@
## main #463 +/- ##
==========================================
+ Coverage 72.46% 72.52% +0.06%
==========================================
Files 109 109
Lines 10607 10626 +19
Branches 1093 1096 +3
==========================================
+ Hits 7686 7707 +21
+ Misses 2614 2612 -2
Partials 307 307
Thanks for the reminder. I changed it to use ReduceMax now.
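(As a rough illustration of the ReduceMax approach discussed above, and not the code merged in this PR: the semantics come down to taking the max of a boolean mask. The helper name and the numpy stand-ins for the ONNX ops below are purely illustrative.)

```python
import numpy as np

def any_via_reduce_max(x: np.ndarray, dim=None, keepdim=False) -> np.ndarray:
    # ONNX ReduceMax historically does not accept bool inputs, so the usual trick
    # is to build an integer mask first; its max is 1 iff any element is nonzero.
    mask = (x != 0).astype(np.int64)
    reduced = mask.max(axis=dim, keepdims=keepdim)
    return reduced.astype(bool)

# any_via_reduce_max(np.array([[0, 0], [0, 3]]), dim=1) -> array([False,  True])
```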
@@ -269,10 +269,33 @@ def aten_angle(self: TensorType) -> TensorType:
     raise NotImplementedError()


-def aten_any(self: TensorType) -> TensorType:
+@torch_op("aten::any", trace_only=True)
Why do we need trace_only here? It looks like all of the operations are safe.
It's due to the Optional argument parsing.
Could you say more about what optional argument parsing means here? I would also add a comment in the code explaining why trace_only was needed.
If trace_only is not used, it throws this exception:
E TypeError: Required input 'dim: Attribute[None]' was not provide
I think this is the same as the issue we discussed in today's meeting: if (OptionalHasElement(x)) then use(x) is an issue. We need to find a solution to this. One possibility is to extend OptionalGetElement, but I wonder if that's enough for all cases.
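(For readers following along, a loose sketch of why trace_only sidesteps this: the None check runs as plain Python while the graph is being built, instead of having to be expressed with Optional*/If nodes inside the ONNX graph. numpy stands in for the actual ONNX ops here; this is not the torchlib implementation.)

```python
from typing import Optional
import numpy as np

def traced_any(x: np.ndarray, dim: Optional[int] = None, keepdim: bool = False):
    # In a trace_only function this branch is evaluated at export time, so the
    # resulting graph contains only the chosen path and never needs an
    # OptionalHasElement/OptionalGetElement construct for `dim`.
    if dim is None:
        return np.array(bool((x != 0).any()))
    return (x != 0).any(axis=dim, keepdims=keepdim)
```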
Thanks!
@@ -269,10 +269,32 @@ def aten_angle(self: TensorType) -> TensorType:
     raise NotImplementedError()


-def aten_any(self: TensorType) -> TensorType:
+@torch_op("aten::any", trace_only=True)
+def aten_any(self: TTensor, dim: Optional[int] = None, keepdim: bool = True) -> BOOL:
I think keepdim should default to False, or be declared as keepdim: Optional[bool] = None and explicitly set to False if None. That way we can avoid hard-coding keepdims=0 later.
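(A small sketch of what that suggestion amounts to; the helper name is made up for illustration, and the real function would feed the result into the Reduce op's keepdims attribute.)

```python
from typing import Optional

def resolve_keepdims(keepdim: Optional[bool]) -> int:
    # Treat a missing keepdim as the ATen default (False) and convert it to the
    # 0/1 integer that ONNX Reduce* ops expect, rather than hard-coding
    # keepdims=0 at the call site.
    return int(keepdim) if keepdim is not None else 0
```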
Shouldn't the default value be determined by the spec of the aten op?
Looking at it again, I realized we are supporting two overloads of aten::any with this function. For - func: any(Tensor self) -> Tensor we actually need to figure out what the behavior is, since both dim and keepdim are not provided and have no defaults; so I think this is fine. And for - func: any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor, yeah, you are right: the current default is wrong and it should be False.
- func: any(Tensor self) -> Tensor
  device_check: NoCheck   # TensorIterator
  structured_delegate: any.all_out
  variants: method, function
  dispatch:
    SparseCPU, SparseCUDA: any_sparse

- func: any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor
  device_check: NoCheck   # TensorIterator
  structured_delegate: any.out
  variants: function, method
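(To make the overload split concrete, here is a sketch of keeping the two entries above as separate functions, with numpy standing in for the ONNX ops; the actual torchlib registrations and signatures may differ.)

```python
import numpy as np

def aten_any(self: np.ndarray) -> np.ndarray:
    # any(Tensor self) -> Tensor: reduce over every element, no dim/keepdim args.
    return np.array(bool((self != 0).any()))

def aten_any_dim(self: np.ndarray, dim: int, keepdim: bool = False) -> np.ndarray:
    # any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor
    return (self != 0).any(axis=dim, keepdims=keepdim)
```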
cc @justinchuby It is weird that the wrong default was not caught in the op test; I wonder if it is just insufficient coverage in the original OpInfo or something else.
I think we should not have combined the two overloads. And yes, you have a great point that we should verify the OpInfo to assess how much confidence it can give us for each op before trusting it. I will add this to the authoring guide.