feat(atenlib): add ops (any, expand_as) #463
xiaowuhu merged 18 commits into microsoft:main from xiaowuhu:xiaouw/addOps(any,-expand_as)
Conversation
A couple of questions: why not use ReduceMax? Also: does ReduceSum not work for scalar inputs? I am wondering where exactly the logic breaks (with ONNX/ORT) if a zero-dimensional tensor is used.
Codecov Report
@@            Coverage Diff             @@
##             main     #463      +/-   ##
==========================================
+ Coverage   72.46%   72.52%   +0.06%
==========================================
  Files         109      109
  Lines       10607    10626      +19
  Branches     1093     1096       +3
==========================================
+ Hits         7686     7707      +21
+ Misses       2614     2612       -2
  Partials      307      307
Thanks for the reminder. I changed it to use ReduceMax now.
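For context, here is a minimal sketch of the ReduceMax-based pattern being discussed, assuming onnxscript's opset18 alias `op` and the dtypes from onnxscript.onnx_types; it is an illustration, not the exact code merged in this PR:

```python
from typing import Optional

from onnxscript import opset18 as op
from onnxscript.onnx_types import BOOL, INT64


def any_via_reduce_max(self, dim: Optional[int] = None, keepdim: bool = False):
    # Boolean tensors cannot be reduced directly, so cast to INT64 first.
    self_int = op.Cast(self, to=INT64.dtype)
    if dim is None:
        # No axes given: reduce over all dims; the max is 1 iff any element is True.
        reduced = op.ReduceMax(self_int, keepdims=0)
    else:
        axes = op.Constant(value_ints=[dim])
        reduced = op.ReduceMax(self_int, axes, keepdims=int(keepdim))
    return op.Cast(reduced, to=BOOL.dtype)
```

Unlike ReduceSum, the max of a 0/1 tensor is itself 0 or 1, so no extra comparison against zero is needed before casting back to BOOL.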
def aten_any(self: TensorType) -> TensorType:
@torch_op("aten::any", trace_only=True)
Why do we need trace_only here? It looks like all of the operations are safe.
Due to the Optional argument parsing.
Could you say more on what optional argument parsing is? I would also add a comment in the code on why trace_only was needed.
If we don't use trace_only, it throws an exception:
E TypeError: Required input 'dim: Attribute[None]' was not provided
I think this is the same as the issue we discussed in today's meeting: `if (OptionalHasElement(x)) then use(x)` is an issue. We need to find a solution to this. One possibility is to extend OptionalGetElement, but I wonder if that's enough for all cases.
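As a hedged sketch of why trace_only sidesteps this: with trace_only=True the function body runs as ordinary Python during export, so the optional dim can be tested with a plain `if` at trace time rather than with graph-level Optional* ops (the helper names below are hypothetical):

```python
# Hypothetical helpers standing in for the two graph-building branches.
def reduce_all_dims(x): ...
def reduce_one_dim(x, dim, keepdim): ...


def aten_any_trace_only(self, dim=None, keepdim: bool = False):
    # Because the body executes as plain Python at trace time, `dim is None`
    # is decided here, and only the chosen branch is recorded in the graph;
    # no OptionalHasElement/OptionalGetElement ops are emitted.
    if dim is None:
        return reduce_all_dims(self)
    return reduce_one_dim(self, dim, keepdim)
```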
def aten_any(self: TensorType) -> TensorType:
@torch_op("aten::any", trace_only=True)
def aten_any(self: TTensor, dim: Optional[int] = None, keepdim: bool = True) -> BOOL:
I think keepdim should default to False, or be set as keepdim: Optional[bool] = None and explicitly set to False if it is None.
That way we can avoid hard-coding keepdims=0 later.
Shouldn't the default value be determined by the spec of the aten op?
Looking at it again, I figured we are supporting two overloads of aten::any with this function. So for `- func: any(Tensor self) -> Tensor` we actually need to figure out what the behavior is, since both dim and keepdim are not provided and have no defaults. So I think this is fine.
And for `- func: any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor`, yeah, you are right: the current default is wrong and it should be False.
- func: any(Tensor self) -> Tensor
device_check: NoCheck # TensorIterator
structured_delegate: any.all_out
variants: method, function
dispatch:
SparseCPU, SparseCUDA: any_sparse
- func: any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor
device_check: NoCheck # TensorIterator
structured_delegate: any.out
variants: function, method
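To make the two schemas above concrete, a quick check with torch (an illustration added for reference, not part of the merged code):

```python
import torch

x = torch.tensor([[True, False], [False, False]])
# any(Tensor self): no dim/keepdim in the schema; reduces to a 0-d tensor.
assert torch.any(x).shape == ()
# any.dim(Tensor self, int dim, bool keepdim=False): dim is dropped by default.
assert torch.any(x, dim=1).shape == (2,)
assert torch.any(x, dim=1, keepdim=True).shape == (2, 1)
```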
cc @justinchuby It is weird that the wrong default is not caught in the op test; I wonder if it is just not enough coverage in the original OpInfo or something else.
I think we should not have combined the two overloads. And yes, you have a great point that we should verify the OpInfo to assess how much confidence it can provide us for each op before trusting it. I will add this to the authoring guide.
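A hedged sketch of the kind of direct check that would catch a wrong keepdim default; `exported_any` is a hypothetical handle to the ONNX-backed function, since the real torchlib tests are driven by OpInfo sample inputs:

```python
import torch

def test_any_dim_default_keepdim(exported_any):
    # `exported_any` is hypothetical: it stands in for the ONNX-backed aten_any.
    x = torch.rand(3, 4) > 0.5
    expected = torch.any(x, dim=1)    # aten's keepdim defaults to False
    actual = exported_any(x, dim=1)   # the default path must agree
    assert actual.shape == expected.shape == (3,)
```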