[ONNX] Refactor perfect/nearest match criteria to allow optional inputs and disallow mismatch attributes #106478
Conversation
🔗 Helpful links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106478. Note: links to docs will display an error until the docs builds have completed.
❗ 1 active SEV. ✅ 2 unrelated failures as of commit 631b5d9: the following jobs failed but were likely due to flakiness present on trunk and have been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@BowenBao PTAL
```python
r"""Calculate the inputs matching score of the OpSchema requirements to find the nearest match.

Only the functions which have the same number of inputs and attributes as the
OpSchema are eligible to be a nearest match candidate. Thus, we don't need to
```
maybe check and raise otherwise at the beginning of this function
It's guaranteed to have the same length of inputs and attributes, because `self._record_matching_score(function_inputs, function_attributes)` is only invoked after the checks are done in the `perfect_match` function (step 1).
```
@@ -569,18 +610,10 @@ def perfect_match_inputs(
            f"Actual {torch_input_compatible_types} vs\n"
            f"expected {allowed_types}\n"
        )
        # NOTE: This is still a candidate for nearest match, as it only mismatches on dtypes.
        self._record_matching_score(function_inputs, function_attributes)
        diagnostic.with_additional_message(f"match score: {self.match_score}\n")
        return False
```
nit: we are short-circuiting the diagnostic message for attr dtype mismatches if there is an input dtype mismatch...
I use a flag `is_perfect_match` to postpone the return, so the diagnostics can catch everything they need.
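The flag-based pattern described in this reply can be sketched as follows. This is a simplified, hypothetical stand-in (the names `perfect_match_inputs`, `allowed_types`, and the diagnostics list are illustrative, not the actual exporter API): instead of returning on the first dtype mismatch, the flag lets the check continue and collect every diagnostic message.

```python
def perfect_match_inputs(inputs, allowed_types, attributes, allowed_attr_types):
    """Check dtypes of inputs and attributes, collecting ALL diagnostics
    before returning instead of short-circuiting on the first mismatch."""
    is_perfect_match = True
    diagnostics = []

    # Step 3: input dtype checks.
    for value, allowed in zip(inputs, allowed_types):
        if type(value).__name__ not in allowed:
            diagnostics.append(
                f"input dtype mismatch: {type(value).__name__} vs {allowed}"
            )
            is_perfect_match = False  # keep going; don't return yet

    # Step 4: attribute dtype checks still run even after an input mismatch.
    for name, value in attributes.items():
        allowed = allowed_attr_types.get(name, set())
        if type(value).__name__ not in allowed:
            diagnostics.append(f"attribute {name!r} dtype mismatch")
            is_perfect_match = False

    return is_perfect_match, diagnostics
```

With an early `return False` inside the first loop, an attribute mismatch after an input mismatch would never be reported; the flag postpones the decision until both loops finish.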
@BowenBao added a case study which should be enough to explain the situations that this PR improves. Essentially, most of the changes are to align with the expectations of attributes from torchlib.
Thanks, this is great. Could you put the case studies into a comment and create a [NOTE: title] for it? There are also some comments I want to make.
LGTM. Just make sure Bowen's comments are addressed before merging :)
Done. I extended/modified the description on `_OpschemaWrapper`.
LG w/ comments.
```
Result: Perfect match.

2. [NOTE: Optional input]: The dispatcher recognizes optional inputs. However,
   the input can't be ignored. None must be provided.
```
nit: to check my understanding. Is this due to an onnxscript limitation that None cannot be set as the default value for an `Optional[xxx]`-typed argument? Such that `inputs = (Tensor([2, 3]),)` will not be matched, but otherwise it should? If there is an issue to link here, that would be great ...
I typed the above, but then I found the convolution aten signature is `- func: convolution(Tensor input, Tensor weight, Tensor? bias, ...`. So the optional input by design does not have a default value. And in that case, the type is only `None | INT64`, so it is syntactically incorrect to omit it.
I think PyTorch doesn't omit any inputs, because if it did, we should see a lot of breakage now in terms of the `param_schema` separating. Not sure if there will be a change on this, but currently we don't fill in default values for inputs, nor omit None.
```
Result: Perfect match.
Real example: `aten::clone`

5. [NOTE: Ignore attribute with None value]: The attributes with None value
```
I guess this is due to the ONNXScript limitation I typed above? Cannot define optional attribute?
I think it's something about AttributeProto, but yes, we don't have an Optional attribute at this time, and this assumption is acknowledged by `OpSchemaWrapper`.
Semantically, besides dtype, another goal of creating ONNX variants is to overload the cases with different attributes, because they are usually composed of different sets of ops. Thus, an attribute with a None value simply means the function doesn't need this attribute.
cc @justinchuby
I will need to double check with Rama on optional attributes. IIRC we either provide them in the graph or not provide them in the graph. There is not a "None" as the attribute value. In the torchscript builder, we omit None attributes whenever we see them.
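The behavior described here — omitting None attributes rather than emitting a "None" attribute value — can be sketched as a small filter over a node's keyword arguments (a minimal sketch, assuming a plain dict of attributes; the hypothetical `filter_attributes` name is not the actual builder API):

```python
def filter_attributes(kwargs: dict) -> dict:
    """Drop attributes whose value is None: ONNX AttributeProto has no
    'None' value, so an unset optional attribute is simply not emitted
    on the node at all."""
    return {name: value for name, value in kwargs.items() if value is not None}
```

Under this convention, `{"rounding_mode": None}` and `{}` produce the same node, which is why a None-valued attribute can be ignored during schema matching.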
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#106478
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao
Stack from ghstack (oldest at bottom):

Fix #106057, except **Attribute dtype mismatch. E.g., alpha of aten.add.Tensor. -> Attribute: alpha INT vs FLOAT**.

Summary of the changes:

* Fill in defaults for attributes when `param_schema` is applied. This relaxes the matching on default attributes.
* Fill in None for optional inputs when `param_schema` is applied.
* Keep extra kwargs in attributes to make matching strict.
* Allow an input to be None when its dtype is `optional[INPUT]`.

The change comes with the guarantee from torchlib that an attribute is never None. For example, if `memory_format` is needed, the function should specify it like this:

```python
@torch_op("aten::clone")
def aten_clone(
    self: TTensor, memory_format: str = ""  # pylint: disable=unused-argument
) -> TTensor:
    """clone(Tensor self, *, MemoryFormat? memory_format=None) -> Tensor"""
    return op.Identity(self)
```

Prior to this PR, opSchema matching didn't strictly guard the number of inputs/attributes when looking for a nearest match, which introduced the bug of dispatching `aten::div.Tensor` to `aten::div.default`, disregarding the fact that `aten::div.Tensor` has an extra attribute `rounding_mode`. This PR fixes the issue with new perfect/nearest-match logic. In particular, it strictly restricts the qualification for being a nearest-match candidate.

For each ONNX variant, we check these step by step:

1. Check if the number of inputs in the function signature is the same as the number of provided inputs.
2. Check if the attribute names in the function signature are the same set as the provided attributes.

If either of the above two criteria fails, the ONNX variant is neither a perfect match nor a nearest-match candidate (match_score=None).

3. Check if the input dtypes match.
4. Check if the attribute dtypes match.

If 3 and 4 are met, this is a perfect match; otherwise, it's still considered a nearest-match candidate with a matching score.

## Case Study

### Optional Input

The dispatcher recognizes optional inputs. However, the input can't be ignored. None must be provided.

```python
# Perfect match is found
inputs = (Tensor([2, 3]), None)
def aten_op(X: TTensor, Y: Optional[INT64]): ...
```

Real case: aten::convolution

NOTE: There is/will be no optional attribute in torchlib.

### Different attributes

If an attribute is provided with a value, it must match an attribute in the function signature.

```python
# Not a perfect match, nor a nearest match
inputs = (Tensor([2, 3]),)
attributes = {"a": 1, "b": 2}
def aten_op(X: TTensor, a: int): ...
```

Real case: aten::div and aten::div.Tensor_mode

### Default attribute

A default attribute fills its value into the inputs/attributes.

```python
# Perfect match is found
inputs = (Tensor([2, 3]),)
attributes = {}
def aten_op(X: TTensor, a: int = 3): ...
```

Real case: aten::clone

### Ignore attribute with None value

Attributes with a None value are ignored in matching.

```python
# Perfect match
inputs = (Tensor([2, 3]),)
attributes = {"a": None}
def aten_op(X: TTensor): ...

# Not a perfect match, but eligible for nearest match
inputs = (Tensor([2, 3]),)
attributes = {"a": None}
def aten_op(X: TTensor, a: int = 3): ...
```

Real case: aten::div and aten::div.Tensor_mode
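The step-by-step criteria above can be sketched as a standalone matcher. This is a simplified illustration, not the actual `_OpschemaWrapper` implementation: the name `match_score` and the pre-computed per-input/per-attribute dtype-check booleans are assumptions made to keep the sketch self-contained.

```python
def match_score(sig_n_inputs, sig_attr_names, inputs, attributes,
                input_dtype_ok, attr_dtype_ok):
    """Return "perfect", a numeric nearest-match score, or None
    (not a candidate), following the four steps in the description."""
    # Steps 1-2: hard requirements; failing either disqualifies the
    # variant entirely (match_score=None).
    if len(inputs) != sig_n_inputs:
        return None
    if set(attributes) != set(sig_attr_names):
        return None
    # Steps 3-4: dtype checks; mismatches only lower the score, so the
    # variant stays a nearest-match candidate.
    score = sum(input_dtype_ok) + sum(attr_dtype_ok)
    if all(input_dtype_ok) and all(attr_dtype_ok):
        return "perfect"
    return score
```

This is why `aten::div.Tensor` can no longer fall through to `aten::div.default`: the extra `rounding_mode` attribute fails step 2, yielding None rather than a low-but-nonzero nearest-match score.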