Auto generate OpSchema for functions #594
Conversation
@gramalingam I also see
Codecov Report
@@ Coverage Diff @@
## main #594 +/- ##
==========================================
- Coverage 74.20% 74.13% -0.08%
==========================================
Files 107 107
Lines 11302 11401 +99
Branches 1177 1197 +20
==========================================
+ Hits 8387 8452 +65
- Misses 2592 2619 +27
- Partials 323 330 +7
... and 1 file with indirect coverage changes
@@ -203,3 +239,22 @@ def onnx_type_to_onnxscript_repr(onnx_type: onnx.TypeProto) -> str:

 # Currently, only tensor types are supported. Need to expand support for other ONNX types.
 ONNXType = TensorType

+ALL_TENSOR_TYPES = (
Could you please sort them in alphabetical order for better readability?
Done
@@ -24,7 +24,7 @@ classifiers = [
     "Programming Language :: Python :: 3.11",
     "License :: OSI Approved :: Apache Software License",
 ]
-dependencies = ["numpy", "onnx", "typing_extensions"]
+dependencies = ["numpy", "onnx>=1.13", "typing_extensions"]
Why do we specify the minimum onnx version here?
@@ -24,7 +24,7 @@ classifiers = [
     "Programming Language :: Python :: 3.11",
     "License :: OSI Approved :: Apache Software License",
 ]
-dependencies = ["numpy", "onnx", "typing_extensions"]
+dependencies = ["numpy", "onnx>=1.13", "typing_extensions"]
What is this for? I see the OpSchema constructor is available no earlier than 1.14?
I am planning to bump this when 1.14 is released. For now I can revert it, I guess.
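One way to avoid a hard dependency pin while a newer onnx release is still pending is a runtime version gate. The helper below is a hypothetical sketch (the name `at_least` is not from this PR); it compares dotted version strings numerically, in the spirit of what a `version_utils` module might do:

```python
def at_least(version: str, minimum: str) -> bool:
    """Return True if `version` is at least `minimum`.

    Compares dotted numeric components; non-numeric parts
    (e.g. pre-release suffixes like "1rc1") are ignored.
    """
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split(".") if part.isdigit())
    return parse(version) >= parse(minimum)


print(at_least("1.13.1", "1.13"))  # True
print(at_least("1.12.0", "1.13"))  # False
```

In practice such a check could be run against `onnx.__version__` before touching the OpSchema constructor, falling back gracefully on older installations.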
     self._param_schemas: Optional[tuple[ParamSchema, ...]] = None

     def __call__(self, *args, **kwargs):
         # FIXME(after #225): Move import to the top of the file.
         from onnxscript import evaluator  # pylint: disable=import-outside-toplevel

-        return evaluator.default().eval(self.get_schema(), args, kwargs)
+        schema = self.get_schema()
+        assert schema is not None
Could this be None for an Op? What is the case here?
When the op doesn't have a schema, this will fail, I think. That would be the custom ops. Here we expect the schema to always be non-None.
Oh, I forgot custom ops. How does a custom op get in here? I thought onnx-script didn't support non-ONNX ops. If it's a custom op built in a non-onnx-script way (not composed of ONNX standard ops), how does onnx-script process it? I thought we could only handle it on the exporter side.
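For the case the reviewers are debating, an explicit error is often friendlier than a bare `assert` when an op without a schema reaches the eager-evaluation path. A minimal sketch, using a hypothetical `Op` stand-in (not onnxscript's real class) whose schema may be absent:

```python
from typing import Optional


class Op:
    """Hypothetical stand-in for an op whose schema may be missing."""

    def __init__(self, name: str, schema: Optional[object] = None):
        self.name = name
        self._schema = schema

    def get_schema(self) -> Optional[object]:
        return self._schema

    def __call__(self, *args, **kwargs):
        schema = self.get_schema()
        if schema is None:
            # Custom ops may carry no OpSchema; fail with a clear
            # message instead of an opaque AssertionError.
            raise ValueError(
                f"Op '{self.name}' has no OpSchema and cannot be evaluated eagerly."
            )
        return schema, args, kwargs


try:
    Op("my_custom_op")()
except ValueError as e:
    print(e)
```

The design choice: an assertion documents an invariant the authors believe always holds, while a raised error acknowledges that custom ops can legitimately hit this path.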
    fn: IRFunction,
    varname: str,
    attribute_type: onnx.AttributeProto.AttributeType,
    default_value: Any,
Is there a better annotation we can use than Any?
I think so. Let me refine that
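A narrower union over the value kinds an ONNX attribute default can take is one way to refine the `Any`. The alias below is an illustrative sketch, not the PR's final annotation (the name `AttributeDefault` is invented here):

```python
from typing import Optional, Sequence, Union

# Hypothetical alias: scalar or homogeneous-list attribute defaults;
# None means the attribute has no default (i.e. it is required).
AttributeDefault = Optional[
    Union[int, float, str, Sequence[int], Sequence[float], Sequence[str]]
]


def describe_default(default_value: AttributeDefault) -> str:
    """Tiny demo showing the alias in use instead of Any."""
    if default_value is None:
        return "required (no default)"
    return f"default = {default_value!r}"


print(describe_default(None))
print(describe_default(0.5))
```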
    proto = onnx.AttributeProto()
    proto.name = varname
    proto.type = attribute_type
    fn.add_attr_parameter(IRAttributeValue(proto), has_default=False)
Do you think has_default can be an attribute of IRAttributeValue? I think that's more straightforward.
That's a great suggestion - I thought of this too. Will make an update
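Folding `has_default` into the attribute value itself could look like the dataclass below. This is a sketch under stated assumptions: `proto` is a stand-in for `onnx.AttributeProto`, and the minimal `IRFunction` here only records attributes, unlike the real class:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class IRAttributeValue:
    """Sketch: the attribute value carries its own has_default flag."""
    proto: Any  # stand-in for onnx.AttributeProto
    has_default: bool = False
    default_value: Optional[Any] = None


class IRFunction:
    """Minimal stand-in that just records attribute parameters."""

    def __init__(self):
        self.attrs: list = []

    def add_attr_parameter(self, attr: IRAttributeValue) -> None:
        # No separate has_default argument needed any more.
        self.attrs.append(attr)


fn = IRFunction()
fn.add_attr_parameter(IRAttributeValue(proto={"name": "alpha"}))
print(fn.attrs[0].has_default)  # False
```

Moving the flag onto the value keeps `add_attr_parameter`'s signature stable as more per-attribute metadata (like the type information this PR adds) accumulates.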
@@ -203,3 +239,22 @@ def onnx_type_to_onnxscript_repr(onnx_type: onnx.TypeProto) -> str:

 # Currently, only tensor types are supported. Need to expand support for other ONNX types.
 ONNXType = TensorType
Do we still need this?
Not sure. Maybe?
Closed in favor of #626
This change adds the capability to auto generate OpSchema.

Changes
- opschema property in OnnxFunction
- attrs in IRFunction changed to self.attrs: list[IRAttributeValue] so that it contains type information
- typeinfo in add_attr_parameter
- version_utils
- to_internal, to use it in values

TODO
- Test on all torch_lib functions

Example

Results

Fixes #476
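The general idea of auto-generating a schema can be illustrated with `inspect`: derive parameter names and default information from a Python function's signature. This is an illustrative approximation of the approach, not the PR's actual implementation (`extract_param_schemas` and the toy `Relu` are invented for the example):

```python
import inspect


def extract_param_schemas(fn) -> list:
    """Sketch: build a schema-like record for each parameter of `fn`."""
    schemas = []
    for name, param in inspect.signature(fn).parameters.items():
        has_default = param.default is not inspect.Parameter.empty
        schemas.append({
            "name": name,
            "has_default": has_default,
            "default": param.default if has_default else None,
        })
    return schemas


def Relu(X, alpha: float = 0.0):
    """Toy function standing in for an onnxscript function."""
    return X


print(extract_param_schemas(Relu))
```

Type annotations on the parameters (like `alpha: float`) could be mapped to ONNX attribute types in the same pass, which is the kind of type information the attrs change above makes room for.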