
Auto OpSchema for trace_only functions | feat(op_schema) #674

Merged · 94 commits · Apr 28, 2023

Conversation

@justinchuby (Collaborator) commented Apr 25, 2023

Stack from ghstack (oldest at bottom):

This PR implements auto `OpSchema` generation for trace_only functions as well. It leverages onnxscript's converter to create the same function IR as we do in `OnnxFunction`, but without translating the body.

We created a `TracedOnnxFunction` class to expose an interface similar to `OnnxFunction`, and an `OpLike` protocol to standardize them.

  • Creates the `OpLike` protocol that defines common attributes and methods for `Op`, `OnnxFunction`, and `TracedOnnxFunction`, so we can assume a common interface.
  • Implements `param_schemas` for `TracedOnnxFunction`.
  • Refactors `param_schemas` to extract common logic.
  • Removes `is_single_op` from `Op` because it is unused.
  • Moves the AST logic from `main.py` to `onnxscript/_internal/ast_utils.py`.
  • The change is tested on all existing trace_only functions.

Fixes #630
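The `OpLike` idea can be sketched with `typing.Protocol`: any class that structurally provides the shared attributes and methods satisfies it, without inheritance. This is a toy illustration; the attribute and method names below are assumptions, not the real onnxscript API.

```python
from typing import Protocol, Sequence, runtime_checkable


class ParamSchema:
    """Simplified stand-in for a parameter schema entry (hypothetical)."""

    def __init__(self, name: str, is_input: bool, required: bool = True):
        self.name = name
        self.is_input = is_input
        self.required = required


@runtime_checkable
class OpLike(Protocol):
    """Common interface assumed for Op, OnnxFunction, and TracedOnnxFunction.

    Member names here are illustrative only.
    """

    name: str
    opset_domain: str

    def param_schemas(self) -> Sequence[ParamSchema]:
        ...


class TracedFunctionDemo:
    """Toy class that structurally satisfies OpLike -- no subclassing needed."""

    name = "aten_abs"
    opset_domain = "onnxscript.atenlib"

    def param_schemas(self) -> Sequence[ParamSchema]:
        return [ParamSchema("self", is_input=True)]


print(isinstance(TracedFunctionDemo(), OpLike))  # True: structural match
```

Because the protocol is structural, call sites can accept `OpLike` and treat `Op`, `OnnxFunction`, and `TracedOnnxFunction` uniformly.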

Move `version_utils` to `_internal` so that it can be used by onnxscript.

This change adds the capability to auto-generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

- Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```
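Conceptually, the schema is derived by inspecting the function's signature and type annotations: tensor-typed parameters become formal inputs, while plain Python-typed parameters become attributes. The sketch below is a simplified assumption of that classification rule, not the actual converter, which walks onnxscript's function IR.

```python
# Toy illustration of deriving a schema-like summary from a Python
# signature. The type names and classification rule are simplified
# assumptions for demonstration purposes.
import inspect


class TReal:
    """Placeholder standing in for an onnxscript tensor type annotation."""


def summarize_signature(fn):
    """Split parameters into inputs vs attributes by their annotation."""
    tensor_types = {"TReal"}  # assumed set of tensor type names
    inputs, attributes = [], []
    for name, param in inspect.signature(fn).parameters.items():
        annotation = getattr(param.annotation, "__name__", str(param.annotation))
        if annotation in tensor_types:
            inputs.append(name)  # tensor-typed -> formal input
        else:
            attributes.append(name)  # plain Python type -> attribute
    return {"name": fn.__name__, "inputs": inputs, "attributes": attributes}


def aten_abs(self: TReal) -> TReal:
    return self


print(summarize_signature(aten_abs))
# {'name': 'aten_abs', 'inputs': ['self'], 'attributes': []}
```

This mirrors why `aten_abs` above ends up with one `TReal` input and an empty attribute map in the generated `OpSchema`.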

Fixes #476

    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
…t(op_schema)"


This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`
- Test on all torch_lib functions

### Next PR

Support trace_only functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
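One plausible shape for the `opschema` property described above is lazy construction with caching: build the schema from the function IR on first access, then reuse it. A minimal, dependency-free sketch of that pattern — the `_build_opschema` helper and the `FakeSchema` stand-in are hypothetical; the real implementation constructs an `onnx.defs.OpSchema`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeSchema:
    # Stand-in for onnx.defs.OpSchema, to keep the sketch dependency-free.
    name: str
    domain: str
    since_version: int


class OnnxFunction:
    def __init__(self, name: str) -> None:
        self.name = name
        self._opschema: Optional[FakeSchema] = None  # built lazily

    def _build_opschema(self) -> FakeSchema:
        # Hypothetical: the real code derives this from the function IR.
        return FakeSchema(self.name, "onnxscript.atenlib", 1)

    @property
    def opschema(self) -> FakeSchema:
        if self._opschema is None:
            self._opschema = self._build_opschema()
        return self._opschema


fn = OnnxFunction("aten_abs")
print(fn.opschema.domain)  # -> onnxscript.atenlib
```

Repeated accesses return the same cached object, so schema construction happens at most once per function.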
…eat(op_schema)"


This PR implements auto OpSchema generation for trace_only functions as well. It leverages onnxscript's converter to create the same function IR as we do in `OnnxFunction`, but without translating the body.

We created a `TraceOnlyFunction` class that exposes a similar interface to `OnnxFunction`, and an `OpLike` protocol to standardize them.

The change is tested on all existing trace_only functions.

Fixes #630

[ghstack-poisoned]
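The "function IR without translating the body" idea can be illustrated with the standard library alone: parsing a function's source recovers its signature (arguments, annotations, defaults) without executing or translating anything in the body. A minimal sketch, with a hypothetical function — this is the general mechanism, not the actual onnxscript converter code:

```python
import ast

# Hypothetical trace_only function source; only the signature matters here,
# and the body is never executed or translated.
source = """
def aten_example(self, weight=None, reduction: int = 1):
    result = some_untranslated_python_logic(self, weight, reduction)
    return result
"""

module = ast.parse(source)
node = module.body[0]
assert isinstance(node, ast.FunctionDef)

# The signature is fully recoverable from the AST alone.
print([arg.arg for arg in node.args.args])  # -> ['self', 'weight', 'reduction']
print(len(node.args.defaults))              # -> 2 (defaults for weight and reduction)
```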
@justinchuby changed the base branch from `main` to `gh/justinchuby/18/base` on April 28, 2023, 17:33
justinchuby added a commit that referenced this pull request Apr 28, 2023
ghstack-source-id: b25bb51
Pull Request resolved: #674

Signed-off-by: Justin Chu <[email protected]>
justinchuby added a commit that referenced this pull request Apr 28, 2023
ghstack-source-id: c72ff73
Pull Request resolved: #674

Signed-off-by: Justin Chu <[email protected]>
```
@@ -370,11 +456,6 @@ def __init__(
        self._param_schemas: Optional[tuple[ParamSchema, ...]] = None
        self._opschema: Optional[onnx.defs.OpSchema] = None

    @property
    def name(self):
```

justinchuby (Collaborator, Author) commented on this removal:

Defined by parent
…eat(op_schema)"


This PR implements auto OpSchema generation for trace_only functions as well. It leverages onnxscript's converter to create the same function IR as we do in `OnnxFunction`, but without translating the body.

We created a `TracedOnnxFunction` class that exposes a similar interface to `OnnxFunction`, and an `OpLike` protocol to standardize them.

- Creates the `OpLike` protocol, which defines the attributes and methods common to `Op`, `OnnxFunction`, and `TracedOnnxFunction`, so we can assume a common interface.
- Implements `param_schemas` for `TracedOnnxFunction`.
- Refactors `param_schemas` to extract common logic.
- Removes `is_single_op` from `Op` because it is unused.
- Moves the ast logic from `main.py` to `onnxscript/_internal/ast_utils.py`.
- The change is tested on all existing trace_only functions.

Fixes #630

[ghstack-poisoned]
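For a `TracedOnnxFunction`, where no translated body exists, parameter schemas can still be derived from the Python signature itself. A hedged sketch of the general idea — the function name and the tuple shape are illustrative; the real `param_schemas` also distinguishes inputs from attributes using type annotations:

```python
import inspect


def param_schemas_sketch(fn):
    """Derive simple (name, required, default) tuples from a Python signature.

    Sketch only: the real onnxscript implementation returns ParamSchema
    objects and classifies inputs vs. attributes from annotations.
    """
    schemas = []
    for param in inspect.signature(fn).parameters.values():
        required = param.default is inspect.Parameter.empty
        default = None if required else param.default
        schemas.append((param.name, required, default))
    return schemas


def cross_entropy_loss(self, target, weight=None, reduction: int = 1):
    # Hypothetical trace_only function; the body is irrelevant here.
    ...


print(param_schemas_sketch(cross_entropy_loss))
# -> [('self', True, None), ('target', True, None),
#     ('weight', False, None), ('reduction', False, 1)]
```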
justinchuby added a commit that referenced this pull request Apr 28, 2023
ghstack-source-id: 3b97c3f
Pull Request resolved: #674

Signed-off-by: Justin Chu <[email protected]>
justinchuby added a commit that referenced this pull request Apr 28, 2023
ghstack-source-id: 771b415
Pull Request resolved: #674

Signed-off-by: Justin Chu <[email protected]>
@justinchuby justinchuby changed the base branch from gh/justinchuby/18/base to main April 28, 2023 19:17
def param_schemas(self) -> Optional[tuple[ParamSchema, ...]]:
    ...

Check notice (Code scanning / CodeQL): Statement has no effect.
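The `param_schemas` interface suggests that schemas can be computed without translating a function body. As a rough, hypothetical sketch (not onnxscript's implementation, which works on its function IR, and with a simplified stand-in `ParamSchema`), parameter schemas can be derived from a Python signature alone via `inspect`:

```python
from __future__ import annotations

import inspect
from dataclasses import dataclass
from typing import Any, Callable


@dataclass(frozen=True)
class ParamSchema:
    """Simplified stand-in for onnxscript's ParamSchema."""

    name: str
    required: bool
    default: Any = None


def extract_param_schemas(fn: Callable) -> tuple[ParamSchema, ...]:
    """Build schemas from the signature alone -- no body translation needed."""
    schemas = []
    for param in inspect.signature(fn).parameters.values():
        has_default = param.default is not inspect.Parameter.empty
        schemas.append(
            ParamSchema(
                name=param.name,
                required=not has_default,
                default=param.default if has_default else None,
            )
        )
    return tuple(schemas)


# Hypothetical trace-only function signature:
def aten_example(self, dim: int = -1, keepdim: bool = False):
    ...


print(extract_param_schemas(aten_example))
```

This captures the key idea from the PR description: the converter builds the same function IR as for `OnnxFunction`, but only the signature (not the body) is needed for `param_schemas`.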

import onnx
import onnx.defs

from onnxscript import converter as converter_module

Check notice (Code scanning / CodeQL): Cyclic import. Import of module onnxscript.converter begins an import cycle.

@property
def name(self) -> str:
    ...

Check notice (Code scanning / CodeQL): Statement has no effect.

@property
def opset(self) -> Opset:
    ...

Check notice (Code scanning / CodeQL): Statement has no effect.

@property
def opschema(self) -> Optional[onnx.defs.OpSchema]:
    ...

Check notice (Code scanning / CodeQL): Statement has no effect.
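The repeated "Statement has no effect" notices are benign: a bare `...` (the `Ellipsis` expression) is the idiomatic body for `Protocol` members and stubs, and CodeQL correctly observes that, as an expression statement, it does nothing at runtime. A small illustration with hypothetical names:

```python
from typing import Protocol


class HasName(Protocol):
    @property
    def name(self) -> str:
        # A bare `...` is an expression statement with no runtime effect,
        # which is exactly what CodeQL flags -- yet it is the conventional
        # body for Protocol members, which are never meant to execute.
        ...


class Named:
    name = "TracedOnnxFunction"


def describe(obj: HasName) -> str:
    # Type checkers accept Named because it structurally matches HasName.
    return obj.name


print(describe(Named()))
```

Such notices are typically suppressed or accepted for protocol definitions rather than "fixed".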
@justinchuby justinchuby merged commit 52613fb into main Apr 28, 2023
@justinchuby justinchuby deleted the gh/justinchuby/18/head branch April 28, 2023 19:42
Indie365 pushed a commit to Indie365/onnxscript that referenced this pull request Oct 26, 2023
ghstack-source-id: 248a01d
Pull Request resolved: microsoft/onnxscript#674

Signed-off-by: Justin Chu <[email protected]>
Indie365 pushed a commit to Indie365/onnxscript that referenced this pull request Oct 26, 2023
ghstack-source-id: b25bb51
Pull Request resolved: microsoft/onnxscript#674

Signed-off-by: Justin Chu <[email protected]>
Labels
- change base before merge: Remember to change the merge base to main when the PR is ready to merge
- module: torchlib: Related to the torch/aten function lib in development
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Generate opschemas for traced only functions
4 participants