Refactor converter to isolate translate_function_signature logic | feat(converter) #684


Merged
merged 70 commits into from
Apr 28, 2023

Conversation

justinchuby (Collaborator) commented on Apr 27, 2023

Stack from ghstack (oldest at bottom):

This change refactors the `translate_function_def` method in `Converter` to isolate the signature-handling logic into a new `translate_function_signature` method. `translate_function_signature` is used in #674 to handle function signatures on its own, so we do not need to translate the function body for general Python functions that are incompatible with ONNX.
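The idea of the refactor can be sketched as follows. This is an illustrative toy, not the actual onnxscript `Converter`: the class name, the `(params, return_annotation)` return shape, and the use of `ast` here are all assumptions made for the sake of the example.

```python
# Illustrative sketch: a signature-only translation path extracted from
# the full function-def translation. Names are hypothetical, not the
# real onnxscript Converter API.
import ast


class SimpleConverter:
    def translate_function_signature(self, fn_ast: ast.FunctionDef):
        """Translate only the signature: parameter names and annotations."""
        params = [
            (arg.arg, ast.unparse(arg.annotation) if arg.annotation else None)
            for arg in fn_ast.args.args
        ]
        returns = ast.unparse(fn_ast.returns) if fn_ast.returns else None
        return params, returns

    def translate_function_def(self, fn_ast: ast.FunctionDef):
        """Full translation reuses the isolated signature logic."""
        signature = self.translate_function_signature(fn_ast)
        body_ops = [type(stmt).__name__ for stmt in fn_ast.body]
        return signature, body_ops


source = "def aten_abs(self: float) -> float:\n    return abs(self)"
fn = ast.parse(source).body[0]
converter = SimpleConverter()
print(converter.translate_function_signature(fn))
```

Because the signature path never touches the body, it also works for functions whose bodies cannot be expressed in ONNX, which is exactly what #674 needs.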

Signed-off-by: Justin Chu [email protected]

Move `version_utils` to `_internal` so that it can be used by onnxscript

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`
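The general pattern for such a property can be sketched as a schema built lazily from the function's type hints. The class and field names below are illustrative stand-ins, not the actual `OnnxFunction` implementation or the ONNX `OpSchema` API.

```python
# Hedged sketch: lazily derive a schema object from a wrapped function's
# annotations, caching it on first access. All names are hypothetical.
import functools
import typing
from dataclasses import dataclass


@dataclass(frozen=True)
class FunctionSchema:
    name: str
    inputs: tuple
    output: str


class TracedFunction:
    def __init__(self, fn):
        self._fn = fn

    @functools.cached_property
    def opschema(self) -> FunctionSchema:
        # Built on first access, then cached, so importing a large
        # function library stays cheap.
        hints = typing.get_type_hints(self._fn)
        output = getattr(hints.pop("return", None), "__name__", "unknown")
        inputs = tuple((name, t.__name__) for name, t in hints.items())
        return FunctionSchema(self._fn.__name__, inputs, output)


def aten_abs(self: float) -> float:
    return abs(self)


print(TracedFunction(aten_abs).opschema)
```

Exposing the schema as a cached property (rather than computing it eagerly at decoration time) keeps the cost off the import path, which matters when a library defines hundreds of functions.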

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`

### TODO

Test on all torch_lib functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```
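The `opschema` dumps above are derived from each function's Python signature. A rough stdlib-only sketch of the idea (the `derive_schema` helper, the `TReal` stand-in, and the sample function are hypothetical illustrations, not the actual `OnnxFunction` implementation): tensor-annotated parameters become formal inputs, while plain Python scalars become attributes with their defaults recorded.

```python
import inspect
from dataclasses import dataclass, field


class TReal:
    """Hypothetical stand-in for an onnxscript tensor type annotation."""


@dataclass
class SimpleSchema:
    name: str
    inputs: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)


def derive_schema(fn) -> SimpleSchema:
    """Split a function's parameters into ONNX-style inputs vs. attributes."""
    schema = SimpleSchema(name=fn.__name__)
    for param in inspect.signature(fn).parameters.values():
        ann = param.annotation
        if isinstance(ann, type) and issubclass(ann, TReal):
            # Tensor-typed parameter -> formal input
            schema.inputs.append(param.name)
        else:
            # Scalar parameter -> attribute, keeping any default value
            default = None if param.default is inspect.Parameter.empty else param.default
            schema.attributes[param.name] = default
    return schema


def aten_hardtanh(self: TReal, min_val: float = -1.0, max_val: float = 1.0):
    ...


schema = derive_schema(aten_hardtanh)
print(schema.inputs)      # ['self']
print(schema.attributes)  # {'min_val': -1.0, 'max_val': 1.0}
```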

Fixes #476

[ghstack-poisoned]
This change adds the capability to automatically generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`
- Test on all torch_lib functions

### Next PR

Support trace_only functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476

[ghstack-poisoned]
@justinchuby justinchuby added the change base before merge Remember to change the merge base to main when the PR is ready to merge label Apr 27, 2023
This change refactors the `translate_function_def` method in `Converter` to isolate the signature handling logic into `translate_function_signature`. `translate_function_signature` is used in #674 to handle function signatures, so the function body does not need to be translated for general Python functions that are incompatible with ONNX.

Signed-off-by: Justin Chu <justinchu@microsoft.com>

[ghstack-poisoned]
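The extract-method shape of this refactor can be sketched as follows. This is a hypothetical skeleton — only the two method names come from the PR; the AST handling and returned dict layout are illustrative — showing how the signature pass stands alone while the full translation calls it first:

```python
import ast


class Converter:
    """Hypothetical skeleton illustrating the split; not the real onnxscript Converter."""

    def translate_function_signature(self, fn_ast: ast.FunctionDef) -> dict:
        # Handle only the signature: collect parameter names and whether each
        # has a default (defaults typically map to attributes downstream).
        args = fn_ast.args
        n_defaults = len(args.defaults)
        params = []
        for i, arg in enumerate(args.args):
            has_default = i >= len(args.args) - n_defaults
            params.append({"name": arg.arg, "has_default": has_default})
        return {"name": fn_ast.name, "params": params}

    def translate_function_def(self, fn_ast: ast.FunctionDef) -> dict:
        # Signature first, then the body. Callers that only need the signature
        # (e.g. schema generation for non-ONNX-compatible bodies) can call
        # translate_function_signature directly and skip the body entirely.
        proto = self.translate_function_signature(fn_ast)
        proto["num_body_statements"] = len(fn_ast.body)
        return proto


source = "def clamp(x, lo=0.0, hi=1.0):\n    return min(max(x, lo), hi)\n"
fn = ast.parse(source).body[0]
print(Converter().translate_function_signature(fn))
```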
justinchuby added a commit that referenced this pull request Apr 27, 2023
Signed-off-by: Justin Chu <justinchu@microsoft.com>

ghstack-source-id: 82ec025
Pull Request resolved: #684

@titaiwangms titaiwangms left a comment


The purpose of extracting the code out from an existing function should be …

justinchuby added a commit that referenced this pull request Apr 27, 2023
justinchuby added a commit that referenced this pull request Apr 28, 2023
@justinchuby justinchuby changed the base branch from gh/justinchuby/19/base to main April 28, 2023 03:46
@justinchuby justinchuby merged commit d3ce597 into main Apr 28, 2023
@justinchuby justinchuby deleted the gh/justinchuby/19/head branch April 28, 2023 04:25
@justinchuby justinchuby restored the gh/justinchuby/19/head branch April 28, 2023 04:52
justinchuby added a commit that referenced this pull request Apr 28, 2023
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #692
* #674
* __->__ #626
* #684

This change adds the capability to auto generate `OpSchema`.

### Changes

- Implement the `opschema` property in `OnnxFunction`
- Test on all torch_lib functions

### Next PR

Support trace_only functions

## Example

```python
from onnxscript.function_libs.torch_aten.ops import core, nn


print("core.aten_abs.opschema: ", core.aten_abs.opschema)

print("nn.aten_cross_entropy_loss.opschema: ", nn.aten_cross_entropy_loss.opschema)
```

Results

```
core.aten_abs.opschema:  OpSchema(
    name='aten_abs',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='abs(Tensor self) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TReal', allowed_type_strs=['tensor(float)', 'tensor(int8)', 'tensor(int16)', 'tensor(int32)', 'tensor(int64)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='return_val', type_str='TReal', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={}
)
nn.aten_cross_entropy_loss.opschema:  OpSchema(
    name='aten_cross_entropy_loss',
    domain='onnxscript.atenlib',
    since_version=1,
    doc='cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor',
    type_constraints=[OpSchema.TypeConstraintParam(type_param_str='TFloatOrBFloat16', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description=''), OpSchema.TypeConstraintParam(type_param_str='T1', allowed_type_strs=['tensor(float)', 'tensor(float16)', 'tensor(double)', 'tensor(bfloat16)'], description='')],
    inputs=[OpSchema.FormalParameter(name='self', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>), OpSchema.FormalParameter(name='weight', type_str='T1', description='', param_option=<FormalParameterOption.Optional: 1>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    outputs=[OpSchema.FormalParameter(name='result_10', type_str='TFloatOrBFloat16', description='', param_option=<FormalParameterOption.Single: 0>, is_homogeneous=True, min_arity=1, differentiation_category=<DifferentiationCategory.Unknown: 0>)],
    attributes={'ignore_index': OpSchema.Attribute(name='ignore_index', type=<AttrType.INT: 2>, description='', default_value=name: "ignore_index"
i: -100
type: INT
, required=False), 'label_smoothing': OpSchema.Attribute(name='label_smoothing', type=<AttrType.FLOAT: 1>, description='', default_value=name: "label_smoothing"
f: 0.0
type: FLOAT
, required=False), 'reduction': OpSchema.Attribute(name='reduction', type=<AttrType.INT: 2>, description='', default_value=name: "reduction"
i: 1
type: INT
, required=False), 'target': OpSchema.Attribute(name='target', type=<AttrType.INTS: 7>, description='', default_value=, required=True)}
)
```

Fixes #476
@justinchuby justinchuby deleted the gh/justinchuby/19/head branch March 13, 2024 01:48
Labels
change base before merge Remember to change the merge base to main when the PR is ready to merge topic: api