Generate opset 20 | feat #1302

Status: Closed. Wants to merge 3 commits.
4 changes: 2 additions & 2 deletions noxfile.py
@@ -26,8 +26,8 @@
"pytest!=7.1.0",
"pyyaml",
)
-ONNX = "onnx==1.14.1"
-ONNX_RUNTIME = "onnxruntime==1.16.1"
+ONNX = "onnx==1.15.0"
+ONNX_RUNTIME = "onnxruntime==1.17.1"
PYTORCH = "torch==2.1.0"
TORCHVISON = "torchvision==0.16"
ONNX_RUNTIME_NIGHTLY_DEPENDENCIES = (
14 changes: 14 additions & 0 deletions onnxscript/onnx_opset/__init__.py
@@ -36,9 +36,11 @@
from onnxscript.onnx_opset._impl.opset17 import Opset17
from onnxscript.onnx_opset._impl.opset18 import Opset18
from onnxscript.onnx_opset._impl.opset19 import Opset19
+from onnxscript.onnx_opset._impl.opset20 import Opset20
from onnxscript.onnx_opset._impl.opset_ai_onnx_ml1 import Opset_ai_onnx_ml1
from onnxscript.onnx_opset._impl.opset_ai_onnx_ml2 import Opset_ai_onnx_ml2
from onnxscript.onnx_opset._impl.opset_ai_onnx_ml3 import Opset_ai_onnx_ml3
+from onnxscript.onnx_opset._impl.opset_ai_onnx_ml4 import Opset_ai_onnx_ml4
from onnxscript.onnx_opset._impl.opset_ai_onnx_preview_training1 import (
Opset_ai_onnx_preview_training1,
)
@@ -65,9 +67,11 @@
"opset17",
"opset18",
"opset19",
+"opset20",
"opset_ai_onnx_ml1",
"opset_ai_onnx_ml2",
"opset_ai_onnx_ml3",
+"opset_ai_onnx_ml4",
"opset_ai_onnx_preview_training1",
]

@@ -97,9 +101,11 @@
opset17 = Opset17()
opset18 = Opset18()
opset19 = Opset19()
+opset20 = Opset20()
opset_ai_onnx_ml1 = Opset_ai_onnx_ml1()
opset_ai_onnx_ml2 = Opset_ai_onnx_ml2()
opset_ai_onnx_ml3 = Opset_ai_onnx_ml3()
+opset_ai_onnx_ml4 = Opset_ai_onnx_ml4()
opset_ai_onnx_preview_training1 = Opset_ai_onnx_preview_training1()
all_opsets: Mapping[Tuple[str, int], Opset] = {
(
@@ -178,6 +184,10 @@
"",
19,
): opset19,
+(
+    "",
+    20,
+): opset20,
(
"ai.onnx.ml",
1,
@@ -190,6 +200,10 @@
"ai.onnx.ml",
3,
): opset_ai_onnx_ml3,
+(
+    "ai.onnx.ml",
+    4,
+): opset_ai_onnx_ml4,
(
"ai.onnx.preview.training",
1,
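The registrations above key each opset by an (ONNX domain, version) pair, with `""` as the default domain. A minimal stand-in sketch of that lookup (the placeholder string values and the `resolve_opset` helper are illustrative, not onnxscript's API; in onnxscript the mapping values are `Opset` instances such as `Opset20()`):

```python
# Stand-in for the all_opsets registry: (domain, version) -> opset.
# Values are placeholder strings here; onnxscript stores Opset objects.
all_opsets = {
    ("", 19): "opset19",
    ("", 20): "opset20",                     # newly registered in this PR
    ("ai.onnx.ml", 3): "opset_ai_onnx_ml3",
    ("ai.onnx.ml", 4): "opset_ai_onnx_ml4",  # newly registered in this PR
}

def resolve_opset(domain: str, version: int) -> str:
    """Return the opset registered for the given domain and version."""
    try:
        return all_opsets[(domain, version)]
    except KeyError:
        raise ValueError(f"unregistered opset: domain={domain!r}, version={version}")

print(resolve_opset("", 20))           # opset20
print(resolve_opset("ai.onnx.ml", 4))  # opset_ai_onnx_ml4
```

Asking for an unregistered pair, e.g. `resolve_opset("", 99)`, raises `ValueError` rather than returning a partially supported opset.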
40 changes: 25 additions & 15 deletions onnxscript/onnx_opset/_impl/opset1.py
@@ -1495,7 +1495,7 @@ def If(self, cond: B_If, *, else_branch: GraphProto, then_branch: GraphProto) ->
If conditional

Args:
-    cond: Condition for the if
+    cond: Condition for the if. The tensor must contain a single element.

else_branch: Graph to run if condition is false. Has N outputs: values you
wish to be live-out to the enclosing scope. The number of outputs must
@@ -3011,7 +3011,8 @@ def ReduceL1(

Computes the L1 norm of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields 0.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3040,7 +3041,8 @@ def ReduceL2(

Computes the L2 norm of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields 0.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3071,7 +3073,8 @@ def ReduceLogSum(

Computes the log sum of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or undefined otherwise.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3106,7 +3109,8 @@ def ReduceLogSumExp(

Computes the log sum exponent of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or undefined otherwise.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3135,7 +3139,8 @@ def ReduceMax(

Computes the max of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or the minimum value of the data type otherwise.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3166,7 +3171,8 @@ def ReduceMean(

Computes the mean of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields undefined.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3195,7 +3201,8 @@ def ReduceMin(

Computes the min of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields plus infinity (if supported by the datatype) or the maximum value of the data type otherwise.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3226,7 +3233,8 @@ def ReduceProd(

Computes the product of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields 1.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3255,7 +3263,8 @@ def ReduceSum(

Computes the sum of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields 0.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
@@ -3290,7 +3299,8 @@ def ReduceSumSquare(

Computes the sum square of the input tensor's element along the provided axes. The resulting
tensor has the same rank as the input if keepdims equals 1. If keepdims equal 0, then
-the resulted tensor have the reduced dimension pruned.
+the resulted tensor have the reduced dimension pruned. Input tensors of rank zero are
+valid. Reduction over an empty set of values yields 0.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
False instead of True.
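The empty-set identities added across these docstrings can be sanity-checked against Python's built-in reductions (a sketch; the infinity cases depend on the tensor's datatype support in ONNX):

```python
import math

# Identity values for reductions over an empty set, matching the added docstrings.
assert sum([]) == 0        # ReduceSum / ReduceL1 / ReduceSumSquare: empty sum is 0
assert math.prod([]) == 1  # ReduceProd: empty product is 1

# ReduceMax yields -inf (or the dtype's minimum value);
# ReduceMin yields +inf (or the dtype's maximum value).
assert max([], default=-math.inf) == -math.inf
assert min([], default=math.inf) == math.inf

# ReduceMean over an empty set is undefined (a 0/0 division).
```

These are the standard monoid identities: each reduction's empty-set result is the value that leaves the operation unchanged when combined with any other element.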
@@ -3903,18 +3913,18 @@ def TopK(self, X: T_TopK, *, axis: int = -1, k: int) -> Tuple[T_TopK, I_TopK]:


Retrieve the top-K elements along a specified axis. Given an input tensor of
-shape [a_1, a_2, ..., a_n, r] and integer argument k, return two outputs:
--Value tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n]
+shape [a_0, a_1, ..., a_{n-1}] and integer argument k, return two outputs:
+-Value tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}]
 which contains the values of the top k elements along the specified axis
--Index tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] which
+-Index tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}] which
contains the indices of the top k elements (original indices from the input
tensor).
Given two equivalent values, this operator uses the indices along the axis as
a tiebreaker. That is, the element with the lower index will appear first.


Args:
-    X: Tensor of shape [a_1, a_2, ..., a_n, r]
+    X: Tensor of shape [a_0, a_1, ..., a_{n-1}]

axis: Dimension on which to do the sort.

8 changes: 4 additions & 4 deletions onnxscript/onnx_opset/_impl/opset10.py
@@ -1202,10 +1202,10 @@ def TopK(self, X: T_TopK, K: INT64, *, axis: int = -1) -> Tuple[T_TopK, I_TopK]:


Retrieve the top-K elements along a specified axis. Given an input tensor of
-shape [a_1, a_2, ..., a_n, r] and integer argument k, return two outputs:
--Value tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n]
+shape [a_0, a_1, ..., a_{n-1}] and integer argument k, return two outputs:
+-Value tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}]
 which contains the values of the top k elements along the specified axis
--Index tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] which
+-Index tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}] which
contains the indices of the top k elements (original indices from the input
tensor).

@@ -1214,7 +1214,7 @@ def TopK(self, X: T_TopK, K: INT64, *, axis: int = -1) -> Tuple[T_TopK, I_TopK]:


Args:
-    X: Tensor of shape [a_1, a_2, ..., a_n, r]
+    X: Tensor of shape [a_0, a_1, ..., a_{n-1}]

K: A 1-D tensor containing a single positive value corresponding to the
number of top elements to retrieve
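The TopK docstring corrections above change the shape notation from 1-based `[a_1, ..., a_n, r]` to 0-based `[a_0, ..., a_{n-1}]`; the value/index semantics and the lower-index tie-breaking rule are unchanged. A 1-D illustrative sketch of those semantics (not onnxscript's or onnxruntime's implementation):

```python
def topk_1d(values, k, largest=True):
    """Top-k values and their original indices for a 1-D sequence.

    Python's sort is stable even with reverse=True, so among equal values
    the element with the lower original index comes first, matching the
    tie-breaking rule in the TopK docstring.
    """
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=largest)
    top = order[:k]
    return [values[i] for i in top], top

# The two 5s tie; the one at index 1 precedes the one at index 3.
print(topk_1d([1, 5, 3, 5, 2], k=2))  # ([5, 5], [1, 3])
```

For the n-dimensional operator, the same procedure is applied independently along the chosen `axis`, shrinking that dimension from `a_axis` to `k`.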