
(5.x) Merge 4.x #997


Merged: 28 commits, Aug 21, 2022
Commits (28):
703e2f5
add output registration test data
rogday Apr 5, 2022
b0f0de5
test data for qconv asymmetric padding support
fengyuentau May 16, 2022
05cd515
move test network to in-memory buffer
rogday May 18, 2022
9c33634
Merge pull request #973 from fengyuentau:qconv_asympad
alalek May 19, 2022
daaab3a
Merge pull request #968 from rogday:revert_renaming
alalek May 23, 2022
c2cf721
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
alalek May 23, 2022
cfca620
add qr issue_21287.png
May 23, 2022
fb82ef9
Merge pull request #974 from rogday:21947_fix
alalek May 24, 2022
5bad582
Merge pull request #976 from AleksandrPanov:fix_samplingForVersion_mu…
alalek Jun 4, 2022
936854e
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
alalek Jun 4, 2022
6629a4a
freetype: add Mplus1-Regular.ttf
Kumataro Jun 19, 2022
a95b7d1
generate gemm onnx sample by onnx
zihaomu Jun 21, 2022
e6acfa4
update
zihaomu Jun 21, 2022
abe4d1d
Merge pull request #982 from zihaomu:gemm_onnx_bug_fix
alalek Jun 22, 2022
de7b75a
Merge pull request #980 from WanliZhong:issue_22015
WanliZhong Jun 22, 2022
fdba14b
Merge pull request #983 from zihaomu:gemm_onnx_bug_fix_branch34
zihaomu Jun 23, 2022
1bf78cc
Merge pull request #981 from Kumataro:4.x-issue_contrib3276
alalek Jun 25, 2022
4e72d02
Merge pull request #978 from iago-suarez:4.x
iago-suarez Jun 29, 2022
81c2c97
update the test case of Div
zihaomu Jul 11, 2022
32f664f
update ReduceSum with two input
zihaomu Jul 11, 2022
7020088
update dynamic batch of reduce layer
zihaomu Jul 13, 2022
d16aeef
Merge pull request #986 from zihaomu:bug_fix_22195_3_4
alalek Jul 14, 2022
c12f1a7
update reduceSum test case
zihaomu Jul 27, 2022
3d5b610
add onns mish without softplus
zihaomu Jul 28, 2022
23a54da
Merge pull request #990 from zihaomu:layer_fused_optmized_mish
asmorkalov Aug 5, 2022
5992b2f
Merge pull request #987 from zihaomu:bug_fix_22195
asmorkalov Aug 12, 2022
4cc63b1
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
alalek Aug 14, 2022
78cdaef
Merge branch 4.x
alalek Aug 21, 2022
Binary file added testdata/cv/freetype/mplus/Mplus1-Regular.ttf
Binary file not shown.
93 changes: 93 additions & 0 deletions testdata/cv/freetype/mplus/OFL.txt
@@ -0,0 +1,93 @@
Copyright 2021 The M+ FONTS Project Authors (https://github.com/coz-m/MPLUS_FONTS)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://scripts.sil.org/OFL


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
Binary file added testdata/cv/qrcode/issue_21287.png
Binary file added testdata/dnn/onnx/data/input_clip_init_max.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_clip_init_min.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_clip_init_min_max.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_div_test_1x1_0.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_div_test_1x1_1.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_gemm_no_transB.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_gemm_transB_0.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/input_mish_no_softplus.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_clip_init_max.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_clip_init_min.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_clip_init_min_max.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_div_test_1x1.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_gemm_no_transB.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_gemm_transB_0.npy
Binary file not shown.
Binary file added testdata/dnn/onnx/data/output_mish_no_softplus.npy
Binary file not shown.
217 changes: 209 additions & 8 deletions testdata/dnn/onnx/generate_onnx_models.py
@@ -13,7 +13,7 @@
import onnxsim
import google.protobuf.text_format
import io

from typing import Optional

def assertExpected(s):
if not (isinstance(s, str) or (sys.version_info[0] == 2 and isinstance(s, unicode))):
@@ -75,6 +75,41 @@ def save_onnx_data_and_model(input, output, name, operation, *args, **kwargs):
model = onnx.helper.make_model(graph, producer_name=name)
onnx.save(model, models_files)

def save_data_and_onnx_model(name, input_np, output_np, onnx_model):
print(name + " input has sizes", input_np.shape)
input_files = os.path.join("data", "input_" + name)
np.save(input_files, input_np.data)

print(name + " output has sizes", output_np.shape)
print()
output_files = os.path.join("data", "output_" + name)
np.save(output_files, np.ascontiguousarray(output_np.data))

models_files = os.path.join("models", name + ".onnx")

onnx_model_pb = onnx._serialize(onnx_model)
model_def = assertONNXExpected(onnx_model_pb)
with open(models_files, 'wb') as file:
file.write(model_def.SerializeToString())

def save_data_and_onnx_model_multy_inputs(name, input_list, output_np, onnx_model):
for index in range(len(input_list)):
print(name + " input "+str(index)+" has sizes", input_list[index].shape)
input_files = os.path.join("data", "input_" + name + "_" + str(index))
np.save(input_files, input_list[index])

print(name + " output has sizes", output_np.shape)
print()
output_files = os.path.join("data", "output_" + name)
np.save(output_files, np.ascontiguousarray(output_np.data))

models_files = os.path.join("models", name + ".onnx")

onnx_model_pb = onnx._serialize(onnx_model)
model_def = assertONNXExpected(onnx_model_pb)
with open(models_files, 'wb') as file:
file.write(model_def.SerializeToString())

def simplify(name, rename=False, **kwargs):
model, check = onnxsim.simplify(name, **kwargs)
assert check, "couldn't validate"
@@ -575,6 +610,66 @@ def forward(self, x):
input = Variable(torch.rand(1, 10, 2, 2))
save_data_and_model('clip', input, model)

########### clip_init ###########

operation = "Clip"
min = -0.5
max = 0.5

input = np.random.randn(3, 4, 5).astype(np.float32)
output = np.clip(input, min, max)

X = onnx.helper.make_tensor_value_info('input', onnx.TensorProto.FLOAT, [3, 4, 5])
MIN = onnx.helper.make_tensor_value_info('min', onnx.TensorProto.FLOAT, [1])
MAX = onnx.helper.make_tensor_value_info('max', onnx.TensorProto.FLOAT, [1])
Y = onnx.helper.make_tensor_value_info('output', onnx.TensorProto.FLOAT, [3, 4, 5])
MIN_INIT = onnx.helper.make_tensor("min", onnx.TensorProto.FLOAT, [1], np.array([min]))
MAX_INIT = onnx.helper.make_tensor("max", onnx.TensorProto.FLOAT, [1], np.array([max]))

name = "clip_init_min_max"
input = np.random.randn(3, 4, 5).astype(np.float32)
output = np.clip(input, min, max)

input_files = os.path.join("data", "input_" + name)
np.save(input_files, input.data)
output_files = os.path.join("data", "output_" + name)
np.save(output_files, np.ascontiguousarray(output.data))

node = onnx.helper.make_node(operation, inputs=['input', "min", "max"], outputs=['output'])
graph = onnx.helper.make_graph([node], name, [X, MIN, MAX], [Y], [MIN_INIT, MAX_INIT])
model = onnx.helper.make_model(graph, producer_name=name)
onnx.save(model, os.path.join("models", name + ".onnx"))

name = "clip_init_min"
input = np.random.randn(3, 4, 5).astype(np.float32)
output = np.clip(input, min, None)

input_files = os.path.join("data", "input_" + name)
np.save(input_files, input.data)
output_files = os.path.join("data", "output_" + name)
np.save(output_files, np.ascontiguousarray(output.data))

node = onnx.helper.make_node(operation, inputs=['input', "min", ""], outputs=['output'])
graph = onnx.helper.make_graph([node], name, [X, MIN], [Y], [MIN_INIT])
model = onnx.helper.make_model(graph, producer_name=name)
onnx.save(model, os.path.join("models", name + ".onnx"))

name = "clip_init_max"
input = np.random.randn(3, 4, 5).astype(np.float32)
output = np.clip(input, None, max)

input_files = os.path.join("data", "input_" + name)
np.save(input_files, input.data)
output_files = os.path.join("data", "output_" + name)
np.save(output_files, np.ascontiguousarray(output.data))

node = onnx.helper.make_node(operation, inputs=['input', "", "max"], outputs=['output'])
graph = onnx.helper.make_graph([node], name, [X, MAX], [Y], [MAX_INIT])
model = onnx.helper.make_model(graph, producer_name=name)
onnx.save(model, os.path.join("models", name + ".onnx"))
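The three variants above exercise ONNX Clip's optional `min`/`max` tensor inputs (an empty-string input name skips that bound). A minimal numpy sketch of the clipping behaviour these models are checked against:

```python
import numpy as np

x = np.array([-2.0, -0.25, 0.75, 2.0], dtype=np.float32)

# both bounds present (clip_init_min_max)
assert np.array_equal(np.clip(x, -0.5, 0.5), [-0.5, -0.25, 0.5, 0.5])
# only min (clip_init_min): the upper side stays open
assert np.array_equal(np.clip(x, -0.5, None), [-0.5, -0.25, 0.75, 2.0])
# only max (clip_init_max): the lower side stays open
assert np.array_equal(np.clip(x, None, 0.5), [-2.0, -0.25, 0.5, 0.5])
```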

#################################

input = Variable(torch.randn(1, 3, 6, 6, 6))
deconv = nn.ConvTranspose3d(3, 3, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(0, 0, 0), bias=False)
save_data_and_model("deconv3d", input, deconv)
@@ -1426,14 +1521,14 @@ def forward(self, x):
save_data_and_model("reduce_max", x, model)

class ReduceMax(nn.Module):
def __init__(self, axes):
super(ReduceMax, self).__init__()
self.axes = axes
def __init__(self, axes):
super(ReduceMax, self).__init__()
self.axes = axes

def forward(self, x):
# torch.return_types.max(values, indices)
out = torch.max(x, dim=self.axes, keepdim=False)[0]
return out
def forward(self, x):
# torch.return_types.max(values, indices)
out = torch.max(x, dim=self.axes, keepdim=False)[0]
return out

x = Variable(torch.randn(1, 3, 2, 2))

@@ -1743,6 +1838,14 @@ def forward(self, x):
model = Mish()
save_data_and_model("mish", x, model)

class Mish2(nn.Module):
def forward(self, x):
return x * (torch.tanh(torch.log(torch.exp(x) + 1)))

x = Variable(torch.randn([1, 2, 2, 2]))
model = Mish2()
save_data_and_model("mish_no_softplus", x, model)
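The `Mish2` module above writes mish without an explicit Softplus op. As a sanity sketch (illustrative function names, not part of the generator), the `log(exp(x) + 1)` form agrees numerically with the softplus formulation for moderate inputs:

```python
import numpy as np

def mish_softplus(x):
    # mish(x) = x * tanh(softplus(x)), softplus via the numerically safer log1p
    return x * np.tanh(np.log1p(np.exp(x)))

def mish_no_softplus(x):
    # the formulation used by the Mish2 module above
    return x * np.tanh(np.log(np.exp(x) + 1.0))

x = np.linspace(-4.0, 4.0, 9)
assert np.allclose(mish_softplus(x), mish_no_softplus(x))
assert mish_no_softplus(np.array([0.0]))[0] == 0.0  # mish(0) = 0
```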

class PadCalculation(nn.Module):
def forward(self, x):
y = F.max_pool2d(x, kernel_size=2)
@@ -1927,3 +2030,101 @@ def forward(self, x):
onnx.save(model, models_files)

########################## const / x ##########################

class OutputRegistration(nn.Module):
def __init__(self):
super(OutputRegistration, self).__init__()
self.c = torch.randn(2, 2)

def forward(self, a, b):
return (a + b) + self.c

a = Variable(torch.randn(2, 2))
b = Variable(torch.randn(2, 2))
model = OutputRegistration()
save_data_and_model_multy_inputs('output_registration', model, a, b)
model = onnx.load('models/output_registration.onnx')
model.graph.node[0].name = model.graph.output[0].name
onnx.save(model, 'models/output_registration.onnx')

# ########################## GEMM ##########################
# The original code is : https://github.com/onnx/onnx/blob/main/onnx/backend/test/case/node/gemm.py
def gemm_reference_implementation(A: np.ndarray, B: np.ndarray, C: Optional[np.ndarray] = None, alpha: float = 1., beta: float = 1., transA: int = 0,
transB: int = 0) -> np.ndarray:
A = A if transA == 0 else A.T
B = B if transB == 0 else B.T
C = C if C is not None else np.array(0)

Y = alpha * np.dot(A, B) + beta * C

return Y
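A quick self-contained check of the reference implementation above (values and the `gemm_ref` name are illustrative):

```python
import numpy as np

def gemm_ref(A, B, C=None, alpha=1.0, beta=1.0, transA=0, transB=0):
    # mirrors gemm_reference_implementation above
    A = A if transA == 0 else A.T
    B = B if transB == 0 else B.T
    C = C if C is not None else np.array(0)
    return alpha * np.dot(A, B) + beta * C

A = np.arange(6, dtype=np.float32).reshape(2, 3)
B = np.arange(12, dtype=np.float32).reshape(3, 4)
Y = gemm_ref(A, B)
assert Y.shape == (2, 4)
# passing B pre-transposed together with transB=1 must reproduce the same result
assert np.allclose(gemm_ref(A, np.ascontiguousarray(B.T), transB=1), Y)
```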

## gemm without transB
input_np = np.random.rand(2, 10).astype("float32")
inputs = [onnx.helper.make_tensor_value_info("input1", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input_np.dtype], shape=input_np.shape)]

weight_np = np.random.rand(10, 3).astype("float32")
weight_tensor = onnx.helper.make_tensor('weight_tensor', data_type=onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[weight_np.dtype], dims=weight_np.shape, vals=weight_np)

outputs = [onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, shape=(2, 3))]

nodes = [onnx.helper.make_node("Gemm", ["input1", "weight_tensor"], ["output"])]

graph = onnx.helper.make_graph(nodes,
"gemm_test",
inputs,
outputs, initializer=[weight_tensor])
gemm_model = onnx.helper.make_model(graph)
output_np = gemm_reference_implementation(input_np, weight_np)
save_data_and_onnx_model("gemm_no_transB", input_np, output_np, gemm_model)

## gemm with transB = 0

nodes2 = [onnx.helper.make_node("Gemm", ["input1", "weight_tensor"], ["output"], transB=0)]
graph2 = onnx.helper.make_graph(nodes2,
"gemm_test",
inputs,
outputs, initializer=[weight_tensor])
gemm_model2 = onnx.helper.make_model(graph2)
output_np = gemm_reference_implementation(input_np, weight_np)
save_data_and_onnx_model("gemm_transB_0", input_np, output_np, gemm_model2)

# ########################## ReduceSum with Dynamic Batch ##########################
input_np = np.random.rand(2, 4, 4, 4).astype("float32")
inputs = [onnx.helper.make_tensor_value_info("input1", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input_np.dtype], shape=('?', 4, 4, 4))]

axis_np = np.array([1]).astype(np.int64)
axis_tensor = onnx.helper.make_tensor('axis_tensor', data_type=onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[axis_np.dtype], dims=axis_np.shape, vals=axis_np)

outputs = [onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, shape=(2, 1, 4, 4))]

nodes = [onnx.helper.make_node("ReduceSum", ["input1", "axis_tensor"], ["output"], keepdims=1)]

graph = onnx.helper.make_graph(nodes,
"reduce_sum",
inputs,
outputs, initializer=[axis_tensor])
onnx_model = onnx.helper.make_model(graph)

output_np = np.sum(input_np, axis=1, keepdims=1)
save_data_and_onnx_model("reduce_sum_axis_dynamic_batch", input_np, output_np, onnx_model)
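The `'?'` in the input shape marks a dynamic batch dimension. A numpy sketch of what the generated model computes, checked for two batch sizes:

```python
import numpy as np

def reduce_sum_axis1(x):
    # numpy counterpart of the ReduceSum node above (axis supplied as a tensor, keepdims=1)
    return np.sum(x, axis=1, keepdims=True)

assert reduce_sum_axis1(np.ones((2, 4, 4, 4), np.float32)).shape == (2, 1, 4, 4)
# the dynamic batch ('?') should accept other leading sizes as well, e.g. 5
assert reduce_sum_axis1(np.ones((5, 4, 4, 4), np.float32)).shape == (5, 1, 4, 4)
```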


# ########################## DivBroadcast ##########################
input_np = np.random.rand(1, 4).astype("float32")
input2_np = np.random.rand(1, 1).astype(np.float32)
inputs = [onnx.helper.make_tensor_value_info("input1", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input_np.dtype], shape=input_np.shape), \
onnx.helper.make_tensor_value_info("input2", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input2_np.dtype], shape=input2_np.shape)]

outputs = [onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, shape=(1, 4))]

nodes = [onnx.helper.make_node("Div", ["input1", "input2"], ["output"])]

graph = onnx.helper.make_graph(nodes,
"div_test",
inputs,
outputs)
onnx_model = onnx.helper.make_model(graph)

output_np = input_np/input2_np
save_data_and_onnx_model_multy_inputs("div_test_1x1", [input_np, input2_np], output_np, onnx_model)
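The `(1, 1)` second input exercises numpy-style broadcasting in Div; a minimal sketch of the expected behaviour:

```python
import numpy as np

x = np.random.rand(1, 4).astype(np.float32)
s = np.random.rand(1, 1).astype(np.float32) + 0.5  # keep the divisor away from zero

y = x / s                      # the (1, 1) divisor broadcasts across all 4 columns
assert y.shape == (1, 4)
assert np.allclose(y * s, x, rtol=1e-5)
```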
Binary file added testdata/dnn/onnx/models/clip_init_max.onnx
Binary file not shown.
Binary file added testdata/dnn/onnx/models/clip_init_min.onnx
Binary file not shown.
Binary file added testdata/dnn/onnx/models/clip_init_min_max.onnx
Binary file not shown.
16 changes: 16 additions & 0 deletions testdata/dnn/onnx/models/div_test_1x1.onnx
@@ -0,0 +1,16 @@
(binary ONNX protobuf rendered as text; contents not reproduced here)
16 changes: 16 additions & 0 deletions testdata/dnn/onnx/models/gemm_no_transB.onnx
@@ -0,0 +1,16 @@
(binary ONNX protobuf rendered as text; contents not reproduced here)
Binary file added testdata/dnn/onnx/models/gemm_transB_0.onnx
Binary file not shown.
Binary file added testdata/dnn/onnx/models/mish_no_softplus.onnx
Binary file not shown.