diff --git a/testdata/cv/features2d/descriptor_extractors/descriptor-teblid b/testdata/cv/features2d/descriptor_extractors/descriptor-teblid
new file mode 100644
index 000000000..54af62290
Binary files /dev/null and b/testdata/cv/features2d/descriptor_extractors/descriptor-teblid differ
diff --git a/testdata/cv/freetype/mplus/Mplus1-Regular.ttf b/testdata/cv/freetype/mplus/Mplus1-Regular.ttf
new file mode 100644
index 000000000..7a48c8e89
Binary files /dev/null and b/testdata/cv/freetype/mplus/Mplus1-Regular.ttf differ
diff --git a/testdata/cv/freetype/mplus/OFL.txt b/testdata/cv/freetype/mplus/OFL.txt
new file mode 100644
index 000000000..b038fd111
--- /dev/null
+++ b/testdata/cv/freetype/mplus/OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2021 The M+ FONTS Project Authors (https://github.com/coz-m/MPLUS_FONTS)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://scripts.sil.org/OFL
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/testdata/cv/qrcode/issue_21287.png b/testdata/cv/qrcode/issue_21287.png
new file mode 100644
index 000000000..039ce4a78
Binary files /dev/null and b/testdata/cv/qrcode/issue_21287.png differ
diff --git a/testdata/dnn/onnx/data/input_clip_init_max.npy b/testdata/dnn/onnx/data/input_clip_init_max.npy
new file mode 100644
index 000000000..cfe650c4b
Binary files /dev/null and b/testdata/dnn/onnx/data/input_clip_init_max.npy differ
diff --git a/testdata/dnn/onnx/data/input_clip_init_min.npy b/testdata/dnn/onnx/data/input_clip_init_min.npy
new file mode 100644
index 000000000..ea34d4562
Binary files /dev/null and b/testdata/dnn/onnx/data/input_clip_init_min.npy differ
diff --git a/testdata/dnn/onnx/data/input_clip_init_min_max.npy b/testdata/dnn/onnx/data/input_clip_init_min_max.npy
new file mode 100644
index 000000000..68c2f70ff
Binary files /dev/null and b/testdata/dnn/onnx/data/input_clip_init_min_max.npy differ
diff --git a/testdata/dnn/onnx/data/input_div_test_1x1_0.npy b/testdata/dnn/onnx/data/input_div_test_1x1_0.npy
new file mode 100644
index 000000000..487769bcf
Binary files /dev/null and b/testdata/dnn/onnx/data/input_div_test_1x1_0.npy differ
diff --git a/testdata/dnn/onnx/data/input_div_test_1x1_1.npy b/testdata/dnn/onnx/data/input_div_test_1x1_1.npy
new file mode 100644
index 000000000..e3ffd0e06
Binary files /dev/null and b/testdata/dnn/onnx/data/input_div_test_1x1_1.npy differ
diff --git a/testdata/dnn/onnx/data/input_gemm_no_transB.npy b/testdata/dnn/onnx/data/input_gemm_no_transB.npy
new file mode 100644
index 000000000..b56cdfaa5
Binary files /dev/null and b/testdata/dnn/onnx/data/input_gemm_no_transB.npy differ
diff --git a/testdata/dnn/onnx/data/input_gemm_transB_0.npy b/testdata/dnn/onnx/data/input_gemm_transB_0.npy
new file mode 100644
index 000000000..b56cdfaa5
Binary files /dev/null and b/testdata/dnn/onnx/data/input_gemm_transB_0.npy differ
diff --git a/testdata/dnn/onnx/data/input_mish_no_softplus.npy b/testdata/dnn/onnx/data/input_mish_no_softplus.npy
new file mode 100644
index 000000000..5aba7419c
Binary files /dev/null and b/testdata/dnn/onnx/data/input_mish_no_softplus.npy differ
diff --git a/testdata/dnn/onnx/data/input_output_registration_0.npy b/testdata/dnn/onnx/data/input_output_registration_0.npy
new file mode 100644
index 000000000..a9e6eab81
Binary files /dev/null and b/testdata/dnn/onnx/data/input_output_registration_0.npy differ
diff --git a/testdata/dnn/onnx/data/input_output_registration_1.npy b/testdata/dnn/onnx/data/input_output_registration_1.npy
new file mode 100644
index 000000000..e25a5d2a5
Binary files /dev/null and b/testdata/dnn/onnx/data/input_output_registration_1.npy differ
diff --git a/testdata/dnn/onnx/data/input_quantized_conv_asymmetric_pads_int8_weights.npy b/testdata/dnn/onnx/data/input_quantized_conv_asymmetric_pads_int8_weights.npy
new file mode 100644
index 000000000..c3a122a51
Binary files /dev/null and b/testdata/dnn/onnx/data/input_quantized_conv_asymmetric_pads_int8_weights.npy differ
diff --git a/testdata/dnn/onnx/data/input_reduce_sum_axis_dynamic_batch.npy b/testdata/dnn/onnx/data/input_reduce_sum_axis_dynamic_batch.npy
new file mode 100644
index 000000000..15bec543a
Binary files /dev/null and b/testdata/dnn/onnx/data/input_reduce_sum_axis_dynamic_batch.npy differ
diff --git a/testdata/dnn/onnx/data/output_clip_init_max.npy b/testdata/dnn/onnx/data/output_clip_init_max.npy
new file mode 100644
index 000000000..19a77d1ed
Binary files /dev/null and b/testdata/dnn/onnx/data/output_clip_init_max.npy differ
diff --git a/testdata/dnn/onnx/data/output_clip_init_min.npy b/testdata/dnn/onnx/data/output_clip_init_min.npy
new file mode 100644
index 000000000..71d21f902
Binary files /dev/null and b/testdata/dnn/onnx/data/output_clip_init_min.npy differ
diff --git a/testdata/dnn/onnx/data/output_clip_init_min_max.npy b/testdata/dnn/onnx/data/output_clip_init_min_max.npy
new file mode 100644
index 000000000..3148c382e
Binary files /dev/null and b/testdata/dnn/onnx/data/output_clip_init_min_max.npy differ
diff --git a/testdata/dnn/onnx/data/output_div_test_1x1.npy b/testdata/dnn/onnx/data/output_div_test_1x1.npy
new file mode 100644
index 000000000..0192e0d45
Binary files /dev/null and b/testdata/dnn/onnx/data/output_div_test_1x1.npy differ
diff --git a/testdata/dnn/onnx/data/output_gemm_no_transB.npy b/testdata/dnn/onnx/data/output_gemm_no_transB.npy
new file mode 100644
index 000000000..f9ea2ed37
Binary files /dev/null and b/testdata/dnn/onnx/data/output_gemm_no_transB.npy differ
diff --git a/testdata/dnn/onnx/data/output_gemm_transB_0.npy b/testdata/dnn/onnx/data/output_gemm_transB_0.npy
new file mode 100644
index 000000000..f9ea2ed37
Binary files /dev/null and b/testdata/dnn/onnx/data/output_gemm_transB_0.npy differ
diff --git a/testdata/dnn/onnx/data/output_mish_no_softplus.npy b/testdata/dnn/onnx/data/output_mish_no_softplus.npy
new file mode 100644
index 000000000..2c92250c7
Binary files /dev/null and b/testdata/dnn/onnx/data/output_mish_no_softplus.npy differ
diff --git a/testdata/dnn/onnx/data/output_output_registration.npy b/testdata/dnn/onnx/data/output_output_registration.npy
new file mode 100644
index 000000000..08d8b42cc
Binary files /dev/null and b/testdata/dnn/onnx/data/output_output_registration.npy differ
diff --git a/testdata/dnn/onnx/data/output_quantized_conv_asymmetric_pads_int8_weights.npy b/testdata/dnn/onnx/data/output_quantized_conv_asymmetric_pads_int8_weights.npy
new file mode 100644
index 000000000..90e08a870
Binary files /dev/null and b/testdata/dnn/onnx/data/output_quantized_conv_asymmetric_pads_int8_weights.npy differ
diff --git a/testdata/dnn/onnx/data/output_reduce_sum_axis_dynamic_batch.npy b/testdata/dnn/onnx/data/output_reduce_sum_axis_dynamic_batch.npy
new file mode 100644
index 000000000..b43fc8657
Binary files /dev/null and b/testdata/dnn/onnx/data/output_reduce_sum_axis_dynamic_batch.npy differ
diff --git a/testdata/dnn/onnx/generate_onnx_models.py b/testdata/dnn/onnx/generate_onnx_models.py
index fa63b2682..8fa233464 100644
--- a/testdata/dnn/onnx/generate_onnx_models.py
+++ b/testdata/dnn/onnx/generate_onnx_models.py
@@ -13,7 +13,7 @@
 import onnxsim
 import google.protobuf.text_format
 import io
-
+from typing import Optional
 
 def assertExpected(s):
     if not (isinstance(s, str) or (sys.version_info[0] == 2 and isinstance(s, unicode))):
@@ -75,6 +75,41 @@ def save_onnx_data_and_model(input, output, name, operation, *args, **kwargs):
     model = onnx.helper.make_model(graph, producer_name=name)
     onnx.save(model, models_files)
 
+def save_data_and_onnx_model(name, input_np, output_np, onnx_model):
+    print(name + " input has sizes", input_np.shape)
+    input_files = os.path.join("data", "input_" + name)
+    np.save(input_files, input_np.data)
+
+    print(name + " output has sizes", output_np.shape)
+    print()
+    output_files = os.path.join("data", "output_" + name)
+    np.save(output_files, np.ascontiguousarray(output_np.data))
+
+    models_files = os.path.join("models", name + ".onnx")
+
+    onnx_model_pb = onnx._serialize(onnx_model)
+    model_def = assertONNXExpected(onnx_model_pb)
+    with open(models_files, 'wb') as file:
+        file.write(model_def.SerializeToString())
+
+def save_data_and_onnx_model_multy_inputs(name, input_list, output_np, onnx_model):
+    for index in range(len(input_list)):
+        print(name + " input "+str(index)+" has sizes", input_list[index].shape)
+        input_files = os.path.join("data", "input_" + name + "_" + str(index))
+        np.save(input_files, input_list[index])
+
+    print(name + " output has sizes", output_np.shape)
+    print()
+    output_files = os.path.join("data", "output_" + name)
+    np.save(output_files, np.ascontiguousarray(output_np.data))
+
+    models_files = os.path.join("models", name + ".onnx")
+
+    onnx_model_pb = onnx._serialize(onnx_model)
+    model_def = assertONNXExpected(onnx_model_pb)
+    with open(models_files, 'wb') as file:
+        file.write(model_def.SerializeToString())
+
 def simplify(name, rename=False, **kwargs):
     model, check = onnxsim.simplify(name, **kwargs)
     assert check, "couldn't valide"
@@ -575,6 +610,66 @@ def forward(self, x):
 input = Variable(torch.rand(1, 10, 2, 2))
 save_data_and_model('clip', input, model)
 
+########### clip_init ###########
+
+operation = "Clip"
+min = -0.5
+max = 0.5
+
+input = np.random.randn(3, 4, 5).astype(np.float32)
+output = np.clip(input, min, max)
+
+X = onnx.helper.make_tensor_value_info('input', onnx.TensorProto.FLOAT, [3, 4, 5])
+MIN = onnx.helper.make_tensor_value_info('min', onnx.TensorProto.FLOAT, [1])
+MAX = onnx.helper.make_tensor_value_info('max', onnx.TensorProto.FLOAT, [1])
+Y = onnx.helper.make_tensor_value_info('output', onnx.TensorProto.FLOAT, [3, 4, 5])
+MIN_INIT = onnx.helper.make_tensor("min", onnx.TensorProto.FLOAT, [1], np.array([min]))
+MAX_INIT = onnx.helper.make_tensor("max", onnx.TensorProto.FLOAT, [1], np.array([max]))
+
+name = "clip_init_min_max"
+input = np.random.randn(3, 4, 5).astype(np.float32)
+output = np.clip(input, min, max)
+
+input_files = os.path.join("data", "input_" + name)
+np.save(input_files, input.data)
+output_files = os.path.join("data", "output_" + name)
+np.save(output_files, np.ascontiguousarray(output.data))
+
+node = onnx.helper.make_node(operation, inputs=['input', "min", "max"], outputs=['output'])
+graph = onnx.helper.make_graph([node], name, [X, MIN, MAX], [Y], [MIN_INIT, MAX_INIT])
+model = onnx.helper.make_model(graph, producer_name=name)
+onnx.save(model, os.path.join("models", name + ".onnx"))
+
+name = "clip_init_min"
+input = np.random.randn(3, 4, 5).astype(np.float32)
+output = np.clip(input, min, None)
+
+input_files = os.path.join("data", "input_" + name)
+np.save(input_files, input.data)
+output_files = os.path.join("data", "output_" + name)
+np.save(output_files, np.ascontiguousarray(output.data))
+
+node = onnx.helper.make_node(operation, inputs=['input', "min", ""], outputs=['output'])
+graph = onnx.helper.make_graph([node], name, [X, MIN], [Y], [MIN_INIT])
+model = onnx.helper.make_model(graph, producer_name=name)
+onnx.save(model, os.path.join("models", name + ".onnx"))
+
+name = "clip_init_max"
+input = np.random.randn(3, 4, 5).astype(np.float32)
+output = np.clip(input, None, max)
+
+input_files = os.path.join("data", "input_" + name)
+np.save(input_files, input.data)
+output_files = os.path.join("data", "output_" + name)
+np.save(output_files, np.ascontiguousarray(output.data))
+
+node = onnx.helper.make_node(operation, inputs=['input', "", "max"], outputs=['output'])
+graph = onnx.helper.make_graph([node], name, [X, MAX], [Y], [MAX_INIT])
+model = onnx.helper.make_model(graph, producer_name=name)
+onnx.save(model, os.path.join("models", name + ".onnx"))
+
+#################################
+
 input = Variable(torch.randn(1, 3, 6, 6, 6))
 deconv = nn.ConvTranspose3d(3, 3, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(0, 0, 0), bias=False)
 save_data_and_model("deconv3d", input, deconv)
@@ -1426,14 +1521,14 @@ def forward(self, x):
 save_data_and_model("reduce_max", x, model)
 
 class ReduceMax(nn.Module):
-        def __init__(self, axes):
-            super(ReduceMax, self).__init__()
-            self.axes = axes
+    def __init__(self, axes):
+        super(ReduceMax, self).__init__()
+        self.axes = axes
 
-        def forward(self, x):
-            # torch.return_types.max(values, indices)
-            out = torch.max(x, dim=self.axes, keepdim=False)[0]
-            return out
+    def forward(self, x):
+        # torch.return_types.max(values, indices)
+        out = torch.max(x, dim=self.axes, keepdim=False)[0]
+        return out
 
 
 x = Variable(torch.randn(1, 3, 2, 2))
@@ -1743,6 +1838,14 @@ def forward(self, x):
 model = Mish()
 save_data_and_model("mish", x, model)
 
+class Mish2(nn.Module):
+    def forward(self, x):
+        return x * (torch.tanh(torch.log(torch.exp(x) + 1)))
+
+x = Variable(torch.randn([1, 2, 2, 2]))
+model = Mish2()
+save_data_and_model("mish_no_softplus", x, model)
+
 class PadCalculation(nn.Module):
     def forward(self, x):
         y = F.max_pool2d(x, kernel_size=2)
@@ -1927,3 +2030,101 @@ def forward(self, x):
 onnx.save(model, models_files)
 
 ########################## const / x ##########################
+
+class OutputRegistration(nn.Module):
+    def __init__(self):
+        super(OutputRegistration, self).__init__()
+        self.c = torch.randn(2, 2)
+
+    def forward(self, a, b):
+        return (a + b) + self.c
+
+a = Variable(torch.randn(2, 2))
+b = Variable(torch.randn(2, 2))
+model = OutputRegistration()
+save_data_and_model_multy_inputs('output_registration', model, a, b)
+model = onnx.load('models/output_registration.onnx')
+model.graph.node[0].name = model.graph.output[0].name
+onnx.save(model, 'models/output_registration.onnx')
+
+# ########################## GEMM ##########################
+# The original code is : https://github.com/onnx/onnx/blob/main/onnx/backend/test/case/node/gemm.py
+def gemm_reference_implementation(A: np.ndarray, B: np.ndarray, C: Optional[np.ndarray] = None, alpha: float = 1., beta: float = 1., transA: int = 0,
+                                  transB: int = 0) -> np.ndarray:
+    A = A if transA == 0 else A.T
+    B = B if transB == 0 else B.T
+    C = C if C is not None else np.array(0)
+
+    Y = alpha * np.dot(A, B) + beta * C
+
+    return Y
+
+## gemm without transB
+input_np = np.random.rand(2, 10).astype("float32")
+inputs = [onnx.helper.make_tensor_value_info("input1", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input_np.dtype], shape=input_np.shape)]
+
+weight_np = np.random.rand(10, 3).astype("float32")
+weight_tensor = onnx.helper.make_tensor('weight_tensor', data_type=onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[weight_np.dtype], dims=weight_np.shape, vals=weight_np)
+
+outputs = [onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, shape=(2, 3))]
+
+nodes = [onnx.helper.make_node("Gemm", ["input1", "weight_tensor"], ["output"])]
+
+graph = onnx.helper.make_graph(nodes,
+                               "gemm_test",
+                               inputs,
+                               outputs, initializer=[weight_tensor])
+gemm_model = onnx.helper.make_model(graph)
+output_np = gemm_reference_implementation(input_np, weight_np)
+save_data_and_onnx_model("gemm_no_transB", input_np, output_np, gemm_model)
+
+## gemm with transB = 0
+
+nodes2 = [onnx.helper.make_node("Gemm", ["input1", "weight_tensor"], ["output"], transB=0)]
+graph2 = onnx.helper.make_graph(nodes2,
+                                "gemm_test",
+                                inputs,
+                                outputs, initializer=[weight_tensor])
+gemm_model2 = onnx.helper.make_model(graph2)
+output_np = gemm_reference_implementation(input_np, weight_np)
+save_data_and_onnx_model("gemm_transB_0", input_np, output_np, gemm_model2)
+
+# ########################## ReduceSum with Dynamic Batch ##########################
+input_np = np.random.rand(2, 4, 4, 4).astype("float32")
+inputs = [onnx.helper.make_tensor_value_info("input1", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input_np.dtype], shape=('?', 4, 4, 4))]
+
+axis_np = np.array([1]).astype(np.int64)
+axis_tensor = onnx.helper.make_tensor('axis_tensor', data_type=onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[axis_np.dtype], dims=axis_np.shape, vals=axis_np)
+
+outputs = [onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, shape=(2, 1, 4, 4))]
+
+nodes = [onnx.helper.make_node("ReduceSum", ["input1", "axis_tensor"], ["output"], keepdims=1)]
+
+graph = onnx.helper.make_graph(nodes,
+                               "reduce_sum",
+                               inputs,
+                               outputs, initializer=[axis_tensor])
+onnx_model = onnx.helper.make_model(graph)
+
+output_np = np.sum(input_np, axis=1, keepdims=1)
+save_data_and_onnx_model("reduce_sum_axis_dynamic_batch", input_np, output_np, onnx_model)
+
+
+# ########################## DivBroadcast ##########################
+input_np = np.random.rand(1, 4).astype("float32")
+input2_np = np.random.rand(1, 1).astype(np.float32)
+inputs = [onnx.helper.make_tensor_value_info("input1", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input_np.dtype], shape=input_np.shape), \
+          onnx.helper.make_tensor_value_info("input2", onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input2_np.dtype], shape=input2_np.shape)]
+
+outputs = [onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, shape=(1, 4))]
+
+nodes = [onnx.helper.make_node("Div", ["input1", "input2"], ["output"])]
+
+graph = onnx.helper.make_graph(nodes,
+                               "div_test",
+                               inputs,
+                               outputs)
+onnx_model = onnx.helper.make_model(graph)
+
+output_np = input_np/input2_np
+save_data_and_onnx_model_multy_inputs("div_test_1x1", [input_np, input2_np], output_np, onnx_model)
diff --git a/testdata/dnn/onnx/models/clip_init_max.onnx b/testdata/dnn/onnx/models/clip_init_max.onnx
new file mode 100644
index 000000000..c2cd202f7
Binary files /dev/null and b/testdata/dnn/onnx/models/clip_init_max.onnx differ
diff --git a/testdata/dnn/onnx/models/clip_init_min.onnx b/testdata/dnn/onnx/models/clip_init_min.onnx
new file mode 100644
index 000000000..fe9f597e9
Binary files /dev/null and b/testdata/dnn/onnx/models/clip_init_min.onnx differ
diff --git a/testdata/dnn/onnx/models/clip_init_min_max.onnx b/testdata/dnn/onnx/models/clip_init_min_max.onnx
new file mode 100644
index 000000000..75e5a6545
Binary files /dev/null and b/testdata/dnn/onnx/models/clip_init_min_max.onnx differ
diff --git a/testdata/dnn/onnx/models/div_test_1x1.onnx b/testdata/dnn/onnx/models/div_test_1x1.onnx
new file mode 100644
index 000000000..52eee842e
--- /dev/null
+++ b/testdata/dnn/onnx/models/div_test_1x1.onnx
@@ -0,0 +1,16 @@
+:w
+
+input1
+input2output"Divdiv_testZ
+input1
+ 
+
+Z
+input2
+ 
+
+b
+output
+ 
+
+B
\ No newline at end of file
diff --git a/testdata/dnn/onnx/models/gemm_no_transB.onnx b/testdata/dnn/onnx/models/gemm_no_transB.onnx
new file mode 100644
index 000000000..07e47ff22
--- /dev/null
+++ b/testdata/dnn/onnx/models/gemm_no_transB.onnx
@@ -0,0 +1,16 @@
+:�
+�weight_node_outinput22"Constant*�
+value*�
+"x��z?��L?G�>��G?�9�=��#?4�>��q?ڗ?�N�>�s�>.4F?���>�?�<\?N�?c�?y�q?Ƌ.?k�>���>��2?��v=9�*?�+?�nW>A>��>L8�>B weight_tensor�
+'
+input1
+weight_node_outoutput"Gemm gemm_testZ
+input1
+ 
+
+
+b
+output
+ 
+
+B
\ No newline at end of file
diff --git a/testdata/dnn/onnx/models/gemm_transB_0.onnx b/testdata/dnn/onnx/models/gemm_transB_0.onnx
new file mode 100644
index 000000000..46bf7fe4a
Binary files /dev/null and b/testdata/dnn/onnx/models/gemm_transB_0.onnx differ
diff --git a/testdata/dnn/onnx/models/mish_no_softplus.onnx b/testdata/dnn/onnx/models/mish_no_softplus.onnx
new file mode 100644
index 000000000..510a99e85
Binary files /dev/null and b/testdata/dnn/onnx/models/mish_no_softplus.onnx differ
diff --git a/testdata/dnn/onnx/models/output_registration.onnx b/testdata/dnn/onnx/models/output_registration.onnx
new file mode 100644
index 000000000..12b67adb4
--- /dev/null
+++ b/testdata/dnn/onnx/models/output_registration.onnx
@@ -0,0 +1,22 @@
+pytorch1.9:�
+
+0
+124"Add
+?3
+Constant_1"Constant*$
+value*J��˿�D�>��A?m(�>�
+
+2
+34Add_2"Addtorch-jit-exportZ
+0
+ 
+
+Z
+1
+ 
+
+b
+4
+ 
+
+B
\ No newline at end of file
diff --git a/testdata/dnn/onnx/models/quantized_conv_asymmetric_pads_int8_weights.onnx b/testdata/dnn/onnx/models/quantized_conv_asymmetric_pads_int8_weights.onnx
new file mode 100644
index 000000000..21448447d
Binary files /dev/null and b/testdata/dnn/onnx/models/quantized_conv_asymmetric_pads_int8_weights.onnx differ
diff --git a/testdata/dnn/onnx/models/reduce_sum_axis_dynamic_batch.onnx b/testdata/dnn/onnx/models/reduce_sum_axis_dynamic_batch.onnx
new file mode 100644
index 000000000..078f835cb
--- /dev/null
+++ b/testdata/dnn/onnx/models/reduce_sum_axis_dynamic_batch.onnx
@@ -0,0 +1,18 @@
+:�
+9
+input1
+ axis_tensoroutput" ReduceSum*
+keepdims�
+reduce_sum*:B axis_tensorZ!
+input1
+
+?
+
+
+b!
+output
+
+?
+
+
+B
\ No newline at end of file
diff --git a/testdata/dnn/tensorflow/tf_graph_simplifier_buffer_overflow_net.pb b/testdata/dnn/tensorflow/tf_graph_simplifier_buffer_overflow_net.pb
deleted file mode 100644
index 9a6c03d5b..000000000
Binary files a/testdata/dnn/tensorflow/tf_graph_simplifier_buffer_overflow_net.pb and /dev/null differ