
Commit aa0d235

Merge pull request #989 from zihaomu:qgemm_and_squeeze_opset13_onnximporter
2 parents: eafd787 + 4893662

9 files changed (+33, −1 lines)

testdata/dnn/onnx/data/README.md (22 additions, 0 deletions)

```diff
@@ -0,0 +1,22 @@
+### OpenCV: Open Source Computer Vision Library
+
+This repository contains extra data for the OpenCV library.
+
+#### Resources
+* Homepage: http://opencv.org
+* Docs: http://docs.opencv.org
+* Q&A forum: https://forum.opencv.org
+* previous forum (read only): http://answers.opencv.org
+* Issue tracking: https://github.com/opencv/opencv/issues
+
+#### Contributing
+
+Please read before starting work on a pull request: https://github.com/opencv/opencv/wiki/How_to_contribute
+
+Summary of guidelines:
+
+* One pull request per issue;
+* Choose the right base branch;
+* Include tests and documentation;
+* Clean up "oops" commits before submitting;
+* Follow the coding style guide.
```
Four binary files (140 Bytes, 224 Bytes, 140 Bytes, 224 Bytes) — not shown.

testdata/dnn/onnx/generate_onnx_models.py (1 addition, 0 deletions)

```diff
@@ -774,6 +774,7 @@ def forward(self, x):
 model = Squeeze()
 model.eval()
 save_data_and_model("squeeze", input, model)
+save_data_and_model("squeeze_axes_op13", input, model, version=13)
 
 class Div(nn.Module):
```
testdata/dnn/onnx/generate_quantized_onnx_models.py (10 additions, 1 deletion)

```diff
@@ -269,4 +269,13 @@ def forward(self, x):
     nn.Linear(84, 10)
 )
 input = Variable(torch.randn(1, 3, 32, 32))
-quantize_and_save_model("quantized_constant", input, model, wt_type="int8", per_channel=True)
+quantize_and_save_model("quantized_constant", input, model, wt_type="int8", per_channel=True)
+
+class Gemm(nn.Module):
+    def forward(self, x):
+        mat1 = torch.ones(3, 3)
+        return torch.mm(x, mat1)
+
+input = Variable(torch.randn(1, 3))
+model = Gemm()
+quantize_and_save_model("quantized_gemm", input, model, act_type="int8", wt_type="int8", per_channel=False)
```
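With `per_channel=False`, the whole weight tensor shares a single scale. A rough sketch of symmetric per-tensor int8 quantization, the general kind of mapping an `int8` weight type implies (the function names are illustrative, not the tool's API, and the symmetric zero-point-free scheme is an assumption, not necessarily what `quantize_and_save_model` produces):

```python
import numpy as np

def quantize_per_tensor(x, qmin=-128, qmax=127):
    """One scale for the whole tensor (per_channel=False),
    symmetric around zero: q = round(x / scale)."""
    scale = max(abs(float(x.min())), abs(float(x.max()))) / qmax
    q = np.clip(np.round(x / scale), qmin, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float tensor."""
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.0], [2.0, 0.25]], dtype=np.float32)
q, s = quantize_per_tensor(w)
err = np.abs(dequantize(q, s) - w).max()
```

The round trip loses at most about half a quantization step per element, which is what the DNN importer's accuracy tests for `quantized_gemm` have to tolerate.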
Two binary files (806 Bytes, 182 Bytes) — not shown.
