
Commit d6cacdd

Merge pull request #14 from kneron/add_script
Update toolchain to v0.20.1
2 parents 481e437 + 97bcb9c commit d6cacdd

File tree

13 files changed: +322 additions, -63 deletions


.gitignore

Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+__pycache__
 site/
+scripts/generated_tests/
 nohup.out
 error.log
+*.pyc

docs/toolchain/appendix/history.md

Lines changed: 5 additions & 0 deletions

@@ -21,6 +21,11 @@
 
 ## Toolchain Change log
 
+* **[v0.20.1]**
+    * Update toolchain example to MobileNet v2.
+    * Fix knerex bias adjustment.
+    * Fix knerex support for shared weights with the same name.
+    * Fix other bugs.
 * **[v0.20.0]**
     * Support text processing models.
     * Set flatbuffer as the default 720 compiling mode.

docs/toolchain/manual_1_overview.md

Lines changed: 22 additions & 20 deletions
@@ -4,8 +4,8 @@
 
 # 1. Toolchain Overview
 
-**2022 Nov**
-**Toolchain v0.20.0**
+**2023 Feb**
+**Toolchain v0.20.1**
 
 ## 1.1. Introduction
 
@@ -20,6 +20,11 @@ In this document, you'll learn:
 
 **Major changes of the current version**
 
+* **[v0.20.1]**
+    * Update toolchain example to MobileNet v2.
+    * Fix knerex bias adjustment.
+    * Fix knerex support for shared weights with the same name.
+    * Fix other bugs.
 * **[v0.20.0]**
     * Support text processing models.
     * Set flatbuffer as the default 720 compiling mode.
@@ -45,7 +50,7 @@ To keep the diagram as clear as possible, some details are omitted. But it is en
 2. Fixed-point model generation. Quantize the floating-point model and generate bie file. Test the bie file and compare the result with the previous step.
 3. Compilation. Batch compile multiple bie models into a nef format binary file. Test the nef file and compare the result with the previous step.
 
-In the following parts, we will use LittleNet as the example. Details will be explained later in other sections.
+In the following parts, we will use MobileNet V2 as the example. Details will be explained later in other sections.
 And all the code below in this section can be found inside the docker at `/workspace/examples/test_python_api.py`.
 
 ### 1.3. Toolchain Docker Deployment
@@ -76,7 +81,7 @@ In the following sections, we'll introduce the API and their usage. You can also
 
 **Note that this package is only available in the docker due to the dependency issue.**
 
-Here the LittleNet model is already in ONNX format. So, we only need to optimize the ONNX model to fit our toolchain.
+Here the MobileNet V2 model is already in ONNX format. So, we only need to optimize the ONNX model to fit our toolchain.
 The following model optimization code is in Python since we are using the Python API.
 
 ```python
@@ -85,7 +90,7 @@ import onnx
 import ktc
 
 # Load the model.
-original_m = onnx.load("/workspace/examples/LittleNet/LittleNet.onnx")
+original_m = onnx.load("/workspace/examples/mobilenetv2/mobilenetv2_zeroq.origin.onnx")
 # Optimize the model using optimizer for onnx model.
 optimized_m = ktc.onnx_optimizer.onnx2onnx_flow(original_m)
 # Save the onnx object optimized_m to path /data1/optimized.onnx.
@@ -99,9 +104,9 @@ IP evaluator is such a tool which can estimate the performance of your model and
 
 ```python
 # Create a ModelConfig object. For details about this class, please check Appendix Python API.
-# Here we set the model ID to 32769, the version to 0001 and the target platform to 520
+# Here we set the model ID to 32769, the version to 8b28 and the target platform to 720
 # The `optimized_m` is from the previous code block.
-km = ktc.ModelConfig(32769, "0001", "520", onnx_model=optimized_m)
+km = ktc.ModelConfig(32769, "8b28", "720", onnx_model=optimized_m)
 
 # Evaluate the model. The evaluation result is saved as string into `eval_result`.
 eval_result = km.evaluate()
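Since `km.evaluate()` returns its report as a plain string, post-processing it is ordinary string handling. A minimal sketch, assuming an invented report format — the `estimated FPS` line below is hypothetical, not the real IP evaluator output:

```python
import re

# Hypothetical report text for illustration only; in the toolchain docker
# the real string comes from `eval_result = km.evaluate()` and its exact
# format is not shown in this document.
eval_result = "model id: 32769\nestimated FPS: 123.4\n"

# Pull a single figure out of the multi-line report string.
match = re.search(r"estimated FPS:\s*([\d.]+)", eval_result)
fps = float(match.group(1)) if match else None
print(fps)  # -> 123.4
```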
@@ -128,18 +133,18 @@ import numpy as np
 def preprocess(input_file):
     image = Image.open(input_file)
     image = image.convert("RGB")
-    img_data = np.array(image.resize((112, 96), Image.BILINEAR)) / 255
+    img_data = np.array(image.resize((224, 224), Image.BILINEAR)) / 255
     img_data = np.transpose(img_data, (1, 0, 2))
     return img_data
 
 # Use the previous function to preprocess an example image as the input.
-input_data = [preprocess("/workspace/examples/LittleNet/pytorch_imgs/Abdullah_0001.png")]
+input_data = [preprocess("/workspace/examples/mobilenetv2/images/000007.jpg")]
 
 # The `onnx_file` is generated and saved in the previous code block.
 # The `input_names` are the input names of the model.
 # The `input_data` order should be kept corresponding to the input names. It should be in channel last format (HWC).
 # The inference result will be saved as a list of arrays.
-floating_point_inf_results = ktc.kneron_inference(input_data, onnx_file='/data1/optimized.onnx', input_names=["data_out"])
+floating_point_inf_results = ktc.kneron_inference(input_data, onnx_file='/data1/optimized.onnx', input_names=["images"])
 ```
 
 After getting the `floating_point_inf_results` and post-processing it, you may want to compare the result with the one generated by the source model.
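The shape contract of `preprocess` matters more than the exact pixels: `kneron_inference` expects channel-last input. A minimal sketch checking that contract with synthetic data — numpy only, with the image file and PIL calls replaced by a random array:

```python
import numpy as np

# Synthetic stand-in for a decoded 224x224 RGB image, since this sketch
# assumes no PIL and no example image from the docker.
fake_image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Same normalization as the preprocess() above: scale pixels into [0, 1].
img_data = fake_image / 255

# Same axis swap as preprocess(): (H, W, C) -> (W, H, C). The channel
# axis stays last, matching the channel-last (HWC) input requirement.
img_data = np.transpose(img_data, (1, 0, 2))

print(img_data.shape)  # -> (224, 224, 3)
```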
@@ -160,15 +165,12 @@ This is a very simple example usage. There are many more parameters for fine-tun
 
 ```python
 # Preprocess images as the quantization inputs. The preprocess function is defined in the previous section.
-input_images = [
-    preprocess("/workspace/examples/LittleNet/pytorch_imgs/Abdullah_0001.png"),
-    preprocess("/workspace/examples/LittleNet/pytorch_imgs/Abdullah_0002.png"),
-    preprocess("/workspace/examples/LittleNet/pytorch_imgs/Abdullah_0003.png"),
-    preprocess("/workspace/examples/LittleNet/pytorch_imgs/Abdullah_0004.png"),
-]
+import os
+raw_images = os.listdir("/workspace/examples/mobilenetv2/images")
+input_images = [preprocess("/workspace/examples/mobilenetv2/images/" + image_name) for image_name in raw_images]
 
 # We need to prepare a dictionary, which maps the input name to a list of preprocessed arrays.
-input_mapping = {"data_out": input_images}
+input_mapping = {"images": input_images}
 
 # Quantize the model. `km` is the ModelConfig object defined in the previous section.
 # The quantized model is saved as a bie file. The path to the bie file is returned as a string.
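One caveat with the `os.listdir` pattern added above: it returns names in arbitrary order, so the set of quantization inputs is stable but their order is not. A sketch of a reproducible variant, using a temporary directory of empty files in place of the docker's image folder:

```python
import os
import tempfile

# Hypothetical stand-in for /workspace/examples/mobilenetv2/images,
# populated with empty files so the listing logic can run anywhere.
image_dir = tempfile.mkdtemp()
for name in ["000009.jpg", "000007.jpg", "000008.jpg"]:
    open(os.path.join(image_dir, name), "w").close()

# os.listdir() gives no ordering guarantee; sorting makes the
# quantization input list reproducible across runs.
image_names = sorted(os.listdir(image_dir))
image_paths = [os.path.join(image_dir, n) for n in image_names]

print(image_names)  # -> ['000007.jpg', '000008.jpg', '000009.jpg']
```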
@@ -185,10 +187,10 @@ The python code would be like:
 ```python
 # Use the previous function to preprocess an example image as the input.
 # Here the input image is the same as in section 1.4.3.
-input_data = [preprocess("/workspace/examples/LittleNet/pytorch_imgs/Abdullah_0001.png")]
+input_data = [preprocess("/workspace/examples/mobilenetv2/images/000007.jpg")]
 
 # Inference with a bie file. `bie_path` is defined in section 1.5.1.
-fixed_point_inf_results = ktc.kneron_inference(input_data, bie_file=bie_path, input_names=["data_out"])
+fixed_point_inf_results = ktc.kneron_inference(input_data, bie_file=bie_path, input_names=["images"], platform=720)
 
 # Compare `fixed_point_inf_results` and `floating_point_inf_results` to check the precision loss.
 ```
@@ -222,7 +224,7 @@ We would use `ktc.kneron_inference` here again. And we are using the generated n
 
 ```python
 # `nef_path` is defined in section 1.6.1.
-binary_inf_results = ktc.kneron_inference(input_data, nef_file=nef_path)
+binary_inf_results = ktc.kneron_inference(input_data, nef_file=nef_path, platform=720)
 
 # Compare binary_inf_results and fixed_point_inf_results. They should be almost the same.
 ```
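Each stage above ends with "compare the results", but the diff never shows a comparison. A minimal sketch of one way to do it with numpy, using synthetic arrays in place of the actual `kneron_inference` outputs:

```python
import numpy as np

# Stand-ins for two stages' outputs; in the real flow these lists come
# from ktc.kneron_inference (e.g. fixed-point vs. nef results).
fixed_point_inf_results = [np.array([0.10, 0.70, 0.20])]
binary_inf_results = [np.array([0.10, 0.70, 0.20])]

# Compare stage outputs tensor by tensor via max absolute difference.
max_diffs = [float(np.max(np.abs(a - b)))
             for a, b in zip(fixed_point_inf_results, binary_inf_results)]

# "Almost the same" read as: within a small numeric tolerance.
print(all(d < 1e-5 for d in max_diffs))  # -> True
```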

docs/toolchain/manual_2_deploy.md

Lines changed: 2 additions & 2 deletions

@@ -37,8 +37,8 @@ You can use the following command to pull the latest toolchain docker.
 docker pull kneron/toolchain:latest
 ```
 
-Note that the latest toolchain version is v0.20.0. You can find the version of the toolchain in
-`/workspace/version.txt` inside the docker. If you find your toolchain is earlier than v0.20.0, you may need to find the
+Note that the latest toolchain version is v0.20.1. You can find the version of the toolchain in
+`/workspace/version.txt` inside the docker. If you find your toolchain is earlier than v0.20.1, you may need to find the
 document from the [manual history](appendix/history.md).
 
 ## 2.3. Toolchain Docker Overview
