Commit 8a59bae

Translate onnxruntime.md and tensorrt.md (#2320)
1 parent a279464 commit 8a59bae

2 files changed: +67 -67 lines changed

Lines changed: 37 additions & 37 deletions
@@ -1,42 +1,42 @@
 # onnxruntime 支持情况
 
-## Introduction of ONNX Runtime
+## ONNX Runtime 介绍
 
-**ONNX Runtime** is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Check its [github](https://github.com/microsoft/onnxruntime) for more information.
+**ONNX Runtime** 是一个跨平台的推理和训练加速器,与许多流行的ML/DNN框架兼容。查看其[github](https://github.com/microsoft/onnxruntime)以获取更多信息。
 
-## Installation
+## 安装
 
-*Please note that only **onnxruntime>=1.8.1** of on Linux platform is supported by now.*
+*请注意,目前Linux平台只支持 **onnxruntime>=1.8.1**。*
 
-### Install ONNX Runtime python package
+### 安装ONNX Runtime python包
 
-- CPU Version
+- CPU 版本
 
 ```bash
-pip install onnxruntime==1.8.1 # if you want to use cpu version
+pip install onnxruntime==1.8.1 # 如果你想用cpu版本
 ```
 
-- GPU Version
+- GPU 版本
 
 ```bash
-pip install onnxruntime-gpu==1.8.1 # if you want to use gpu version
+pip install onnxruntime-gpu==1.8.1 # 如果你想用gpu版本
 ```
 
-### Install float16 conversion tool (optional)
+### 安装float16转换工具(可选)
 
-If you want to use float16 precision, install the tool by running the following script:
+如果你想用float16精度,请执行以下脚本安装工具:
 
 ```bash
 pip install onnx onnxconverter-common
 ```
 
-## Build custom ops
+## 构建自定义算子
 
-### Download ONNXRuntime Library
+### 下载ONNXRuntime库
 
-Download `onnxruntime-linux-*.tgz` library from ONNX Runtime [releases](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1), extract it, expose `ONNXRUNTIME_DIR` and finally add the lib path to `LD_LIBRARY_PATH` as below:
+从ONNX Runtime[发布版本](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1)下载`onnxruntime-linux-*.tgz`库,并解压,将onnxruntime所在路径添加到`ONNXRUNTIME_DIR`环境变量,最后将lib路径添加到`LD_LIBRARY_PATH`环境变量中,操作如下:
 
-- CPU Version
+- CPU 版本
 
 ```bash
 wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
@@ -47,7 +47,7 @@ export ONNXRUNTIME_DIR=$(pwd)
 export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
 ```
 
-- GPU Version
+- GPU 版本
 
 ```bash
 wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-gpu-1.8.1.tgz
@@ -58,49 +58,49 @@ export ONNXRUNTIME_DIR=$(pwd)
 export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
 ```
 
-### Build on Linux
+### 在Linux上构建
 
-- CPU Version
+- CPU 版本
 
 ```bash
-cd ${MMDEPLOY_DIR} # To MMDeploy root directory
+cd ${MMDEPLOY_DIR} # 进入MMDeploy根目录
 mkdir -p build && cd build
 cmake -DMMDEPLOY_TARGET_DEVICES='cpu' -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
 make -j$(nproc) && make install
 ```
 
-- GPU Version
+- GPU 版本
 
 ```bash
-cd ${MMDEPLOY_DIR} # To MMDeploy root directory
+cd ${MMDEPLOY_DIR} # 进入MMDeploy根目录
 mkdir -p build && cd build
 cmake -DMMDEPLOY_TARGET_DEVICES='cuda' -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
 make -j$(nproc) && make install
 ```
 
-## How to convert a model
+## 如何转换模型
 
-- You could follow the instructions of tutorial [How to convert model](../02-how-to-run/convert_model.md)
+- 你可以按照教程[如何转换模型](../02-how-to-run/convert_model.md)的说明去做
 
-## How to add a new custom op
+## 如何添加新的自定义算子
 
-## Reminder
+## 提示
 
-- The custom operator is not included in [supported operator list](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) in ONNX Runtime.
-- The custom operator should be able to be exported to ONNX.
+- 自定义算子不包含在ONNX Runtime[支持的算子列表](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md)中。
+- 自定义算子应该能够导出到ONNX。
 
-#### Main procedures
+#### 主要过程
 
-Take custom operator `roi_align` for example.
+以自定义操作符`roi_align`为例。
 
-1. Create a `roi_align` directory in ONNX Runtime source directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/`
-2. Add header and source file into `roi_align` directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/roi_align/`
-3. Add unit test into `tests/test_ops/test_ops.py`
-Check [here](../../../tests/test_ops/test_ops.py) for examples.
+1. 在ONNX Runtime源目录`${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/`中创建一个`roi_align`目录
+2. 添加头文件和源文件到`roi_align`目录`${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/roi_align/`
+3. 将单元测试添加到`tests/test_ops/test_ops.py`中。
+查看[这里](../../../tests/test_ops/test_ops.py)的例子。
 
-**Finally, welcome to send us PR of adding custom operators for ONNX Runtime in MMDeploy.** :nerd_face:
+**最后,欢迎发送为MMDeploy添加ONNX Runtime自定义算子的PR。** :nerd_face:
 
-## References
+## 参考
 
-- [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
-- [How to add a custom operator/kernel in ONNX Runtime](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html)
+- [如何将具有自定义op的Pytorch模型导出为ONNX并在ONNX Runtime运行](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
+- [如何在ONNX Runtime添加自定义算子/内核](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html)
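For reference, the float16 conversion tool installed above (`onnx` plus `onnxconverter-common`) is applied to an already-exported ONNX model via `convert_float_to_float16`. A minimal sketch, assuming a placeholder model file `end2end.onnx` that is not named in the diff:

```python
# Minimal float16 conversion sketch using the packages installed above.
# The input/output file names are placeholders, not taken from the diff.
import onnx
from onnxconverter_common import float16

model_fp32 = onnx.load("end2end.onnx")                     # exported FP32 ONNX model
model_fp16 = float16.convert_float_to_float16(model_fp32)  # cast tensors to FP16 where possible
onnx.save(model_fp16, "end2end_fp16.onnx")                 # save the converted model
```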

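Similarly, once the custom ops above are built with `make install`, the resulting shared library can be registered with the ONNX Runtime Python API before creating an inference session. A minimal sketch; the library path `build/lib/libmmdeploy_onnxruntime_ops.so` is an assumption about the build output, not something stated in the diff:

```python
# Sketch: register the MMDeploy ONNX Runtime custom-op library with a session.
# The .so path below is an assumed build artifact; adjust to the actual output.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.register_custom_ops_library("build/lib/libmmdeploy_onnxruntime_ops.so")
sess = ort.InferenceSession("end2end.onnx", opts,
                            providers=["CPUExecutionProvider"])
print([i.name for i in sess.get_inputs()])  # custom ops such as roi_align now resolve at load time
```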
docs/zh_cn/05-supported-backends/tensorrt.md

Lines changed: 30 additions & 30 deletions
@@ -1,57 +1,57 @@
 # TensorRT 支持情况
 
-## Installation
+## 安装
 
-### Install TensorRT
+### 安装TensorRT
 
-Please install TensorRT 8 follow [install-guide](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing).
+请按照[安装指南](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing)安装TensorRT8。
 
-**Note**:
+**注意**:
 
-- `pip Wheel File Installation` is not supported yet in this repo.
+- 本仓库目前还不支持`pip Wheel File Installation`。
 
-- We strongly suggest you install TensorRT through [tar file](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar)
+- 我们强烈建议通过[tar包](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar)的方式安装TensorRT。
 
-- After installation, you'd better add TensorRT environment variables to bashrc by:
+- 安装完成后,最好通过以下方式将TensorRT环境变量添加到bashrc:
 
 ```bash
-cd ${TENSORRT_DIR} # To TensorRT root directory
+cd ${TENSORRT_DIR} # 进入TensorRT根目录
 echo '# set env for TensorRT' >> ~/.bashrc
 echo "export TENSORRT_DIR=${TENSORRT_DIR}" >> ~/.bashrc
 echo 'export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$TENSORRT_DIR' >> ~/.bashrc
 source ~/.bashrc
 ```
 
-### Build custom ops
+### 构建自定义算子
 
-Some custom ops are created to support models in OpenMMLab, and the custom ops can be built as follow:
+OpenMMLab中创建了一些自定义算子来支持模型,自定义算子可以如下构建:
 
 ```bash
-cd ${MMDEPLOY_DIR} # To MMDeploy root directory
+cd ${MMDEPLOY_DIR} # 进入MMDeploy根目录
 mkdir -p build && cd build
 cmake -DMMDEPLOY_TARGET_BACKENDS=trt ..
 make -j$(nproc)
 ```
 
-If you haven't installed TensorRT in the default path, Please add `-DTENSORRT_DIR` flag in CMake.
+如果你没有在默认路径下安装TensorRT,请在CMake中添加`-DTENSORRT_DIR`标志。
 
 ```bash
 cmake -DMMDEPLOY_TARGET_BACKENDS=trt -DTENSORRT_DIR=${TENSORRT_DIR} ..
 make -j$(nproc) && make install
 ```
 
-## Convert model
+## 转换模型
 
-Please follow the tutorial in [How to convert model](../02-how-to-run/convert_model.md). **Note** that the device must be `cuda` device.
+请遵循[如何转换模型](../02-how-to-run/convert_model.md)中的教程。**注意**:设备必须是`cuda`设备。
 
-### Int8 Support
+### Int8 支持
 
-Since TensorRT supports INT8 mode, a custom dataset config can be given to calibrate the model. Following is an example for MMDetection:
+由于TensorRT支持INT8模式,因此可以提供自定义数据集配置来校准模型。MMDetection的示例如下:
 
 ```python
 # calibration_dataset.py
 
-# dataset settings, same format as the codebase in OpenMMLab
+# 数据集设置,格式与OpenMMLab中的代码库相同
 dataset_type = 'CalibrationDataset'
 data_root = 'calibration/dataset/root'
 img_norm_cfg = dict(
@@ -85,32 +85,32 @@ data = dict(
 evaluation = dict(interval=1, metric='bbox')
 ```
 
-Convert your model with this calibration dataset:
+使用此校准数据集转换您的模型:
 
 ```python
 python tools/deploy.py \
     ...
     --calib-dataset-cfg calibration_dataset.py
 ```
 
-If the calibration dataset is not given, the data will be calibrated with the dataset in model config.
+如果没有提供校准数据集,则使用模型配置中的数据集进行校准。
 
 ## FAQs
 
-- Error `Cannot found TensorRT headers` or `Cannot found TensorRT libs`
+- 错误 `Cannot found TensorRT headers` 或 `Cannot found TensorRT libs`
 
-Try cmake with flag `-DTENSORRT_DIR`:
+可以尝试在cmake时使用`-DTENSORRT_DIR`标志:
 
 ```bash
 cmake -DBUILD_TENSORRT_OPS=ON -DTENSORRT_DIR=${TENSORRT_DIR} ..
 make -j$(nproc)
 ```
 
-Please make sure there are libs and headers in `${TENSORRT_DIR}`.
+请确保 `${TENSORRT_DIR}`中有库和头文件。
 
-- Error `error: parameter check failed at: engine.cpp::setBindingDimensions::1046, condition: profileMinDims.d[i] <= dimensions.d[i]`
+- 错误 `error: parameter check failed at: engine.cpp::setBindingDimensions::1046, condition: profileMinDims.d[i] <= dimensions.d[i]`
 
-There is an input shape limit in deployment config:
+在部署配置中有一个输入形状的限制:
 
 ```python
 backend_config = dict(
@@ -126,14 +126,14 @@ If the calibration dataset is not given, the data will be calibrated with the da
 # other configs
 ```
 
-The shape of the tensor `input` must be limited between `input_shapes["input"]["min_shape"]` and `input_shapes["input"]["max_shape"]`.
+`input` 张量的形状必须限制在`input_shapes["input"]["min_shape"]`和`input_shapes["input"]["max_shape"]`之间。
 
-- Error `error: [TensorRT] INTERNAL ERROR: Assertion failed: cublasStatus == CUBLAS_STATUS_SUCCESS`
+- 错误 `error: [TensorRT] INTERNAL ERROR: Assertion failed: cublasStatus == CUBLAS_STATUS_SUCCESS`
 
-TRT 7.2.1 switches to use cuBLASLt (previously it was cuBLAS). cuBLASLt is the default choice for SM version >= 7.0. However, you may need CUDA-10.2 Patch 1 (Released Aug 26, 2020) to resolve some cuBLASLt issues. Another option is to use the new TacticSource API and disable cuBLASLt tactics if you don't want to upgrade.
+TRT 7.2.1切换到使用cuBLASLt(以前是cuBLAS)。cuBLASLt是SM版本>= 7.0的默认选择。但是,您可能需要CUDA-10.2补丁1(2020年8月26日发布)来解决一些cuBLASLt问题。如果不想升级,另一个选择是使用新的TacticSource API并禁用cuBLASLt策略。
 
-Read [this](https://forums.developer.nvidia.com/t/matrixmultiply-failed-on-tensorrt-7-2-1/158187/4) for detail.
+请阅读[本文](https://forums.developer.nvidia.com/t/matrixmultiply-failed-on-tensorrt-7-2-1/158187/4)了解详情。
 
-- Install mmdeploy on Jetson
+- 在Jetson上安装mmdeploy
 
-We provide a tutorial to get start on Jetsons [here](../01-how-to-build/jetsons.md).
+我们在[这里](../01-how-to-build/jetsons.md)提供了一个Jetsons入门教程。
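As background for the input-shape FAQ above (the `backend_config` snippet is split across hunks), an MMDeploy TensorRT deployment config bounds every input with `min_shape`/`opt_shape`/`max_shape`. A sketch with illustrative values only; the concrete shapes are not taken from the diff, and the real configs live under `configs/` in the repository:

```python
# Illustrative TensorRT backend config with explicit shape bounds.
# Shape values are examples only, not the repository defaults.
backend_config = dict(
    type='tensorrt',
    common_config=dict(fp16_mode=False, max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 320, 320],      # smallest shape the engine accepts
                    opt_shape=[1, 3, 800, 1344],     # shape the engine is tuned for
                    max_shape=[1, 3, 1344, 1344])))  # largest shape the engine accepts
    ])
# Any tensor bound to `input` must fall between min_shape and max_shape,
# which is what the setBindingDimensions error above reports when violated.
```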
