-This tutorial takes `mmdeploy-0.13.0-windows-amd64.zip` and `mmdeploy-0.13.0-windows-amd64-cuda11.3.zip` as examples to show how to use the prebuilt packages. The former supports onnxruntime cpu inference, the latter supports onnxruntime-gpu and tensorrt inference.
+This tutorial takes `mmdeploy-0.14.0-windows-amd64.zip` and `mmdeploy-0.14.0-windows-amd64-cuda11.3.zip` as examples to show how to use the prebuilt packages. The former supports onnxruntime cpu inference, the latter supports onnxruntime-gpu and tensorrt inference.

 The directory structure of the prebuilt package is as follows, where the `dist` folder is about model converter, and the `sdk` folder is related to model inference.
@@ -81,8 +81,8 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
 5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).

    ```bash
-   pip install mmdeploy==0.13.0
-   pip install mmdeploy-runtime==0.13.0
+   pip install mmdeploy==0.14.0
+   pip install mmdeploy-runtime==0.14.0
    ```

 :point_right: If you have installed it before, please uninstall it first.
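After installing (and uninstalling any older version first), you can confirm what pip actually resolved. A minimal sketch using only the standard library; the package names are the ones from the step above, and both should report `0.14.0` after a successful install:

```python
# Sanity check (a sketch, not part of the official tutorial): query pip
# metadata for the two packages installed above.
from importlib import metadata
from typing import Optional

def installed_version(dist: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

for dist in ("mmdeploy", "mmdeploy-runtime"):
    print(dist, installed_version(dist) or "not installed")
```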
@@ -100,7 +100,7 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
:exclamation: Restart PowerShell so that the environment variable settings take effect. You can check whether they are in effect with `echo $env:PATH`.
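The `echo $env:PATH` check can also be scripted. A small cross-check (a sketch, nothing beyond the standard library): report whether any `PATH` entry contains a given substring, such as the directory you added for the ONNX Runtime or TensorRT libraries. The substring `onnxruntime` below is illustrative; the actual directory name depends on where you unpacked the packages.

```python
import os

def on_path(substring: str) -> bool:
    """Return True if any PATH entry contains `substring` (case-insensitive)."""
    return any(substring.lower() in entry.lower()
               for entry in os.environ.get("PATH", "").split(os.pathsep))

# Illustrative check; replace "onnxruntime" with your actual library directory.
print(on_path("onnxruntime"))
```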