@@ -58,18 +58,18 @@ The supported Device-Platform-InferenceBackend matrix is presented as following,

The benchmark can be found from [here](docs/en/03-benchmark/benchmark.md)

-| Device / Platform | Linux | Windows | macOS | Android |
-| ----------------- | --------------------------------------------------------------- | --------------------------------------- | -------- | ---------------- |
-| x86_64 CPU | ✔️ONNX Runtime<br>✔️pplnn<br>✔️ncnn<br>✔️OpenVINO<br>✔️LibTorch | ✔️ONNX Runtime<br>✔️OpenVINO | - | - |
-| ARM CPU | ✔️ncnn | - | - | ✔️ncnn |
-| RISC-V | ✔️ncnn | - | - | - |
-| NVIDIA GPU | ✔️ONNX Runtime<br>✔️TensorRT<br>✔️pplnn<br>✔️LibTorch | ✔️ONNX Runtime<br>✔️TensorRT<br>✔️pplnn | - | - |
-| NVIDIA Jetson | ✔️TensorRT | ✔️TensorRT | - | - |
-| Huawei ascend310 | ✔️CANN | - | - | - |
-| Rockchip | ✔️RKNN | - | - | - |
-| Apple M1 | - | - | ✔️CoreML | - |
-| Adreno GPU | - | - | - | ✔️ncnn<br>✔️SNPE |
-| Hexagon DSP | - | - | - | ✔️SNPE |
+| Device / Platform | Linux | Windows | macOS | Android |
+| ----------------- | ------------------------------------------------------------------------ | --------------------------------------- | -------- | ---------------- |
+| x86_64 CPU | ✔️ONNX Runtime<br>✔️pplnn<br>✔️ncnn<br>✔️OpenVINO<br>✔️LibTorch<br>✔️TVM | ✔️ONNX Runtime<br>✔️OpenVINO | - | - |
+| ARM CPU | ✔️ncnn | - | - | ✔️ncnn |
+| RISC-V | ✔️ncnn | - | - | - |
+| NVIDIA GPU | ✔️ONNX Runtime<br>✔️TensorRT<br>✔️pplnn<br>✔️LibTorch<br>✔️TVM | ✔️ONNX Runtime<br>✔️TensorRT<br>✔️pplnn | - | - |
+| NVIDIA Jetson | ✔️TensorRT | ✔️TensorRT | - | - |
+| Huawei ascend310 | ✔️CANN | - | - | - |
+| Rockchip | ✔️RKNN | - | - | - |
+| Apple M1 | - | - | ✔️CoreML | - |
+| Adreno GPU | - | - | - | ✔️ncnn<br>✔️SNPE |
+| Hexagon DSP | - | - | - | ✔️SNPE |

### Efficient and scalable C/C++ SDK Framework
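To see how a checked cell in the matrix above is exercised at inference time, here is a minimal sketch using the SDK's Python binding `mmdeploy_runtime` (named `mmdeploy_python` in older releases). It assumes a detection model already converted into an SDK model directory via `tools/deploy.py` with `--dump-info`; the model directory and image path are illustrative placeholders.

```python
# Minimal sketch: run a converted detection model through the MMDeploy SDK's
# Python binding. device_name must match a backend/platform pair that is
# checked in the matrix above, e.g. "cpu" for ONNX Runtime on x86_64 Linux
# or "cuda" for TensorRT on an NVIDIA GPU.
import cv2
from mmdeploy_runtime import Detector

img = cv2.imread("demo.jpg")  # illustrative test image

# "work_dir/detector" is a hypothetical SDK model directory produced by
# tools/deploy.py --dump-info; point it at a real converted model.
detector = Detector(model_path="work_dir/detector", device_name="cuda", device_id=0)

# The detector returns bounding boxes (N x 5, last column is the score),
# class labels (N,), and instance masks (empty for pure bbox models).
bboxes, labels, masks = detector(img)
print(bboxes.shape, labels.shape)
```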