This sample contains code that performs TensorRT inference on Jetson.
- Download ONNX U^2-Net Model from PINTO_model_zoo.
- Convert the ONNX model to a serialized TensorRT engine and run inference on Jetson.
- Jetson
- JetPack 4.6
Clone the PINTO_model_zoo repository and download the U^2-Net human-segmentation model.
```shell
git clone https://github.com/PINTO0309/PINTO_model_zoo.git
cd PINTO_model_zoo/061_U-2-Net/30_human_segmentation/
./download_320x320.sh
```
Check that trtexec can parse and run the model:
```shell
/usr/src/tensorrt/bin/trtexec --onnx=./saved_model_320x320/u2net_human_seg_320x320.onnx
```
Install pycuda. See the pycuda documentation for details:
```shell
sudo apt install python3-dev
pip3 install --user cython
pip3 install --global-option=build_ext --global-option="-I/usr/local/cuda/include" --global-option="-L/usr/local/cuda/lib64" pycuda
```
Clone this repository and initialize its submodules:
```shell
cd ~
git clone https://github.com/NobuoTsukamoto/tensorrt-examples
cd tensorrt-examples
git submodule update --init --recursive
```
Copy u2net_human_seg_320x320.onnx to tensorrt-examples/models:
```shell
cp ~/PINTO_model_zoo/061_U-2-Net/30_human_segmentation/saved_model_320x320/u2net_human_seg_320x320.onnx ~/tensorrt-examples/models/
```
Convert the ONNX model to a serialized TensorRT engine. If you want an FP16 engine, add --fp16 to the arguments of convert_onnxgs2trt.py:
```shell
cd ~/tensorrt-examples/python/utils
python3 convert_onnxgs2trt.py \
    --model /home/jetson/tensorrt-examples/models/u2net_human_seg_320x320.onnx \
    --output /home/jetson/tensorrt-examples/models/u2net_human_seg_320x320.trt \
    --fp16
```
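As an alternative to the conversion script, trtexec itself can serialize an engine with its standard `--saveEngine` option. A roughly equivalent invocation (paths reused from the steps above) would be:

```shell
/usr/src/tensorrt/bin/trtexec \
    --onnx=/home/jetson/tensorrt-examples/models/u2net_human_seg_320x320.onnx \
    --saveEngine=/home/jetson/tensorrt-examples/models/u2net_human_seg_320x320.trt \
    --fp16
```

Note that engines are specific to the TensorRT version and GPU they were built on, so build the engine on the Jetson that will run it.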
Finally, run the demo:
```shell
python3 trt_u2net.py \
    --model ../../models/u2net_human_seg_320x320.trt
```
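For reference, U^2-Net-style demos typically normalize each 320x320 RGB frame to a float32 NCHW tensor before inference and min-max normalize the predicted saliency map afterward. The sketch below shows that pre/post-processing with NumPy only; the function names and the 0.5 threshold are illustrative assumptions, not the exact code in trt_u2net.py:

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image (already resized to 320x320)
    into the 1x3x320x320 float32 tensor layout the engine expects."""
    x = frame.astype(np.float32) / 255.0        # scale to [0, 1]
    x = x.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW, add batch dim
    return np.ascontiguousarray(x)              # contiguous buffer for CUDA copy

def postprocess(pred: np.ndarray) -> np.ndarray:
    """Min-max normalize the 1x1x320x320 saliency output and threshold
    it into a binary uint8 human-segmentation mask."""
    p = pred.squeeze()
    p = (p - p.min()) / (p.max() - p.min() + 1e-8)  # avoid divide-by-zero
    return (p > 0.5).astype(np.uint8) * 255         # illustrative threshold
```

The resulting mask can then be used to blend or cut out the person region from the original frame.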