Description
I tried following the instructions to build TensorRT-OSS using the Ubuntu 22.04 container and eventually got a CMake error and a linker error, both complaining that libnvinfer couldn't be found. CMake complained that it couldn't find nvinfer_LIB_PATH-NOTFOUND. The build error mentioned "cannot find -lnvinfer_LIB_PATH-NOTFOUND: No such file or directory".
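For context, here is a minimal sketch (not the TensorRT build itself) showing why the linker reports "cannot find -l<name>": ld only searches the directories it is given, so a library copied to an unexpected path is invisible to it. The names foo, answer, and /tmp/libdemo are made up for illustration.

```shell
# Build a toy shared library in a directory the linker does not search.
mkdir -p /tmp/libdemo
echo 'int answer(void) { return 42; }' > /tmp/libdemo/foo.c
gcc -shared -fPIC -o /tmp/libdemo/libfoo.so /tmp/libdemo/foo.c
echo 'int answer(void); int main(void) { return answer() == 42 ? 0 : 1; }' > /tmp/libdemo/main.c

# Fails: /tmp/libdemo is not on the linker's default search path.
gcc /tmp/libdemo/main.c -lfoo -o /tmp/libdemo/a.out 2>/dev/null \
    || echo "link failed without -L"

# Succeeds once the directory is passed explicitly, analogous to
# pointing TRT_LIBPATH at the directory that actually holds libnvinfer.
gcc /tmp/libdemo/main.c -L/tmp/libdemo -lfoo -o /tmp/libdemo/a.out \
    && echo "link ok with -L"
```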
This error appears to have been introduced in a471b2a, which changed docker/ubuntu-22.04.Dockerfile to copy the downloaded files to /usr/lib64 instead of /usr/lib/x86_64-linux-gnu without updating the value of TRT_LIBPATH:
git diff a471b2aed8c82b4e0a2386630785bda9ee0fb4e0^ a471b2aed8c82b4e0a2386630785bda9ee0fb4e0 docker/ubuntu-22.04.Dockerfile
diff --git a/docker/ubuntu-22.04.Dockerfile b/docker/ubuntu-22.04.Dockerfile
index e3370077..98c1ac7b 100644
--- a/docker/ubuntu-22.04.Dockerfile
+++ b/docker/ubuntu-22.04.Dockerfile
@@ -15,12 +15,12 @@
...SNIP...
# Install TensorRT
-RUN if [ "${CUDA_VERSION:0:2}" = "11" ]; then \
- wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.13.0/tars/TensorRT-10.13.0.35.Linux.x86_64-gnu.cuda-11.8.tar.gz \
- && tar -xf TensorRT-10.13.0.35.Linux.x86_64-gnu.cuda-11.8.tar.gz \
- && cp -a TensorRT-10.13.0.35/lib/*.so* /usr/lib/x86_64-linux-gnu \
- && pip install TensorRT-10.13.0.35/python/tensorrt-10.13.0.35-cp310-none-linux_x86_64.whl ;\
+RUN if [ "${CUDA_VERSION:0:2}" = "13" ]; then \
+ wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.13.2/tars/TensorRT-10.13.2.6.Linux.x86_64-gnu.cuda-13.0.tar.gz \
+ && tar -xf TensorRT-10.13.2.6.Linux.x86_64-gnu.cuda-13.0.tar.gz \
+ && cp -a TensorRT-10.13.2.6/lib/*.so* /usr/lib64 \
+ && pip install TensorRT-10.13.2.6/python/tensorrt-10.13.2.6-cp310-none-linux_x86_64.whl ;\
Manually copying the files from /usr/lib64 to /usr/lib/x86_64-linux-gnu in the container resolved the error for me.
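For reference, a possible Dockerfile-side fix would be to keep the copy destination and TRT_LIBPATH in agreement. This is a sketch against the diff above, not a tested patch:

```dockerfile
# Option A (sketch): restore the original copy destination so the
# existing TRT_LIBPATH remains valid:
#   cp -a TensorRT-10.13.2.6/lib/*.so* /usr/lib/x86_64-linux-gnu
#
# Option B (sketch): keep /usr/lib64 as the destination but update the
# variable that the cmake invocation reads:
ENV TRT_LIBPATH=/usr/lib64
```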
Environment
TensorRT Version: 10.13.3.9
Using the Ubuntu 22.04 container for the rest of these fields.
NVIDIA GPU:
NVIDIA Driver Version:
CUDA Version:
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version): Ubuntu 22.04
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Follow the TensorRT-OSS build steps in the README using the Ubuntu 22.04 container:
git clone -b main https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
./docker/build.sh --file docker/ubuntu-22.04.Dockerfile --tag tensorrt-ubuntu22.04-cuda13.0
./docker/launch.sh --tag tensorrt-ubuntu22.04-cuda13.0 --gpus none
# In the container
cd TensorRT/
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
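Before running cmake, a quick hedge against this class of error is to check that TRT_LIBPATH actually contains libnvinfer. This is a generic sketch intended to be run inside the container:

```shell
# Sanity check (sketch): confirm TRT_LIBPATH points at a directory that
# actually contains libnvinfer before invoking cmake.
echo "TRT_LIBPATH=$TRT_LIBPATH"
ls "$TRT_LIBPATH"/libnvinfer.so* 2>/dev/null \
    || echo "libnvinfer not found in TRT_LIBPATH"
ls /usr/lib64/libnvinfer.so* 2>/dev/null \
    || echo "libnvinfer not found in /usr/lib64"
```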
Have you tried the latest release?:
Attach the captured .json and .bin files from TensorRT's API Capture tool if you're on an x86_64 Unix system
Can this model run on other frameworks? For example, run the ONNX model with ONNX Runtime (polygraphy run <model.onnx> --onnxrt):