❓ [Question] How do you solve the error: Expected Tensor but got Uninitialized? #1282

Closed
@Mark-M2L


❓ Question

Currently, I am compiling a custom segmentation model with torch_tensorrt.compile(), using a TorchScript module produced by torch.jit.script and frozen with torch.jit.freeze. The compilation code is as follows:

scripted_model = torch.jit.freeze(torch.jit.script(model))

inputs = [torch_tensorrt.Input(
            min_shape=[2, 3, 600, 400],
            opt_shape=[2, 3, 600, 400],
            max_shape=[2, 3, 600, 400],
            dtype=torch.float,
        )]
enabled_precisions = {torch.float, torch.half}

with torch_tensorrt.logging.debug():
    trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions)

The code fails to compile at the following step:

        a = self.compression(torch.cat(x_list, 1))
        b = self.shortcut(x)

        c = a + b

        return c

and throws the following error:

Traceback (most recent call last):
  File "test.py", line 118, in <module>
    trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions)
  File "/home/oem/.pyenv/versions/ddrnet/lib/python3.8/site-packages/torch_tensorrt/_compile.py", line 115, in compile
    return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)
  File "/home/oem/.pyenv/versions/ddrnet/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py", line 113, in compile
    compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError: Expected Tensor but got Uninitialized

It seems that some variable is uninitialized. Strangely, though, replacing the code above with either of the following snippets compiles successfully:

        a = self.compression(torch.cat(x_list, 1))

        return a

and

        b = self.shortcut(x)

        return b

So each branch compiles on its own, but taking the sum of the two tensors causes compilation to fail. Do you have any suggestions I could try so that this step compiles as well?
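For reference, here is a minimal self-contained sketch of the failing pattern. The layer names, channel counts, and the x_list construction are illustrative stand-ins for my real model, which is larger; this sketch scripts and freezes fine in plain PyTorch, and I have not confirmed that this exact reduction reproduces the TensorRT error, but the structure matches the failing code:

```python
import torch
import torch.nn as nn

class AddBranches(nn.Module):
    """Minimal stand-in for the failing block: a 'compression' branch over a
    concatenated list of tensors plus a 'shortcut' branch, summed at the end."""

    def __init__(self) -> None:
        super().__init__()
        self.compression = nn.Conv2d(6, 3, kernel_size=1)
        self.shortcut = nn.Conv2d(3, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_list = [x, x]  # placeholder for the real x_list
        a = self.compression(torch.cat(x_list, 1))
        b = self.shortcut(x)
        return a + b     # the sum is where torch_tensorrt.compile fails

model = AddBranches().eval()
scripted = torch.jit.freeze(torch.jit.script(model))
out = scripted(torch.randn(2, 3, 600, 400))
print(out.shape)  # torch.Size([2, 3, 600, 400])
```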

What you have already tried

I also tried adding the torch_executed_ops and min_block_size parameters to the compilation step, in the following combinations:

trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions, torch_executed_ops=["prim::ListConstruct"], min_block_size=1)
trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions, torch_executed_ops=["prim::ListConstruct"])
trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions, min_block_size=1)

but each of these resulted in a different error, so I have left these parameters out for now.
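One more variant I have considered but not yet verified is forcing the addition itself to stay in Torch rather than TensorRT; the op name aten::add here is my assumption of how the sum is lowered (this is a fragment reusing scripted_model, inputs, and enabled_precisions from above):

    trt_ts_module = torch_tensorrt.compile(
        scripted_model,
        inputs=inputs,
        enabled_precisions=enabled_precisions,
        torch_executed_ops=["aten::add"],  # keep the failing sum in Torch; op name assumed
    )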

Environment

  • PyTorch Version (e.g., 1.0): 1.11.0+cu113
  • Torch-TensorRT version: 1.1.0
  • CPU Architecture: x86_64
  • OS (e.g., Linux): Ubuntu 20.04 (kernel: 5.4.0-124-generic)
  • How you installed PyTorch (conda, pip, libtorch, source): pip, from within a virtual environment (pyenv)
  • Are you using local sources or building from archives: No
  • Python version: 3.8.13
  • CUDA version: 11.7 (Nvidia Driver: 515.65.01)
  • GPU models and configuration: Nvidia RTX A2000

Looking forward to your answer, thanks in advance.

Metadata

Labels: question (Further information is requested)