Bug Description
The model below shows issues with the evaluators for both aten::slice and aten::unbind. aten::slice does not correctly handle a None input for its 'start' argument, and aten::unbind does not correctly handle a negative axis.
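Both ops come from the same two lines of the model: the negative dim passed to torch.unbind, and the list slice mid[:3], which (as I read the error messages) lowers to an aten::slice node whose 'start' argument is None rather than an int. The snippet below is only a small sketch, independent of Torch-TensorRT, for inspecting the scripted graph and confirming which nodes the evaluators have to resolve; the comments reflect my interpretation of the errors, not confirmed internals.

import torch

@torch.jit.script
def unbind_then_slice(t: torch.Tensor):
    # The negative dim here is what the aten::unbind evaluator appears to mishandle.
    mid = torch.unbind(t, -1)
    # Slicing the list lowers to aten::slice; its 'start' argument is None,
    # which appears to be what trips the aten::slice evaluator.
    return mid[:3]

# The printed TorchScript graph shows the aten::unbind and aten::slice nodes.
print(unbind_then_slice.graph)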
To Reproduce
Steps to reproduce the behavior:
- Run the model below
- You should see the slice error:
Expected ivalue->isInt() to be true but got false
Requested unwrapping of arg IValue assuming it was l however type is NoneType
- Resolve the slice issue
- Run the model again
- You should see the unbind issue:
Expected eval_list->elements().size() == n->outputs().size() to be true but got false
Size of evaluated results: 2 and node outputs size: 3 must match.
import torch
import torch.nn as nn
import torch_tensorrt


class Unbind(nn.Module):
    def __init__(self):
        super(Unbind, self).__init__()

    def forward(self, in_tensor: torch.Tensor):
        # The negative axis and the list slice below trigger the two evaluator bugs.
        mid = torch.unbind(in_tensor, -1)
        x, y, z = mid[:3]
        return x, y, z


def reproduce_error():
    torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Graph)
    model = Unbind().eval().cuda()
    x = torch.randn(3500, 5).cuda()
    test_output = model.forward(x)
    print(test_output)
    print(torch.jit.script(model).graph)
    trt_model = torch_tensorrt.compile(model, inputs=[x], **{
        "truncate_long_and_double": True,
    })
    converted_output = trt_model.forward(x)
    print(converted_output)


reproduce_error()
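If a workaround is needed before the evaluators are fixed, the variant below avoids both triggers by passing a non-negative dim to torch.unbind and indexing the resulting list directly instead of slicing it, so no aten::slice with a None start should be emitted. This is only a sketch based on the error analysis above; the UnbindWorkaround name is mine and I have not verified that it compiles cleanly.

import torch
import torch.nn as nn


class UnbindWorkaround(nn.Module):
    def forward(self, in_tensor: torch.Tensor):
        # Hard-coded non-negative dim; assumes the 2-D (3500, 5) input used above.
        mid = torch.unbind(in_tensor, 1)
        # Direct indexing instead of mid[:3] avoids the list-slice lowering.
        return mid[0], mid[1], mid[2]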
Expected behavior
Torch-TensorRT should produce valid results for this model without erroring out.
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version (e.g. 1.0.0): 1.0
- PyTorch Version (e.g. 1.0): 1.11
- CPU Architecture:
- OS (e.g., Linux): Linux
- How you installed PyTorch (conda, pip, libtorch, source): source
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information: