
Model Conversion Error #assertion/root/workspace/mmdeploy/csrc/backend_ops/tensorrt/batched_nms/trt_batched_nms.cpp,98 #134

@TheSeriousProgrammer

Description


Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug

I tried to convert an mmdetection YOLOv3 MobileNet model trained on a wildfire dataset and got the following error:

#assertion/root/workspace/mmdeploy/csrc/backend_ops/tensorrt/batched_nms/trt_batched_nms.cpp,98
2022-02-07 06:19:20,556 - mmdeploy - ERROR - visualize tensorrt model failed.

However, the run still produced end2end.engine and end2end.onnx.

Reproduction

  1. What command or script did you run?
 python3 ~/workspace/mmdeploy/tools/deploy.py ~/workspace/mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py artifacts_2/yolov3_config.py artifacts_2/latest.pth artifacts_2/test.jpg --work-dir out --device cuda:0
  2. Did you make any modifications to the code or config? Did you understand what you modified?
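As a quick, hypothetical sanity check before digging into the plugin assertion: the deploy config filename in the command above implies a dynamic input range of 320x320 to 1344x1344. The bounds below are read off the filename only, not verified against the config contents:

```python
# Hypothetical helper: check that a test-image size fits the dynamic shape
# range implied by detection_tensorrt_dynamic-320x320-1344x1344.py.
# The 320/1344 bounds are assumptions taken from the config filename.
def in_dynamic_range(height: int, width: int, lo: int = 320, hi: int = 1344) -> bool:
    """Return True if both dimensions fall inside the configured range."""
    return lo <= height <= hi and lo <= width <= hi

assert in_dynamic_range(608, 608)        # typical YOLOv3 input: OK
assert not in_dynamic_range(2000, 2000)  # exceeds the max profile shape
```

If test.jpg is resized outside this range by the model's test pipeline, the engine's optimization profile would not cover it, which is one common cause of runtime failures after an otherwise successful conversion.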

Environment

Used the docker/GPU image with the following changes:

### build sdk
RUN wget https://github.com/openppl-public/ppl.cv/archive/refs/tags/v0.6.1.zip && unzip v0.6.1.zip
RUN mv ppl.cv-0.6.1 ppl.cv && \
    cd ppl.cv &&\
    ./build.sh cuda

#RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
#    cd ppl.cv &&\
#    ./build.sh cuda
RUN pip3 install mmdet

Error traceback


2022-02-07 06:13:43,927 - mmdeploy - INFO - torch2onnx start.
load checkpoint from local path: artifacts_2/latest.pth
/root/workspace/mmdeploy/mmdeploy/core/optimizers/function_marker.py:158: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  ys_shape = tuple(int(s) for s in ys.shape)
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:3451: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(
/opt/conda/lib/python3.8/site-packages/mmdet/models/dense_heads/yolo_head.py:126: UserWarning: DeprecationWarning: `anchor_generator` is deprecated, please use "prior_generator" instead
  warnings.warn('DeprecationWarning: `anchor_generator` is deprecated, '
/opt/conda/lib/python3.8/site-packages/mmdet/core/anchor/anchor_generator.py:333: UserWarning: ``grid_anchors`` would be deprecated soon. Please use ``grid_priors`` 
  warnings.warn('``grid_anchors`` would be deprecated soon. '
/opt/conda/lib/python3.8/site-packages/mmdet/core/anchor/anchor_generator.py:369: UserWarning: ``single_level_grid_anchors`` would be deprecated soon. Please use ``single_level_grid_priors`` 
  warnings.warn(
/opt/conda/lib/python3.8/site-packages/mmdet/core/bbox/coder/yolo_bbox_coder.py:73: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert pred_bboxes.size(-1) == bboxes.size(-1) == 4
/root/workspace/mmdeploy/mmdeploy/pytorch/functions/topk.py:54: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if k > size:
/root/workspace/mmdeploy/mmdeploy/codebase/mmdet/core/post_processing/bbox_nms.py:260: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  dets, labels = TRTBatchedNMSop.apply(boxes, scores, int(scores.shape[-1]),
/root/workspace/mmdeploy/mmdeploy/mmcv/ops/nms.py:177: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out_boxes = min(num_boxes, after_topk)
2022-02-07 06:14:19,567 - mmdeploy - INFO - torch2onnx success.
[2022-02-07 06:14:19.891] [mmdeploy] [info] [model.cpp:97] Register 'DirectoryModel'
2022-02-07 06:14:19,921 - mmdeploy - INFO - onnx2tensorrt of out/end2end.onnx start.
[2022-02-07 06:14:22.631] [mmdeploy] [info] [model.cpp:97] Register 'DirectoryModel'
2022-02-07 06:14:22,685 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /root/workspace/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
[TensorRT] WARNING: /workspace/TensorRT/t/oss-cicd/oss/parsers/onnx/onnx2trt_utils.cpp:227: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: /workspace/TensorRT/t/oss-cicd/oss/parsers/onnx/builtin_op_importers.cpp:3125: TensorRT currently uses half_pixel calculation for the pytorch_half_pixel transformation mode. These are equivalent except for interpolations down to 1D.
[TensorRT] WARNING: /workspace/TensorRT/t/oss-cicd/oss/parsers/onnx/builtin_op_importers.cpp:3125: TensorRT currently uses half_pixel calculation for the pytorch_half_pixel transformation mode. These are equivalent except for interpolations down to 1D.
[TensorRT] WARNING: /workspace/TensorRT/t/oss-cicd/oss/parsers/onnx/onnx2trt_utils.cpp:255: One or more weights outside the range of INT32 was clamped
(the INT32 clamping warning above repeated 14 more times)
[TensorRT] INFO: No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[TensorRT] INFO: Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace: 
[TensorRT] INFO: Successfully created plugin: TRTBatchedNMS
[TensorRT] INFO: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[TensorRT] INFO: Detected 1 inputs and 2 output network tensors.
2022-02-07 06:18:59,744 - mmdeploy - INFO - onnx2tensorrt of out/end2end.onnx success.
2022-02-07 06:18:59,745 - mmdeploy - INFO - visualize tensorrt model start.
[2022-02-07 06:19:15.386] [mmdeploy] [info] [model.cpp:97] Register 'DirectoryModel'
2022-02-07 06:19:15,475 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /root/workspace/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
2022-02-07 06:19:15,475 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /root/workspace/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
#assertion/root/workspace/mmdeploy/csrc/backend_ops/tensorrt/batched_nms/trt_batched_nms.cpp,98
2022-02-07 06:19:20,556 - mmdeploy - ERROR - visualize tensorrt model failed.
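For context on the TracerWarnings in the log above: they come from Python-level comparisons on tensor shapes being evaluated once at export time and frozen into the ONNX graph. A minimal sketch (not mmdeploy's actual code) of the top-k clamping at mmdeploy/mmcv/ops/nms.py:177 that triggers one of them:

```python
# Minimal illustration, not mmdeploy's implementation: when tracing,
# num_boxes is converted to a Python int, so this min() is computed once
# and baked into the graph as a constant instead of a data-dependent op.
def clamp_topk(num_boxes: int, after_topk: int) -> int:
    # mirrors `out_boxes = min(num_boxes, after_topk)` from the traceback
    return min(num_boxes, after_topk)

assert clamp_topk(5000, 200) == 200  # plenty of boxes: keep top 200
assert clamp_topk(50, 200) == 50     # fewer boxes than k: keep them all
```

If the traced constant disagrees with what the TRTBatchedNMS plugin expects at runtime (e.g. for a different input size under the dynamic profile), its parameter assertion could plausibly fire; this is speculation based on the warning locations, not a confirmed diagnosis.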

Bug fix

I don't know the cause of the bug, but I saw that end2end.engine and end2end.onnx were created successfully. Could you explain why the visualization step still fails?
