
Plugin InstanceNormalization failure of TensorRT 8.6 when running InstanceNorm on GPU V100 #3165

Open
@DataXujing

Description


A bug? I am using the InstanceNormalization_TRT plugin, and when I run trtexec to build the engine, the following error appears:

[07/27/2023-06:18:54] [I] [TRT] No importer registered for op: InstanceNormalization_TRT. Attempting to import as plugin.
[07/27/2023-06:18:54] [I] [TRT] Searching for plugin: InstanceNormalization_TRT, plugin_version: 1, plugin_namespace:
[07/27/2023-06:18:54] [V] [TRT] Local registry did not find InstanceNormalization_TRT creator. Will try parent registry if enabled.
[07/27/2023-06:18:54] [V] [TRT] Global registry found InstanceNormalization_TRT creator.
[07/27/2023-06:18:54] [W] [TRT] builtin_op_importers.cpp:5221: Attribute scales not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[07/27/2023-06:18:54] [F] [TRT] Validation failed: scale.count == bias.count
plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu:96

[07/27/2023-06:18:54] [E] [TRT] std::exception
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:771: While parsing node number 3415 [InstanceNormalization_TRT -> "InstanceNormV-27"]:
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:773: input: "/unet/input_blocks.1/input_blocks.1.0/in_layers/in_layers.0/Reshape_output_0"
output: "InstanceNormV-27"
name: "InstanceNormN-27"
op_type: "InstanceNormalization_TRT"
attribute {
  name: "epsilon"
  f: 1e-05
  type: FLOAT
}
attribute {
  name: "scale"
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  floats: 1
  type: FLOATS
}
attribute {
  name: "bias"
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  floats: 0
  type: FLOATS
}
attribute {
  name: "relu"
  i: 0
  type: INT
}
attribute {
  name: "alpha"
  f: 0
  type: FLOAT
}
attribute {
  name: "plugin_version"
  s: "1"
  type: STRING
}

[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:5412 In function importFallbackPluginImporter:
[8] Assertion failed: plugin && "Could not create plugin"
[07/27/2023-06:18:54] [E] Failed to parse onnx file
[07/27/2023-06:18:54] [I] Finished parsing network model. Parse time: 7.78809
[07/27/2023-06:18:54] [E] Parsing model failed
[07/27/2023-06:18:54] [E] Failed to create engine from model or file.
[07/27/2023-06:18:54] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=./combine_0.onnx --saveEngine=combine_1.plan --verbose --workspace=3000 --fp16

I gave five attributes to this plugin (epsilon, scale, bias, relu, and alpha) following the README: https://github.com/NVIDIA/TensorRT/blob/release/8.6/plugin/instanceNormalizationPlugin/README.md#parameters, yet the parser warns that the attribute "scales" is not found in the plugin node.

Therefore, I think there is a problem with the README parameter names. Should the "scale" parameter be replaced with "scales"?
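If so, a minimal workaround sketch (my assumption, not a confirmed fix) would be to rename the attribute in the ONNX model so it matches the field name the plugin creator appears to search for, then rebuild the engine from the patched file. The output file name combine_0_scales.onnx is just an example:

# Minimal sketch (assumption, not a confirmed fix): rename the "scale"
# attribute to "scales" on every InstanceNormalization_TRT node so it matches
# the "Attribute scales not found" warning above, then save the model.
import onnx

model = onnx.load("./combine_0.onnx")

for node in model.graph.node:
    if node.op_type == "InstanceNormalization_TRT":
        for attr in node.attribute:
            if attr.name == "scale":
                attr.name = "scales"  # rename to the field name the creator warns about

onnx.save(model, "./combine_0_scales.onnx")

The same trtexec command from the log could then be rerun against combine_0_scales.onnx to check whether the plugin is created.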

Metadata

Labels

triaged (Issue has been triaged by maintainers)
