Adding new element gvastreammux#772

Open
qianlongding wants to merge 12 commits into main from gvastreammux

Conversation

@qianlongding
Contributor

@qianlongding qianlongding commented Apr 14, 2026

Description

This PR introduces two new GStreamer elements — gvastreammux and gvastreamdemux — enabling multi-stream video inference through a single shared pipeline.

gvastreammux collects video frames from N sink pads (sink_0, sink_1, ...) and interleaves them through a single source pad in round-robin order using GStreamer's [GstCollectPads] mechanism. Every output buffer is tagged with a new custom metadata type [GstGvaStreammuxMeta] to record the origin of each buffer.

gvastreamdemux is the companion element that reads [GstGvaStreammuxMeta] from each incoming buffer and routes it to the corresponding per-source output pad (src_0, src_1, ...).
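As a rough illustration of the round-robin interleaving and metadata-based routing described above, here is a minimal Python sketch. It is not the elements' actual API: the `stream_id` dict field stands in for GstGvaStreammuxMeta, and the function names are illustrative only.

```python
from collections import deque

def mux_round_robin(streams):
    """Interleave buffers from N input queues in round-robin order,
    tagging each with the index of its origin stream (the role played
    by GstGvaStreammuxMeta in gvastreammux)."""
    queues = [deque(s) for s in streams]
    out = []
    while any(queues):
        for stream_id, q in enumerate(queues):
            if q:  # skip streams that have run out of buffers
                out.append({"stream_id": stream_id, "data": q.popleft()})
    return out

def demux(muxed, n_streams):
    """Route each tagged buffer back to its per-source output
    (the role played by gvastreamdemux)."""
    outputs = [[] for _ in range(n_streams)]
    for buf in muxed:
        outputs[buf["stream_id"]].append(buf["data"])
    return outputs
```

For example, with inputs `[["a0", "a1"], ["b0"], ["c0", "c1"]]`, `mux_round_robin` emits a0, b0, c0, a1, c1, and `demux` recovers the original per-stream order.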

[image attached]

The comparison between the Original*25 pipeline and the streammux pipeline demonstrates a significant reduction in GPU memory usage, from 8 GB to 2 GB. As noted in review, the original test already had model sharing enabled.

Fixes # (issue)

Any Newly Introduced Dependencies

Please describe any newly introduced 3rd party dependencies in this change. List their name, license information and how they are used in the project.

How Has This Been Tested?

Please describe the tests that you ran to verify your changes, and provide instructions so we can reproduce them. Please also list any relevant details of your test configuration.

Checklist:

  • I agree to use the MIT license for my code changes.
  • I have not introduced any 3rd party components incompatible with MIT.
  • I have not included any company confidential information, trade secret, password or security token.
  • I have performed a self-review of my code.

Contributor

@tjanczak tjanczak left a comment

Avoid adding custom Intel metadata, use one already available in GStreamer

*
* Metadata attached by gvastreammux to identify source origin of each buffer.
*/
struct _GstGvaStreammuxMeta {
Contributor


Why do we need to introduce new Intel-specific metadata for batching?
Please adopt existing GStreamer metadata instead: (https://gstreamer.freedesktop.org/documentation/analytics/gstanalyticsbatchmeta.html?gi-language=c)

@brmarkus

Can you add a comparison, e.g. of memory usage across different scenarios, showing the real benefit of the newly introduced plugins, please?

What about using the same model-instance-id ("Identifier for sharing a loaded model instance between elements of the same type. Elements with the same model-instance-id will share all model and inference engine related properties")?

@qianlongding
Contributor Author

Avoid adding custom Intel metadata, use one already available in GStreamer

We need to wait for DLS to upgrade GStreamer to 1.28, which makes GstAnalyticsBatchMeta available.

@qianlongding
Contributor Author

Can you add a comparison, e.g. of memory usage across different scenarios, showing the real benefit of the newly introduced plugins, please?

What about using the same model-instance-id ("Identifier for sharing a loaded model instance between elements of the same type. Elements with the same model-instance-id will share all model and inference engine related properties")?

Good suggestion.
I will add memory consumption data to the PR description to demonstrate the benefits of using gvastreammux. Without it, multiple streams cannot share a single gvadetect instance; this leads to 25 separate instances for 25 streams, which incurs significant memory overhead, even if they all share the same model-instance-id.

@tjanczak
Contributor

Avoid adding custom Intel metadata, use one already available in GStreamer

We need to wait for DLS to upgrade GStreamer to 1.28, which makes GstAnalyticsBatchMeta available.

OK, let's push the 1.28 migration first, then merge this change on top.

@qianlongding
Contributor Author

Can you add a comparison, e.g. of memory usage across different scenarios, showing the real benefit of the newly introduced plugins, please?

What about using the same model-instance-id ("Identifier for sharing a loaded model instance between elements of the same type. Elements with the same model-instance-id will share all model and inference engine related properties")?

Please see the updated PR description; I added the memory-usage comparison showing the reduction with streammux.

@brmarkus

Please see the updated PR description; I added the memory-usage comparison showing the reduction with streammux.

Thank you very much for the additional information.
Would you mind adding the information as text as well (in addition to the attached picture), so that others can:

  • search for it, search within it, and have it indexed,
  • copy & paste it for reproduction?

@tbujewsk
Contributor

Avoid adding custom Intel metadata, use one already available in GStreamer

We need to wait for DLS to upgrade GStreamer to 1.28, which makes GstAnalyticsBatchMeta available.

OK, let's push the 1.28 migration first, then merge this change on top.

ETA for switching to 1.28.2: next week (~22.04).

@ZiningLi
Contributor

Please see the updated PR description; I added the memory-usage comparison showing the reduction with streammux.

Thank you very much for the additional information. Would you mind adding the information as text as well (in addition to the attached picture), so that others can:

  • search for it, search within it, and have it indexed,
  • copy & paste it for reproduction?

Added the benchmark results as text below as well.

I tested gvastreammux on B60 using the intel/dlstreamer:2026.0.0-ubuntu24-rc1 Docker image, and built gvastreammux inside that container.

For a fair comparison, I used the same inference-interval=1 setting in both pipelines and measured the results after running each pipeline for 120 seconds.

Results:

  • Total throughput with gvastreammux: 35.40 FPS
  • Total throughput without gvastreammux: 35.17 FPS
  • GPU memory usage with gvastreammux: 2.087 GB
  • GPU memory usage without gvastreammux: 4.812 GB

So in this test, gvastreammux achieved nearly the same throughput while reducing GPU memory usage from 4.812 GB to 2.087 GB (about 56.6% lower GPU memory usage).
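The quoted percentage follows directly from the two memory figures above (a simple arithmetic check):

```python
with_mux_gb = 2.087     # GPU memory usage with gvastreammux
without_mux_gb = 4.812  # GPU memory usage without gvastreammux

# Relative reduction in GPU memory usage when gvastreammux is used
reduction_pct = (without_mux_gb - with_mux_gb) / without_mux_gb * 100
print(f"{reduction_pct:.1f}% lower GPU memory usage")  # → 56.6% lower GPU memory usage
```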

Pipeline with gvastreammux:

gst-launch-1.0   gvastreammux name=mux ! queue   ! gvadetect       inference-interval=1       model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml       custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so       model-instance-id=yolov7_det0       batch-size=8       nireq=2       ie-config=GPU_THROUGHPUT_STREAMS=2       scale-method=default       share-va-display-ctx=false       device=GPU       pre-process-backend=va-surface-sharing   ! queue   ! gvatrack tracking-type=zero-term-imageless   ! queue   ! gvaclassify       inference-interval=1       model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml       model-proc=/test_resources/resnet-post-process.json       inference-region=roi-list       object-class=face       model-instance-id=face_quality0       batch-size=8       nireq=2       ie-config=GPU_THROUGHPUT_STREAMS=2       scale-method=default       share-va-display-ctx=false       reshape=false       device=GPU       pre-process-backend=va-surface-sharing   ! queue   ! gvaclassify       inference-interval=1       model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml       model-proc=/test_resources/resnet-post-process.json       inference-region=roi-list       object-class=person       model-instance-id=body_attr0       batch-size=8       nireq=2       ie-config=GPU_THROUGHPUT_STREAMS=2       scale-method=default       share-va-display-ctx=false       reshape=false       device=GPU       pre-process-backend=va-surface-sharing   ! queue   ! ${FPS_TAIL} sync=true filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_0   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_1   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_2   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_3   filesrc location="${VIDEO}"  ! parsebin ! 
vah264dec ! queue ! mux.sink_4   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_5   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_6   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_7   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_8   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_9   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_10   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_11   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_12   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_13   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_14   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_15   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_16   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_17   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_18   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_19   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_20   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_21   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_22   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_23   filesrc location="${VIDEO}"  ! parsebin ! vah264dec ! queue ! mux.sink_24

Pipeline without gvastreammux:

gst-launch-1.0 filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! 
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! 
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! 
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! 
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! 
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! 
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue !  ${FPS_TAIL}
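For contrast with the repeated single-stream pipeline above, here is a minimal sketch of how the new elements could be wired for two streams. This is an illustration only: pad names `sink_N`/`src_N` follow the element description in this PR, while the file locations and detect properties are placeholders.

```
# Hypothetical two-stream layout (not the benchmark pipeline; paths/properties are placeholders):
gst-launch-1.0 \
  gvastreammux name=mux ! queue ! \
  gvadetect model=<detection-model>.xml device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  gvastreamdemux name=demux \
  filesrc location=<video0> ! parsebin ! vah264dec ! queue ! mux.sink_0 \
  filesrc location=<video1> ! parsebin ! vah264dec ! queue ! mux.sink_1 \
  demux.src_0 ! fakesink \
  demux.src_1 ! fakesink
```

All N inputs share one `gvadetect` instance behind the muxer; `gvastreamdemux` then uses the per-buffer `GstGvaStreammuxMeta` to route each frame back to its originating branch.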
