Adding new element gvastreammux #772
Conversation
…when streammux is involved
tjanczak
left a comment
Avoid adding custom Intel metadata; use one already available in GStreamer.
```c
 *
 * Metadata attached by gvastreammux to identify source origin of each buffer.
 */
struct _GstGvaStreammuxMeta {
```
Why do we need to introduce new Intel-specific metadata for batching?
Please adopt the existing GStreamer metadata instead: https://gstreamer.freedesktop.org/documentation/analytics/gstanalyticsbatchmeta.html?gi-language=c
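For illustration, the difference between the two approaches can be modeled roughly as follows. This is a plain-Python sketch of the bookkeeping only, not the actual GstMeta / GstAnalyticsBatchMeta C API; all names in it are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical model of batching metadata bookkeeping (NOT GStreamer API).

# Custom-meta approach: every batched frame carries a vendor-specific
# record naming its source stream, so every downstream element must know
# about the vendor type to demux the batch again.
@dataclass
class CustomSourceMeta:
    source_id: int  # which sink pad the frame entered on

# Batch-meta approach (conceptually what GstAnalyticsBatchMeta in
# GStreamer >= 1.28 standardizes): one meta on the batch buffer lists
# the constituent streams, so any analytics-aware element can consume it.
@dataclass
class BatchMeta:
    streams: list = field(default_factory=list)  # one entry per batched frame

def batch_frames(frames):
    """Combine (source_id, frame) pairs into one batch with a BatchMeta."""
    meta = BatchMeta(streams=[src for src, _ in frames])
    data = [frame for _, frame in frames]
    return data, meta

data, meta = batch_frames([(0, "f0"), (1, "f1"), (2, "f2")])
assert meta.streams == [0, 1, 2]
```

The point of the standard meta is the last two lines: the stream mapping travels with the batch in one well-known type instead of a per-buffer vendor struct.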
Can you add a comparison, e.g. of memory usage in different scenarios, showing the real benefit of the newly introduced plugins, please? What about using the same …
We need to wait for DLS to upgrade GStreamer to 1.28 for GstAnalyticsBatchMeta to be available.
Good suggestion.
OK, let's push the 1.28 migration first and merge this change on top.
Please see the updated PR description. I added the comparison of memory usage reduction with streammux.
Thank you very much for the additional information.
ETA for switching to 1.28.2: next week, ~22.04.
Added the benchmark results as text below as well. I tested … For a fair comparison, I used the same … Results:
So in this test, …

Pipeline with gvastreammux (the decode branch feeding each `mux.sink_N` is identical, so only the first and last are shown):

```shell
gst-launch-1.0 \
  gvastreammux name=mux ! queue ! \
  gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  gvatrack tracking-type=zero-term-imageless ! queue ! \
  gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  ${FPS_TAIL} sync=true \
  filesrc location="${VIDEO}" ! parsebin ! vah264dec ! queue ! mux.sink_0 \
  filesrc location="${VIDEO}" ! parsebin ! vah264dec ! queue ! mux.sink_1 \
  # ... one identical decode branch per remaining sink ... \
  filesrc location="${VIDEO}" ! parsebin ! vah264dec ! queue ! mux.sink_24
```

Pipeline without gvastreammux (the same single-stream chain, with identical parameters, is repeated once per stream; only the first instance is shown):

```shell
gst-launch-1.0 \
  filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! \
  gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  gvatrack tracking-type=zero-term-imageless ! queue ! \
  gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! \
  ${FPS_TAIL} \
  # ... the filesrc-to-${FPS_TAIL} chain above is repeated verbatim for each remaining stream ...
```
gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! ${FPS_TAIL} filesrc location=${VIDEO} ! parsebin ! vah264dec ! queue ! gvadetect inference-interval=1 model=/test_resources/hema_models_FP16/yolov7-pose_without_YoloPoseLayer_TRT_custom-pp.xml custom-postproc-lib=postproc_callbacks/cpp/mimic_rois_24_model_input_size/build/libadd_face_person_rois_model_input_size.so model-instance-id=yolov7_det0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvatrack tracking-type=zero-term-imageless ! queue ! 
gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_face_112x112.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=face model-instance-id=face_quality0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! gvaclassify inference-interval=1 model=/test_resources/hema_models_FP16_reshape_inputs/merged_person_256x128.xml model-proc=/test_resources/resnet-post-process.json inference-region=roi-list object-class=person model-instance-id=body_attr0 batch-size=8 nireq=2 ie-config=GPU_THROUGHPUT_STREAMS=2 scale-method=default share-va-display-ctx=false reshape=false device=GPU pre-process-backend=va-surface-sharing ! queue ! ${FPS_TAIL} |
Description
This PR introduces two new GStreamer elements — gvastreammux and gvastreamdemux — enabling multi-stream video inference through a single shared pipeline.
gvastreammux collects video frames from N sink pads (sink_0, sink_1, ...) and interleaves them onto a single source pad in round-robin order using GStreamer's [GstCollectPads] mechanism. Every output buffer is tagged with a new custom metadata type [GstGvaStreammuxMeta] that records which source the buffer came from.
gvastreamdemux is the companion element that reads [GstGvaStreammuxMeta] from each incoming buffer and routes it to the corresponding per-source output pad (src_0, src_1, ...).
The comparison between the Original*25 pipeline and the Use streammux pipeline shows a significant reduction in GPU memory usage, from 8 GB to 2 GB. As you noted, the original test already had model sharing enabled.
Fixes # (issue)
Any Newly Introduced Dependencies
Please describe any newly introduced 3rd party dependencies in this change. List their name, license information and how they are used in the project.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Checklist: