cudacodec VideoReader frame decoding not working properly #3195
Comments
If it is a package from PyPI, then there is no CUDA support at all. You need to build OpenCV from source with CUDA dependencies enabled.
I have built OpenCV from source with CUDA features enabled. Kindly refer to the attached build information in my original post.
/cc @cudawarped Any thoughts on this difference between CUDA 10 and 11?
I would be surprised if the CUDA version makes a difference, as I would expect the Video SDK version to determine this behaviour, but I would have to check to find out. @jeshels did you use the same Video SDK version in both your CUDA 10 and 11 builds?
Please check out #3001 for instructions on how to get the useable area of the returned frames. As far as I am aware the decoded colour space type is always BGRA, for decoding efficiency. I think support for an alpha channel is codec dependent; if this can be determined by the decoder, then that information could be exposed as well.
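For context on where the extra rows come from: video decoders typically allocate frame surfaces with the height rounded up to a multiple of the codec's block size. A minimal sketch of that arithmetic, assuming 16-row alignment (an illustrative value based on common macroblock sizes, not one stated in this thread):

```python
def padded_height(height: int, align: int = 16) -> int:
    """Round height up to the next multiple of `align` (ceiling division)."""
    return -(-height // align) * align

print(padded_height(1080))  # 1088: the 8 extra rows reported in this issue
print(padded_height(720))   # 720: already aligned, so no padding
```

This matches the observation that only some videos show extra rows: resolutions that are already multiples of the alignment come back unpadded.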
Did you try passing the flag to CMake directly? That works for me.
Thank you @cudawarped for all the information! First, I have to correct myself: I used my CUDA 10 OpenCV build only for a brief period, so it is possible that the mentioned "unexpected behavior" happens there as well and I simply didn't notice it. The actual reason for calling it "unexpected" is the difference compared to
I used different versions: Video_Codec_SDK_10.0.26 for CUDA 10 build, and Video_Codec_SDK_11.1.5 for CUDA 11 build.
The class
I understand. Is it possible to configure
Yeah, I did. But for some reason it gets unset inside CMake. Here are the shell variables I'm setting before build:
I tried to debug the CMake files but failed to identify the root cause of the resetting.
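One workaround, sketched below, is to pass the options on the cmake command line with -D instead of exporting them as shell variables, which CMake may not pick up from the environment. The paths and the exact option set here are illustrative assumptions, not taken from this thread:

```shell
# Illustrative configure step for a CUDA-enabled OpenCV build; adjust paths
# and options to your environment. Values passed with -D are written into
# the CMake cache directly, so they cannot be lost the way exported shell
# variables can.
cmake -DWITH_CUDA=ON \
      -DWITH_NVCUVID=ON \
      -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
      -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules \
      ../opencv
```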
Sorry, I didn't realize; I'm just adding Python bindings and testing them now on the master branch. It will mean that you need to pass the useable ROI as a numpy view to all routines which use the frame if you want to ignore the padded lines at the bottom of the decoded frames.
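The numpy-view approach can be sketched like this; the 1088/1080 sizes mirror the ones reported in this issue, and the zero-filled array stands in for a downloaded frame:

```python
import numpy as np

# Stand-in for a downloaded decoder frame: 1080 video lines padded to
# 1088 rows, 4 channels (BGRA).
decoded = np.zeros((1088, 1920, 4), dtype=np.uint8)

# Useable region of interest as a zero-copy numpy view; pass `roi` to any
# routine that should ignore the padded lines at the bottom.
roi = decoded[:1080, :1920]

print(roi.shape)            # (1080, 1920, 4)
print(roi.base is decoded)  # True: a view into the padded buffer, not a copy
```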
Currently no, but I would perform the conversion to BGR on the device before downloading to the host. When
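What that BGRA-to-BGR conversion amounts to, shown here on the host with numpy: dropping the alpha channel. On a CUDA build the same step would be done on the GpuMat before downloading, e.g. with cv2.cuda.cvtColor and cv2.COLOR_BGRA2BGR; that call is an assumption about the suggested workflow, not code from this thread:

```python
import numpy as np

# Stand-in for a decoded BGRA frame; BGRA -> BGR simply drops alpha.
bgra = np.zeros((1080, 1920, 4), dtype=np.uint8)
bgr = np.ascontiguousarray(bgra[..., :3])  # host-side equivalent

print(bgr.shape)  # (1080, 1920, 3)
```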
System information (version)
Detailed description
Reading frames using cudacodec.VideoReader yields an unexpected frame size and color space type. Taking an mp4 video file of 1920x1080 resolution, I receive frames of 1920x1088 resolution (note the extra 8 pixels) and a color space type of cv2.CV_8UC4 (instead of the expected cv2.CV_8UC3). When displaying the frames to the screen via cv2.imshow(), everything looks fine. The addition of extraneous pixels happens only for some of my videos (not reproducing in this video), but the unexpected color space type is constantly reproducing for all video files.

Here is my build information:
Installed dependencies:
OpenCV build information:
Additional note
It is worth mentioning that these issues don't reproduce with my OpenCV build for CUDA 10. Perhaps it is related to the change to CUDA 11?
Steps to reproduce
Issue submission checklist
I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc. and have not found any solution.
These issues don't reproduce with my OpenCV build for CUDA 10.
The two mentioned example video files: