Don't hardcode 255 unless uint8 is enforced #6825
Comments
Good spot. It's also worth noting that the current stable uses incorrect bounds across methods: the ones that you highlighted use 255 for all integers, while vision/torchvision/transforms/functional_tensor.py, lines 47 to 57 in 9f024a6, takes a different approach. These two need to align: either we continue assuming that 255 is the right value for integers, or, as you propose, we use `_max_value(dtype)`.
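For context, a dtype-aware bound along the lines of what `_max_value(dtype)` is proposed to provide might look like the sketch below. This is a hedged illustration, not the actual torchvision helper; the name `max_value` and the "floats are bounded by 1.0" convention are assumptions:

```python
import torch

def max_value(dtype: torch.dtype) -> float:
    """Hypothetical dtype-aware bound, sketching what a
    `_max_value(dtype)`-style helper is assumed to return:
    the largest representable value for integer dtypes and
    1.0 for floating point images."""
    if dtype.is_floating_point:
        return 1.0
    return float(torch.iinfo(dtype).max)

print(max_value(torch.uint8))    # 255.0
print(max_value(torch.int16))    # 32767.0
print(max_value(torch.float32))  # 1.0
```

Hardcoding 255 only coincides with this bound for `torch.uint8`.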
I prefer the behavior of `_max_value`.
One more thing came to mind: although an edge case at best, by hardcoding 255 …
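To make the kind of edge case hinted at above concrete: an invert-style operation that hardcodes 255 as the bound silently produces garbage on non-uint8 inputs, while a bound derived from the dtype does not. A minimal illustration (hypothetical, not torchvision code):

```python
import torch

img = torch.tensor([0, 1000, 32767], dtype=torch.int16)

# Hardcoded uint8 bound: wrong for int16, yields negative pixel values.
wrong = 255 - img

# Bound derived from the dtype: a proper inversion.
right = torch.iinfo(img.dtype).max - img

print(wrong.tolist())  # [255, -745, -32512]
print(right.tolist())  # [32767, 31767, 0]
```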
Across our transformations we sometimes hardcode the value `255`. This is justified if we make sure that at that point only `torch.uint8` images are allowed, like

vision/torchvision/transforms/functional_tensor.py, lines 471 to 472 in 788ad12

However, there are a few instances where uint8 is implied but never enforced:

- vision/torchvision/transforms/functional_tensor.py, lines 266 to 267 in 788ad12
- vision/torchvision/transforms/functional_tensor.py, lines 778 to 779 in 788ad12
- vision/torchvision/transforms/functional_tensor.py, line 852 in 788ad12

Instead of hardcoding `255` here, we should either use `_max_value(dtype)` instead, or, if uint8 is actually required, enforce it.

cc @vfdev-5 @datumbox
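The second option, enforcing uint8 where it is actually required, could look like the sketch below. The function name and error message are hypothetical; it assumes a posterize-like op whose bit arithmetic only makes sense for `torch.uint8`:

```python
import torch

def posterize_sketch(img: torch.Tensor, bits: int) -> torch.Tensor:
    # Explicitly enforce uint8 instead of silently assuming it.
    if img.dtype != torch.uint8:
        raise TypeError(
            f"Only torch.uint8 image tensors are supported, but found {img.dtype}"
        )
    # Keep only the top `bits` bits of each pixel,
    # e.g. bits=4 -> mask 0b11110000 (240).
    mask = 255 - (2 ** (8 - bits) - 1)
    return img & mask
```

For example, `posterize_sketch(torch.tensor([255, 170], dtype=torch.uint8), 4)` keeps the high nibble of each pixel, while a float input raises a `TypeError` instead of silently producing nonsense.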