reinstate _split_alpha #6838


Closed
wants to merge 1 commit
8 changes: 6 additions & 2 deletions torchvision/prototype/transforms/functional/_meta.py
@@ -183,6 +183,10 @@ def clamp_bounding_box(
    return convert_format_bounding_box(xyxy_boxes, BoundingBoxFormat.XYXY, format)


+def _split_alpha(image: torch.Tensor) -> List[torch.Tensor]:
+    return torch.tensor_split(image, indices=(-1,), dim=-3)
Comment on lines +186 to +187
Contributor:
Frankly, I reluctantly agreed with this to unblock the other PR. IMO there is very little value in having a private method that wraps a one-liner call to an existing core method. I don't really mind this; I just think of it from the perspective of our practices. IMO the idiom

image, alpha = torch.tensor_split(image, indices=(-1,), dim=-3)

already shows clearly enough that we split the image and the alpha channel. The additional one-liner wrapper method doesn't add anything to the readability of the code. From that perspective, my preference is not to merge this, but I'm not going to insist if this is important to you.



def _strip_alpha(image: torch.Tensor) -> torch.Tensor:
    image, alpha = torch.tensor_split(image, indices=(-1,), dim=-3)
Contributor:
I think you missed it here.

    if not torch.all(alpha == _FT._max_value(alpha.dtype)):
@@ -233,7 +237,7 @@ def convert_color_space_image_tensor(
    elif old_color_space == ColorSpace.GRAY_ALPHA and new_color_space == ColorSpace.RGB:
        return _gray_to_rgb(_strip_alpha(image))
    elif old_color_space == ColorSpace.GRAY_ALPHA and new_color_space == ColorSpace.RGB_ALPHA:
-        image, alpha = torch.tensor_split(image, indices=(-1,), dim=-3)
+        image, alpha = _split_alpha(image)
        return _add_alpha(_gray_to_rgb(image), alpha)
    elif old_color_space == ColorSpace.RGB and new_color_space == ColorSpace.GRAY:
        return _rgb_to_gray(image)
@@ -244,7 +248,7 @@ def convert_color_space_image_tensor(
    elif old_color_space == ColorSpace.RGB_ALPHA and new_color_space == ColorSpace.GRAY:
        return _rgb_to_gray(_strip_alpha(image))
    elif old_color_space == ColorSpace.RGB_ALPHA and new_color_space == ColorSpace.GRAY_ALPHA:
-        image, alpha = torch.tensor_split(image, indices=(-1,), dim=-3)
+        image, alpha = _split_alpha(image)
        return _add_alpha(_rgb_to_gray(image), alpha)
    elif old_color_space == ColorSpace.RGB_ALPHA and new_color_space == ColorSpace.RGB:
        return _strip_alpha(image)