reinstate _split_alpha #6838


Closed · wants to merge 1 commit
Conversation

pmeier (Collaborator) commented Oct 26, 2022

Follow-up to #6832 (comment). Instead of following the potential solution there, which I agree would not be worth it, this PR simply changes the annotation of _split_alpha from Tuple[torch.Tensor, torch.Tensor] to List[torch.Tensor]. Both mypy and JIT are happy with that.

@datumbox I opened this PR because you initially agreed to keep _split_alpha, and apparently the issue above is what tipped you in the other direction. Feel free to close this PR if there is any other reason you prefer the current solution over _split_alpha.

cc @vfdev-5 @datumbox @bjuncek

def _split_alpha(image: torch.Tensor) -> List[torch.Tensor]:
return torch.tensor_split(image, indices=(-1,), dim=-3)
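The "both mypy and JIT are happy" claim can be checked with a quick TorchScript round trip. This is a sketch, not part of the PR; the random RGBA input below is illustrative:

```python
from typing import List

import torch


def _split_alpha(image: torch.Tensor) -> List[torch.Tensor]:
    # TorchScript types torch.tensor_split as returning List[Tensor],
    # so annotating the return as List[torch.Tensor] satisfies the JIT
    # while remaining acceptable to mypy in eager mode.
    return torch.tensor_split(image, indices=(-1,), dim=-3)


# Scripting succeeds with the List annotation.
scripted = torch.jit.script(_split_alpha)

rgba = torch.rand(4, 8, 8)  # illustrative C x H x W input with an alpha channel
image, alpha = scripted(rgba)
print(image.shape, alpha.shape)  # torch.Size([3, 8, 8]) torch.Size([1, 8, 8])
```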


def _strip_alpha(image: torch.Tensor) -> torch.Tensor:
image, alpha = torch.tensor_split(image, indices=(-1,), dim=-3)
Contributor:

I think you missed it here.

Comment on lines +186 to +187
def _split_alpha(image: torch.Tensor) -> List[torch.Tensor]:
return torch.tensor_split(image, indices=(-1,), dim=-3)
Contributor:

Frankly, I reluctantly agreed to this to unblock the other PR. IMO there is very little value in having a private method that wraps a one-liner around an existing method in core. I don't really mind this; I just think of it from the perspective of our practices. IMO the idiom of:

image, alpha = torch.tensor_split(image, indices=(-1,), dim=-3)

already sufficiently shows that we split off the image and the alpha channel. The additional one-liner wrapper method doesn't add anything to the readability of the code. From that perspective, my preference is not to merge this, but I'm not going to insist if it is important to you.
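For reference, the inline idiom above can be exercised directly. This is an illustrative sketch; the RGBA tensor is made up, not from the PR:

```python
import torch

# An illustrative 4-channel (RGBA) image tensor in C x H x W layout.
rgba = torch.rand(4, 8, 8)

# Split off the last channel along the channel dimension (-3): the first
# part keeps channels 0..2 (RGB), the second keeps channel 3 (alpha).
image, alpha = torch.tensor_split(rgba, indices=(-1,), dim=-3)

print(image.shape)  # torch.Size([3, 8, 8])
print(alpha.shape)  # torch.Size([1, 8, 8])
```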

@pmeier pmeier closed this Oct 26, 2022

3 participants