
Normalize, LinearTransformation are scriptable #2645


Merged: 8 commits into pytorch:master from vfdev-5/jit-support-2641, Sep 23, 2020

Conversation

@vfdev-5 (Collaborator) commented Sep 4, 2020

Description:

  • Normalize, LinearTransformation are scriptable and support batched and CUDA tensors (see the sketch below)
  • removed object from class definitions
  • updated docs
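
A minimal sketch (not from the PR itself; it assumes a torchvision release where this change has landed, e.g. 0.8+) of what the change enables: scripting Normalize and running it on batched and CUDA tensors.

import torch
import torchvision.transforms as T

# Normalize is an nn.Module here, so it can be compiled with torch.jit.script.
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
scripted = torch.jit.script(normalize)

# The same module accepts batched tensors (N, C, H, W) and CUDA tensors.
batch = torch.rand(4, 3, 224, 224)
if torch.cuda.is_available():
    batch = batch.cuda()
out = scripted(batch)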

OLD

Description:

  • Compose, RandomApply, Normalize can be jit scripted

  • all transforms are now derived from torch.nn.Module

  • Checked on

    • references/classification
    • references/similarity
    • pytorch/examples
      • dcgan
      • fast neural style
      • super-resolution
      • imagenet
    • swav

TODO:

  • Update docs and put a note about the change and possible problems if encountered...

We need extensive checking of possible data augmentation pipelines to stress the code. Any help with that, and reporting bugs here, is appreciated.

cc @pmeier @fmassa

- Compose, RandomApply, Normalize can be jit scripted
@pmeier (Collaborator) left a comment:

I have no other pipeline to test this against. Thus, my comments are only based on this PR.

        t = Lambda(t)
    new_transforms.append(t)

self.transforms = torch.nn.ModuleList(new_transforms)
Collaborator:

What is the motivation for using a ModuleList here? Given that _forward_impl simply passes the input sequentially through all transforms, would a Sequential be a better fit?

Member:

I think the reason is that, in the case of an exception, you'll still have a plain list of transforms, so you will need to perform the forward implementation yourself.

@vfdev-5 (Collaborator Author) commented Sep 14, 2020:

Here is another point in favor of ModuleList over Sequential. Previously, Compose.transforms had the same type as the input transforms (tuple or list), so a user could create a Compose and then append/extend its contents. However, in those cases we cannot simply wrap the input transforms in Lambda...
=> Maybe we could subclass nn.ModuleList here and automatically perform the same Lambda wrapping as in the constructor, but for append/extend and other methods...
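
A minimal sketch (not part of the PR; names are illustrative) of the trade-off discussed here: nn.ModuleList registers the sub-transforms and stays list-like (indexing, append/extend), while nn.Sequential ships its own forward; with ModuleList the container iterates explicitly:

import torch

class ComposeSketch(torch.nn.Module):
    # Hypothetical container, roughly mirroring the Compose variant discussed above.
    def __init__(self, transforms):
        super().__init__()
        self.transforms = torch.nn.ModuleList(transforms)

    def forward(self, img):
        # ModuleList has no forward of its own, so the loop is explicit here.
        for t in self.transforms:
            img = t(img)
        return img

# List-like mutation keeps working on a ModuleList:
c = ComposeSketch([torch.nn.Identity()])
c.transforms.append(torch.nn.Identity())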

new_transforms = []
for t in transforms:
    if callable(t) and not isinstance(t, torch.nn.Module):
        t = Lambda(t)
Collaborator Author (@vfdev-5):

A possible issue here: a user may want to configure some options of their custom transform during training, and if the transform is wrapped in Lambda then code like the following won't work...

transforms = Compose([
    ...
    CustomTransform(some_config),  # at index 5
    ...
])

# assert hasattr(CustomTransform, "update_config")
# after wrapping, transforms.transforms[5] is a Lambda around CustomTransform
# assert isinstance(transforms.transforms[5], Lambda)

for epoch in range(100):

    if epoch == 25:
        # fails if Lambda does not forward attribute access to the wrapped transform
        transforms.transforms[5].update_config(new_config)

As suggested by Philip, we could go with something that "either enables attribute access to the underlying object or at least gives a meaningful error message".

- added getattr to Lambda and tests
- updated code and docs of Compose
- added failing test with append/extend on Composed.transforms
@@ -443,7 +467,7 @@ def __repr__(self):
 class RandomOrder(RandomTransforms):
     """Apply a list of transformations in a random order
     """
-    def __call__(self, img):
+    def forward(self, img):
         order = list(range(len(self.transforms)))
         random.shuffle(order)
Member:

nit and not for this PR: this doesn't support torchscript

Collaborator Author (@vfdev-5):

RandomTransforms becomes an nn.Module, so shouldn't we use forward instead of __call__?

Member:

Yes, we should. I was just pointing out that this is not torchscriptable
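
For context, a minimal sketch (not from the PR): torch.jit.script compiles an nn.Module's forward, and nn.Module.__call__ dispatches to forward, so overriding forward keeps both eager calls and scripting working:

import torch

class AddOne(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # __call__ runs hooks and then forward, so instances stay callable;
        # TorchScript compiles this forward method.
        return x + 1

m = AddOne()
print(m(torch.zeros(2)))                     # eager: __call__ -> forward
print(torch.jit.script(m)(torch.zeros(2)))   # scripted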

], p=0.3),
T.Compose([
T.RandomResizedCrop(15),
T.RandomApply([
Member:

nit: We are missing tests for RandomOrder

Collaborator Author (@vfdev-5):

Well, I can add a basic test for that as RandomOrder is not scriptable

        except TypeError:
            self.transforms = transforms

    def _forward_impl(self, img: Tensor) -> Tensor:
        for t in self.transforms:
            img = t(img)
        return img
Member:

Why not put the implementation inside forward?


"""Apply a user-defined lambda as a transform.

.. Note::
This class exposes ``lambd`` attributes as its own attributes:
Member:

Question: do we want to mention it in the doc? Does this make it less of an implementation detail?

Collaborator Author (@vfdev-5):

Maybe we can rephrase it to say that a custom op wrapped by Lambda behaves similarly to the original custom op, so we do not have to go through op.lambd to access its attributes.
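
A minimal, hypothetical sketch of the attribute delegation being discussed (not the PR's final code; the __get/setattr__ behavior was later removed, see the summary further down):

import torch

class LambdaSketch(torch.nn.Module):
    # Hypothetical wrapper illustrating delegation to the wrapped callable.
    def __init__(self, lambd):
        super().__init__()
        self.lambd = lambd  # plain attribute: lambd is a callable, not an nn.Module

    def forward(self, img):
        return self.lambd(img)

    def __getattr__(self, name):
        # nn.Module defines __getattr__ for parameters/buffers/submodules;
        # try that first, then fall back to the wrapped callable.
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.__dict__["lambd"], name)

class MyTransform:
    def __init__(self, s):
        self.s = s
    def __call__(self, x):
        return x + self.s

t = LambdaSketch(MyTransform(10))
print(t.s)   # 10, read from the wrapped transform without going through t.lambd
print(t(5))  # 15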

-        for t in self.transforms:
-            img = t(img)
-        return img
+        return self._forward_impl(img)
Member:

Now I see why you implemented Compose with _forward_impl. I wonder if this is really necessary, given that we only save two lines while adding complexity for the user reading the function?

Collaborator Author (@vfdev-5):

Personally, I'd prefer to refactor... If you think it is better, for transparency reasons, to keep the code explicit, I'm fine with that too.

])

t.transforms.append(transforms.ToTensor())
t.transforms.append(lambda x: x + 2) # THIS DOES NOT WORK
Member:

Can you explain why this doesn't work?

Collaborator Author (@vfdev-5):

This fails because lambda x: x + 2 is not an nn.Module. Here we modify an already-constructed Compose with custom ops that are not wrapped by Lambda.
See also the discussion above.
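
A minimal sketch (assuming self.transforms is a torch.nn.ModuleList, as in the Compose variant of this PR) of the failure mode:

import torch

mods = torch.nn.ModuleList([torch.nn.Identity()])

# nn.ModuleList.append only accepts nn.Module instances, so a bare lambda fails:
try:
    mods.append(lambda x: x + 2)
except TypeError as exc:
    print(exc)  # "... is not a Module subclass"

# Wrapping the callable in an nn.Module first (as the constructor's Lambda
# wrapping does) is what append/extend would also need to do.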

Comment on lines 1742 to 1747
t = MyTransform(10)
t_lambda = transforms.Lambda(t)
self.assertEqual(t_lambda.s, t.s)
t_lambda.s = 5
self.assertEqual(t_lambda.s, t.s)
self.assertEqual(t_lambda(10), t(10))
Member:

Hum, now I think I see why you override __getattr__. This wasn't the case before; I suppose this is to try to keep BC with Compose, if the user wants to access those fields after we wrapped their non-nn.Module transform in a Lambda?

My first thought would be to avoid this type of magic as it could be confusing to the user, but I'm not sure I see a better solution for this yet, apart from encouraging all new use cases to use nn.Sequential and letting Compose be the not-recommended module to use.

Thoughts?

@vfdev-5 (Collaborator Author) commented Sep 16, 2020:

Yes, the reason to override __getattr__ and __setattr__ is to keep BC with Compose. More details above.

> apart from encouraging all new use cases to use nn.Sequential and letting Compose be the not-recommended module to use.

I'm not sure that this can be a solution for custom ops, where users may call, for example, OpenCV to transform things...

@vfdev-5 (Collaborator Author) commented Sep 18, 2020

According to the discussion with @fmassa:

  • Let's keep Compose as is => no BC break
  • Update its docs to mention using nn.Sequential for torch.jit.script (see the sketch below)
  • Remove __get/setattr__ from Lambda
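
A minimal sketch (assuming a torchvision release with scriptable transforms, e.g. 0.8+) of the nn.Sequential route that the docs would recommend for torch.jit.script:

import torch
import torchvision.transforms as T

# nn.Sequential only holds nn.Module transforms and ships its own forward,
# so the whole pipeline can be scripted.
pipeline = torch.nn.Sequential(
    T.CenterCrop(10),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
)
scripted = torch.jit.script(pipeline)

out = scripted(torch.rand(3, 32, 32))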

@vfdev-5 vfdev-5 changed the title [WIP][BC-breaking] All transforms are now derived from torch.nn.Module (2641) All transforms are now derived from torch.nn.Module (2641) Sep 22, 2020
@vfdev-5 vfdev-5 changed the title All transforms are now derived from torch.nn.Module (2641) Normalize, LinearTransformation are scriptable (2641) Sep 22, 2020
@vfdev-5 vfdev-5 changed the title Normalize, LinearTransformation are scriptable (2641) Normalize, LinearTransformation are scriptable Sep 22, 2020
codecov bot commented Sep 22, 2020

Codecov Report

Merging #2645 into master will increase coverage by 0.02%.
The diff coverage is 64.70%.


@@            Coverage Diff             @@
##           master    #2645      +/-   ##
==========================================
+ Coverage   72.79%   72.82%   +0.02%     
==========================================
  Files          95       95              
  Lines        8203     8212       +9     
  Branches     1280     1283       +3     
==========================================
+ Hits         5971     5980       +9     
+ Misses       1841     1838       -3     
- Partials      391      394       +3     
Impacted Files                          Coverage            Δ
torchvision/transforms/functional.py    82.13% <42.85%>     (ø)
torchvision/transforms/transforms.py    80.89% <70.37%>     (+0.30%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 8dfcff7...5fbdbbd

@vfdev-5 vfdev-5 requested a review from fmassa September 22, 2020 14:37
@fmassa (Member) left a comment:

Thanks a lot!

@fmassa fmassa merged commit 1b41525 into pytorch:master Sep 23, 2020
bryant1410 pushed a commit to bryant1410/vision-1 that referenced this pull request Nov 22, 2020
* [WIP] All transforms are now derived from torch.nn.Module
- Compose, RandomApply, Normalize can be jit scripted

* Fixed flake8

* Updated code and docs
- added getattr to Lambda and tests
- updated code and docs of Compose
- added failing test with append/extend on Composed.transforms

* Fixed flake8

* Updated code, tests and docs
vfdev-5 added a commit to Quansight/vision that referenced this pull request Dec 4, 2020
@vfdev-5 vfdev-5 deleted the vfdev-5/jit-support-2641 branch January 21, 2022 10:27