Normalize, LinearTransformation are scriptable #2645
Conversation
- Compose, RandomApply, Normalize can be jit scripted
I have no other pipeline to test this against. Thus, my comments are only based on this PR.
torchvision/transforms/transforms.py
Outdated
        t = Lambda(t)
    new_transforms.append(t)

self.transforms = torch.nn.ModuleList(new_transforms)
What is the motivation for using a ModuleList here? Given that _forward_impl simply passes the input sequentially through all transforms, would a Sequential be a better fit?
I think the reason is that, in case of an exception, you'll still have a plain list of transforms, so you need to perform the forward implementation yourself.
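For context, a minimal sketch of the pattern being discussed (a simplified reconstruction, not the exact torchvision code from this PR): the constructor tries to build an nn.ModuleList and falls back to a plain list, so the forward loop has to be written by hand rather than delegated to nn.Sequential.

```python
import torch
from torch import Tensor


class ComposeSketch(torch.nn.Module):
    # Hypothetical name; the real class in the diffs below is Compose.
    def __init__(self, transforms):
        super().__init__()
        try:
            # Only works if every entry is an nn.Module (or was wrapped into one).
            self.transforms = torch.nn.ModuleList(transforms)
        except TypeError:
            # Fall back to a plain list of callables; nn.Sequential could not
            # hold these, so the forward loop is written explicitly either way.
            self.transforms = transforms

    def forward(self, img: Tensor) -> Tensor:
        for t in self.transforms:
            img = t(img)
        return img
```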
Here is another point for using ModuleList instead of Sequential. Previously, Compose.transforms was of the same type as the input transforms (tuple, list): a user could create a Compose and then append/extend its contents. However, in those cases we cannot simply wrap the input transforms with Lambda...
=> Maybe we can subclass nn.ModuleList here and automatically perform the same Lambda wrapping as in the constructor, but for append/extend and the other mutating methods; see the sketch below.
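A rough sketch of that idea (hypothetical class name, not code from this PR), assuming Lambda is an nn.Module as on this branch: a ModuleList subclass that applies the same Lambda wrapping whenever its contents are mutated.

```python
import torch


class Lambda(torch.nn.Module):
    # Self-contained stand-in for the nn.Module-based Lambda from this PR;
    # the real class lives in torchvision.transforms.
    def __init__(self, lambd):
        super().__init__()
        self.lambd = lambd

    def forward(self, img):
        return self.lambd(img)


class WrappingModuleList(torch.nn.ModuleList):
    """Hypothetical ModuleList that wraps plain callables in Lambda on mutation."""

    @staticmethod
    def _wrap(t):
        if callable(t) and not isinstance(t, torch.nn.Module):
            t = Lambda(t)
        return t

    def append(self, module):
        return super().append(self._wrap(module))

    def extend(self, modules):
        return super().extend([self._wrap(m) for m in modules])

    def insert(self, index, module):
        return super().insert(index, self._wrap(module))


transforms = WrappingModuleList()
transforms.append(lambda x: x + 2)      # wrapped into Lambda automatically
assert isinstance(transforms[0], Lambda)
```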
torchvision/transforms/transforms.py
Outdated
new_transforms = []
for t in transforms:
    if callable(t) and not isinstance(t, torch.nn.Module):
        t = Lambda(t)
An issue here can be the following: a user may want to reconfigure some options of their custom transform during training, and if it is wrapped by Lambda then code like the following no longer works...
transforms = Compose([
    ...
    CustomTransform(some_config),  # at index 5
    ...
])
# assert hasattr(CustomTransform, "update_config")
# assert isinstance(transforms[5], Lambda)
for epoch in range(100):
    if epoch == 25:
        transforms[5].update_config(new_config)
As suggested by Philip, we could "either enable attribute access to the underlying object or at least give a meaningful error message".
- added getattr to Lambda and tests
- updated code and docs of Compose
- added failing test with append/extend on Compose.transforms
torchvision/transforms/transforms.py
Outdated
@@ -443,7 +467,7 @@ def __repr__(self):
 class RandomOrder(RandomTransforms):
     """Apply a list of transformations in a random order
     """
-    def __call__(self, img):
+    def forward(self, img):
         order = list(range(len(self.transforms)))
         random.shuffle(order)
nit and not for this PR: this doesn't support torchscript
RandomTransforms becomes an nn.Module, so shouldn't we use forward instead of __call__?
Yes, we should. I was just pointing out that this is not torchscriptable
test/test_transforms_tensor.py
Outdated
], p=0.3),
T.Compose([
    T.RandomResizedCrop(15),
    T.RandomApply([
nit: We are missing tests for RandomOrder
Well, I can add a basic test for that, as RandomOrder is not scriptable.
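For illustration, a sketch of such a basic test (not the exact test added to test_transforms_tensor.py): it exercises RandomOrder in eager mode and asserts that scripting it fails.

```python
import torch
import torchvision.transforms as T


def test_random_order_not_scriptable():
    # Transform choice is illustrative; any tensor-compatible transforms work.
    t = T.RandomOrder([T.RandomHorizontalFlip(p=1.0), T.RandomVerticalFlip(p=1.0)])
    img = torch.rand(3, 16, 16)
    assert t(img).shape == img.shape   # eager mode still works

    # RandomOrder shuffles self.transforms with random.shuffle, which
    # torchscript cannot compile, so scripting is expected to raise.
    try:
        torch.jit.script(t)
    except Exception:
        pass
    else:
        raise AssertionError("expected scripting RandomOrder to fail")
```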
torchvision/transforms/transforms.py
Outdated
        except TypeError:
            self.transforms = transforms

    def _forward_impl(self, img: Tensor) -> Tensor:
        for t in self.transforms:
            img = t(img)
        return img
Why not put the implementation inside forward?
torchvision/transforms/transforms.py
Outdated
"""Apply a user-defined lambda as a transform. | ||
|
||
.. Note:: | ||
This class exposes ``lambd`` attributes as its own attributes: |
Question: do we want to mention it in the doc? Does this make it less of an implementation detail?
Maybe we can rephrase it to say that a custom op wrapped by Lambda behaves similarly to the original custom op, and we do not have to go through op.lambd to access its attributes.
torchvision/transforms/transforms.py
Outdated
-        for t in self.transforms:
-            img = t(img)
-        return img
+        return self._forward_impl(img)
Now I see why you implemented Compose with _forward_impl. I wonder if this is really necessary, given that we only save two lines and it adds complexity for the user reading the function.
Personally, I'd prefer to refactor... If you think it is better for transparency to keep the code explicit, I'm fine with that too.
test/test_transforms.py
Outdated
])

t.transforms.append(transforms.ToTensor())
t.transforms.append(lambda x: x + 2)  # THIS DOES NOT WORK
Can you explain why this doesn't work?
This fails because lambda x: x + 2 is not an nn.Module: here we modify an already constructed Compose with custom ops that are not wrapped by Lambda. See also the discussion above.
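A minimal repro of the failure (plain PyTorch, independent of this PR): nn.ModuleList.append only accepts nn.Module instances, so a bare lambda appended after construction is rejected instead of being wrapped.

```python
import torch

mods = torch.nn.ModuleList([torch.nn.Identity()])
mods.append(torch.nn.ReLU())        # fine: an nn.Module

try:
    mods.append(lambda x: x + 2)    # plain callable, not an nn.Module
except TypeError as exc:
    print("rejected as expected:", exc)
```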
test/test_transforms.py
Outdated
t = MyTransform(10)
t_lambda = transforms.Lambda(t)
self.assertEqual(t_lambda.s, t.s)
t_lambda.s = 5
self.assertEqual(t_lambda.s, t.s)
self.assertEqual(t_lambda(10), t(10))
Hum, now I think I see why you override __getattr__. This wasn't the case before; I suppose this is to try to keep BC with Compose, in case the user wants to access those fields after we wrapped their non-nn.Module transform in a Lambda?
My first thought would be to avoid this type of magic, as it could be confusing to the user, but I'm not sure I see a better solution for this yet, apart from encouraging all new use cases to use nn.Sequential and no longer recommending Compose.
Thoughts?
Yes, the reason to override __getattr__ and __setattr__ is to be BC with Compose. More details above.

"apart from encouraging all new use cases to use nn.Sequential and no longer recommending Compose"

I'm not sure that this can be a solution for custom ops, where users may call, for example, OpenCV to transform things...
According to the discussion with @fmassa:
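Separately from that discussion, for reference, a minimal sketch of the kind of __getattr__/__setattr__ delegation described above (a simplified reconstruction under assumed behaviour, not the exact Lambda code from this PR):

```python
import torch


class LambdaSketch(torch.nn.Module):
    # Hypothetical stand-in for the Lambda wrapper discussed above.
    def __init__(self, lambd):
        super().__init__()
        self.lambd = lambd

    def forward(self, img):
        return self.lambd(img)

    def __getattr__(self, name):
        # Called only when normal lookup (including nn.Module's parameter /
        # buffer / submodule registries) fails; fall back to the wrapped object.
        try:
            return super().__getattr__(name)
        except AttributeError:
            lambd = self.__dict__.get("lambd")
            if lambd is None:
                raise
            return getattr(lambd, name)

    def __setattr__(self, name, value):
        # Write through to the wrapped object when it already owns the
        # attribute, so e.g. wrapper.s = 5 also updates the wrapped op's s.
        lambd = self.__dict__.get("lambd")
        if name != "lambd" and lambd is not None and hasattr(lambd, name):
            setattr(lambd, name, value)
        else:
            super().__setattr__(name, value)
```

With a delegation like this, the BC test quoted above (setting t_lambda.s = 5 and still having t_lambda.s equal to t.s) passes for the sketch.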
Codecov Report

@@            Coverage Diff             @@
##           master    #2645      +/-   ##
==========================================
+ Coverage   72.79%   72.82%   +0.02%
==========================================
  Files          95       95
  Lines        8203     8212       +9
  Branches     1280     1283       +3
==========================================
+ Hits         5971     5980       +9
+ Misses       1841     1838       -3
- Partials      391      394       +3

Continue to review full report at Codecov.
Thanks a lot!
* [WIP] All transforms are now derived from torch.nn.Module - Compose, RandomApply, Normalize can be jit scripted
* Fixed flake8
* Updated code and docs - added getattr to Lambda and tests - updated code and docs of Compose - added failing test with append/extend on Compose.transforms
* Fixed flake8
* Updated code, tests and docs
Description:
- Compose, RandomApply, Normalize can be jit scripted
- all transforms are now derived from torch.nn.Module
- removed object from class definition

Checked on
TODO:
We need extensive checking of possible data augmentation pipelines to stress the code. Any help with that, and reporting bugs here, is appreciated.
cc @pmeier @fmassa
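To get started on that, a sketch of the kind of smoke check one could run (a minimal example under assumptions; the transform choice and values are illustrative, and it only relies on Normalize being scriptable, per the PR title):

```python
import torch
import torchvision.transforms as T

# Build a small tensor-only pipeline and compare eager vs scripted outputs.
pipeline = torch.nn.Sequential(
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
)
scripted = torch.jit.script(pipeline)

img = torch.rand(3, 32, 32)
assert torch.allclose(pipeline(img), scripted(img))
print("eager and scripted pipelines agree")
```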