[NOMERGE] drop in transforms v2 into the v1 tests #7159
pmeier wants to merge 42 commits into pytorch:main
Conversation
I worked on making all the v1 tests pass with the v2 transforms dropped in. Most of the time the differences are trivial and not worth bothering about, like different error messages. But there are some occurrences that require our attention. I marked them all as TODOs in the PR, but the main ones are:
This is an oversight on my side. Fix in #7177.
We intentionally did not port that behavior since it is deprecated and scheduled for removal in 0.15, i.e. the upcoming release: vision/torchvision/transforms/transforms.py, lines 327 to 333 (2cd25c1). I've opened #7176 to handle this as well as all the other scheduled removals.
Thanks a lot, Philip. Regarding
I made
I'll move on to
OK, just took care of that. For most transforms this is OK, and it is probably because either:
Either way, we should double check all of those marked as TODO to make sure these differences are expected. One thing to note though is that the assertion that
This just seems like wrong annotations. We didn't change anything about the functionality, so reverting back to whatever v1 has should do the trick. As for why our v2 tests are green: I designed our tests along the annotations of the functions, failing to realize how wrong they actually are. Meaning, we never tested tuples, hence the green CI: vision/test/prototype_transforms_kernel_infos.py, lines 440 to 454 (1120aa9).
It should not just be
In practice this meant dropping the helper. The error you were seeing was about the image and the grid not having the same dtype, but this doesn't account for floating point inputs that are not float32. As for the reason why our v2 tests didn't catch this: in order to reduce time to signal, we initially only wanted to test against a limited set of inputs: vision/test/prototype_common_utils.py, line 298 (1120aa9); vision/test/prototype_transforms_kernel_infos.py, line 1266 (1120aa9). I've opened #7195 to see if there are more bugs like this lurking in v2. Since #6945 was mostly about performance and the top comment shows significant improvements for
IIUC, the timing differences you experienced are just for the family of AA transforms, right? That is expected, since we actually changed the sampling strategy there. This is why they have separate consistency tests:
For
Edit: My assessment below is wrong. See #7159 (comment) for the actual reason.
|
test/test_transforms_tensor.py
I'm not sure what this test is supposed to do. I guess try to JIT script a Compose and see if it fails? Because that is what it should be doing:
vision/torchvision/transforms/transforms.py, lines 74 to 81 (1120aa9)
No idea why we are checking for a specific message here. The message is what you get if you try to script something without annotations. In that case JIT just assumes every variable is a torch.Tensor. Thus,
vision/torchvision/transforms/transforms.py, lines 94 to 95 (1120aa9)
just fails, because tensors cannot be called.
We should just remove the message here and be done with it.
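To illustrate the failure mode, here is a minimal sketch: a plain Python container in the spirit of Compose (this stand-in is hypothetical, not the torchvision source). Scripting it raises, because TorchScript cannot compile the un-annotated container; under the missing-annotation fallback JIT would treat the stored transforms as Tensors, and tensors cannot be called.

```python
import torch

# Hypothetical minimal stand-in for Compose: a plain class (not an
# nn.Module) that chains callables.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img):
        for t in self.transforms:
            img = t(img)
        return img

# Attempting to script the container fails; the test only needs to
# assert that it raises, not match a specific message.
try:
    torch.jit.script(Compose([torch.nn.ReLU()]))
    scripting_failed = False
except Exception:
    scripting_failed = True

print(scripting_failed)
```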
# TODO: None are passing - is it just because of randomness?
return
We should look into that. None carries a special meaning I think.
None is the default value for fill in AA, since the AA code calls the functional ops directly, and those have fill=None by default.
test/test_transforms.py
@vfdev-5 and I discussed offline and our answer is no. As explained in #6517, fill=None and fill=0 are not the same. However, some v1 transforms (the three highlighted here) simply coerce None into 0:
vision/torchvision/transforms/transforms.py, lines 1338 to 1339 (55d3ba6)
AFAIK this behavior is undocumented. In v2 we pass through None values:
vision/torchvision/transforms/v2/_utils.py, lines 56 to 61 (55d3ba6)
Note that in both cases, fill=None is not the default value:
vision/torchvision/transforms/transforms.py, line 1321 (55d3ba6)
vision/torchvision/transforms/v2/_geometry.py, line 333 (55d3ba6)
So v1 is silently ignoring user input, while v2 doesn't.
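A pure-Python sketch of the contrast (the helper names are hypothetical; the real logic lives in the v1 transforms and in transforms/v2/_utils.py):

```python
# Hypothetical helpers illustrating the two behaviors described above.

def v1_setup_fill(fill):
    # v1: the three highlighted transforms silently coerce None into 0,
    # so the user's "no fill" intent is lost.
    if fill is None:
        fill = 0
    return fill

def v2_setup_fill(fill):
    # v2: None is passed through unchanged, so downstream kernels can
    # distinguish "no fill" from "fill with zeros".
    return fill

print(v1_setup_fill(None))  # 0
print(v2_setup_fill(None))  # None
```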
Right now, this assertion should be:
assert t.fill is None
@pytest.mark.parametrize("device", cpu_and_gpu())
@pytest.mark.parametrize("policy", [policy for policy in T.AutoAugmentPolicy])
# TODO: Why are there failures only for CIFAR10??
I'll check that.
By the way, enabling a similar cifar10 policy in the prototype tests also shows a failure. Maybe there is a bug in the code port.
(it might have been fixed since then, I'm not sure)
No, it is failing here and in the proto tests. On main it does not fail on cifar10 here, but it does in the modified prototype tests with cifar10.
test/test_transforms.py
Right now it should be:
with pytest.raises(TypeError, match=r"(Input can either|supports inputs of type)"):
    F.rotate(x, 10)
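The alternation in the match pattern is meant to accept both the v1 and the v2 error messages. A quick check that it does (the message texts below are hypothetical stand-ins for the actual wordings):

```python
import re

pattern = r"(Input can either|supports inputs of type)"

# Hypothetical stand-ins for the v1 and v2 TypeError messages
v1_msg = "Input can either be a Tensor or a PIL image"
v2_msg = "rotate() supports inputs of type torch.Tensor"

# re.search, which pytest.raises(match=...) uses, finds either branch
print(bool(re.search(pattern, v1_msg)))  # True
print(bool(re.search(pattern, v2_msg)))  # True
```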
This PR should serve as a migration test for whether we messed something up in transforms v2 that was enforced by the v1 tests. If CI is green here, we can be reasonably confident that v2 is actually a drop-in replacement for v1.