[proto] Added RandomCrop transform and tests #6271
Conversation
def fn(
    id: Tuple[Any, ...], input: Any
) -> Optional[Tuple[Tuple[Any, ...], Union[PIL.Image.Image, torch.Tensor, features.Image]]]:
-    if type(input) in {torch.Tensor, features.Image} or isinstance(input, PIL.Image.Image):
+    if type(input) == torch.Tensor or isinstance(input, (PIL.Image.Image, features.Image)):
@pmeier I suggest checking for image-like types in the following way. This makes it possible to use mocker with a spec type; see the sketch below.
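A minimal sketch (not part of the PR) of why the isinstance-based check plays nicely with mocks created from a spec; the import path for features.Image follows the prototype namespace referenced in the diff, and the mock object itself is purely illustrative.

from unittest import mock

import PIL.Image
import torch
from torchvision.prototype import features

mock_image = mock.Mock(spec=features.Image)

# The isinstance-based check accepts the mock, because Mock(spec=...) reports
# the spec class as its __class__:
assert isinstance(mock_image, (PIL.Image.Image, features.Image))

# The exact-type check rejects it, since type(mock_image) is Mock:
assert type(mock_image) not in {torch.Tensor, features.Image}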
    raise TypeError("No image was found in the sample")
# vfdev-5: let's use tree_flatten instead of query_recursively and internal fn to make the code simpler
Let's simplify the code using tree_flatten instead of home-made methods; a sketch of the suggested approach follows.
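A minimal sketch (not from the PR) of what the tree_flatten-based lookup could look like: the leaves come back as a flat list plus a spec describing the structure, so finding the image no longer needs a recursive helper. The sample dict below is illustrative only.

import PIL.Image
import torch
from torch.utils._pytree import tree_flatten, tree_unflatten

sample = {"image": torch.rand(3, 32, 32), "label": 7, "meta": ["a", "b"]}

# tree_flatten returns the leaves plus a spec describing the original structure
flat_inputs, spec = tree_flatten(sample)

# finding the image becomes a plain loop over the flat list
images = [x for x in flat_inputs if isinstance(x, (PIL.Image.Image, torch.Tensor))]
assert len(images) == 1

# the (possibly transformed) leaves can be packed back into the original structure
restored = tree_unflatten(flat_inputs, spec)
assert restored.keys() == sample.keys()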
def _forward(self, flat_inputs: List[Any]) -> List[Any]:
    if self.padding is not None:
        flat_inputs = [F.pad(flat_input, self.padding, self.fill, self.padding_mode) for flat_input in flat_inputs]

    image = query_image(flat_inputs)
    _, height, width = get_image_dimensions(image)

    # pad the width if needed
    if self.pad_if_needed and width < self.size[1]:
        padding = [self.size[1] - width, 0]
        flat_inputs = [F.pad(flat_input, padding, self.fill, self.padding_mode) for flat_input in flat_inputs]
    # pad the height if needed
    if self.pad_if_needed and height < self.size[0]:
        padding = [0, self.size[0] - height]
        flat_inputs = [F.pad(flat_input, padding, self.fill, self.padding_mode) for flat_input in flat_inputs]

    params = self._get_params(flat_inputs)

    return [F.crop(flat_input, **params) for flat_input in flat_inputs]

def forward(self, *inputs: Any) -> Any:
    from torch.utils._pytree import tree_flatten, tree_unflatten

    sample = inputs if len(inputs) > 1 else inputs[0]

    flat_inputs, spec = tree_flatten(sample)
    out_flat_inputs = self._forward(flat_inputs)
    return tree_unflatten(out_flat_inputs, spec)
Here, we can't just override the _transform method, because we need to access the image data and generate the crop params from the already padded sample. The proposed solution is to 1) flatten the sample structure into a list, 2) apply the pad + crop logic to every leaf, and 3) unflatten the output back into the input sample structure, as sketched below.
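A hedged sketch of that three-step flow using the stable torchvision functional API on plain tensors (the PR itself uses the prototype dispatchers); the nested sample, the fixed crop origin, and the target size are illustrative only, and the real transform draws its crop parameters from _get_params.

import torch
import torchvision.transforms.functional as F
from torch.utils._pytree import tree_flatten, tree_unflatten

sample = {"image": torch.rand(3, 28, 28), "extra": torch.rand(3, 28, 28)}

# 1) flatten the sample structure into a list of leaves
flat_inputs, spec = tree_flatten(sample)

# 2) apply pad-if-needed followed by a crop to every leaf
target_h, target_w = 32, 32
flat_inputs = [F.pad(x, [0, 0, target_w - 28, target_h - 28]) for x in flat_inputs]
flat_inputs = [F.crop(x, top=0, left=0, height=target_h, width=target_w) for x in flat_inputs]

# 3) unflatten the outputs back into the original structure
out = tree_unflatten(flat_inputs, spec)
assert out["image"].shape == (3, 32, 32)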
Summary:

* [proto] Added few transforms tests, part 1 (#6262)
* Added supported/unsupported data checks in the tests for cutmix/mixup
* Added RandomRotation, RandomAffine transforms tests
* Added tests for RandomZoomOut, Pad
* Update test_prototype_transforms.py
* Added RandomCrop transform and tests (#6271)
* [proto] Added GaussianBlur transform and tests (#6273)
* Added GaussianBlur transform and tests
* Fixing code format
* Copied correctness test
* [proto] Added random color transforms and tests (#6275)
* Added random color transforms and tests
* Disable smoke test for RandomSolarize, RandomAdjustSharpness
* Added RandomPerspective and tests (#6284) - replaced real image creation by mocks for other tests
* Added more functional tests (#6285)
* [proto] Added elastic transform and tests (#6295)
* WIP [proto] Added functional elastic transform with tests
* Added more functional tests
* WIP on elastic op
* Added elastic transform and tests
* Added tests
* Added tests for ElasticTransform
* Try to format code as in #5106
* Fixed bug in affine get_params test
* Implemented RandomErase on PIL input as fallback to tensors (#6309) - added tests
* Added image_size computation for BoundingBox.rotate if expand (#6319)
* Added tests
* Added erase_image_pil and eager/jit erase_image_tensor test (#6320)
* Updates according to the review

Reviewed By: NicolasHug
Differential Revision: D38351755
fbshipit-source-id: 4b52b530f93c3dfa92326e01803665cb44003a63
Co-authored-by: Vasilis Vryniotis <[email protected]>
Description: