
Tensor subclass boilerplate can be consolidated #710


Open
HDCharles opened this issue Aug 20, 2024 · 2 comments
Labels
good first issue Good for newcomers


@HDCharles (Contributor)

A lot of the code in our tensor subclasses can likely be consolidated into a base class that the other classes inherit from:

_get_to_kwargs:
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L64
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L276
(also needed for https://github.com/pytorch/ao/blob/main/torchao/quantization/autoquant.py#L40)

to:
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L594
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L290
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L423
(also needed for https://github.com/pytorch/ao/blob/main/torchao/quantization/autoquant.py#L40)

_apply_fn_to_data:
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L432

detach:
https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L444

A default `__repr__` would also be nice (its absence has caused bugs in the past).

If these were implemented in terms of `__tensor_flatten__` and `__tensor_unflatten__` (e.g. https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L412-L421),
you could write a general mixin/parent class that handles the general form of those methods.
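For example, a rough sketch of such a mixin (torch-free for brevity, and the names `TensorSubclassMixin`/`MyQuantTensor` are hypothetical; a real version would subclass `torch.Tensor` and route `detach` through `__torch_dispatch__`):

```python
# Hypothetical mixin: derives _apply_fn_to_data, detach and a default
# __repr__ from the subclass's __tensor_flatten__/__tensor_unflatten__.
# Torch-free illustration only.

class TensorSubclassMixin:
    def _apply_fn_to_data(self, fn):
        # flatten -> apply fn to every inner tensor -> rebuild the subclass
        tensor_names, ctx = self.__tensor_flatten__()
        new_data = {name: fn(getattr(self, name)) for name in tensor_names}
        return type(self).__tensor_unflatten__(new_data, ctx, None, None)

    def detach(self):
        return self._apply_fn_to_data(lambda t: t.detach())

    def __repr__(self):
        # default repr over the flattened fields, so every subclass
        # gets a readable one for free
        tensor_names, _ = self.__tensor_flatten__()
        inner = ", ".join(f"{n}={getattr(self, n)!r}" for n in tensor_names)
        return f"{type(self).__name__}({inner})"

class MyQuantTensor(TensorSubclassMixin):
    """Toy subclass: only declares its inner fields via flatten/unflatten."""
    def __init__(self, int_data, scale):
        self.int_data, self.scale = int_data, scale

    def __tensor_flatten__(self):
        return ["int_data", "scale"], None

    @classmethod
    def __tensor_unflatten__(cls, tensor_data, ctx, outer_size, outer_stride):
        return cls(tensor_data["int_data"], tensor_data["scale"])
```

With this, `_apply_fn_to_data`, `detach`, and `__repr__` no longer need to be re-declared per subclass.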

@HDCharles HDCharles added the good first issue Good for newcomers label Aug 20, 2024
@HDCharles (Contributor, Author)

@jerry39213gh thoughts?

@jerryzh168 (Contributor)

Yes, I think this makes sense. We can put it in https://github.com/pytorch/ao/blob/main/torchao/dtypes/utils.py. We can also attach

```python
implements = classmethod(_implements)
__torch_function__ = classmethod(_dispatch__torch_function__)
__torch_dispatch__ = classmethod(_dispatch__torch_dispatch__)
```

by default.

There could be an issue with a child class modifying the dispatch table of its parent class; if that becomes a problem, we can also add the class itself to the dispatch table key in the future.

jerryzh168 added a commit to jerryzh168/ao that referenced this issue Aug 22, 2024
Summary:
Fixes: pytorch#698

Also added `TorchAOBaseTensor` addressing part of pytorch#710

Test Plan:
python test/dtypes/test_affine_quantized.py

jerryzh168 added a commit that referenced this issue Aug 22, 2024
* Fix affine quantized tensor to device calls

Summary:
Fixes: #698

Also added `TorchAOBaseTensor` addressing part of #710

Test Plan:
python test/dtypes/test_affine_quantized.py

@jerryzh168 jerryzh168 self-assigned this Aug 27, 2024