
Make mutation test work with quantized tensors #108935


Closed
wants to merge 5 commits

Conversation

@ezyang ezyang (Contributor) commented Sep 9, 2023

Stack from ghstack (oldest at bottom):


You can't use torch.equal because NaN doesn't compare equal to
itself, but if you reinterpret the tensors as int8 tensors and
compare those, it works. As an added bonus, this approach works
with quantized tensors.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

[ghstack-poisoned]
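
A minimal sketch of the comparison described above, assuming a helper named `bitwise_equal` as in the diff later in this thread (the body here is illustrative, not necessarily the merged implementation):

```python
import torch

# torch.equal follows IEEE 754 semantics, so NaN does not compare
# equal to itself:
a = torch.tensor([float("nan")])
b = a.clone()
assert not torch.equal(a, b)

def bitwise_equal(lhs, rhs):
    # Reinterpret the raw storage as int8 so NaN payloads compare
    # byte-for-byte instead of by floating-point equality.
    if lhs.is_quantized:
        # Quantized tensors don't support view(dtype); int_repr()
        # exposes their underlying integer storage instead. (A fuller
        # check might also compare scale and zero_point.)
        return torch.equal(lhs.int_repr(), rhs.int_repr())
    return torch.equal(lhs.view(torch.int8), rhs.view(torch.int8))

assert bitwise_equal(a, b)  # bit-identical even though both hold NaN
```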
@pytorch-bot pytorch-bot bot commented Sep 9, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/108935

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit 9947b31 with merge base 2b138e4:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following jobs failed, likely due to flakiness present on trunk, and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Sep 10, 2023
ghstack-source-id: 4446e2a
Pull Request resolved: #108935
ezyang added a commit that referenced this pull request Sep 12, 2023
ghstack-source-id: fdfa544
Pull Request resolved: #108935
@ezyang ezyang changed the title from "Use a bit-identical test for mutation test" to "Make mutation test work with quantized tensors" on Sep 12, 2023
@ezyang ezyang (Contributor, Author) commented Sep 13, 2023

@pytorchbot merge -f "regular flow is sus"

@pytorchmergebot pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f only as a last resort; consider -i/--ignore-current instead, which continues the merge while ignoring current failures. That allows currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@@ -42,6 +42,14 @@ def display_ops(self):
         print(*self.ops, sep=",")

     def __torch_dispatch__(self, func, types, args=(), kwargs=None):
+        def bitwise_equal(lhs, rhs):
+            if lhs.is_quantized:
A reviewer (Contributor) commented:
Do we actually have this use case? Is this tracing a model quantized with eager mode quantization?

ezyang (Contributor, Author) replied:
I was working on applying this cross-ref test to more operators, and the quantized ones started failing, so I fixed it with this. This is testing code.
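
For context on why the quantized branch is needed: a quantized tensor carries scale and zero-point metadata on top of its integer storage, which is why the comparison goes through `int_repr()`. A small illustration (the values below are made up):

```python
import torch

# A per-tensor quantized tensor stores int8 values plus scale/zero_point
# metadata; the example values here are arbitrary.
x = torch.quantize_per_tensor(
    torch.tensor([0.0, 0.5, 1.0]), scale=0.1, zero_point=0, dtype=torch.qint8
)
# view(torch.int8) is not supported on quantized tensors, but int_repr()
# exposes the underlying int8 storage, which torch.equal can compare.
print(x.int_repr())  # tensor([ 0,  5, 10], dtype=torch.int8)
```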
