
Qualcomm AI Engine Direct - Fix UT example script hang when exception happened #4355


Closed

Conversation

@winskuo-quic (Collaborator) commented Jul 23, 2024

Summary:

  • Fix the UT example script hang when an exception occurs during execution: the main process waits for the child process to return a message, but the child exits without the exception being properly caught, so the parent blocks forever (see the sketch after this list).
  • Remove the RemoveRedundancy pass from the quantizer to resolve memory-format issues during quantization.
  • Prevent constants from being dequantized twice in the AnnotateQuantAttrs pass.
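
The actual fix lives in the QNN example/test utilities; the snippet below is only a minimal, self-contained sketch of the failure mode and the remedy (all names here are hypothetical, not the real test code): the child wraps its work in a try/except and always sends a status back, so the parent's blocking receive cannot wait forever when the child dies on an exception.

```python
import multiprocessing as mp
import traceback


def _child_entry(conn, fn, *args):
    # Catch every exception so the child always reports back; if it exited
    # without sending anything, the parent would block forever on recv().
    try:
        conn.send(("ok", fn(*args)))
    except Exception:
        conn.send(("error", traceback.format_exc()))
    finally:
        conn.close()


def run_in_child(fn, *args):
    recv_conn, send_conn = mp.Pipe(duplex=False)
    proc = mp.Process(target=_child_entry, args=(send_conn, fn, *args))
    proc.start()
    status, payload = recv_conn.recv()  # does not hang: the child always sends a status
    proc.join()
    if status == "error":
        raise RuntimeError(f"child process failed:\n{payload}")
    return payload


def _example_workload(x):
    if x < 0:
        raise ValueError("simulated failure inside the child")
    return x * 2


if __name__ == "__main__":
    print(run_in_child(_example_workload, 21))   # prints 42
    run_in_child(_example_workload, -1)          # raises in the parent instead of hanging
```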

@pytorch-bot (bot) commented Jul 23, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4355

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit b41626f with merge base 5a20a49:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jul 23, 2024
@winskuo-quic changed the title from "Fix UT example script hang when exception happened" to "Qualcomm AI Engine Direct - Fix UT example script hang when exception happened" Jul 23, 2024
@winskuo-quic (Collaborator Author)

Hi @cccclai,
This PR makes the UT example scripts capture exceptions properly, so they do not hang.
Please have a look at this PR.
Thanks.

@facebook-github-bot (Contributor)

@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@winskuo-quic force-pushed the dev1/winskuo/fix_ut_hang branch from d17d18a to b41626f on July 29, 2024 06:46
@winskuo-quic (Collaborator Author)

Hi @cccclai,

It seems there are some failures in CI, so I have force-pushed another commit to retrigger it.
I have also fixed some other issues, including:

  • Removed the RemoveRedundancy() pass from the quantizer, as it causes memory-format errors during quantization after the switch from capture_pre_autograd_graph to export.
  • Added checks to the AnnotateQuantAttrs() pass so constant values do not get dequantized twice in some edge cases (see the sketch below).
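
Purely as an illustration of the double-dequantization guard (this is not the actual AnnotateQuantAttrs() implementation; the marker key and helper are hypothetical): the idea is to record on the node that its constant has already been dequantized and skip it on a second visit.

```python
from typing import Callable

from torch.fx import Node

# Hypothetical marker key; the real pass uses its own bookkeeping.
_DEQUANTIZED_KEY = "qnn_constant_dequantized"


def dequantize_constant_once(node: Node, dequantize_fn: Callable[[Node], None]) -> None:
    # Skip constants that were already folded back to float once.
    if node.meta.get(_DEQUANTIZED_KEY, False):
        return
    dequantize_fn(node)
    node.meta[_DEQUANTIZED_KEY] = True
```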

Please have a look.
Thanks

@facebook-github-bot (Contributor)

@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@cccclai (Contributor) left a comment


LGTM.

@@ -187,6 +187,7 @@ def build_executorch_binary(
quantizer = QnnQuantizer()
quantizer.add_custom_quant_annotations(custom_annotations)
quantizer.set_per_channel_linear_quant(per_channel_linear)
quantizer.set_per_channel_conv_quant(True)
Contributor

Is it always true or configurable?

Collaborator Author

I think we keep it always true, as a couple of models show bad accuracy when per_channel_conv quantization is turned off.
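
For reference, the setting is hard-coded to True in the example script but remains a normal toggle on the quantizer API shown in the diff above; a minimal sketch (the import path is an assumption and may differ):

```python
# Sketch only: per-channel conv quantization stays configurable even though
# the example script enables it unconditionally (import path assumed).
from executorch.backends.qualcomm.quantizer.quantizer import QnnQuantizer

quantizer = QnnQuantizer()
quantizer.set_per_channel_linear_quant(True)
quantizer.set_per_channel_conv_quant(False)  # opt out per use case if needed
```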

@@ -182,7 +181,6 @@ def set_per_channel_linear_quant(self, enable: bool) -> None:
self._update_per_channel_weight_quant_ops(linear_ops, enable)

def transform_for_annotation(self, model: GraphModule) -> GraphModule:
model = RemoveRedundancy()(model).graph_module
Contributor

Should we have a follow-up to add this back? It feels like we may have a perf regression without it.

@winskuo-quic (Collaborator Author) commented Jul 30, 2024

Thanks for reviewing. Although the RemoveRedundancy() pass is removed from the quantizer, it is still called during capture_program(), so the final performance should be the same.

RemoveRedundancy()(graph_module)
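
To make the point concrete, a minimal sketch of how the pass is applied in the lowering flow instead of in transform_for_annotation (function name and pass list are hypothetical; only the call pattern above, pass_instance(graph_module).graph_module, is taken from the code):

```python
from typing import Callable, Sequence

from torch.fx import GraphModule


def apply_qnn_passes(
    graph_module: GraphModule, passes: Sequence[Callable]
) -> GraphModule:
    # Each pass instance is called on the GraphModule; its result exposes the
    # transformed module as .graph_module, matching the
    # RemoveRedundancy()(graph_module) call shown above. capture_program()
    # runs a pipeline like this, so RemoveRedundancy still executes once per export.
    for fx_pass in passes:
        graph_module = fx_pass(graph_module).graph_module
    return graph_module
```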

@facebook-github-bot (Contributor)

@cccclai merged this pull request in e087ac8.
