[cadence][hifi] update quantized_relu_per_tensor_out signature to match internal flow #8143
Conversation
CI status (hud.pytorch.org/pr/pytorch/executorch/8143): 1 new failure as of commit ae50298 with merge base ee6f2d9; 1 SEV was active at the time of this PR.
This pull request was exported from Phabricator. Differential Revision: D69015308
Force-pushed 588969b to 6a0c088.
Summary: fix quantized_relu_per_tensor. Differential Revision: D69015308
Force-pushed 6a0c088 to ae50298.
@@ -48,24 +48,27 @@ void quantized_relu_(
void quantized_relu_per_tensor_out(
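For context, here is a minimal sketch of what a per-tensor quantized ReLU out-variant computes. It assumes input and output share the same scale and zero point, in which case ReLU reduces to an elementwise max against the input zero point (the quantized representation of real 0.0). The function name and signature are hypothetical; the actual Cadence/HiFi signature is what this PR changes.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a per-tensor quantized ReLU "out" variant.
// Assumes input and output use identical quantization parameters, so
// real-valued max(x, 0) maps to max(q, in_zero_point) in the quantized
// domain. Real kernels requantize when the parameters differ.
void quantized_relu_per_tensor_out_sketch(
    const std::vector<int8_t>& input,
    int64_t in_zero_point,
    std::vector<int8_t>& output) {
  output.resize(input.size());
  for (size_t i = 0; i < input.size(); ++i) {
    // Values below the zero point encode negative reals; clamp them up.
    output[i] =
        std::max<int8_t>(input[i], static_cast<int8_t>(in_zero_point));
  }
}
```

With a zero point of 0, quantized values encoding negative reals are clamped to 0 while non-negative ones pass through unchanged.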
Is this not the operator called by ET? If so, please move this inside an anonymous namespace.
Realized most of the HiFi ops are not in an anonymous namespace yet. Will do that separately.
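For reference, a minimal sketch of the anonymous-namespace pattern the reviewer suggests: wrapping a function in an unnamed namespace gives it internal linkage, so it cannot collide with a same-named symbol in another translation unit. The names below are hypothetical, not the actual HiFi kernels.

```cpp
namespace {
// Internal-linkage helper: visible only within this translation unit,
// so another .cpp file may define its own quantized_relu_helper without
// a duplicate-symbol error at link time.
int quantized_relu_helper(int q, int zero_point) {
  return q > zero_point ? q : zero_point;
}
} // namespace

// The externally visible operator entry point (hypothetical name) can
// still call the helper from within the same translation unit.
int quantized_relu_op(int q, int zero_point) {
  return quantized_relu_helper(q, zero_point);
}
```

Only the symbols intended as operator entry points stay at namespace scope; everything else becomes file-local, which is the cleanup the reviewer proposes doing across the HiFi ops in a follow-up.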
Differential Revision: D69015308