feat(atenlib): add ops(native_layer_norm) #330
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main     #330      +/-   ##
==========================================
- Coverage   73.31%   73.28%   -0.04%
==========================================
  Files          96       96
  Lines        9512     9526      +14
==========================================
+ Hits         6974     6981       +7
- Misses       2538     2545       +7
LGTM, thanks!
# native_layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight, Tensor? bias, float eps) -> (Tensor, Tensor, Tensor)
-    raise NotImplementedError()
+    axes = [-i for i in range(len(normalized_shape), 0, -1)]
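For context, the axes list above selects the trailing `len(normalized_shape)` dimensions: for an input of shape `(2, 3, 4)` and `normalized_shape=(3, 4)` it yields `[-2, -1]`. A minimal NumPy reference of the intended semantics (illustrative only, not the PR's code; mean/rstd are returned with `keepdims` for simplicity):

```python
import numpy as np

def native_layer_norm_ref(x, normalized_shape, weight=None, bias=None, eps=1e-5):
    # Normalize over the trailing k = len(normalized_shape) dimensions,
    # i.e. axes [-k, ..., -1], exactly the list built in the diff above.
    axes = tuple(-i for i in range(len(normalized_shape), 0, -1))
    mean = x.mean(axis=axes, keepdims=True)
    rstd = 1.0 / np.sqrt(x.var(axis=axes, keepdims=True) + eps)
    y = (x - mean) * rstd
    if weight is not None:  # weight/bias have shape normalized_shape and broadcast
        y = y * weight
    if bias is not None:
        y = y + bias
    # native_layer_norm returns the normalized output plus the saved
    # mean and reciprocal standard deviation.
    return y, mean, rstd
```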
normalized_shape should be a tensor, so we can use ONNX ops on it? In the tests we can make normalized_shape always a tensor.
Alternatively, annotate normalized_shape as int for now?
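To illustrate the two options being suggested (hypothetical signatures, not code from this PR; `FLOAT`/`INT64` are onnxscript's tensor type annotations):

```python
from typing import Optional, Sequence

from onnxscript import FLOAT, INT64

# Option 1: take normalized_shape as an INT64 tensor input, so ONNX ops
# (e.g. Size) can operate on it inside the function body.
def native_layer_norm_v1(
    input: FLOAT,
    normalized_shape: INT64,
    weight: Optional[FLOAT],
    bias: Optional[FLOAT],
    eps: float,
): ...

# Option 2: annotate it as a plain Python int sequence for now, resolving
# the number of normalized axes at trace time, as the current diff does.
def native_layer_norm_v2(
    input: FLOAT,
    normalized_shape: Sequence[int],
    weight: Optional[FLOAT],
    bias: Optional[FLOAT],
    eps: float,
): ...
```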
Will check it later.
Using opset17 inside the function to pass the tests, since opset18 is not ready yet.
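A minimal sketch of what that opset17 path could look like (hypothetical function name; assumes ONNX's LayerNormalization, available since opset 17, whose `axis` attribute marks the first normalized dimension and which already returns the (output, mean, inv_std_dev) triple that native_layer_norm needs):

```python
from typing import Sequence

from onnxscript import opset17 as op

def native_layer_norm_sketch(input, normalized_shape: Sequence[int], weight, bias, eps: float):
    # Normalizing the trailing k dimensions is equivalent to
    # LayerNormalization starting at axis = -k.
    axis = -len(normalized_shape)
    # Assumes weight is provided: in ONNX, Scale is a required input
    # while B (the bias) is optional.
    return op.LayerNormalization(input, weight, bias, axis=axis, epsilon=eps)
```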