feat(atenlib): add ops(native_layer_norm) #330


Merged
merged 9 commits into from
Jan 19, 2023

Conversation

xiaowuhu
Contributor

Using opset17 inside the function to pass the tests, because opset18 is not ready yet.

@xiaowuhu xiaowuhu changed the title add ops(native_layer_norm) (feat:atenlib) add ops(native_layer_norm) Jan 18, 2023
@xiaowuhu xiaowuhu added the module: torchlib Related to the torch/aten function lib in development label Jan 18, 2023
@codecov

codecov bot commented Jan 18, 2023

Codecov Report

Merging #330 (4487dcd) into main (34359e8) will decrease coverage by 0.04%.
The diff coverage is 29.41%.

@@            Coverage Diff             @@
##             main     #330      +/-   ##
==========================================
- Coverage   73.31%   73.28%   -0.04%     
==========================================
  Files          96       96              
  Lines        9512     9526      +14     
==========================================
+ Hits         6974     6981       +7     
- Misses       2538     2545       +7     
Impacted Files Coverage Δ
onnxscript/function_libs/torch_aten/ops/core.py 63.36% <20.00%> (-0.40%) ⬇️
...t/function_libs/torch_aten/ops_correctness_test.py 95.76% <100.00%> (ø)
onnxscript/utils.py 57.81% <0.00%> (+1.56%) ⬆️
onnxscript/onnx_opset/_impl/opset18.py 45.45% <0.00%> (+2.72%) ⬆️


Collaborator

@justinchuby justinchuby left a comment


lgtm thanks!

# native_layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight, Tensor? bias, float eps) -> (Tensor, Tensor, Tensor)

- raise NotImplementedError()
+ axes = [-i for i in range(len(normalized_shape), 0, -1)]
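The `axes` line above builds the negative reduction axes for the last `len(normalized_shape)` dimensions, which is where layer norm reduces. A minimal NumPy sketch (illustrative only, not the PR's ONNX Script code; `native_layer_norm_sketch` is a hypothetical name) showing what that axes computation drives:

```python
import numpy as np

def native_layer_norm_sketch(x, normalized_shape, weight=None, bias=None, eps=1e-5):
    """Illustrative sketch of aten::native_layer_norm semantics.

    Returns (output, mean, rstd), mirroring the ATen signature in the
    comment above. The axes expression matches the diff: it reduces
    over the trailing len(normalized_shape) dimensions.
    """
    axes = tuple(-i for i in range(len(normalized_shape), 0, -1))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    rstd = 1.0 / np.sqrt(var + eps)  # reciprocal of standard deviation
    out = (x - mean) * rstd
    if weight is not None:  # optional elementwise affine scale
        out = out * weight
    if bias is not None:    # optional elementwise affine shift
        out = out + bias
    return out, mean, rstd
```

For example, with `normalized_shape=(3,)` the axes are `(-1,)`, so each row of a 2-D input is normalized independently.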
Collaborator


normalized_shape should be a tensor, so use an ONNX op on it? In the test we can make normalized_shape always be a tensor.

Alternatively, annotate normalized_shape as int for now?

Contributor Author


Will check it later.

@justinchuby justinchuby changed the title (feat:atenlib) add ops(native_layer_norm) feat(atenlib): add ops(native_layer_norm) Jan 18, 2023
@justinchuby justinchuby self-assigned this Jan 18, 2023
@xiaowuhu xiaowuhu merged commit 1a3df01 into microsoft:main Jan 19, 2023
@xiaowuhu xiaowuhu deleted the xiaowu/addOps(native_layer_norm) branch January 19, 2023 02:14
@xiaowuhu xiaowuhu mentioned this pull request Jan 19, 2023
Labels
module: torchlib Related to the torch/aten function lib in development