
Add op(unfold) | feat(torchlib) #893


Merged · 13 commits into main · Jul 19, 2023

Conversation

xiaowuhu
Contributor

@xiaowuhu xiaowuhu commented Jul 18, 2023

from: #534
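For reference, the op being added mirrors `torch.Tensor.unfold`, which extracts all sliding windows of a given size and step along one dimension. A minimal pure-Python sketch of the 1-D semantics (the helper name `unfold_1d` is illustrative, not part of this PR):

```python
def unfold_1d(xs, size, step):
    # Reference semantics of torch.Tensor.unfold on a 1-D sequence:
    # every window of length `size`, starting every `step` elements.
    return [xs[i:i + size] for i in range(0, len(xs) - size + 1, step)]

print(unfold_1d([1, 2, 3, 4, 5, 6, 7], 2, 1))
# [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]]
```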

@codecov

codecov bot commented Jul 18, 2023

Codecov Report

Merging #893 (d6019d4) into main (9383d1a) will increase coverage by 0.03%.
The diff coverage is 96.66%.

❗ Current head d6019d4 differs from pull request most recent head 7b17805. Consider uploading reports for the commit 7b17805 to get more accurate results

@@            Coverage Diff             @@
##             main     #893      +/-   ##
==========================================
+ Coverage   76.70%   76.73%   +0.03%     
==========================================
  Files         112      112              
  Lines       13524    13549      +25     
  Branches     1366     1369       +3     
==========================================
+ Hits        10373    10397      +24     
  Misses       2810     2810              
- Partials      341      342       +1     
Impacted Files Coverage Δ
...ipt/tests/function_libs/torch_lib/ops_test_data.py 96.78% <ø> (ø)
onnxscript/function_libs/torch_lib/ops/core.py 77.36% <96.66%> (+0.18%) ⬆️

@xiaowuhu changed the title from "feat: add op(aten unfold)" to "add op(aten unfold) | feat(torchlib)" Jul 18, 2023
Collaborator

@justinchuby justinchuby left a comment


LGTM. Would be good to get another review from @BowenBao

self_rank = self_rank + 1
# perm needs to be a list[int], so it has to be generated in trace-only mode
perm = list(range(self_rank))
perm.append(perm.pop(dimension + 1))
Collaborator


Would be great to add a comment to explain this logic, thanks!

Contributor Author

@xiaowuhu xiaowuhu Jul 19, 2023


Added a comment.

Collaborator


Why do we do this though? What does it mean? Is there a reference logic?


@justinchuby justinchuby requested a review from BowenBao July 18, 2023 17:45
@justinchuby justinchuby added the module: torchlib Related to the torch/aten function lib in development label Jul 18, 2023
@justinchuby justinchuby mentioned this pull request Jul 18, 2023
@justinchuby changed the title from "add op(aten unfold) | feat(torchlib)" to "Add op(unfold) | feat(torchlib)" Jul 19, 2023
@justinchuby justinchuby merged commit ed3df93 into main Jul 19, 2023
@justinchuby justinchuby deleted the xiaowu/addOp(aten_unfold) branch July 19, 2023 17:36