
feat(atenlib): add op(unfold) #534

Closed
xiaowuhu wants to merge 21 commits into main from xiaowu/addOp(unfold)

Conversation

Contributor

@xiaowuhu xiaowuhu commented Mar 18, 2023

Close this one, see #893


codecov bot commented Mar 18, 2023

Codecov Report

Merging #534 (0084cb1) into main (9fe15b2) will decrease coverage by 1.99%.
The diff coverage is 100.00%.

❗ Current head 0084cb1 differs from pull request most recent head eeac06f. Consider uploading reports for the commit eeac06f to get more accurate results

@@            Coverage Diff             @@
##             main     #534      +/-   ##
==========================================
- Coverage   73.91%   71.93%   -1.99%     
==========================================
  Files         109      109              
  Lines       11007    10959      -48     
  Branches     1142     1137       -5     
==========================================
- Hits         8136     7883     -253     
- Misses       2564     2775     +211     
+ Partials      307      301       -6     
Impacted Files Coverage Δ
...s/function_libs/torch_aten/ops_correctness_test.py 72.67% <ø> (-17.61%) ⬇️
onnxscript/function_libs/torch_aten/ops/core.py 72.31% <100.00%> (-0.40%) ⬇️

... and 8 files with indirect coverage changes


@github-advanced-security
Contributor

You have successfully added a new lintrunner configuration lintrunner. As part of the setup process, we have scanned this repository and found no existing alerts. In the future, you will see all code scanning alerts on the repository Security tab.

@xiaowuhu xiaowuhu mentioned this pull request Mar 20, 2023
target_end = op.Squeeze(((dim_size - size) / step + 1) * step)
seq_result = op.SequenceEmpty()

for i in range(0, target_end, step):
Collaborator

@gramalingam Do you have recommendation on how this loop should be created?

Contributor Author

If trace_only=True is not set, this error occurs:

onnxscript\analysis.py:87: in defs
    raise ValueError(f"Unsupported statement type {type(stmt)!r}.")
E   ValueError: Unsupported statement type <class 'ast.For'>.

@justinchuby justinchuby added the module: torchlib Related to the torch/aten function lib in development label Mar 20, 2023
if self_rank == 0:
result = op.Unsqueeze(self, 0)
else:
dims = op.Constant(value_ints=[dimension])
Collaborator

We may need to use Expand because Constant requires a compile time constant

target_end = op.Squeeze(((dim_size - size) / step + 1) * step)
seq_result = op.SequenceEmpty()

for i in range(target_end):
Collaborator

Using trace_only will be tricky because we don't know what target_end is at trace time.
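A hypothetical pure-Python sketch (not onnxscript code) of why `range(target_end)` cannot work here: at trace time, `target_end` is a graph value with no concrete integer, and Python's `range` requires one.

```python
# Hypothetical stand-in for a graph value such as the output of op.Squeeze(...)
# during tracing: it has no concrete integer value yet.
class SymbolicValue:
    def __index__(self):
        raise TypeError("value is symbolic at trace time; no concrete int available")

target_end = SymbolicValue()
try:
    for i in range(target_end):  # range() needs a concrete integer
        pass
except TypeError as exc:
    print(exc)  # value is symbolic at trace time; no concrete int available
```

This is why a data-dependent loop bound has to be expressed with the ONNX Loop construct (or the computation reformulated) rather than a Python `for` over `range`.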


for i in range(target_end):
if op.Mod(i, step) == 0:
starts = op.Constant(value_ints=[i])
Collaborator

Maybe assign this outside of the for loop first


for i in range(target_end):
if op.Mod(i, step) == 0:
starts = op.Constant(value_ints=[i])
Collaborator

This won't work: attribute values can NOT depend on runtime values (like i). But this is not necessary: we can use i directly, perhaps with an Unsqueeze or Reshape to convert the 0-D tensor into a 1-D tensor.
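A numpy analogue of that suggestion (illustrative only, not the PR's code): reshaping the runtime 0-D loop counter into the 1-D tensor that Slice expects for its `starts` input, instead of baking it into a Constant attribute.

```python
import numpy as np

# Hypothetical numpy analogue of op.Reshape/op.Unsqueeze: turn the runtime
# 0-D loop counter `i` into a 1-D tensor usable as Slice's `starts` input.
i = np.array(2)                # 0-D tensor, produced at runtime
starts = np.reshape(i, (1,))   # 1-D tensor of shape (1,)
print(starts.shape)            # (1,)

# The 1-D `starts` then selects a window, as Slice would:
x = np.array([1, 2, 3, 4, 5, 6])
print(x[starts[0]:starts[0] + 2])  # [3 4]
```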


def aten_unfold(self: TensorType, dimension: int, size: int, step: int) -> TensorType:
@torch_op("aten::unfold", trace_only=True)  # FIXME: ast.For is not supported
def aten_unfold(self: TTensor, dimension: int, size: int, step: int) -> TTensor:
Collaborator

I don't understand what the op is supposed to do. Is there a description I can read somewhere? Looks like it is a variant of a slice-like op with params (size, step) in (dimension), followed by some form of transpose?

Contributor Author

unfold does the following:
x = [1, 2, 3, 4, 5, 6]
unfold yields [1, 2], [2, 3], [3, 4], ... when size=2, step=1
unfold yields [1, 2], [3, 4], [5, 6] when size=2, step=2
It is a Core ATen op.
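A minimal pure-Python sketch of that 1-D behavior (illustrative only, not the PR's implementation or torch code):

```python
def unfold_1d(x, size, step):
    # Slide a window of length `size` over x, advancing the start by `step`,
    # mirroring torch.Tensor.unfold on a 1-D tensor.
    return [x[i:i + size] for i in range(0, len(x) - size + 1, step)]

x = [1, 2, 3, 4, 5, 6]
print(unfold_1d(x, size=2, step=1))  # [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
print(unfold_1d(x, size=2, step=2))  # [[1, 2], [3, 4], [5, 6]]
```

For higher-rank tensors, torch appends the window as a new trailing dimension, which is where the transpose-like behavior mentioned above comes in.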

@justinchuby
Collaborator

It doesn't seem like unfold is really being used these days. https://pytorch.org/docs/stable/ir.html Maybe we can just skip it?

@xiaowuhu
Contributor Author

It doesn't seem like unfold is really being used these days. https://pytorch.org/docs/stable/ir.html Maybe we can just skip it?

Agree.

@xiaowuhu xiaowuhu changed the title from "feat(atenlib): add op(unfold)" to "[low-priority] feat(atenlib): add op(unfold)" Mar 28, 2023
@justinchuby justinchuby changed the title from "[low-priority] feat(atenlib): add op(unfold)" to "feat(atenlib): add op(unfold)" Jun 25, 2023
@justinchuby
Collaborator

Ok so it turns out we still need it in #783

@xiaowuhu xiaowuhu closed this Jul 18, 2023
justinchuby pushed a commit that referenced this pull request Jul 19, 2023
@justinchuby justinchuby deleted the xiaowu/addOp(unfold) branch January 27, 2025 18:13

Labels

module: torchlib Related to the torch/aten function lib in development


3 participants