Conversation
…nnx-script into xiaowu/addOp(unfold)
Codecov Report
@@ Coverage Diff @@
## main #534 +/- ##
==========================================
- Coverage 73.91% 71.93% -1.99%
==========================================
Files 109 109
Lines 11007 10959 -48
Branches 1142 1137 -5
==========================================
- Hits 8136 7883 -253
- Misses 2564 2775 +211
+ Partials 307 301 -6
... and 8 files with indirect coverage changes
You have successfully added a new lintrunner configuration
target_end = op.Squeeze(((dim_size - size) / step + 1) * step)
seq_result = op.SequenceEmpty()

for i in range(0, target_end, step):
@gramalingam Do you have a recommendation on how this loop should be created?
If trace_only=True is not set, this error occurs:
onnxscript\analysis.py:87: in defs
raise ValueError(f"Unsupported statement type {type(stmt)!r}.")
E ValueError: Unsupported statement type <class 'ast.For'>.
if self_rank == 0:
    result = op.Unsqueeze(self, 0)
else:
    dims = op.Constant(value_ints=[dimension])
We may need to use Expand, because Constant requires a compile-time constant.
target_end = op.Squeeze(((dim_size - size) / step + 1) * step)
seq_result = op.SequenceEmpty()

for i in range(target_end):
Using trace-only will be tricky, because we don't know what target_end is at trace time.
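To illustrate the constraint, here is a hypothetical pure-Python sketch (not onnxscript API): a trace-only function executes the Python for loop eagerly while tracing, so the loop bound must be a concrete Python int at trace time. If target_end is itself a runtime tensor computed from dim_size, this unrolling is impossible.

```python
def traced_windows(target_end, step):
    # Hypothetical trace: each Python-level iteration would emit one op
    # into the graph, so the loop is fully unrolled at trace time.
    ops = []
    for i in range(0, target_end, step):  # requires target_end as a Python int
        ops.append(("Slice", i, i + step))
    return ops

# Works only because the bound is known while tracing:
print(traced_windows(6, 2))  # [('Slice', 0, 2), ('Slice', 2, 4), ('Slice', 4, 6)]
```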
for i in range(target_end):
    if op.Mod(i, step) == 0:
        starts = op.Constant(value_ints=[i])
Maybe assign this outside of the for loop first
for i in range(target_end):
    if op.Mod(i, step) == 0:
        starts = op.Constant(value_ints=[i])
This won't work: attribute values cannot depend on runtime values (like i). But this is not necessary; we can use i directly, perhaps with an Unsqueeze or Reshape to convert the 0-D tensor into a 1-D tensor.
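A minimal pure-Python sketch of the suggestion (this is not onnxscript code; the comments name the analogous ONNX ops): instead of baking the loop counter into a Constant attribute, wrap the 0-D counter into the 1-D starts input that Slice expects.

```python
# The runtime loop counter is conceptually a 0-D tensor; Slice needs a
# 1-D `starts` input. Wrapping it in a length-1 list mirrors what
# op.Unsqueeze (or a Reshape to shape [1]) would do in the ONNX graph.
i = 3                      # scalar loop counter (0-D)
starts = [i]               # 1-D tensor of shape [1], via Unsqueeze/Reshape
size = 2
x = [10, 11, 12, 13, 14, 15]
window = x[starts[0]:starts[0] + size]  # analogous to Slice(x, starts, starts+size)
print(window)  # [13, 14]
```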
def aten_unfold(self: TensorType, dimension: int, size: int, step: int) -> TensorType:

@torch_op("aten::unfold", trace_only=True)  # FIXME: ast.For is not supported
def aten_unfold(self: TTensor, dimension: int, size: int, step: int) -> TTensor:
I don't understand what the op is supposed to do. Is there a description I can read somewhere? Looks like it is a variant of a slice-like op with params (size, step) in (dimension), followed by some form of transpose?
unfold does the following: given x = [1,2,3,4,5,6],
it unfolds to [1,2],[2,3],[3,4]... when size=2, step=1,
and to [1,2],[3,4],[5,6] when size=2, step=2.
It is a Core ATen op.
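The description above can be sketched in a few lines of plain Python (a minimal 1-D analogue of torch.Tensor.unfold, not the actual ONNX implementation):

```python
def unfold(x, size, step):
    # Slide a window of length `size` over `x`, advancing by `step`;
    # mirrors torch.Tensor.unfold on a 1-D input.
    return [x[i:i + size] for i in range(0, len(x) - size + 1, step)]

print(unfold([1, 2, 3, 4, 5, 6], size=2, step=1))
# [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
print(unfold([1, 2, 3, 4, 5, 6], size=2, step=2))
# [[1, 2], [3, 4], [5, 6]]
```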
It doesn't seem like unfold is really being used these days: https://pytorch.org/docs/stable/ir.html Maybe we can just skip it?
Agree.
OK, so it turns out we still need it in #783.
Closing this one; see #893.