
Add op (unbind) | feat(torchlib) #831


Merged
merged 4 commits into main on Jul 5, 2023

Conversation

fatcat-z
Contributor

@fatcat-z fatcat-z commented Jul 5, 2023

Add unbind op into torch_lib functions.
Add tests as well.
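For context, `aten::unbind` removes a given dimension from a tensor and returns the slices along it as a sequence of tensors. A NumPy sketch of those semantics (illustrative only, not the torchlib implementation, which emits ONNX ops):

```python
import numpy as np

def unbind(x, dim=0):
    # Mimics torch.unbind: split along `dim`, then drop that
    # dimension from each resulting slice.
    return tuple(
        np.squeeze(piece, axis=dim)
        for piece in np.split(x, x.shape[dim], axis=dim)
    )

x = np.arange(6).reshape(2, 3)
rows = unbind(x, dim=0)  # two arrays of shape (3,)
```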

@fatcat-z fatcat-z requested a review from justinchuby July 5, 2023 08:14
@fatcat-z fatcat-z mentioned this pull request Jul 5, 2023
@fatcat-z fatcat-z requested review from titaiwangms and xiaowuhu July 5, 2023 08:16
@codecov

codecov bot commented Jul 5, 2023

Codecov Report

Merging #831 (67097dd) into main (2d26103) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main     #831   +/-   ##
=======================================
  Coverage   76.23%   76.23%           
=======================================
  Files         113      113           
  Lines       13312    13316    +4     
  Branches     1321     1321           
=======================================
+ Hits        10148    10152    +4     
  Misses       2838     2838           
  Partials      326      326           
Impacted Files Coverage Δ
...ipt/tests/function_libs/torch_lib/ops_test_data.py 96.77% <ø> (ø)
onnxscript/function_libs/torch_lib/ops/core.py 76.18% <100.00%> (+0.03%) ⬆️
...nxscript/tests/function_libs/torch_lib/ops_test.py 91.66% <100.00%> (ø)

@titaiwangms titaiwangms added the module: torchlib Related to the torch/aten function lib in development label Jul 5, 2023
if (
op.name.startswith("split")
or op.name.startswith("chunk")
or op.name.startswith("unbind")
Contributor
Growing

Collaborator
I suspect we can now look at the function signature for this?

Contributor
Oh yup. We can leverage OpSchema to get the output dtype.

Contributor Author

I gave it a try yesterday but failed to find a clean solution.
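One way the signature-based check floated above could work is to inspect the annotated return type of the op function instead of prefix-matching op names. A sketch with hypothetical stub functions (not the actual torchlib code):

```python
import collections.abc
import typing
from typing import Sequence

def returns_sequence(fn) -> bool:
    # Hypothetical alternative to op.name.startswith("split"/"chunk"/"unbind"):
    # check whether the function's annotated return type is a Sequence.
    hint = typing.get_type_hints(fn).get("return")
    return typing.get_origin(hint) is collections.abc.Sequence

def aten_unbind(self, dim: int = 0) -> Sequence[float]:  # hypothetical stub
    ...

def aten_abs(self) -> float:  # hypothetical stub
    ...
```

This would keep the test harness in sync with the function signatures automatically, at the cost of requiring accurate return annotations on every op.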

core_ops.aten_unbind,
).xfail(
dtypes=[torch.float16],
reason="fixme: SplitToSequence op inference failed. https://github.com/microsoft/onnxruntime/issues/16006",
Contributor
nit: What is op inference? Is this the common issue of ORT missing impl for float16 dtype?

not blocking merge.

Collaborator
My read is that shape inference failed. I will test it on 1.15 and update the issue.

Collaborator
Yes. Updated the issue
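For readers unfamiliar with the harness, the `.xfail(dtypes=..., reason=...)` call in the hunk above marks specific dtypes as expected failures rather than skipping the test outright. A minimal sketch of such a mechanism (hypothetical names; the real harness lives in `ops_test_data.py`):

```python
class OpTestCase:
    """Per-dtype expected-failure bookkeeping for one op, similar in
    spirit to the .xfail(dtypes=..., reason=...) pattern (hypothetical)."""

    def __init__(self, op_name):
        self.op_name = op_name
        self._xfails = {}  # dtype -> reason string

    def xfail(self, dtypes, reason):
        for dtype in dtypes:
            self._xfails[dtype] = reason
        return self  # allow chaining, as in the test data file

    def expected_failure(self, dtype):
        # Returns the reason if this dtype is expected to fail, else None.
        return self._xfails.get(dtype)

case = OpTestCase("aten::unbind").xfail(
    dtypes=["float16"],
    reason="fixme: SplitToSequence op inference failed.",
)
```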

@justinchuby justinchuby merged commit b1a1604 into microsoft:main Jul 5, 2023
@fatcat-z fatcat-z deleted the add_unbind branch July 6, 2023 01:54