Add a pass to convert rank-0 tensor to rank-1 tensor #8298
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8298
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 411fd3f with merge base f438da8.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D69281867
@pytorchbot label "topic: not user facing"
@cccclai @YifanShenSZ I wonder, if Core ML is not going to support scalar tensors, does it make sense to embed such a workaround somewhere deeper, so it happens automatically under the hood when using the Core ML backend?
Specifically, adding new cases to cast scalars to tensors here: https://github.com/pytorch/executorch/blob/main/backends/apple/coreml/runtime/delegate/coreml_backend_delegate.mm#L72-L95 It looks like ct.convert already does the cast; we just don't have it in CoreML's ET runtime.
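For context, the cast under discussion is trivial at the tensor level. A minimal plain-PyTorch illustration of the idea (promote the scalar to rank 1, run the op, squeeze back), not the delegate code itself:

```
import torch

scalar = torch.tensor(2.0)      # rank-0: shape torch.Size([]), dim() == 0
promoted = scalar.reshape(1)    # rank-1: shape torch.Size([1])
assert promoted.dim() == 1

# After computing on the rank-1 tensor, the original rank can be restored.
result = promoted + torch.tensor([3.0])
restored = result.squeeze(0)    # back to rank-0
assert restored.dim() == 0 and restored.item() == 5.0
```

The open question in the thread is only where this promotion should live: in a graph-level pass (as in this PR) or inside the Core ML runtime delegate.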
Force-pushed from 7fc0958 to ff9cb64
Summary: Tested with the following code; the error/warning complaining about rank-0 tensors no longer appears.

```
class Model(torch.nn.Module):
    def forward(self, x, y):
        return x + y


model = Model()
model.eval()
example_inputs = (torch.tensor(1.0), torch.tensor(2.0))
exported_program_manager_aten = torch.export.export(model, example_inputs)
exported_program_manager_edge = executorch.exir.to_edge(
    exported_program_manager_aten
).transform([Rank0ToRank1Pass()])
delegated_module = to_backend(
    CoreMLBackend.__name__,
    exported_program_manager_edge.exported_program(),
    []
)
```

Differential Revision: D69281867
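The implementation of the pass is not shown in this conversation. As a rough, hypothetical sketch of what a rank-0-to-rank-1 transform can look like, the snippet below uses plain torch.fx pass infrastructure to insert an aten.view after every scalar placeholder; the class name and all details here are illustrative assumptions, not the actual Rank0ToRank1Pass from this diff:

```
import torch
from torch.fx.passes.infra.pass_base import PassBase, PassResult


class Rank0ToRank1Sketch(PassBase):
    """Hypothetical sketch: promote rank-0 (scalar) placeholder tensors to rank-1."""

    def call(self, graph_module: torch.fx.GraphModule) -> PassResult:
        modified = False
        for node in graph_module.graph.nodes:
            if node.op != "placeholder":
                continue
            val = node.meta.get("val")
            if not isinstance(val, torch.Tensor) or val.dim() != 0:
                continue
            # Insert a view right after the scalar placeholder so every
            # downstream user sees a rank-1 tensor of shape [1].
            with graph_module.graph.inserting_after(node):
                view = graph_module.graph.call_function(
                    torch.ops.aten.view.default, (node, [1])
                )
            # Redirect all users of the placeholder to the view, except the
            # view node itself, which must keep the placeholder as its input.
            node.replace_all_uses_with(view, delete_user_cb=lambda u: u is not view)
            modified = True
        graph_module.graph.lint()
        graph_module.recompile()
        return PassResult(graph_module, modified)
```

A production pass would also propagate shape metadata (e.g. node.meta["val"]) onto the inserted view nodes; the real pass in this PR may take a different approach entirely, for example by rewriting tensor specs rather than inserting view ops.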
Force-pushed from ff9cb64 to 411fd3f
Differential Revision: D69281867
Tested with the code above; the error/warning complaining about rank-0 tensors no longer appears.