fix export doc #26
Changes from 2 commits
````diff
@@ -72,16 +72,20 @@ class MyModule(torch.nn.Module):
 aten_dialect = exir.capture(MyModule(), (torch.randn(3, 4),))

-print(aten_dialect.exported_program)
+print(aten_dialect)
 """
 ExportedProgram:
-    class GraphModule(torch.nn.Module):
-        def forward(self, arg0_1: f32[3, 4], arg1_1: f32[5, 4], arg2_1: f32[5], arg3_1: f32[3, 4]):
-            add: f32[3, 4] = torch.ops.aten.add.Tensor(arg3_1, arg0_1);
-            permute: f32[4, 5] = torch.ops.aten.permute_copy.default(arg1_1, [1, 0]);
-            addmm: f32[3, 5] = torch.ops.aten.addmm.default(arg2_1, add, permute);
-            clamp: f32[3, 5] = torch.ops.aten.clamp.default(addmm, 0.0, 1.0);
-            return (clamp,)
+    class GraphModule(torch.nn.Module):
+        def forward(self, arg0_1: f32[4, 4]):
+            # File: /Users/marksaroufim/Dev/zzz/test3.py:10, code: return self.linear(x)
+            _param_constant0 = self._param_constant0
+            t: f32[4, 4] = torch.ops.aten.t.default(_param_constant0);  _param_constant0 = None
+            _param_constant1 = self._param_constant1
+            addmm: f32[4, 4] = torch.ops.aten.addmm.default(_param_constant1, arg0_1, t);  _param_constant1 = arg0_1 = t = None
+            return [addmm]

 Graph Signature: ExportGraphSignature(parameters=[], buffers=[], user_inputs=[], user_outputs=[], inputs_to_parameters={}, inputs_to_buffers={}, buffers_to_mutate={}, backward_signature=None, assertion_dep_token=None)
 Symbol to range: {}
 """
 ```
````
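For readers following along, here is a minimal, self-contained sketch of what the updated example presumably looks like end to end. The `MyModule` definition (a single `nn.Linear(4, 4)` layer), the `(4, 4)` input (which matches the printed shapes rather than the `(3, 4)` in the diff's context line), and the `exir` import path are all assumptions; only the `exir.capture` call and the `print` appear in the diff.

```python
import torch
from executorch import exir  # import path is an assumption; see the OSS import discussion below


class MyModule(torch.nn.Module):
    # Assumed module: a single linear layer, matching the f32[4, 4] parameter
    # and the `return self.linear(x)` source line shown in the printed graph.
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)


# Trace the eager module into an ATen-dialect ExportedProgram.
aten_dialect = exir.capture(MyModule(), (torch.randn(4, 4),))

# Printing the capture result dumps the graph shown in the diff above.
print(aten_dialect)
```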
````diff
@@ -106,18 +110,22 @@ This lowering will be done through the `to_edge()` API.
 ```python
 aten_dialect = exir.capture(MyModule(), (torch.randn(3, 4),))
-edge_dialect = aten_dialect.to_edge()
+edge_dialect = aten_dialect.to_edge(exir.EdgeCompileConfig(_check_ir_validity=False))

-print(edge_dialect.exported_program)
+print(edge_dialect)
 """
 ExportedProgram:
-    class GraphModule(torch.nn.Module):
-        def forward(self, arg0_1: f32[3, 4], arg1_1: f32[5, 4], arg2_1: f32[5], arg3_1: f32[3, 4]):
-            add: f32[3, 4] = executorch_exir_dialects_edge__ops_aten_add_Tensor(arg3_1, arg0_1);
-            permute: f32[4, 5] = executorch_exir_dialects_edge__ops_permute_copy_default(arg1_1, [1, 0]);
-            addmm: f32[3, 5] = executorch_exir_dialects_edge__ops_addmm_default(arg2_1, add, permute);
-            clamp: f32[3, 5] = executorch_exir_dialects_edge__ops_clamp_default(addmm, 0.0, 1.0);
-            return (clamp,)
+    class GraphModule(torch.nn.Module):
+        def forward(self, arg0_1: f32[3, 3]):
+            # File: /Users/marksaroufim/Dev/zzz/test3.py:10, code: return self.linear(x)
+            _param_constant0: f32[3, 3] = self._param_constant0
+            t_copy_default: f32[3, 3] = torch.ops.aten.t_copy.default(_param_constant0);  _param_constant0 = None
+            _param_constant1: f32[3] = self._param_constant1
+            addmm_default: f32[3, 3] = torch.ops.aten.addmm.default(_param_constant1, arg0_1, t_copy_default);  _param_constant1 = arg0_1 = t_copy_default = None
+            return [addmm_default]

 Graph Signature: ExportGraphSignature(parameters=[], buffers=[], user_inputs=[], user_outputs=[], inputs_to_parameters={}, inputs_to_buffers={}, buffers_to_mutate={}, backward_signature=None, assertion_dep_token=None)
 Symbol to range: {}
 """
 ```
````
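As a usage sketch (assuming `aten_dialect` and the `exir` import from the previous step), lowering to the Edge dialect with the verifier relaxed looks like this. The `_check_ir_validity=False` flag is the workaround discussed in the review comments below and should become unnecessary once `aten.t` gets decomposed to a canonical op.

```python
# Lower the ATen-dialect program to the Edge dialect.
# _check_ir_validity=False sidesteps the "aten.t.default is not Aten Canonical"
# verifier error; once the decomposition lands, a plain to_edge() should work again.
edge_dialect = aten_dialect.to_edge(exir.EdgeCompileConfig(_check_ir_validity=False))

print(edge_dialect)
```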
````diff
@@ -185,8 +193,7 @@ be loaded in the Executorch runtime.
 ```python
 edge_dialect = exir.capture(MyModule(), (torch.randn(3, 4),)).to_edge()
-# edge_dialect = to_backend(edge_dialect.exported_program, CustomBackendPartitioner)
-executorch_program = edge_dialect.to_executorch(executorch_backend_config)
+executorch_program = edge_dialect.to_executorch()

 buffer = executorch_program.buffer

 # Save it to a file and load it in the Executorch runtime
````
Review comments on this change:

partitioner and custom config could probably be their own section; regardless, backend config is not defined here

I also tried instead dumping in

This should be covered in the details of section 1.3, but I think you're right that we should move it. Backend config should be something passed in by the user, and covered in section 1.4.

Ah whoops, I just found ExecutorchBackendConfig, lemme just use an instance of that.
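A hedged sketch of what "use an instance of that" might look like, assuming `ExecutorchBackendConfig` is exposed from the same `exir` module and that its default constructor is acceptable here:

```python
# Build a default backend config instead of the undefined `executorch_backend_config`
# name the old doc referenced; constructor arguments are not covered in this PR.
backend_config = exir.ExecutorchBackendConfig()
executorch_program = edge_dialect.to_executorch(backend_config)
```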
Also, one more issue I'm not sure how to resolve yet: you can't actually import in OSS. I tried different buck incantations on the target but can't get the right one.

Maybe @malfet has some thoughts too since we'll need to figure this out soon enough.
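The hunk ends with a comment about saving the program to a file; a minimal sketch of that step, assuming the bytes in `executorch_program.buffer` can simply be written to disk (the `.pte` file name is an assumption, not something this PR specifies):

```python
# Serialize the ExecuTorch program so the ExecuTorch runtime can load it from disk.
with open("my_module.pte", "wb") as f:  # file name/extension assumed
    f.write(executorch_program.buffer)
```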
Without this flag you get this error: `torch._export.verifier.SpecViolationError: Operator torch._ops.aten.t.default is not Aten Canonical.`

We should probably figure out how to make this error go away. I could, for example, get rid of it by reworking the example to just do vector multiplication, but matmuls are probably more interesting lol: https://gist.github.com/msaroufim/629b5c623fade2d5a30bec379f9e08da
This error should be fixed by D47346723 (which will land before PP), where aten.t gets decomposed to aten.permute, which is ATen Canonical. We want to avoid users relying on the _check_ir_validity flag, but we should probably provide a better error message, something like "Please file an issue to the executorch team, or turn on the _check_ir_validity flag to unblock yourself for now".
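For intuition about that decomposition, a small sanity check: for a 2-D tensor, `aten.t` is equivalent to permuting the two dimensions, which is the canonical form the decomposition targets. This snippet is plain PyTorch and not part of the PR.

```python
import torch

x = torch.randn(3, 5)

# aten.t on a 2-D tensor is the same as permute with the two dims reversed.
assert torch.equal(
    torch.ops.aten.t.default(x),
    torch.ops.aten.permute.default(x, [1, 0]),
)
```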