Commit 4c61b71

Gasoonjia authored and facebook-github-bot committed
bundled program alpha document (#3224)
Summary: as title
Reviewed By: Jack-Khuu
Differential Revision: D56446890
1 parent 6c36f10 commit 4c61b71

2 files changed: +67 −74


docs/source/sdk-bundled-io.md

+66 −73
@@ -23,6 +23,8 @@ ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Fo
 
 In `BundledProgram`, we create two new classes, `MethodTestCase` and `MethodTestSuite`, to hold essential info for ExecuTorch program verification.
 
+`MethodTestCase` represents a single testcase. Each `MethodTestCase` contains inputs and expected outputs for a single execution.
+
 :::{dropdown} `MethodTestCase`
 
 ```{eval-rst}
@@ -31,6 +33,8 @@ In `BundledProgram`, we create two new classes, `MethodTestCase` and `MethodTest
 ```
 :::
 
+`MethodTestSuite` contains all testing info for a single method, including a string representing the method name and a `List[MethodTestCase]` for all test cases:
+
 :::{dropdown} `MethodTestSuite`
 
 ```{eval-rst}
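The relationship between the two classes added above can be sketched in plain Python. The real definitions live in `executorch/sdk/bundled_program/config.py`; the dataclasses below are only an illustrative stand-in, using plain lists as tensor placeholders:

```python
from dataclasses import dataclass, field
from typing import Any, List, Sequence


@dataclass
class MethodTestCase:
    # One execution: a set of inputs plus the outputs we expect back.
    inputs: List[Any]
    expected_outputs: Sequence[Any]


@dataclass
class MethodTestSuite:
    # All test cases bundled against one inference method, keyed by its name.
    method_name: str
    test_cases: List[MethodTestCase] = field(default_factory=list)


# Two test cases for a "forward" method, with lists standing in for tensors.
suite = MethodTestSuite(
    method_name="forward",
    test_cases=[
        MethodTestCase(inputs=[[1, 2]], expected_outputs=([2, 4],)),
        MethodTestCase(inputs=[[3, 4]], expected_outputs=([6, 8],)),
    ],
)
print(suite.method_name, len(suite.test_cases))
```

Note that `expected_outputs` is always a sequence, even when the method produces a single output.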
@@ -44,18 +48,18 @@ Since each model may have multiple inference methods, we need to generate `List[
 
 ### Step 3: Generate `BundledProgram`
 
-We provide `create_bundled_program` API under `executorch/sdk/bundled_program/core.py` to generate `BundledProgram` by bundling the emitted ExecuTorch program with the `List[MethodTestSuite]`:
+We provide the `BundledProgram` class under `executorch/sdk/bundled_program/core.py` to bundle an `ExecutorchProgram`-like variable, including
+`ExecutorchProgram`, `MultiMethodExecutorchProgram` or `ExecutorchProgramManager`, with the `List[MethodTestSuite]`:
 
 :::{dropdown} `BundledProgram`
 
 ```{eval-rst}
-.. currentmodule:: executorch.sdk.bundled_program.core
-.. autofunction:: create_bundled_program
+.. autofunction:: executorch.sdk.bundled_program.core.BundledProgram.__init__
    :noindex:
 ```
 :::
 
-`create_bundled_program` will do sannity check internally to see if the given `List[MethodTestSuite]` matches the given Program's requirements. Specifically:
+The constructor of `BundledProgram` will do a sanity check internally to see whether the given `List[MethodTestSuite]` matches the given program's requirements. Specifically:
 1. The method_names of each `MethodTestSuite` in `List[MethodTestSuite]` for should be also in program. Please notice that it is no need to set testcases for every method in the Program.
 2. The metadata of each testcase should meet the requirement of the coresponding inference methods input.
 
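The two constructor checks described in this hunk can be approximated standalone. `program_methods` and `check_test_suites` below are hypothetical names for illustration, not the actual implementation in `core.py`:

```python
from typing import Dict, List, Tuple

# Hypothetical: map from each method name in the program to the
# (dtype, shape) spec of every input it accepts.
program_methods: Dict[str, List[Tuple[str, Tuple[int, ...]]]] = {
    "forward": [("int32", (2, 2)), ("int32", (2, 2))],
}


def check_test_suites(suites: List[dict]) -> None:
    for suite in suites:
        # Check 1: every suite's method_name must exist in the program,
        # but not every program method needs a suite.
        if suite["method_name"] not in program_methods:
            raise ValueError(f"method {suite['method_name']!r} not in program")
        spec = program_methods[suite["method_name"]]
        # Check 2: each test case's input metadata must match the method spec.
        for case in suite["test_cases"]:
            if list(case["input_meta"]) != spec:
                raise ValueError("input metadata does not match method spec")


good = [{"method_name": "forward",
         "test_cases": [{"input_meta": [("int32", (2, 2)), ("int32", (2, 2))]}]}]
check_test_suites(good)  # passes silently

bad = [{"method_name": "MISSING_METHOD_NAME", "test_cases": []}]
try:
    check_test_suites(bad)
    failed = False
except ValueError:
    failed = True
print(failed)  # True
```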
@@ -83,20 +87,20 @@ To serialize `BundledProgram` to make runtime APIs use it, we provide two APIs, 
 Here is a flow highlighting how to generate a `BundledProgram` given a PyTorch model and the representative inputs we want to test it along with.
 
 ```python
-
 import torch
 
+from executorch.exir import to_edge
+from executorch.sdk import BundledProgram
+
 from executorch.sdk.bundled_program.config import MethodTestCase, MethodTestSuite
-from executorch.sdk.bundled_program.core import create_bundled_program
 from executorch.sdk.bundled_program.serialize import (
     serialize_from_bundled_program_to_flatbuffer,
 )
-
-from executorch.exir import to_edge
+from torch._export import capture_pre_autograd_graph
 from torch.export import export
 
-# Step 1: ExecuTorch Program Export
 
+# Step 1: ExecuTorch Program Export
 class SampleModel(torch.nn.Module):
     """An example model with multi-methods. Each method has multiple input and single output"""
 
@@ -105,82 +109,70 @@ class SampleModel(torch.nn.Module):
         self.a: torch.Tensor = 3 * torch.ones(2, 2, dtype=torch.int32)
         self.b: torch.Tensor = 2 * torch.ones(2, 2, dtype=torch.int32)
 
-    def encode(self, x: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
+    def forward(self, x: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
         z = x.clone()
         torch.mul(self.a, x, out=z)
         y = x.clone()
         torch.add(z, self.b, out=y)
         torch.add(y, q, out=y)
         return y
 
-    def decode(self, x: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
-        y = x * q
-        torch.add(y, self.b, out=y)
-        return y
 
-# Inference method names of SampleModel we want to bundle testcases to.
+# Inference method name of SampleModel we want to bundle testcases to.
 # Notices that we do not need to bundle testcases for every inference methods.
-method_names = ["encode", "decode"]
+method_name = "forward"
 model = SampleModel()
 
-capture_inputs = {
-    m_name: (
-        (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
-        (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
-    )
-    for m_name in method_names
-}
+# Inputs for graph capture.
+capture_input = (
+    (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
+    (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
+)
 
-# Find each method of model needs to be traced my its name, export its FX Graph.
-method_graphs = {
-    m_name: export(getattr(model, m_name), capture_inputs[m_name])
-    for m_name in method_names
-}
+# Export method's FX Graph.
+method_graph = export(
+    capture_pre_autograd_graph(model, capture_input),
+    capture_input,
+)
 
-# Emit the traced methods into ET Program.
-program = to_edge(method_graphs).to_executorch().executorch_program
+
+# Emit the traced method into ET Program.
+et_program = to_edge(method_graph).to_executorch()
 
 # Step 2: Construct MethodTestSuite for Each Method
 
 # Prepare the Test Inputs.
 
-# number of input sets to be verified
+# Number of input sets to be verified
 n_input = 10
 
-# Input sets to be verified for each inference methods.
-# To simplify, here we create same inputs for all methods.
-inputs = {
-    # Inference method name corresponding to its test cases.
-    m_name: [
-        # Each list below is a individual input set.
-        # The number of inputs, dtype and size of each input follow Program's spec.
-        [
-            (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
-            (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
-        ]
-        for _ in range(n_input)
+# Input sets to be verified.
+inputs = [
+    # Each list below is an individual input set.
+    # The number of inputs, dtype and size of each input follow Program's spec.
+    [
+        (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
+        (torch.rand(2, 2) - 0.5).to(dtype=torch.int32),
     ]
-    for m_name in method_names
-}
+    for _ in range(n_input)
+]
 
 # Generate Test Suites
 method_test_suites = [
     MethodTestSuite(
-        method_name=m_name,
+        method_name=method_name,
         test_cases=[
             MethodTestCase(
                 inputs=input,
-                expected_outputs=getattr(model, m_name)(*input),
+                expected_outputs=(getattr(model, method_name)(*input),),
            )
-            for input in inputs[m_name]
+            for input in inputs
         ],
-    )
-    for m_name in method_names
+    ),
 ]
 
 # Step 3: Generate BundledProgram
-
-bundled_program = create_bundled_program(program, method_test_suites)
+bundled_program = BundledProgram(et_program, method_test_suites)
 
 # Step 4: Serialize BundledProgram to flatbuffer.
 serialized_bundled_program = serialize_from_bundled_program_to_flatbuffer(
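One subtle change in the hunk above is that `expected_outputs` becomes a one-element tuple. A minimal sketch with plain ints (no torch) of why single outputs get wrapped, using a hypothetical `outputs_match` comparator:

```python
# expected_outputs is treated as a sequence of outputs, so a single-output
# method wraps its result in a 1-tuple, mirroring
# `expected_outputs=(getattr(model, method_name)(*input),)` in the diff.
def outputs_match(actual, expected_outputs):
    # Compare two output sequences element by element.
    if len(actual) != len(expected_outputs):
        return False
    return all(a == e for a, e in zip(actual, expected_outputs))


def single_output_method(x, q):
    # Toy stand-in for SampleModel.forward: y = 3 * x + b + q with b == 2.
    return 3 * x + 2 + q


inp = (4, 1)
expected = (single_output_method(*inp),)  # note the trailing comma: a 1-tuple
print(expected)  # (15,)
```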
@@ -320,10 +312,10 @@ Here's the example of the dtype of test input not meet model's requirement:
 ```python
 import torch
 
-from executorch.sdk.bundled_program.config import MethodTestCase, MethodTestSuite
-from executorch.sdk.bundled_program.core import create_bundled_program
-
 from executorch.exir import to_edge
+from executorch.sdk import BundledProgram
+
+from executorch.sdk.bundled_program.config import MethodTestCase, MethodTestSuite
 from torch.export import export
 
 
@@ -344,15 +336,16 @@ class Module(torch.nn.Module):
 model = Module()
 method_names = ["forward"]
 
-inputs = torch.ones(2, 2, dtype=torch.float)
+inputs = (torch.ones(2, 2, dtype=torch.float), )
 
 # Find each method of model needs to be traced my its name, export its FX Graph.
-method_graphs = {
-    m_name: export(getattr(model, m_name), (inputs,)) for m_name in method_names
-}
+method_graph = export(
+    capture_pre_autograd_graph(model, inputs),
+    inputs,
+)
 
 # Emit the traced methods into ET Program.
-program = to_edge(method_graphs).to_executorch().executorch_program
+et_program = to_edge(method_graph).to_executorch()
 
 # number of input sets to be verified
 n_input = 10
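The new capture-then-export flow used throughout this commit chains four calls. The stubs below only mirror the call order of the real `torch`/`executorch` entry points (which are not importable here), not their behavior:

```python
# Stub pipeline: capture_pre_autograd_graph -> export -> to_edge -> to_executorch.
def capture_pre_autograd_graph(model, args):
    return ("pre_autograd_graph", model, args)


def export(graph, args):
    return ("exported_program", graph, args)


class _Edge:
    def __init__(self, inner):
        self.inner = inner

    def to_executorch(self):
        return ("et_program", self.inner)


def to_edge(exported):
    return _Edge(exported)


model = "Module()"   # placeholder for an nn.Module instance
inputs = ((2, 2),)   # placeholder capture inputs
method_graph = export(capture_pre_autograd_graph(model, inputs), inputs)
et_program = to_edge(method_graph).to_executorch()
print(et_program[0])  # et_program
```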
@@ -378,7 +371,7 @@ method_test_suites = [
         test_cases=[
             MethodTestCase(
                 inputs=input,
-                expected_outputs=getattr(model, m_name)(*input),
+                expected_outputs=(getattr(model, m_name)(*input),),
             )
             for input in inputs[m_name]
         ],
@@ -388,7 +381,7 @@ method_test_suites = [
 
 # Generate BundledProgram
 
-bundled_program = create_bundled_program(program, method_test_suites)
+bundled_program = BundledProgram(et_program, method_test_suites)
 ```
 
 :::{dropdown} Raised Error
@@ -455,10 +448,10 @@ Another common error would be the method name in any `MethodTestSuite` does not 
 ```python
 import torch
 
-from executorch.sdk.bundled_program.config import MethodTestCase, MethodTestSuite
-from executorch.sdk.bundled_program.core import create_bundled_program
-
 from executorch.exir import to_edge
+from executorch.sdk import BundledProgram
+
+from executorch.sdk.bundled_program.config import MethodTestCase, MethodTestSuite
 from torch.export import export
 
 
@@ -477,18 +470,18 @@ class Module(torch.nn.Module):
 
 
 model = Module()
-
 method_names = ["forward"]
 
-inputs = torch.ones(2, 2, dtype=torch.float)
+inputs = (torch.ones(2, 2, dtype=torch.float),)
 
 # Find each method of model needs to be traced my its name, export its FX Graph.
-method_graphs = {
-    m_name: export(getattr(model, m_name), (inputs,)) for m_name in method_names
-}
+method_graph = export(
+    capture_pre_autograd_graph(model, inputs),
+    inputs,
+)
 
 # Emit the traced methods into ET Program.
-program = to_edge(method_graphs).to_executorch().executorch_program
+et_program = to_edge(method_graph).to_executorch()
 
 # number of input sets to be verified
 n_input = 10
@@ -513,7 +506,7 @@ method_test_suites = [
         test_cases=[
             MethodTestCase(
                 inputs=input,
-                expected_outputs=getattr(model, m_name)(*input),
+                expected_outputs=(getattr(model, m_name)(*input),),
             )
             for input in inputs[m_name]
         ],
@@ -525,7 +518,7 @@ method_test_suites = [
 method_test_suites[0].method_name = "MISSING_METHOD_NAME"
 
 # Generate BundledProgram
-bundled_program = create_bundled_program(program, method_test_suites)
+bundled_program = BundledProgram(et_program, method_test_suites)
 
 ```
 
sdk/bundled_program/config.py

+1 −1
@@ -62,7 +62,7 @@ def __init__(
             input: All inputs required by eager_model with specific inference method for one-time execution.
 
                 It is worth mentioning that, although both bundled program and ET runtime apis support setting input
-                other than torch.tensor type, only the input in torch.tensor type will be actually updated in
+                other than `torch.tensor` type, only the input in `torch.tensor` type will be actually updated in
                 the method, and the rest of the inputs will just do a sanity check if they match the default value in method.
 
             expected_output: Expected output of given input for verification. It can be None if user only wants to use the test case for profiling.
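The docstring tweak above concerns runtime behavior: only tensor-typed inputs are actually updated in the method, while other inputs are merely checked against the method's defaults. A sketch of that rule, using a hypothetical `apply_bundled_inputs` helper and a toy `FakeTensor` in place of `torch.Tensor`:

```python
class FakeTensor:
    """Toy stand-in for torch.Tensor."""
    def __init__(self, data):
        self.data = data


def apply_bundled_inputs(method_defaults, bundled_inputs):
    updated = dict(method_defaults)
    for name, value in bundled_inputs.items():
        if isinstance(value, FakeTensor):
            updated[name] = value  # tensor inputs are actually updated
        elif method_defaults[name] != value:
            # non-tensor inputs must match the method's default value
            raise ValueError(f"non-tensor input {name!r} mismatches default")
    return updated


defaults = {"x": FakeTensor([0, 0]), "scale": 2}
out = apply_bundled_inputs(defaults, {"x": FakeTensor([1, 2]), "scale": 2})
print(out["x"].data)  # [1, 2]
```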
