The module will be fully or partially delegated to **Core ML**, depending on whether all or part of the ops are supported by the **Core ML** backend. Users may force certain ops to be skipped by the delegate via `CoreMLPartitioner(skip_ops_for_coreml_delegation=...)`, as shown in the sketch below.
The `to_backend` implementation is a thin wrapper over [coremltools](https://apple.github.io/coremltools/docs-guides/); `coremltools` is responsible for converting an **ExportedProgram** to an **MLModel**. The converted **MLModel** data is saved, flattened, and returned as bytes to **ExecuTorch**.
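For example, a lowering flow with the partitioner might look like the following sketch. The partitioner's import path and the example module are assumptions (not taken from this document) and may differ across ExecuTorch versions:

```python
import torch

from executorch.exir import to_edge

# Assumed import path for the partitioner; it may differ across versions.
from executorch.backends.apple.coreml.partition.coreml_partitioner import CoreMLPartitioner


class AddModule(torch.nn.Module):
    def forward(self, x, y):
        return x + y


example_inputs = (torch.randn(2, 2), torch.randn(2, 2))

# Export to an ExportedProgram and convert to the Edge dialect, then partition:
# supported ops are delegated to Core ML, the rest stay on the portable runtime.
edge = to_edge(torch.export.export(AddModule(), example_inputs))
delegated = edge.to_backend(
    # Passing skip_ops_for_coreml_delegation=["aten.add.Tensor"] here would
    # force those ops to stay out of the Core ML partition.
    CoreMLPartitioner()
)
executorch_program = delegated.to_executorch()
```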
## Quantization
To quantize a Program in a Core ML-friendly way, the client may utilize **CoreMLQuantizer**.
```python
import torch
import executorch.exir

from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import (
    convert_pt2e,
    prepare_pt2e,
    prepare_qat_pt2e,
)

from executorch.backends.apple.coreml.quantizer.coreml_quantizer import CoreMLQuantizer
from coremltools.optimize.torch.quantization.quantization_config import (
    LinearQuantizerConfig,
    QuantizationScheme,
)
```
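A typical post-training flow then configures the quantizer, prepares the captured graph, calibrates it, and converts it. The sketch below continues from the imports above; the model and the configuration values are illustrative assumptions:

```python
# Illustrative model and inputs; substitute your own module.
class SourceModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)


source_model = SourceModel()
example_inputs = (torch.randn(1, 3, 256, 256),)

# Symmetric 8-bit quantization, expressed in the coremltools config format.
quantization_config = LinearQuantizerConfig.from_dict(
    {
        "global_config": {
            "quantization_scheme": QuantizationScheme.symmetric,
            "activation_dtype": torch.quint8,
            "weight_dtype": torch.qint8,
            "weight_per_channel": True,
        }
    }
)
quantizer = CoreMLQuantizer(quantization_config)

# Capture the model, insert observers, run calibration data through, convert.
captured_graph = capture_pre_autograd_graph(source_model, example_inputs)
prepared_graph = prepare_pt2e(captured_graph, quantizer)
prepared_graph(*example_inputs)  # calibration pass with representative inputs
converted_graph = convert_pt2e(prepared_graph)
```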
The `converted_graph` is the quantized torch model and can be delegated to **Core ML** through **CoreMLPartitioner**, just like an unquantized model.
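Continuing the sketch (again assuming the hypothetical partitioner import path from the earlier example):

```python
import torch

from executorch.exir import to_edge

# Assumed import path, as in the earlier sketch.
from executorch.backends.apple.coreml.partition.coreml_partitioner import CoreMLPartitioner

# Re-export the quantized graph and lower it like any other module.
quantized_edge = to_edge(torch.export.export(converted_graph, example_inputs))
quantized_program = quantized_edge.to_backend(CoreMLPartitioner()).to_executorch()
```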
## Runtime
To execute a Core ML delegated program, the application must link against the `coremldelegate` library. Once linked, no additional steps are required: when running the program, ExecuTorch calls the Core ML runtime to execute the Core ML delegated parts of the program.
Please follow the instructions described in the [Core ML setup](/backends/apple/coreml/setup.md) to link the `coremldelegate` library.
## Help & Improvements
If you have problems or questions, or have suggestions for ways to make implementation and testing better, please create an issue on [github](https://www.github.com/pytorch/executorch/issues).
# Building and Running ExecuTorch with Core ML Backend
Core ML delegate uses Core ML APIs to enable running neural networks via Apple's hardware acceleration. For more about Core ML you can read [here](https://developer.apple.com/documentation/coreml). In this tutorial, we will walk through the steps of lowering a PyTorch model to the Core ML delegate.
4. Create an instance of the [Inspector API](./sdk-inspector.rst) by passing in the [ETDump](./sdk-etdump.md) you have sourced from the runtime along with the optionally generated [ETRecord](./sdk-etrecord.rst) from step 1, or execute the following command in your terminal to display the profiling data table.
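In Python, that might look like the following sketch (the `executorch.sdk` import path and the file names are assumptions):

```python
from executorch.sdk import Inspector  # assumed import path; may vary by version

# ETDump collected from the runtime, plus the optional ETRecord from step 1.
inspector = Inspector(etdump_path="etdump.etdp", etrecord="etrecord.bin")
inspector.print_data_tabular()  # renders the profiling data as a table
```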
2. Create a new [Xcode project](https://developer.apple.com/documentation/xcode/creating-an-xcode-project-for-an-app#) or open an existing project.
3. Drag the `executorch.xcframework` and `coreml_backend.xcframework` generated from Step 2 to Frameworks.
4. Go to the project's [Build Phases](https://developer.apple.com/documentation/xcode/customizing-the-build-phases-of-a-target) - Link Binaries With Libraries, click the + sign, and add the following frameworks:
```
executorch.xcframework
coreml_backend.xcframework
Accelerate.framework
CoreML.framework
libsqlite3.tbd
```
5. Add the exported program to the [Copy Bundle Phase](https://developer.apple.com/documentation/xcode/customizing-the-build-phases-of-a-target#Copy-files-to-the-finished-product) of your Xcode target.
6. Please follow the [running a model](./running-a-model-cpp-tutorial.md) tutorial to integrate the code for loading an ExecuTorch program.
7. Update the code to load the program from the Application's bundle.
8. Use [Xcode](https://developer.apple.com/documentation/xcode/building-and-running-an-app#Build-run-and-debug-your-app) to deploy the application on the device.
# Examples
This directory contains scripts and other helper utilities to illustrate an end-to-end workflow to run a Core ML delegated `torch.nn.module` with the ExecuTorch runtime.
## Directory structure
## Using the examples
We will walk through an example model to generate a Core ML delegated binary file from a Python `torch.nn.module`, then we will use the `coreml_executor_runner` to run the exported binary file.
1. Following the setup guide in [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
you should be able to get the basic development environment for ExecuTorch working.
3. Run the export script to generate a Core ML delegated binary file.
```bash
cd executorch
# To get a list of example models
python3 -m examples.portable.scripts.export -h
# Generates add_coreml_all.pte file if successful.
```
4. Run the binary file using the `coreml_executor_runner`.
```bash
cd executorch
# Builds the Core ML executor runner. Generates ./coreml_executor_runner if successful.
```
- The `examples.apple.coreml.scripts.export` script could fail if the model is not supported by the Core ML backend. The following models from the example models list (`python3 -m examples.portable.scripts.export -h`) are currently supported by the Core ML backend.