
Improve android related docs #9725


Merged · 4 commits · Mar 28, 2025
Binary file added docs/source/_static/img/android_studio.jpeg
Binary file added docs/source/_static/img/android_studio.mp4
2 changes: 1 addition & 1 deletion docs/source/backend-delegates-xnnpack-reference.md
@@ -142,5 +142,5 @@ def _qdq_quantized_linear(
You can read more in-depth explanations of PyTorch 2 quantization [here](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html).

## See Also
- [Integrating XNNPACK Delegate Android App](demo-apps-android.md)
- [Integrating XNNPACK Delegate in Android AAR](using-executorch-android.md)
- [Complete the Lowering to XNNPACK Tutorial](tutorial-xnnpack-delegate-lowering.md)
5 changes: 0 additions & 5 deletions docs/source/backends-qualcomm.md
@@ -351,11 +351,6 @@ The command-line arguments are written in [utils.py](https://github.com/pytorch/
The model, inputs, and output location are passed to `qnn_executorch_runner` by `--model_path`, `--input_list_path`, and `--output_folder_path`.


### Running a model via ExecuTorch's android demo-app

An Android demo-app using Qualcomm AI Engine Direct Backend can be found in
`examples`. Please refer to android demo app [tutorial](demo-apps-android.md).

## Supported model list

Please refer to `$EXECUTORCH_ROOT/examples/qualcomm/scripts/` and `$EXECUTORCH_ROOT/examples/qualcomm/oss_scripts/` for the list of supported models.
4 changes: 2 additions & 2 deletions docs/source/index.md
@@ -41,7 +41,7 @@ ExecuTorch provides support for:
- [Building from Source](using-executorch-building-from-source)
- [FAQs](using-executorch-faqs)
#### Examples
- [Android Demo Apps](demo-apps-android.md)
- [Android Demo Apps](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app)
- [iOS Demo Apps](demo-apps-ios.md)
#### Backends
- [Overview](backends-overview)
@@ -142,7 +142,7 @@ using-executorch-faqs
:caption: Examples
:hidden:

demo-apps-android.md
Building an ExecuTorch Android Demo App <https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app>
demo-apps-ios.md
```

2 changes: 1 addition & 1 deletion docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -176,7 +176,7 @@ Now you should be able to find the executable built at `./cmake-out/backends/xnn
```

## Building and Linking with the XNNPACK Backend
You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](demo-apps-android.md) next.
You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](./using-executorch-android.md) next.

## Profiling
To enable profiling in the `xnn_executor_runner` pass the flags `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` and `-DEXECUTORCH_BUILD_DEVTOOLS=ON` to the build command (add `-DENABLE_XNNPACK_PROFILING=ON` for additional details). This will enable ETDump generation when running the inference and enables command line flags for profiling (see `xnn_executor_runner --help` for details).
9 changes: 9 additions & 0 deletions docs/source/using-executorch-android.md
@@ -22,6 +22,8 @@ The AAR artifact contains the Java library for users to integrate with their Jav
- LLaMa-specific Custom ops library.
- Comes with two ABI variants, arm64-v8a and x86\_64.

The AAR library can be used on generic Android devices with the arm64-v8a or x86_64 architecture. Because it does not contain any UI components, it can be used across form factors, including phones, tablets, and TV boxes.

## Using AAR from Maven Central

ExecuTorch is available on [Maven Central](https://mvnrepository.com/artifact/org.pytorch/executorch-android).
@@ -38,6 +40,11 @@ dependencies {

Note: `org.pytorch:executorch-android:0.5.1` corresponds to executorch v0.5.0.
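For reference, a minimal sketch of the dependency declaration in Gradle Kotlin DSL is shown below (the collapsed `dependencies` block above carries the equivalent; the file name and surrounding configuration here are assumptions):

```kotlin
// app/build.gradle.kts — minimal sketch; assumes Maven Central is already
// configured as a repository for the project.
dependencies {
    // Prebuilt ExecuTorch AAR (Java API + JNI libraries) from Maven Central.
    implementation("org.pytorch:executorch-android:0.5.1")
}
```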

Click the screenshot below to watch the *demo video* showing how to add the package and run a simple ExecuTorch model with Android Studio.
<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
<img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
</a>
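As a rough sketch of what such an integration looks like in code, assuming the `org.pytorch.executorch` Java API described above (the model path, input shape, and helper name are purely illustrative, so verify class and method names against the released Javadoc):

```kotlin
import org.pytorch.executorch.EValue
import org.pytorch.executorch.Module
import org.pytorch.executorch.Tensor

// Hypothetical helper: load a .pte model and run a single forward pass.
fun runModel(modelPath: String): FloatArray {
    // Load the exported ExecuTorch program from a file path on device.
    val module = Module.load(modelPath)

    // Build a one-element float input tensor; replace the data and shape
    // with whatever your exported model actually expects.
    val input = Tensor.fromBlob(floatArrayOf(1.0f), longArrayOf(1))

    // forward() takes and returns EValue wrappers around tensors.
    val outputs = module.forward(EValue.from(input))
    return outputs[0].toTensor().dataAsFloatArray
}
```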

## Using AAR file directly

You can also directly specify an AAR file in the app. We upload pre-built AAR to S3 during each release, or as a snapshot.
@@ -103,6 +110,8 @@ export ANDROID_NDK=/path/to/ndk
sh scripts/build_android_library.sh
```

Currently, the XNNPACK backend is always built by the script.

### Optional environment variables

Optionally, set these environment variables before running `build_android_library.sh`.
24 changes: 20 additions & 4 deletions docs/source/using-executorch-building-from-source.md
@@ -67,7 +67,7 @@ portability details.
./install_executorch.sh
```

Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends.
Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends.
```bash
./install_executorch.sh --pybind <coreml | mps | xnnpack>

@@ -86,7 +86,7 @@ portability details.
For development mode, run the command with `--editable`, which allows us to modify Python source code and see changes reflected immediately.
```bash
./install_executorch.sh --editable [--pybind xnnpack]

# Or you can directly do the following if dependencies are already installed
# either via a previous invocation of `./install_executorch.sh` or by explicitly installing requirements via `./install_requirements.sh` first.
pip install -e .
@@ -200,7 +200,7 @@ I 00:00:00.000612 executorch:executor_runner.cpp:138] Setting up planned buffer
I 00:00:00.000669 executorch:executor_runner.cpp:161] Method loaded.
I 00:00:00.000685 executorch:executor_runner.cpp:171] Inputs prepared.
I 00:00:00.000764 executorch:executor_runner.cpp:180] Model executed successfully.
I 00:00:00.000770 executorch:executor_runner.cpp:184] 1 outputs:
I 00:00:00.000770 executorch:executor_runner.cpp:184] 1 outputs:
Output 0: tensor(sizes=[1], [2.])
```

@@ -210,6 +210,8 @@ Output 0: tensor(sizes=[1], [2.])
The following are instructions on how to perform cross-compilation for Android and iOS.

### Android

#### Building executor_runner shell binary
- Prerequisite: [Android NDK](https://developer.android.com/ndk), choose one of the following:
- Option 1: Download Android Studio by following the instructions to [install ndk](https://developer.android.com/studio/projects/install-ndk).
- Option 2: Download Android NDK directly from [here](https://developer.android.com/ndk/downloads).
@@ -235,6 +237,20 @@ adb push add.pte /data/local/tmp/executorch
adb shell "/data/local/tmp/executorch/executor_runner --model_path /data/local/tmp/executorch/add.pte"
```

#### Building AAR for app integration from source
- Prerequisite: Android NDK from the previous section, and Android SDK (Android Studio is recommended).

Assuming the Android NDK and SDK are available, run:
```bash
export ANDROID_ABIS=arm64-v8a
export BUILD_AAR_DIR=aar-out
mkdir -p $BUILD_AAR_DIR
sh scripts/build_android_library.sh
```

This script will build the AAR, which contains the Java API and its corresponding JNI library. Please see
[this documentation](./using-executorch-android#using-aar-file) for usage.
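As a hedged sketch of the consuming side (assuming the script produces an `executorch.aar` in `$BUILD_AAR_DIR` that you then copy into your app module's `libs/` directory), the app's Gradle file could reference the local AAR directly:

```kotlin
// app/build.gradle.kts — sketch only; the AAR file name and the libs/ location
// are assumptions about your project layout.
dependencies {
    implementation(files("libs/executorch.aar"))
}
```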

### iOS

For iOS we'll build [frameworks](https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle) instead of static libraries; these will also contain the public headers.
@@ -268,5 +284,5 @@ Check out the [iOS Demo App](demo-apps-ios.md) tutorial for more info.
You have successfully cross-compiled `executor_runner` binary to iOS and Android platforms. You can start exploring advanced features and capabilities. Here is a list of sections you might want to read next:

* [Selective build](kernel-library-selective-build.md) to build the runtime that links to only kernels used by the program, which can provide significant binary size savings.
* Tutorials on building [Android](./demo-apps-android.md) and [iOS](./demo-apps-ios.md) demo apps.
* Tutorials on building [Android](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) and [iOS](./demo-apps-ios.md) demo apps.
* Tutorials on deploying applications to embedded devices such as [ARM Cortex-M/Ethos-U](backends-arm-ethos-u.md) and [XTensa HiFi DSP](./backends-cadence.md).