# Switch docs to 0.6 branch #10212


**Merged** (2 commits, Apr 15, 2025). This PR pins ExecuTorch documentation links from the rolling `main` docs to the versioned `0.6` docs across `Package.swift`, backend READMEs, docs sources, and example READMEs.
**Package.swift** (2 changes: 1 addition & 1 deletion)

```diff
@@ -15,7 +15,7 @@
 //
 // For details on building frameworks locally or using prebuilt binaries,
 // see the documentation:
-// https://pytorch.org/executorch/main/using-executorch-ios.html
+// https://pytorch.org/executorch/0.6/using-executorch-ios.html

 import PackageDescription

```

**README-wheel.md** (8 changes: 4 additions & 4 deletions)

```diff
@@ -14,10 +14,10 @@ to run ExecuTorch `.pte` files, with some restrictions:
 operators](https://pytorch.org/executorch/stable/ir-ops-set-definition.html)
 are linked into the prebuilt module
 * Only the [XNNPACK backend
-delegate](https://pytorch.org/executorch/main/native-delegates-executorch-xnnpack-delegate.html)
+delegate](https://pytorch.org/executorch/0.6/backends-xnnpack)
 is linked into the prebuilt module.
-* \[macOS only] [Core ML](https://pytorch.org/executorch/main/build-run-coreml.html)
-and [MPS](https://pytorch.org/executorch/main/build-run-mps.html) backend
+* \[macOS only] [Core ML](https://pytorch.org/executorch/0.6/backends-coreml)
+and [MPS](https://pytorch.org/executorch/0.6/backends-mps) backend
 delegates are also linked into the prebuilt module.

 Please visit the [ExecuTorch website](https://pytorch.org/executorch/) for
@@ -30,7 +30,7 @@ tutorials and documentation. Here are some starting points:
 * Learn how to use ExecuTorch to export and accelerate a large-language model
 from scratch.
 * [Exporting to
-ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial.html)
+ExecuTorch](https://pytorch.org/executorch/0.6/tutorials/export-to-executorch-tutorial.html)
 * Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and
 optimizing its performance using quantization and hardware delegation.
 * Running LLaMA on
```

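For context on what "running a `.pte` file" with this wheel looks like, here is a minimal sketch assuming the documented `executorch.runtime` Python API; the file name, input shape, and method name are illustrative assumptions, not part of this PR:

```python
# Hedged sketch: execute a .pte program via the Python runtime bindings
# shipped in the ExecuTorch wheel. "model.pte" and the input shape are
# placeholders; verify the API against your installed version.
import torch
from executorch.runtime import Runtime

runtime = Runtime.get()                      # process-wide runtime handle
program = runtime.load_program("model.pte")  # load the serialized program
method = program.load_method("forward")      # look up the exported method
outputs = method.execute([torch.randn(1, 3, 224, 224)])
print(outputs)
```
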
**backends/cadence/README.md** (2 changes: 1 addition & 1 deletion)

```diff
@@ -6,7 +6,7 @@

 ## Tutorial

-Please follow the [tutorial](https://pytorch.org/executorch/main/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.
+Please follow the [tutorial](https://pytorch.org/executorch/0.6/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.

 ## Directory Structure

```

**backends/qualcomm/README.md** (2 changes: 1 addition & 1 deletion)

```diff
@@ -8,7 +8,7 @@ This backend is implemented on the top of
 [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
 Please follow [tutorial](../../docs/source/backends-qualcomm.md) to setup environment, build, and run executorch models by this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).

-A website version of the tutorial is [here](https://pytorch.org/executorch/main/backends-qualcomm).
+A website version of the tutorial is [here](https://pytorch.org/executorch/0.6/backends-qualcomm).

 ## Delegate Options

```

**backends/xnnpack/README.md** (4 changes: 2 additions & 2 deletions)

```diff
@@ -132,5 +132,5 @@ create an issue on [github](https://www.github.com/pytorch/executorch/issues).

 ## See Also
 For more information about the XNNPACK Backend, please check out the following resources:
-- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)
+- [XNNPACK Backend](https://pytorch.org/executorch/0.6/backends-xnnpack)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/0.6/backend-delegates-xnnpack-reference)
```

**docs/source/index.md** (4 changes: 2 additions & 2 deletions)

````diff
@@ -79,7 +79,7 @@ ExecuTorch provides support for:
 - [Executorch Runtime API Reference](executorch-runtime-api-reference)
 - [Runtime Python API Reference](runtime-python-api-reference)
 - [API Life Cycle](api-life-cycle)
-- [Javadoc](https://pytorch.org/executorch/main/javadoc/)
+- [Javadoc](https://pytorch.org/executorch/0.6/javadoc/)
 #### Quantization
 - [Overview](quantization-overview)
 #### Kernel Library
@@ -208,7 +208,7 @@ export-to-executorch-api-reference
 executorch-runtime-api-reference
 runtime-python-api-reference
 api-life-cycle
-Javadoc <https://pytorch.org/executorch/main/javadoc/>
+Javadoc <https://pytorch.org/executorch/0.6/javadoc/>
 ```

 ```{toctree}
````

**docs/source/llm/getting-started.md** (4 changes: 2 additions & 2 deletions)

```diff
@@ -159,7 +159,7 @@ example_inputs = (torch.randint(0, 100, (1, model.config.block_size), dtype=torc
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts#dynamic-shapes
+# See https://pytorch.org/executorch/0.6/concepts#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size)},
@@ -478,7 +478,7 @@ example_inputs = (
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes
+# See https://pytorch.org/executorch/0.6/concepts.html#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},
```

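The dynamic-shape pattern both hunks annotate can be reproduced standalone. A minimal sketch with a toy model standing in for the tutorial's transformer (the model, bound, and inputs are illustrative assumptions):

```python
# Hedged sketch of the dynamic-shapes pattern annotated above: dimension 1
# of the first model input may vary at runtime, bounded by block_size.
import torch
from torch import nn

class Toy(nn.Module):
    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return tokens + 1

block_size = 128  # stands in for model.config.block_size
example_inputs = (torch.randint(0, 100, (1, block_size), dtype=torch.long),)

# One dict per positional input, mapping dim index -> torch.export.Dim.
dynamic_shape = ({1: torch.export.Dim("token_dim", max=block_size)},)

ep = torch.export.export(Toy(), example_inputs, dynamic_shapes=dynamic_shape)
print(ep)  # dimension 1 of the input is now a bounded symbolic size
```
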
**docs/source/memory-planning-inspection.md** (8 changes: 4 additions & 4 deletions)

````diff
@@ -1,9 +1,9 @@
 # Memory Planning Inspection in ExecuTorch

-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](https://pytorch.org/executorch/0.6/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/0.6/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.

 ## Usage
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+User should add this code after they call [to_executorch()](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.

 ```python
 from executorch.util.activation_memory_profiler import generate_memory_trace
@@ -13,7 +13,7 @@ generate_memory_trace(
     enable_memory_offsets=True,
 )
 ```
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
 * Set `enable_memory_offsets` to `True` to show the location of each tensor on the memory space.

 ## Chrome Trace
@@ -27,4 +27,4 @@ Note that, since we are repurposing the Chrome trace tool, the axes in this cont
 * The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes will be executed sequentially, each node represents one time step, thus you will have as many nodes as there are rows.

 ## Further Reading
-* [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)
+* [Memory Planning](https://pytorch.org/executorch/0.6/compiler-memory-planning.html)
````

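As a standalone illustration of the workflow this page describes (export, lower to ExecuTorch, dump the trace), here is a hedged end-to-end sketch. The toy model is illustrative, and the keyword names passed to `generate_memory_trace` are assumptions extrapolated from the excerpt; verify them against your installed version:

```python
# Hedged end-to-end sketch: export a toy model, lower it to an ExecuTorch
# program, then write the memory-planning trace described above.
# Keyword names below are assumptions based on the doc excerpt.
import torch
from torch import nn
from executorch.exir import to_edge
from executorch.util.activation_memory_profiler import generate_memory_trace

class Toy(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x) + 1

ep = torch.export.export(Toy(), (torch.randn(4, 8),))
prog = to_edge(ep).to_executorch()  # ExecutorchProgramManager

generate_memory_trace(
    executorch_program_manager=prog,
    chrome_trace_filename="memory_profile.json",
    enable_memory_offsets=True,
)
# Open memory_profile.json in the Chrome trace viewer (chrome://tracing).
```
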
**docs/source/new-contributor-guide.md** (2 changes: 1 addition & 1 deletion)

````diff
@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
 git push # push updated local main to your GitHub fork
 ```

-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](https://pytorch.org/executorch/0.6/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).

 Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.

````

**docs/source/using-executorch-android.md** (14 changes: 7 additions & 7 deletions)

```diff
@@ -2,7 +2,7 @@

 To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.

-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/0.6/using-executorch-building-from-source.html#cross-compilation).

 ## Installation

@@ -41,8 +41,8 @@ dependencies {
 Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.

 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
-<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
+<a href="https://pytorch.org/executorch/0.6/_static/img/android_studio.mp4">
+<img src="https://pytorch.org/executorch/0.6/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
 </a>

 ## Using AAR file directly
@@ -130,17 +130,17 @@ Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` b

 #### Using MediaTek backend

-To use [MediaTek backend](https://pytorch.org/executorch/main/backends-mediatek.html),
+To use [MediaTek backend](https://pytorch.org/executorch/0.6/backends-mediatek.html),
 after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding path.

 #### Using Qualcomm AI Engine Backend

-To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/main/backends-qualcomm.html#qualcomm-ai-engine-backend),
+To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/0.6/backends-qualcomm.html#qualcomm-ai-engine-backend),
 after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path.

 #### Using Vulkan Backend

-To use [Vulkan Backend](https://pytorch.org/executorch/main/backends-vulkan.html#vulkan-backend),
+To use [Vulkan Backend](https://pytorch.org/executorch/0.6/backends-vulkan.html#vulkan-backend),
 set `EXECUTORCH_BUILD_VULKAN` to `ON`.

 ## Android Backends
@@ -195,4 +195,4 @@ using ExecuTorch AAR package.

 ## Java API reference

-Please see [Java API reference](https://pytorch.org/executorch/main/javadoc/).
+Please see [Java API reference](https://pytorch.org/executorch/0.6/javadoc/).
```

**docs/source/using-executorch-ios.md** (6 changes: 3 additions & 3 deletions)

```diff
@@ -35,8 +35,8 @@ Then select which ExecuTorch framework should link against which target.

 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model on iOS.

-<a href="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
+<a href="https://pytorch.org/executorch/0.6/_static/img/swiftpm_xcode.mp4">
+<img src="https://pytorch.org/executorch/0.6/_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
 </a>

 #### CLI
@@ -293,7 +293,7 @@ From existing memory buffers:

 From `NSData` / `Data`:
 - `init(data:shape:dataType:...)`: Creates a tensor using an `NSData` object, referencing its bytes without copying.

 From scalar arrays:
 - `init(_:shape:dataType:...)`: Creates a tensor from an array of `NSNumber` scalars. Convenience initializers exist to infer shape or data type.

```

**examples/README.md** (2 changes: 1 addition & 1 deletion)

````diff
@@ -9,7 +9,7 @@ ExecuTorch's extensive support spans from simple modules like "Add" to comprehen
 ## Directory structure
 ```
 examples
-├── llm_manual # A storage place for the files that [LLM Maunal](https://pytorch.org/executorch/main/llm/getting-started.html) needs
+├── llm_manual # A storage place for the files that [LLM Maunal](https://pytorch.org/executorch/0.6/llm/getting-started.html) needs
 ├── models # Contains a set of popular and representative PyTorch models
 ├── portable # Contains end-to-end demos for ExecuTorch in portable mode
 ├── selective_build # Contains demos of selective build for optimizing the binary size of the ExecuTorch runtime
````

**examples/arm/README.md** (4 changes: 2 additions & 2 deletions)

```diff
@@ -24,7 +24,7 @@ To run these scripts. On a Linux system, in a terminal, with a working internet
 $ cd <EXECUTORCH-ROOT-FOLDER>
 $ executorch/examples/arm/setup.sh --i-agree-to-the-contained-eula [optional-scratch-dir]

-# Step [2] - Setup Patch to tools, The `setup.sh` script has generated a script that you need to source everytime you restart you shell. 
+# Step [2] - Setup Patch to tools, The `setup.sh` script has generated a script that you need to source everytime you restart you shell.
 $ source executorch/examples/arm/ethos-u-scratch/setup_path.sh

 # Step [3] - build + run ExecuTorch and executor_runner baremetal application
@@ -34,6 +34,6 @@ $ executorch/examples/arm/run.sh --model_name=mv2 --target=ethos-u85-128 [--scra

 ### Online Tutorial

-We also have a [tutorial](https://pytorch.org/executorch/main/backends-arm-ethos-u) explaining the steps performed in these
+We also have a [tutorial](https://pytorch.org/executorch/0.6/backends-arm-ethos-u) explaining the steps performed in these
 scripts, expected results, possible problems and more. It is a step-by-step guide
 you can follow to better understand this delegate.
```

**examples/demo-apps/apple_ios/LLaMA/README.md** (2 changes: 1 addition & 1 deletion)

```diff
@@ -56,7 +56,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by

 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.

-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/0.6/using-executorch-ios).

 ### XCode
 * Open XCode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLama`.
```

```diff
@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by

 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.

-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/0.6/using-executorch-ios.html).

 <p align="center">
 <img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" style="width:600px">
```

```diff
@@ -164,7 +164,7 @@ If you cannot add the package into your app target (it's greyed out), it might h



-More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios#local-build).
+More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/0.6/using-executorch-ios#local-build).

 ### 3. Configure Build Schemes

@@ -176,7 +176,7 @@ Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration

 We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. Debug build has logging overhead and will impact inferencing performance, while release build has compiler optimizations enabled and all logging overhead removed.

-For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios).
+For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/0.6/using-executorch-ios).

 ### 4. Build and Run the project

```

**examples/llm_manual/README.md** (2 changes: 1 addition & 1 deletion)

```diff
@@ -1,3 +1,3 @@
 # LLM Manual

-This repository is a storage place for the files that [LLM Manual](https://pytorch.org/executorch/main/llm/getting-started) needs. Please refer to the documentation website for more information.
+This repository is a storage place for the files that [LLM Manual](https://pytorch.org/executorch/0.6/llm/getting-started) needs. Please refer to the documentation website for more information.
```

**examples/models/deepseek-r1-distill-llama-8B/README.md** (2 changes: 1 addition & 1 deletion)

````diff
@@ -3,7 +3,7 @@ This example demonstrates how to run [Deepseek R1 Distill Llama 8B](https://hugg

 # Instructions
 ## Step 1: Setup
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`
+1. Follow the [tutorial](https://pytorch.org/executorch/0.6/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`

 2. Run the installation step for Llama specific requirements
 ```
````
