diff --git a/.github/workflows/apple.yml b/.github/workflows/apple.yml index 9dafaf67b1..43d35c7c4a 100644 --- a/.github/workflows/apple.yml +++ b/.github/workflows/apple.yml @@ -37,7 +37,7 @@ jobs: id: set_version shell: bash run: | - VERSION="0.5.0.$(TZ='PST8PDT' date +%Y%m%d)" + VERSION="0.6.0" echo "version=$VERSION" >> "$GITHUB_OUTPUT" build-demo-ios: @@ -212,7 +212,7 @@ jobs: name: executorch-frameworks-ios path: ${{ runner.temp }}/frameworks-ios/ - name: Only push to S3 when running the workflow manually from main branch - if: ${{ (github.event_name == 'schedule' || github.event_name == 'workflow_dispatch') && github.ref == 'refs/heads/main' }} + if: ${{ (github.event_name == 'schedule' || github.event_name == 'workflow_dispatch') && github.ref == 'refs/heads/release/0.6' }} shell: bash run: | echo "UPLOAD_ON_MAIN=1" >> "${GITHUB_ENV}" diff --git a/docs/README.md b/docs/README.md index dd1fded5aa..20476f3c16 100644 --- a/docs/README.md +++ b/docs/README.md @@ -39,17 +39,20 @@ To build the documentation locally: 1. Clone the ExecuTorch repo to your machine. -1. If you don't have it already, start a conda environment: + ```bash + git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch + ``` - ```{note} - The below command generates a completely new environment and resets - any existing dependencies. If you have an environment already, skip - the `conda create` command. +1. If you don't have it already, start either a Python virtual environment: + + ```bash + python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` + Or a Conda environment: + ```bash - conda create -yn executorch python=3.10.0 - conda activate executorch + conda create -yn executorch python=3.10.0 && conda activate executorch ``` 1. Install dependencies: @@ -57,15 +60,11 @@ To build the documentation locally: ```bash pip3 install -r ./.ci/docker/requirements-ci.txt ``` -1. Update submodules - ```bash - git submodule sync && git submodule update --init - ``` 1. Run: ```bash - bash install_executorch.sh + ./install_executorch.sh ``` 1. Go to the `docs/` directory. diff --git a/docs/source/getting-started.md b/docs/source/getting-started.md index ddc7571944..a11856421d 100644 --- a/docs/source/getting-started.md +++ b/docs/source/getting-started.md @@ -137,7 +137,7 @@ For a full example of running a model on Android, see the [DeepLabV3AndroidDemo] #### Installation ExecuTorch supports both iOS and MacOS via C++, as well as hardware backends for CoreML, MPS, and CPU. The iOS runtime library is provided as a collection of .xcframework targets and are made available as a Swift PM package. -To get started with Xcode, go to File > Add Package Dependencies. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format “swiftpm-<version>”, (e.g. “swiftpm-0.5.0”). The ExecuTorch dependency can also be added to the package file manually. See [Using ExecuTorch on iOS](using-executorch-ios.md) for more information. +To get started with Xcode, go to File > Add Package Dependencies. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format “swiftpm-<version>”, (e.g. “swiftpm-0.6.0”). The ExecuTorch dependency can also be added to the package file manually. See [Using ExecuTorch on iOS](using-executorch-ios.md) for more information.
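To see which `swiftpm-` branches are currently published before picking one in Xcode, you can list them from the terminal (a small convenience sketch added here, not taken from the docs above; it only assumes the branch naming convention just described):

```bash
# List the published Swift PM branches of the ExecuTorch repo (release and nightly).
git ls-remote --heads https://github.com/pytorch/executorch.git "swiftpm-*"
```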
#### Runtime APIs Models can be loaded and run from Objective-C using the C++ APIs. diff --git a/docs/source/llm/getting-started.md b/docs/source/llm/getting-started.md index 066bb3f3d1..a4ff2b6fad 100644 --- a/docs/source/llm/getting-started.md +++ b/docs/source/llm/getting-started.md @@ -43,15 +43,17 @@ Instructions on installing miniconda can be [found here](https://docs.anaconda.c mkdir et-nanogpt cd et-nanogpt -# Clone the ExecuTorch repository and submodules. +# Clone the ExecuTorch repository. mkdir third-party -git clone -b release/0.4 https://github.com/pytorch/executorch.git third-party/executorch -cd third-party/executorch -git submodule update --init +git clone -b release/0.6 https://github.com/pytorch/executorch.git third-party/executorch && cd third-party/executorch -# Create a conda environment and install requirements. -conda create -yn executorch python=3.10.0 -conda activate executorch +# Create either a Python virtual environment: +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip + +# Or a Conda environment: +conda create -yn executorch python=3.10.0 && conda activate executorch + +# Install requirements ./install_executorch.sh cd ../.. @@ -76,11 +78,8 @@ pyenv install -s 3.10 pyenv virtualenv 3.10 executorch pyenv activate executorch -# Clone the ExecuTorch repository and submodules. -mkdir third-party -git clone -b release/0.4 https://github.com/pytorch/executorch.git third-party/executorch -cd third-party/executorch -git submodule update --init +# Clone the ExecuTorch repository. +git clone -b release/0.6 https://github.com/pytorch/executorch.git third-party/executorch && cd third-party/executorch # Install requirements. PYTHON_EXECUTABLE=python ./install_executorch.sh diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md index 345f0324d5..093e8e3386 100644 --- a/docs/source/using-executorch-building-from-source.md +++ b/docs/source/using-executorch-building-from-source.md @@ -36,27 +36,23 @@ portability details. ## Environment Setup -### Create a Virtual Environment +### Clone ExecuTorch -[Install conda on your machine](https://conda.io/projects/conda/en/latest/user-guide/install/index.html). Then, create a virtual environment to manage our dependencies. ```bash - # Create and activate a conda environment named "executorch" - conda create -yn executorch python=3.10.0 - conda activate executorch + # Clone the ExecuTorch repo from GitHub + git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` -### Clone ExecuTorch +### Create a Virtual Environment +Create and activate a Python virtual environment: ```bash - # Clone the ExecuTorch repo from GitHub - # 'main' branch is the primary development branch where you see the latest changes. - # 'viable/strict' contains all of the commits on main that pass all of the necessary CI checks. - git clone --branch viable/strict https://github.com/pytorch/executorch.git - cd executorch - - # Update and pull submodules - git submodule sync - git submodule update --init + python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip + ``` + +Or alternatively, [install conda on your machine](https://conda.io/projects/conda/en/latest/user-guide/install/index.html). Then, create a Conda environment named "executorch". 
+ ```bash + conda create -yn executorch python=3.10.0 && conda activate executorch ``` ## Install ExecuTorch pip package from Source diff --git a/docs/source/using-executorch-ios.md b/docs/source/using-executorch-ios.md index 70c2b366fa..56f9084376 100644 --- a/docs/source/using-executorch-ios.md +++ b/docs/source/using-executorch-ios.md @@ -25,7 +25,7 @@ The prebuilt ExecuTorch runtime, backend, and kernels are available as a [Swift #### Xcode -In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the [ExecuTorch repo](https://github.com/pytorch/executorch) into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format "swiftpm-", (e.g. "swiftpm-0.5.0"), or a branch name in format "swiftpm-." (e.g. "swiftpm-0.5.0-20250228") for a nightly build on a specific date. +In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the [ExecuTorch repo](https://github.com/pytorch/executorch) into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format "swiftpm-", (e.g. "swiftpm-0.6.0"), or a branch name in format "swiftpm-." (e.g. "swiftpm-0.6.0-20250501") for a nightly build on a specific date. ![](_static/img/swiftpm_xcode1.png) @@ -58,7 +58,7 @@ let package = Package( ], dependencies: [ // Use "swiftpm-." branch name for a nightly build. - .package(url: "https://github.com/pytorch/executorch.git", branch: "swiftpm-0.5.0") + .package(url: "https://github.com/pytorch/executorch.git", branch: "swiftpm-0.6.0") ], targets: [ .target( @@ -97,7 +97,7 @@ xcode-select --install 2. Clone ExecuTorch: ```bash -git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules && cd executorch +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` 3. Set up [Python](https://www.python.org/downloads/macos/) 3.10+ and activate a virtual environment: @@ -106,15 +106,16 @@ git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodul python3 -m venv .venv && source .venv/bin/activate && ./install_requirements.sh ``` -4. Install the required dependencies, including those needed for the backends like [Core ML](backends-coreml.md) or [MPS](backends-mps.md). Choose one: +4. Install the required dependencies, including those needed for the backends like [Core ML](backends-coreml.md) or [MPS](backends-mps.md). Choose one, or both: ```bash # ExecuTorch with xnnpack and CoreML backend -./install_executorch.sh --pybind xnnpack +./backends/apple/coreml/scripts/install_requirements.sh +./install_executorch.sh --pybind coreml xnnpack -# Optional: ExecuTorch with xnnpack, CoreML, and MPS backend +# ExecuTorch with xnnpack and MPS backend ./backends/apple/mps/install_requirements.sh -./install_executorch.sh --pybind xnnpack mps +./install_executorch.sh --pybind mps xnnpack ``` 5. Install [CMake](https://cmake.org): diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md index 4d1346963c..dabe0b6dc6 100644 --- a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md +++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md @@ -21,23 +21,29 @@ Phone verified: MediaTek Dimensity 9300 (D9300) chip. 
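For the `--pybind` install step in `using-executorch-ios.md` above, a quick import check confirms that the Python bindings actually built (a hedged sanity check added for illustration; the module path comes from the ExecuTorch pybindings extension and is not quoted from this diff):

```bash
# The import should succeed only if the pybindings were built by install_executorch.sh.
python3 -c "from executorch.extension.pybindings import portable_lib; print('ExecuTorch pybindings OK')"
```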
## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are running on Linux (CentOS). -Create a Conda environment +Checkout ExecuTorch repo and sync submodules + ``` -conda create -yn et_mtk python=3.10.0 -conda activate et_mtk +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + +``` +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init + +Or a Conda environment: + +``` +conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack ``` + Install dependencies ``` ./install_executorch.sh ``` + ## Setup Environment Variables ### Download Buck2 and make executable * Download Buck2 from the official [Release Page](https://github.com/facebook/buck2/releases/tag/2024-02-01) diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md index 92afe613f7..bb14e7d295 100644 --- a/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md +++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md @@ -19,19 +19,24 @@ Phone verified: OnePlus 12, Samsung 24+, Samsung 23 ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are running on Linux (CentOS). -Create a Conda environment +Checkout ExecuTorch repo and sync submodules + ``` -conda create -n et_qnn python=3.10.0 -conda activate et_qnn +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` + +Or a Conda environment: + +``` +conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack +``` + Install dependencies ``` ./install_executorch.sh @@ -74,7 +79,7 @@ cmake --build cmake-out -j16 --target install --config Release ### Setup Llama Runner Next we need to build and compile the Llama runner. This is similar to the requirements for running Llama with XNNPACK. ``` -sh examples/models/llama/install_requirements.sh +./examples/models/llama/install_requirements.sh cmake -DPYTHON_EXECUTABLE=python \ -DCMAKE_INSTALL_PREFIX=cmake-out \ diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md index 2b9bad21b7..6192624647 100644 --- a/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md +++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md @@ -21,35 +21,34 @@ Phone verified: OnePlus 12, OnePlus 9 Pro. 
Samsung S23 (Llama only), Samsung S24 ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are running on Linux (CentOS). -Create a Conda environment +Checkout ExecuTorch repo and sync submodules + ``` -conda create -yn executorch python=3.10.0 -conda activate executorch +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -Install dependencies + +Or a Conda environment: + ``` -./install_executorch.sh +conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack ``` -Optional: Use the --pybind flag to install with pybindings. +Install dependencies ``` -./install_executorch.sh --pybind xnnpack +./install_executorch.sh ``` - ## Prepare Models In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5. * You can request and download model weights for Llama through Meta official [website](https://llama.meta.com/). * For chat use-cases, download the instruct models instead of pretrained. -* Run `examples/models/llama/install_requirements.sh` to install dependencies. +* Run `./examples/models/llama/install_requirements.sh` to install dependencies. * Rename tokenizer for Llama3.x with command: `mv tokenizer.model tokenizer.bin`. We are updating the demo app to support tokenizer in original format directly. 
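Put together, the model-preparation steps above look roughly like this (a sketch only; the weights directory is a placeholder for wherever the Meta-provided Llama files were downloaded):

```bash
# Install the export dependencies, then rename the Llama 3.x tokenizer as described above.
./examples/models/llama/install_requirements.sh
mv ~/llama-weights/tokenizer.model ~/llama-weights/tokenizer.bin
```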
### For Llama 3.2 1B and 3B SpinQuant models diff --git a/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj b/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj index 2ee4db5361..7c88eff27a 100644 --- a/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj +++ b/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj @@ -806,7 +806,7 @@ isa = XCRemoteSwiftPackageReference; repositoryURL = "https://github.com/pytorch/executorch"; requirement = { - branch = "swiftpm-0.5.0.20250317"; + branch = "swiftpm-0.6.0"; kind = branch; }; }; diff --git a/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md b/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md index 844c83d220..0a44de8268 100644 --- a/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md +++ b/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md @@ -44,8 +44,7 @@ Follow the [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting tutorial to configure the basic environment: ```bash -git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules -cd executorch +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch python3 -m venv .venv && source .venv/bin/activate diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj index a067873a0b..0cfc4ddaa7 100644 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj +++ b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj @@ -852,7 +852,7 @@ isa = XCRemoteSwiftPackageReference; repositoryURL = "https://github.com/pytorch/executorch"; requirement = { - branch = "swiftpm-0.5.0.20250228"; + branch = "swiftpm-0.6.0"; kind = branch; }; }; diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md index f5292fe5c0..dd0dfda733 100644 --- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md +++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md @@ -14,26 +14,29 @@ More specifically, it covers: ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS). 
-Create a Conda environment +Checkout ExecuTorch repo and sync submodules ``` -conda create -n et_mps python=3.10.0 -conda activate et_mps +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + +``` +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip +``` + +Or a Conda environment: ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +conda create -n et_mps python=3.10.0 && conda activate et_mps ``` Install dependencies ``` ./install_executorch.sh +./backends/apple/mps/install_requirements.sh ``` ## Prepare Models @@ -42,7 +45,7 @@ In this demo app, we support text-only inference with Llama 3.1, Llama 3, and Ll Install the required packages to export the model ``` -sh examples/models/llama/install_requirements.sh +./examples/models/llama/install_requirements.sh ``` Export the model @@ -76,17 +79,7 @@ sudo /Applications/CMake.app/Contents/bin/cmake-gui --install The prebuilt ExecuTorch runtime, backend, and kernels are available as a Swift PM package. ### Xcode -Open the project in Xcode.In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version, e.g., “swiftpm-0.5.0”, or a branch name in format "swiftpm-." (e.g. "swiftpm-0.5.0-20250228") for a nightly build on a specific date. - -Note: If you're running into any issues related to package dependencies, quit Xcode entirely, delete the whole executorch repo, clean the caches by running the command below in terminal and clone the repo again. - -``` -rm -rf \ - ~/Library/org.swift.swiftpm \ - ~/Library/Caches/org.swift.swiftpm \ - ~/Library/Caches/com.apple.dt.Xcode \ - ~/Library/Developer/Xcode/DerivedData -``` +Open the project in Xcode. In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version, e.g., “swiftpm-0.6.0”, or a branch name in format "swiftpm-." (e.g. "swiftpm-0.6.0-20250501") for a nightly build on a specific date. Link your binary with the ExecuTorch runtime and any backends or kernels used by the exported ML model. It is recommended to link the core runtime to the components that use ExecuTorch directly, and link kernels and backends against the main app target. diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md index c45871a1fe..687ef034bd 100644 --- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md +++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md @@ -13,31 +13,30 @@ More specifically, it covers: ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS).
-Create a Conda environment +Checkout ExecuTorch repo and sync submodules ``` -conda create -n et_xnnpack python=3.10.0 -conda activate et_xnnpack +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -Install dependencies +Or a Conda environment: ``` -./install_executorch.sh +conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack ``` -Optional: Use the --pybind flag to install with pybindings. + +Install dependencies + ``` -./install_executorch.sh --pybind xnnpack +./install_executorch.sh ``` + ## Prepare Models In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5. * You can request and download model weights for Llama through Meta official [website](https://llama.meta.com/). @@ -45,8 +44,9 @@ In this demo app, we support text-only inference with up-to-date Llama models an * Install the required packages to export the model: ``` -sh examples/models/llama/install_requirements.sh +./examples/models/llama/install_requirements.sh ``` + ### For Llama 3.2 1B and 3B SpinQuant models Meta has released prequantized INT4 SpinQuant Llama 3.2 models that ExecuTorch supports on the XNNPACK backend. * Export Llama model and generate .pte file as below: @@ -112,27 +112,14 @@ There are two options to add ExecuTorch runtime package into your XCode project: The current XCode project is pre-configured to automatically download and link the latest prebuilt package via Swift Package Manager. -If you have an old ExecuTorch package cached before in XCode, or are running into any package dependencies issues (incorrect checksum hash, missing package, outdated package), close XCode and run the following command in terminal inside your ExecuTorch directory - -``` -rm -rf \ - ~/Library/org.swift.swiftpm \ - ~/Library/Caches/org.swift.swiftpm \ - ~/Library/Caches/com.apple.dt.Xcode \ - ~/Library/Developer/Xcode/DerivedData \ - examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.xcworkspace/xcshareddata/swiftpm -``` - -The command above will clear all the package cache, and when you re-open the XCode project, it should re-download the latest package and link them correctly. - #### (Optional) Changing the prebuilt package version While we recommended using the latest prebuilt package pre-configured with the XCode project, you can also change the package version manually to your desired version. Go to Project Navigator, click on LLaMA. 
`Project --> LLaMA --> Package Dependencies`, and update the package dependencies to any of the available options below: -- Branch --> swiftpm-0.5.0.20250228 (amend to match the latest nightly build) +- Branch --> swiftpm-0.6.0.20250501 (amend to match the latest nightly build) +- Branch --> swiftpm-0.6.0 - Branch --> swiftpm-0.5.0 -- Branch --> swiftpm-0.4.0 ### 2.2 Manually build the package locally and link them diff --git a/examples/demo-apps/react-native/rnllama/README.md b/examples/demo-apps/react-native/rnllama/README.md index 33c607d635..e08e3820cd 100644 --- a/examples/demo-apps/react-native/rnllama/README.md +++ b/examples/demo-apps/react-native/rnllama/README.md @@ -20,7 +20,7 @@ A React Native mobile application for running LLaMA language models using ExecuT ## Installation -1. Clone the repository: `git clone git@github.com:pytorch/executorch.git` +1. Clone the repository: `git clone -b release/0.6 git@github.com:pytorch/executorch.git` 2. Navigate to the root of the repository: `cd executorch` diff --git a/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj b/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj index 612dd410a1..ea08f8cf77 100644 --- a/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj +++ b/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj @@ -947,7 +947,7 @@ isa = XCRemoteSwiftPackageReference; repositoryURL = "https://github.com/pytorch/executorch.git"; requirement = { - branch = "swiftpm-0.5.0.20250228"; + branch = "swiftpm-0.6.0"; kind = branch; }; }; diff --git a/examples/llm_pte_finetuning/README.md b/examples/llm_pte_finetuning/README.md index bdd317109e..8aeea31608 100644 --- a/examples/llm_pte_finetuning/README.md +++ b/examples/llm_pte_finetuning/README.md @@ -7,7 +7,7 @@ In this tutorial, we show how to fine-tune an LLM using executorch. You will need to have a model's checkpoint, in the Hugging Face format. For example: ```console -git clone git clone https://huggingface.co/Qwen/Qwen2-0.5B-Instruct +git clone https://huggingface.co/Qwen/Qwen2-0.5B-Instruct ``` You will need to install [torchtune](https://github.com/pytorch/torchtune) following [its installation instructions](https://github.com/pytorch/torchtune?tab=readme-ov-file#installation). diff --git a/extension/benchmark/apple/Benchmark/README.md b/extension/benchmark/apple/Benchmark/README.md index e993ae4f97..5f54f5bd30 100644 --- a/extension/benchmark/apple/Benchmark/README.md +++ b/extension/benchmark/apple/Benchmark/README.md @@ -24,8 +24,7 @@ It provides a flexible framework for dynamically generating and running performa To get started, clone the ExecuTorch repository and cd into the source code directory: ```bash -git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules -cd executorch +git clone -b release/0.6 https://github.com/pytorch/executorch.git && cd executorch ``` This command performs a shallow clone to speed up the process. diff --git a/scripts/test_ios.sh b/scripts/test_ios.sh index 09461e0953..b93d3378ff 100755 --- a/scripts/test_ios.sh +++ b/scripts/test_ios.sh @@ -47,7 +47,7 @@ say() { say "Cloning the Code" pushd . 
> /dev/null -git clone https://github.com/pytorch/executorch.git "$OUTPUT" +git clone -b release/0.6 https://github.com/pytorch/executorch.git "$OUTPUT" cd "$OUTPUT" say "Updating the Submodules"
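The "Updating the Submodules" step that follows the clone typically amounts to the standard git commands (a hedged sketch based on the submodule commands removed from the docs elsewhere in this diff; the exact invocation inside `scripts/test_ios.sh` is outside the hunk shown):

```bash
# Fetch the nested dependencies that a plain `git clone` of release/0.6 does not bring in.
git submodule sync && git submodule update --init
```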