
Commit ce1e9c1

shoumikhin authored and facebook-github-bot committed
Update readme.
Summary: .
Reviewed By: cccclai
Differential Revision: D56532283
fbshipit-source-id: 62d7c9e8583fdb5c9a1b2e781e80799c06682aae
1 parent: b669056

1 file changed: examples/demo-apps/apple_ios/LLaMA/README.md (+22 -5 lines)

````diff
@@ -5,9 +5,19 @@ This app demonstrates the use of the LLaMA chat app demonstrating local inferenc
 <img src="../_static/img/llama_ios_app.png" alt="iOS LLaMA App" /><br>
 
 ## Prerequisites
-* [Xcode 15](https://developer.apple.com/xcode).
-* [iOS 17 SDK](https://developer.apple.com/ios).
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment.
+* [Xcode 15](https://developer.apple.com/xcode)
+* [iOS 17 SDK](https://developer.apple.com/ios)
+* Set up your ExecuTorch repo and dev environment if you haven’t done so by following [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup):
+
+```bash
+git clone -b release/0.2 https://github.com/pytorch/executorch.git
+cd executorch
+git submodule update --init
+
+python3 -m venv .venv && source .venv/bin/activate
+
+./install_requirements.sh
+```
 
 ## Exporting models
 Please refer to the [ExecuTorch Llama2 docs](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md) to export the model.
````
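
The "Exporting models" step above defers to the linked Llama2 docs for the actual export. For orientation only, a rough sketch of the kind of `export_llama` invocation those docs describe (XNNPACK-accelerated, 4-bit weight quantization) is shown below; the checkpoint and params paths are placeholders and the exact flags differ between releases, so treat the linked README as the source of truth.

```bash
# Sketch only — run from the executorch repo root with the venv activated.
# <consolidated.00.pth> and <params.json> are placeholders for your
# downloaded Llama 2 checkpoint and params files; flags vary by release.
python -m examples.models.llama2.export_llama \
  --checkpoint <consolidated.00.pth> \
  -p <params.json> \
  -kv --use_sdpa_with_kv_cache \
  -X -qmode 8da4w --group_size 128 \
  -d fp32
```

The result is a `.pte` model file that the app loads at runtime, alongside the matching tokenizer.
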
````diff
@@ -16,10 +26,11 @@ Please refer to the [ExecuTorch Llama2 docs](https://github.com/pytorch/executor
 
 1. Open the [project](https://github.com/pytorch/executorch/blob/main/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj) in Xcode.
 2. Run the app (cmd+R).
-3. In app UI pick a model and tokenizer to use, type a prompt and tap the arrow buton as on the [video](../_static/img/llama_ios_app.mp4).
+3. In the app UI, pick a model and tokenizer to use, type a prompt, and tap the arrow button.
 
 ```{note}
-ExecuTorch runtime is distributed as a Swift package providing some .xcframework as prebuilt binary targets. Xcode will dowload and cache the package on the first run, which will take some time.
+The ExecuTorch runtime is distributed as a Swift package that provides .xcframework prebuilt binary targets.
+Xcode will download and cache the package on the first run, which will take some time.
 ```
 
 ## Copy the model to Simulator
````
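
The run steps in the hunk above assume the Xcode GUI (cmd+R). If you just want to sanity-check from the command line that the project builds, something along these lines should work; the `LLaMA` scheme name and the simulator destination are assumptions (check the scheme list in Xcode and your installed simulators), and running the app itself is still easiest from Xcode.

```bash
# Sketch only — assumes the shared scheme is named "LLaMA" (verify in Xcode)
# and that an iPhone 15 simulator runtime is installed; run from the repo root.
xcodebuild build \
  -project examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj \
  -scheme LLaMA \
  -destination 'platform=iOS Simulator,name=iPhone 15'
```

The first build also resolves and fetches the ExecuTorch Swift package mentioned in the note, so expect it to take a while.
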
````diff
@@ -33,5 +44,11 @@ ExecuTorch runtime is distributed as a Swift package providing some .xcframework
 2. Navigate to the Files tab and drag and drop the model and tokenizer files onto the iLLaMA folder.
 3. Wait until the files are copied.
 
+Click the image below to see it in action!
+
+<a href="https://pytorch.org/executorch/main/_static/img/llama_ios_app.mp4">
+<img src="https://pytorch.org/executorch/main/_static/img/llama_ios_app.png" width="600" alt="iOS app running a LLaMA model">
+</a>
+
 ## Reporting Issues
 If you encounter any bugs or issues following this tutorial, please file a bug/issue here on [GitHub](https://github.com/pytorch/executorch/issues/new).
````
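
As an alternative to dragging the files into the Files app, the model and tokenizer can be pushed into the simulator from the command line. The sketch below assumes the demo app is already installed on a booted simulator and uses placeholders: `<bundle-id>` stands for the app's bundle identifier (visible in the Xcode target settings), and the file names stand for whatever your exported model and tokenizer are called. It also assumes the app's "iLLaMA" folder in Files maps to its Documents directory, which is the usual arrangement.

```bash
# Sketch only — <bundle-id> is a placeholder for the demo app's bundle
# identifier; llama2.pte and tokenizer.bin are placeholder file names.
# Locate the app's data container on the booted simulator...
APP_DATA="$(xcrun simctl get_app_container booted <bundle-id> data)"

# ...then copy the exported model and tokenizer into its Documents folder.
cp llama2.pte tokenizer.bin "$APP_DATA/Documents/"
```
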
