
Commit 223b38d

byjlw and Jack-Khuu authored

make sure we get details from users when they're filing issues (pytorch#950)

* make sure we get details from users when they're filing issues
* Update README.md

Co-authored-by: Jack-Khuu <[email protected]>

1 parent 196577d · commit 223b38d

File tree

2 files changed, 19 insertions(+), 10 deletions(-)

.gitignore (3 additions, 0 deletions)

````diff
@@ -18,3 +18,6 @@ runner-aoti/cmake-out/*
 
 # pte files
 *.pte
+
+# debug / logging files
+system_info.txt
````
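As a quick sanity check on the new ignore entries, `git check-ignore` can confirm that the generated log file is excluded. A minimal sketch in a throwaway repository (the paths here are illustrative, not part of the commit):

```shell
# Sketch: verify that the new .gitignore entry actually ignores the log file.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf '# debug / logging files\nsystem_info.txt\n' > .gitignore
touch system_info.txt
# check-ignore prints the path and exits 0 when the file is ignored
git check-ignore system_info.txt
```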

README.md (16 additions, 10 deletions)
````diff
@@ -91,7 +91,7 @@ python3 torchchat.py list
 ```
 
 ### Where
-This subcommands shows location of a particular model.
+This subcommands shows location of a particular model.
 ```bash
 python3 torchchat.py list
 ```
````
````diff
@@ -162,7 +162,7 @@ python3 torchchat.py server llama3
 [skip default]: end
 
 In the other terminal window, interact with the API using curl. Depending on the model configuration, this query might take a few minutes to respond
-
+
 **Example Input + Output**
 
 ```
````
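The "Example Input + Output" content itself is not captured in this diff view. As a hedged sketch only, a curl interaction might be prepared like the following; the model name, port, and `/v1/chat/completions` route are assumptions here, not values confirmed by this commit:

```shell
# Hypothetical request body; model name and message are placeholders.
cat > request.json <<'EOF'
{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}
EOF

# With the server running locally, the request could then be sent with curl
# (host, port, and route below are assumptions):
# curl -s http://127.0.0.1:5000/v1/chat/completions \
#   -H "Content-Type: application/json" -d @request.json
```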
````diff
@@ -222,7 +222,7 @@ To run in a C++ enviroment, we need to build the runner binary.
 scripts/build_native.sh aoti
 ```
 
-Then run the compiled executable, with the exported DSO from earlier:
+Then run the compiled executable, with the exported DSO from earlier:
 ```bash
 cmake-out/aoti_run exportedModels/llama3.so -z `python3 torchchat.py where llama3`/tokenizer.model -l 3 -i "Once upon a time"
 ```
````
````diff
@@ -344,7 +344,7 @@ The following assumes you've completed the steps for [Setting up ExecuTorch](#set-up-executorch).
 
 ### Deploy and run on Android
 
-The following assumes you've completed the steps for [Setting up ExecuTorch](#set-up-executorch).
+The following assumes you've completed the steps for [Setting up ExecuTorch](#set-up-executorch).
 
 <details>
 <summary>Approach 1 (Recommended): Android Studio</summary>
````
````diff
@@ -358,7 +358,7 @@ The following assumes you've completed the steps for [Setting up ExecuTorch](#set-up-executorch).
 
 #### Steps
 
-1. Download the AAR file, which contains the Java library and corresponding JNI library, to build and run the app.
+1. Download the AAR file, which contains the Java library and corresponding JNI library, to build and run the app.
 
 - [executorch-llama-tiktoken-rc3-0719.aar](https://ossci-android.s3.amazonaws.com/executorch/main/executorch-llama-tiktoken-rc3-0719.aar) (SHASUM: c3e5d2a97708f033c2b1839a89f12f737e3bbbef)
````
````diff
@@ -371,7 +371,7 @@ The following assumes you've completed the steps for [Setting up ExecuTorch](#set-up-executorch).
 adb push <tokenizer.model or tokenizer.bin> /data/local/tmp/llama
 ```
 
-4. Use Android Studio to open the torchchat app skeleton, located at `android/Torchchat`.
+4. Use Android Studio to open the torchchat app skeleton, located at `android/Torchchat`.
 
 5. Click the Play button (^R) to launch it to emulator/device.
````
````diff
@@ -382,13 +382,13 @@ The following assumes you've completed the steps for [Setting up ExecuTorch](#set-up-executorch).
 
 <img src="https://pytorch.org/executorch/main/_static/img/android_llama_app.png" width="600" alt="Android app running a LlaMA model">
 
-**Note:** The AAR file listed above comes with tiktoken tokenizer, which is used for llama3 model. If you want to use a model with BPE tokenizer (llama2 model for example),
-use this AAR
+**Note:** The AAR file listed above comes with tiktoken tokenizer, which is used for llama3 model. If you want to use a model with BPE tokenizer (llama2 model for example),
+use this AAR
 
 * [executorch-llama-bpe-rc3-0719.aar](https://ossci-android.s3.amazonaws.com/executorch/main/executorch-llama-bpe-rc3-0719.aar) (SHASUM: d5fe81d9a4700c36b50ae322e6bf34882134edb0)
 * Since the tokenizer is built at compile time, to use a different tokenizer you need to re-build the app.
 
-If you need to tweak or use your own tokenizer and runtime, modify the ExecuTorch code and use [this script](https://github.com/pytorch/executorch/blob/main/build/build_android_llm_demo.sh) to build the AAR library.
+If you need to tweak or use your own tokenizer and runtime, modify the ExecuTorch code and use [this script](https://github.com/pytorch/executorch/blob/main/build/build_android_llm_demo.sh) to build the AAR library.
 
 
 </details>
````
````diff
@@ -488,7 +488,6 @@ We really value our community and the contributions made by our wonderful users.
 
 ## Troubleshooting
 
-
 **CERTIFICATE_VERIFY_FAILED**
 Run `pip install --upgrade certifi`.
 
````
````diff
@@ -500,6 +499,13 @@ link provided in the error to get access.
 **Installing ET Fails**
 If `./scripts/install_et.sh` fails with an error like `Building wheel for executorch (pyproject.toml) did not run successfully` It's possible that it's linking to an older version of pytorch installed some other way like via homebrew. You can break the link by uninstalling other versions such as `brew uninstall pytorch` Note: You may break something that depends on this, so be aware.
 
+## Filing Issues
+Please include the exact command you ran and the output of that command.
+Also, run this script and include the output saved to `system_info.txt` so that we can better debug your issue.
+
+```
+(echo "Operating System Information"; uname -a; echo ""; cat /etc/os-release; echo ""; echo "Python Version"; python --version || python3 --version; echo ""; echo "PIP Version"; pip --version || pip3 --version; echo ""; echo "Installed Packages"; pip freeze || pip3 freeze; echo ""; echo "PyTorch Version"; python -c "import torch; print(torch.__version__)" || python3 -c "import torch; print(torch.__version__)"; echo ""; echo "Collection Complete") > system_info.txt
+```
 
 ## Disclaimer
 The torchchat Repository Content is provided without any guarantees
````
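The one-liner the commit adds to the Filing Issues section is dense. The same collection can be sketched as a readable multi-line script; this is a non-authoritative equivalent, with `2>/dev/null` guards added here so it stays quiet on platforms missing one of the tools (e.g. no `/etc/os-release` on macOS):

```shell
#!/bin/sh
# Readable sketch of the system_info.txt one-liner; the /dev/null
# redirections are additions, not part of the committed command.
{
  echo "Operating System Information"
  uname -a
  echo ""
  cat /etc/os-release 2>/dev/null
  echo ""
  echo "Python Version"
  python --version 2>/dev/null || python3 --version
  echo ""
  echo "PIP Version"
  pip --version 2>/dev/null || pip3 --version
  echo ""
  echo "Installed Packages"
  pip freeze 2>/dev/null || pip3 freeze
  echo ""
  echo "PyTorch Version"
  python -c "import torch; print(torch.__version__)" 2>/dev/null || \
    python3 -c "import torch; print(torch.__version__)" 2>/dev/null
  echo ""
  echo "Collection Complete"
} > system_info.txt
```

Each probe falls back from `python`/`pip` to `python3`/`pip3`, and the final `echo` always runs, so `system_info.txt` is produced even when PyTorch is not importable.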
