
Commit 2c0f725

mergennachin authored and pytorchbot committed
Fixing minor issues in llama2 7b repro (#2926)
Summary:
Pull Request resolved: #2926

Fixing issues we've seen in #2907 and #2805

bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks

Reviewed By: iseeyuan, cccclai

Differential Revision: D55893925

fbshipit-source-id: c6e0264d868cb487faf02f95ff1bd223cbcc97ac
(cherry picked from commit 6db9d72)
1 parent e193c71 commit 2c0f725

File tree

1 file changed: +8 -1 lines changed


examples/models/llama2/README.md

Lines changed: 8 additions & 1 deletion
@@ -61,10 +61,17 @@ You can export and run the original Llama2 7B model.
 
 1. Llama2 pretrained parameters can be downloaded from [Meta's official website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) or from [Hugging Face](https://huggingface.co/meta-llama/Llama-2-7b).
 
-2. Export model and generate `.pte` file:
+2. Edit `params.json` file. Replace `"vocab_size": -1` with `"vocab_size": 32000`. This is a short-term workaround.
+
+3. Export model and generate `.pte` file:
 ```
 python -m examples.models.llama2.export_llama --checkpoint <checkpoint.pth> --params <params.json> -kv --use_sdpa_with_kv_cache -X -qmode 8da4w --group_size 128 -d fp32
 ```
+4. Create tokenizer.bin.
+
+```
+python -m examples.models.llama2.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin
+```
 
 ### Option B: Download and export stories110M model
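The `vocab_size` edit from step 2 of the updated instructions can also be applied programmatically. Below is a minimal sketch, not part of this commit, assuming `params.json` and `tokenizer.model` sit in the current directory and that `tokenizer.model` is a SentencePiece model as shipped with Llama2; adjust the paths to wherever the downloaded files live.

```
# patch_params.py -- sketch of the "vocab_size" workaround from step 2.
import json

PARAMS_PATH = "params.json"  # assumed location of the downloaded params file

with open(PARAMS_PATH) as f:
    params = json.load(f)

# Llama2 7B uses a 32000-token vocabulary; the downloaded file ships with -1.
if params.get("vocab_size", -1) == -1:
    params["vocab_size"] = 32000

with open(PARAMS_PATH, "w") as f:
    json.dump(params, f, indent=2)

# Optional sanity check against the SentencePiece tokenizer, assuming the
# `sentencepiece` package is installed and tokenizer.model is present.
try:
    import sentencepiece as spm

    sp = spm.SentencePieceProcessor()
    sp.load("tokenizer.model")
    assert sp.get_piece_size() == params["vocab_size"], "vocab_size mismatch"
except ImportError:
    pass  # sentencepiece not installed; skip the check
```

After running the sketch, `params.json` should contain `"vocab_size": 32000` and the export command from step 3 can be run unchanged.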
