Commit b713e53: upgrade lm_eval to 0.4.5

ghstack-source-id: ea34312
ghstack-comment-id: 2445169575
Pull Request resolved: #6555
1 parent: 2c32bf3

File tree

2 files changed: 6 additions, 2 deletions

examples/models/llama/evaluate/eager_eval.py (5 additions, 1 deletion)

@@ -31,7 +31,7 @@ def __init__(
         use_kv_cache: bool = False,
     ):
         device = "cuda" if torch.cuda.is_available() else "cpu"
-        super().__init__(device=device)
+        super().__init__(device=device, pretrained="gpt2")
         self._model = model
         self._tokenizer = tokenizer
         self._device = torch.device(device)
@@ -47,6 +47,10 @@ def eot_token_id(self):
             return self._tokenizer.eot_id
         return self._tokenizer.eos_id
 
+    @property
+    def prefix_token_id(self):
+        return self._tokenizer.eos_id
+
     @property
     def max_length(self):
         return self._max_seq_length
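The eager_eval.py change tracks lm_eval's 0.4.x interface, which expects a `prefix_token_id` from model wrappers (used when building loglikelihood contexts) and a `pretrained` argument in the `HFLM` constructor; the `"gpt2"` value is only a placeholder there, since the actual model is supplied separately. A minimal, self-contained sketch of the token-id fallback pattern the diff adds (the `Tok` class here is hypothetical, not from the PR):

```python
from typing import Optional


class Tok:
    """Hypothetical tokenizer: llama3-style defines eot_id, llama2-style only eos_id."""

    def __init__(self, eos_id: int, eot_id: Optional[int] = None):
        self.eos_id = eos_id
        self.eot_id = eot_id


class EvalWrapper:
    """Sketch of the eval-wrapper properties shown in the diff."""

    def __init__(self, tokenizer: Tok):
        self._tokenizer = tokenizer

    @property
    def eot_token_id(self) -> int:
        # Prefer the end-of-turn id when the tokenizer provides one,
        # otherwise fall back to the end-of-sequence id.
        if self._tokenizer.eot_id is not None:
            return self._tokenizer.eot_id
        return self._tokenizer.eos_id

    @property
    def prefix_token_id(self) -> int:
        # The new property from the diff: lm_eval 0.4.5 consults this
        # when scoring; returning eos_id mirrors the PR's choice.
        return self._tokenizer.eos_id
```

For a tokenizer with only `eos_id`, both properties return it; with an `eot_id`, only `eot_token_id` switches over while `prefix_token_id` stays on `eos_id`.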

examples/models/llama/install_requirements.sh (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ pip install --no-use-pep517 "git+https://github.com/pytorch/ao.git@${TORCHAO_VER
 
 # Install lm-eval for Model Evaluation with lm-evalution-harness
 # Install tiktoken for tokenizer
-pip install lm_eval==0.4.2
+pip install lm_eval==0.4.5
 pip install tiktoken blobfile
 
 # Call the install helper for further setup
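Since the two files must move together (the new `pretrained=` and `prefix_token_id` code assumes the 0.4.5 API), one way to confirm the pin took effect after running the script is to print the installed version; this verification step is a suggestion, not part of the script:

```shell
# Install the pinned evaluation harness, as the script now does.
pip install lm_eval==0.4.5

# Check that the environment actually resolved to the pinned release.
python -c "import lm_eval; print(lm_eval.__version__)"
```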
