Commit 0afd6c0
upgrade lm_eval to 0.4.5
Pull Request resolved: #6533

We have been using a fairly old `lm_eval` version. This is blocking us from upgrading other libraries, such as `transformers`, and blocking other work, for example #6489. In newer versions of `lm_eval`, `pretrained` becomes a required parameter; in 0.4.2 it defaulted to `gpt2` when not provided. This PR upgrades our `lm_eval` version to the latest release, 0.4.5, and sets `pretrained` explicitly to its original default value, `gpt2`.

Differential Revision: [D65079913](https://our.internmc.facebook.com/intern/diff/D65079913/)

ghstack-source-id: 250754584
1 parent 2c32bf3 commit 0afd6c0
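The breaking change described above can be illustrated with stand-in classes. This is a minimal sketch of the pattern only: `OldHFLM`, `NewHFLM`, and `EagerEvalWrapper` below are hypothetical names, not the real `lm_eval` API. In the old version the base class silently supplied a default model name; in the new version the subclass must pass one explicitly.

```python
# Hedged sketch of the lm_eval 0.4.2 -> 0.4.5 breaking change this commit
# works around. OldHFLM/NewHFLM/EagerEvalWrapper are hypothetical stand-ins,
# not the actual lm_eval classes.

class OldHFLM:
    # 0.4.2-style: a missing `pretrained` silently fell back to "gpt2".
    def __init__(self, pretrained="gpt2", device="cpu"):
        self.pretrained = pretrained
        self.device = device

class NewHFLM:
    # 0.4.5-style: `pretrained` is required; omitting it raises TypeError.
    def __init__(self, pretrained, device="cpu"):
        self.pretrained = pretrained
        self.device = device

class EagerEvalWrapper(NewHFLM):
    # The fix in this commit: pin the old implicit default explicitly.
    def __init__(self, device="cpu"):
        super().__init__(device=device, pretrained="gpt2")

wrapper = EagerEvalWrapper()
print(wrapper.pretrained)  # -> gpt2
```

Pinning `pretrained="gpt2"` preserves the 0.4.2 behavior for callers that never passed a model name, so the upgrade is behavior-preserving for this wrapper.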

2 files changed: +6 -2 lines changed

examples/models/llama/evaluate/eager_eval.py

+5-1
@@ -31,7 +31,7 @@ def __init__(
         use_kv_cache: bool = False,
     ):
         device = "cuda" if torch.cuda.is_available() else "cpu"
-        super().__init__(device=device)
+        super().__init__(device=device, pretrained="gpt2")
         self._model = model
         self._tokenizer = tokenizer
         self._device = torch.device(device)
@@ -47,6 +47,10 @@ def eot_token_id(self):
             return self._tokenizer.eot_id
         return self._tokenizer.eos_id
 
+    @property
+    def prefix_token_id(self):
+        return self.eot_token_id
+
     @property
     def max_length(self):
         return self._max_seq_length
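The new `prefix_token_id` property simply delegates to `eot_token_id`, which in turn prefers the tokenizer's `eot_id` and falls back to `eos_id`. A self-contained sketch of that delegation follows; `FakeTokenizer` and `EvalWrapper` are hypothetical stand-ins for illustration, and the exact fallback condition in the real class is assumed, since the diff shows only the two return statements.

```python
# Sketch of the eot/eos fallback and the new prefix_token_id delegation.
# FakeTokenizer is a hypothetical stand-in for the Llama tokenizer; the
# None-check used for the fallback is an assumption, not the exact
# condition in eager_eval.py.

class FakeTokenizer:
    def __init__(self, eot_id=None, eos_id=2):
        self.eot_id = eot_id  # some tokenizers expose an end-of-turn id
        self.eos_id = eos_id  # all tokenizers expose an end-of-sequence id

class EvalWrapper:
    def __init__(self, tokenizer):
        self._tokenizer = tokenizer

    @property
    def eot_token_id(self):
        # Prefer the end-of-turn id when available, else end-of-sequence.
        if getattr(self._tokenizer, "eot_id", None) is not None:
            return self._tokenizer.eot_id
        return self._tokenizer.eos_id

    @property
    def prefix_token_id(self):
        # Mirrors the diff above: the prefix token defaults to the eot token.
        return self.eot_token_id

print(EvalWrapper(FakeTokenizer(eot_id=128009)).prefix_token_id)  # -> 128009
print(EvalWrapper(FakeTokenizer()).prefix_token_id)               # -> 2
```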

examples/models/llama/install_requirements.sh

+1-1
@@ -15,7 +15,7 @@ pip install --no-use-pep517 "git+https://github.com/pytorch/ao.git@${TORCHAO_VER
 
 # Install lm-eval for Model Evaluation with lm-evalution-harness
 # Install tiktoken for tokenizer
-pip install lm_eval==0.4.2
+pip install lm_eval==0.4.5
 pip install tiktoken blobfile
 
 # Call the install helper for further setup
