Update Hugging Face version #6489
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6489. Note: links to docs will display an error until the docs builds have been completed.
As of commit 51f64f3 with merge base d7826c8: 1 new failure.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@guangy10 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Ok, the circular import issue is still valid. I will resolve it in a separate PR.
Pull Request resolved: #6533

We have been using a fairly old `lm_eval` version. This is blocking us from upgrading other libraries like `transformers` and blocking some other work, for example #6489. In newer versions of `lm_eval`, `pretrainedModel` becomes a required parameter; in 0.4.2, it defaults to `gpt2` if not provided. This PR upgrades our `lm_eval` version to the latest version, 0.4.5, and sets `pretrainedModel` to its original default value, `gpt2`.

Differential Revision: [D65079913](https://our.internmc.facebook.com/intern/diff/D65079913/)
ghstack-source-id: 250754584
Co-authored-by: Lunwen He <[email protected]>
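A minimal sketch of the behavior change described above: `lm_eval` 0.4.2 silently fell back to `gpt2` when no pretrained model was given, while newer releases require it, so after the upgrade the caller restores the old default explicitly. The helper name `build_model_args` is hypothetical, for illustration only, and is not part of `lm_eval`.

```python
def build_model_args(pretrained=None):
    """Build the model_args string handed to lm_eval.

    lm_eval 0.4.2 defaulted the pretrained model to "gpt2" when it was
    not provided; 0.4.5 requires it, so we pass the old default
    ourselves. (Hypothetical helper, not lm_eval API.)
    """
    return f"pretrained={pretrained or 'gpt2'}"

print(build_model_args())             # pretrained=gpt2
print(build_model_args("tinyllama"))  # pretrained=tinyllama
```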
Seems to be a false alarm. Running
Bump version to v4.46.0, where more `transformers` models are compatible with ExecuTorch out-of-the-box. Consolidate on a single version of `transformers` everywhere in the codebase, e.g. `examples/`, `.workflow/`.

However, we hardcode `lm_eval` to a very old version, `0.4.2`, which is not compatible with `transformers >= 4.45` (actually incompatible with `tokenizers >= 0.20`). It must be upgraded, but not in this PR. Since the eval uses tinyllama in the CI, the version of `transformers` doesn't matter there. The workaround is to force-reinstall a lower version in the CI.
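The version constraint described above can be sketched as a small compatibility check. This is an illustration using the `packaging` library, not code from the PR; the boundary versions `4.45` and `0.20` are taken from the text above, and the function name is hypothetical.

```python
from packaging.version import Version

def compatible_with_lm_eval_042(transformers_version, tokenizers_version):
    # Per the PR discussion, lm_eval 0.4.2 breaks with
    # transformers >= 4.45 (really with tokenizers >= 0.20).
    return (Version(transformers_version) < Version("4.45")
            and Version(tokenizers_version) < Version("0.20"))

print(compatible_with_lm_eval_042("4.44.2", "0.19.1"))  # True
print(compatible_with_lm_eval_042("4.46.0", "0.20.1"))  # False
```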