
Support model evaluation on Intel Gaudi #20


Merged
4 commits merged into princeton-nlp:main on Apr 9, 2025

Conversation

@XinyaoWa XinyaoWa (Contributor) commented Mar 13, 2025

  1. We add a new model class, TgiVllmModel, which can run evaluation against a TGI or vLLM endpoint (a rough sketch of the idea is shown below).
  2. We enable vLLM on Gaudi for evaluation and add detailed scripts and a README.

The original implementation refers to minmin's code.
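For context, here is a minimal sketch of what an endpoint-backed model class can look like, assuming the server exposes an OpenAI-compatible /v1/chat/completions route (vLLM serves this by default, and TGI's Messages API accepts the same payload shape). The class name matches the PR, but the constructor arguments, the generate signature, and the request path are illustrative assumptions, not the actual HELMET interface:

```python
import requests


class TgiVllmModel:
    """Queries a running TGI or vLLM server instead of loading model weights locally."""

    def __init__(self, endpoint: str, model_name: str, max_new_tokens: int = 256):
        # e.g. endpoint="http://localhost:8000" for a locally launched vLLM server
        self.endpoint = endpoint.rstrip("/")
        self.model_name = model_name
        self.max_new_tokens = max_new_tokens

    def generate(self, prompt: str, temperature: float = 0.0) -> str:
        # OpenAI-compatible chat completions request.
        payload = {
            "model": self.model_name,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": self.max_new_tokens,
            "temperature": temperature,
        }
        resp = requests.post(
            f"{self.endpoint}/v1/chat/completions", json=payload, timeout=600
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


# Hypothetical usage (endpoint and model name are placeholders):
# model = TgiVllmModel("http://localhost:8000", "meta-llama/Llama-3.1-8B-Instruct")
# print(model.generate("What does the HELMET benchmark measure?"))
```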

@howard-yen howard-yen (Collaborator) left a comment


Thanks for the hard work! Please see the suggestions on each file and make changes accordingly, and please also rebase the PR onto the latest main branch. Thanks!

@XinyaoWa XinyaoWa (Contributor, Author) commented Apr 8, 2025

Hi @howard-yen, thanks for your comments. I have updated everything, please take another look.

@joshuayao joshuayao added this to OPEA Apr 9, 2025
@howard-yen howard-yen merged commit 4526dfb into princeton-nlp:main Apr 9, 2025
1 check passed
@github-project-automation github-project-automation bot moved this to Done in OPEA Apr 9, 2025
@howard-yen howard-yen (Collaborator)

@XinyaoWa Thanks for the hard work, merged!

Development

Successfully merging this pull request may close these issues.

[Feature] refactor HELMET: support vllm-gaudi for agent related benchmark
2 participants