[Feature] When will PPL-based evaluation of base models support vLLM? #970
Comments
Actually, we already support vLLM and LMDeploy; see opencompass/opencompass/models/vllm.py, line 14 in 3098d78.
Sorry, I didn't express myself clearly. Base model evaluation generally uses PPL-based tests, but PPL-based tasks don't support vLLM.
I'm sorry, but inference backends like vLLM and LMDeploy generally do not support PPL.
In fact, vLLM already provides a feature for returning log-probabilities of the prompt tokens; OpenCompass could use it to compute PPL for a given prompt.
@bittersweet1999 That should be the case. Hello, could you take another look at this issue? You can refer to the EleutherAI/lm-evaluation-harness evaluation framework; they appear to use this method to support computing PPL with vLLM: https://github.com/EleutherAI/lm-evaluation-harness/blob/28ec7fa950346b5a895e85e1f3edd5648168acc4/lm_eval/models/vllm_causallms.py#L183-L184
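For context, here is a minimal sketch of how vLLM's prompt log-probabilities could be turned into a perplexity score. This is not the OpenCompass implementation; the model name and prompt are placeholders, and the exact result fields may vary between vLLM versions:

```python
import math
from vllm import LLM, SamplingParams

# Placeholder model; any causal LM supported by vLLM works the same way.
llm = LLM(model="facebook/opt-125m")

# prompt_logprobs asks vLLM to score the prompt tokens themselves;
# max_tokens=1 because we do not care about the generated continuation.
params = SamplingParams(max_tokens=1, temperature=0.0, prompt_logprobs=1)

outputs = llm.generate(["OpenCompass evaluates base models with perplexity."], params)

for out in outputs:
    # out.prompt_logprobs[i] is a dict {token_id: Logprob} for position i;
    # the first entry is None because the first token has no preceding context.
    lps = [
        out.prompt_logprobs[i][tid].logprob
        for i, tid in enumerate(out.prompt_token_ids)
        if out.prompt_logprobs[i] is not None
    ]
    ppl = math.exp(-sum(lps) / len(lps))
    print(f"PPL = {ppl:.2f}")
```

Exponentiating the negative mean of the per-token log-probabilities gives the prompt's perplexity, which is what PPL-based base-model tasks need.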
Hi, we now support getting PPL with vLLM and LMDeploy; for more information, please see #1003.
Describe the feature
As the title says: when will base-model evaluation and the corresponding tasks support vLLM? Evaluating with HF is unacceptably slow.
Would you like to implement this feature yourself?