
[Feature] When will vLLM be supported for PPL-based evaluation of base models? #970


Closed
noforit opened this issue Mar 13, 2024 · 6 comments
noforit commented Mar 13, 2024

Describe the feature

As the title says: when will base-model evaluation and the corresponding tasks support vLLM? Evaluating with HF is unbearably slow.

Would you like to implement this feature yourself?

  • I would like to implement this feature myself and contribute the code to OpenCompass!
bittersweet1999 (Collaborator) commented:

Actually, we already support vLLM and LMDeploy; see:

class VLLM(BaseModel):


noforit commented Mar 13, 2024

> Actually we already support VLLM and LMDeploy, see in
> class VLLM(BaseModel):

Sorry, I didn't express this clearly: base models are generally evaluated with PPL-based tasks, but those PPL-based tasks do not support vLLM. Is there a good way around this?

bittersweet1999 (Collaborator) commented:

I'm sorry, but inference backends like vLLM and LMDeploy generally do not support PPL.

silverriver commented:

> I'm sorry but the Inference backends like VLLM and LMDeploy generally do not support PPL.

In fact, vLLM already provides the ability to return logits for prompt tokens:
vllm-project/vllm#1328

OpenCompass can use this feature to compute PPL for a given prompt.
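To illustrate the idea, here is a minimal sketch of turning per-token log-probabilities (such as those vLLM returns for the prompt) into perplexity. The helper name is hypothetical, and the vLLM call is shown only in comments:

```python
import math

def ppl_from_logprobs(token_logprobs):
    """Perplexity = exp(mean negative log-probability) over the tokens.

    `token_logprobs` would come from an inference backend, e.g. (sketch):
        # from vllm import LLM, SamplingParams
        # out = llm.generate(prompt, SamplingParams(max_tokens=1, prompt_logprobs=0))
        # token_logprobs = [...]  # per-token log-probs of the prompt
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Sanity check: a uniform distribution over V tokens assigns log(1/V)
# per token, so its perplexity is exactly V:
print(ppl_from_logprobs([math.log(1 / 50000)] * 10))  # ≈ 50000
```

This is just the definition of perplexity; the only backend-specific part is obtaining the per-token log-probabilities for the prompt rather than for generated tokens.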


noforit commented Mar 22, 2024

> I'm sorry but the Inference backends like VLLM and LMDeploy generally do not support PPL.
>
> In fact, vllm already provided the feature of returning logits for prompts: vllm-project/vllm#1328
>
> opencompass can use this feature to infer PPL for a given prompt.

@bittersweet1999 That should be the case. Could you take another look at this issue? You could refer to the EleutherAI/lm-evaluation-harness evaluation framework; they appear to use this method to support computing PPL with vLLM: https://github.com/EleutherAI/lm-evaluation-harness/blob/28ec7fa950346b5a895e85e1f3edd5648168acc4/lm_eval/models/vllm_causallms.py#L183-L184
This makes their PPL evaluation very fast.
It would be very helpful if OpenCompass could support this feature. Thank you very much for your patient replies.
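For a PPL-style task, the per-choice scores computed this way are then compared across candidate continuations, and the model's prediction is the candidate with the lowest perplexity. A rough pure-Python sketch (function name and data are hypothetical):

```python
import math

def score_choices(logprobs_per_choice):
    """Pick the candidate continuation the model finds most likely.

    Each entry is the list of per-token log-probs for one candidate answer
    (e.g. the prompt log-probs of "question + answer_i" from the backend).
    PPL-based tasks select the choice with the lowest perplexity, which is
    equivalent to the highest mean log-probability per token.
    """
    ppls = [math.exp(-sum(lp) / len(lp)) for lp in logprobs_per_choice]
    return min(range(len(ppls)), key=ppls.__getitem__)

# Choice 1 gets the highest per-token log-probability, so it wins:
choices = [[-2.0, -3.0], [-0.5, -0.7], [-1.5, -2.0]]
print(score_choices(choices))  # → 1
```

Note the mean (rather than the sum) of log-probs, so that candidates of different token lengths are compared fairly.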

@noforit noforit changed the title [Feature] When will vLLM be supported for evaluating base models and the corresponding tasks? [Feature] When will vLLM be supported for PPL-based evaluation of base models? Mar 22, 2024
bittersweet1999 (Collaborator) commented:

Hi, we now support computing PPL with vLLM and LMDeploy. For more information, please see #1003.
