enable 2 llama UT cases on xpu #37126
Conversation
enable tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits and tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits_bf16 on xpu
Signed-off-by: YAO Matrix <[email protected]>
Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the `Ready for review` button.
Hi @yao-matrix, thank you for this PR. It's not good to use […]. In #36569, @ivarflakstad introduced a way to better deal with more general expected values. Could you try to use that new approach in this PR? Thank you!
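For context, the `Expectations` helper from #36569 groups per-device expected values and resolves the right one at runtime. Below is a minimal sketch of that pattern, assuming the helper is importable from `transformers.testing_utils`, keys on `(device_type, major version)` tuples, and exposes `get_expectation()`; the tensor values are placeholders, not the PR's real ground truth (see #36569 for the authoritative API):

```python
import torch
from transformers.testing_utils import Expectations

# Per-device expected values, keyed by (device_type, major version).
# The numbers here are illustrative placeholders only.
expected_means = Expectations(
    {
        ("cuda", 7): torch.tensor([-6.65, -4.16, -4.99]),
        ("cuda", 8): torch.tensor([-6.63, -4.14, -4.97]),
        ("xpu", 3): torch.tensor([-6.63, -4.14, -4.97]),
    }
)

# get_expectation() picks the entry matching the accelerator the test runs on.
expected_mean = expected_means.get_expectation()
```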
@ydshieh I switched to use Expectations, please help review. Thanks!
ydshieh left a comment
Thank you for the iteration 🙏 💯
SunMarc left a comment
Nice
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Let me run it on the CI runner and merge if everything is good!
* enable tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits and tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits_bf16 on xpu
* switch to use Expectations
* fix style
* extract gen bits from architecture and use it
* add cross reference
* fix style

Signed-off-by: YAO Matrix <[email protected]>
Co-authored-by: Marc Sun <[email protected]>
case 1: pytest -rA tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits
case 2: pytest -rA tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits_bf16
Neither test had XPU-specific expected values, so they are added under key 0 and reuse the A100 ground truth. Both pass on a Ponte Vecchio XPU.
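A rough sketch of what that keying could look like (hypothetical: the tensor values are placeholders, and the exact key the helper uses when no architecture criteria apply is assumed to be the 0 mentioned above):

```python
import torch
from transformers.testing_utils import Expectations

# Placeholder slice standing in for the A100 (cuda, 8) ground truth.
A100_EXPECTED_SLICE = torch.tensor([-12.83, -7.20, -0.70, -7.99])

expected_slices = Expectations(
    {
        ("cuda", 8): A100_EXPECTED_SLICE,
        # No XPU-specific reference exists yet, so the same values are
        # registered for XPU under key 0 (no architecture filter);
        # they were verified to pass on Ponte Vecchio.
        ("xpu", 0): A100_EXPECTED_SLICE,
    }
)
expected_slice = expected_slices.get_expectation()
```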