GLM-4.1V Model support #38431
Conversation
Force-pushed from 5f515ac to 7e670eb
**Cyrilvallez** left a comment:
Alright, I fixed the remaining parts, and confirmed that the model is still working as expected on real checkpoints - merging now! Thanks for the work!! 🤗🚀
@zRzRzRzRzRzRzR Thank you for adding this model. This model's tests are quite slow, as you can see in the following list, and they cause the job running them to take 12 minutes instead of the ~4 minutes other jobs take. Would you be up to making them faster? Usually it means tweaking
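The usual way such test suites are sped up is to shrink the dimensions in the model tester's config so each test instantiates a tiny random-weight model instead of a realistically sized one. A minimal sketch of that idea, with illustrative names and values only (this is not the actual GLM-4.1V tester code):

```python
# Hypothetical sketch: model tests stay fast when the tester builds a
# deliberately tiny config. All class names and values are illustrative.

class TinyGlm4vModelTester:
    """Builds a tiny config so each test runs in seconds, not minutes."""

    def __init__(self):
        self.hidden_size = 32          # vs. thousands in the real model
        self.num_hidden_layers = 2     # vs. dozens in the real model
        self.num_attention_heads = 4
        self.intermediate_size = 37
        self.vocab_size = 99

    def get_config(self):
        # In a real transformers test suite this would return a config
        # object; a plain dict keeps the sketch self-contained.
        return {
            "hidden_size": self.hidden_size,
            "num_hidden_layers": self.num_hidden_layers,
            "num_attention_heads": self.num_attention_heads,
            "intermediate_size": self.intermediate_size,
            "vocab_size": self.vocab_size,
        }

config = TinyGlm4vModelTester().get_config()
```

The point is that every test then exercises the full forward/backward code paths on a model that takes milliseconds to build, which is what brings a 12-minute job back toward the ~4-minute baseline.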
## Summary

This PR adds support for GLM-4.1V (GLM-4 Vision) models to the Liger Kernel (#854).

https://huggingface.co/zai-org/GLM-4.1V-9B-Thinking

This model has been merged in huggingface/transformers#38431.

## Testing Done

- Hardware Type: <BLANK>
- [x] run `make test` to ensure correctness
- [x] run `make checkstyle` to ensure code style
- [x] run `make test-convergence` to ensure convergence

Co-authored-by: Shao Tang <[email protected]>