Does this work with Synthia-7B-v1.3 #3396
Comments
It works for me. Are you using the latest version?
Yes, I built it from source just before I posted. Can you run me through the exact steps you're doing on your machine?
```sh
git clone --depth 1 https://huggingface.co/migtissera/Synthia-7B-v1.3
./convert.py Synthia-7B-v1.3/
```

Note that until very recently there was an issue converting some HF models that use GQA, such as this one. You need to update llama.cpp to the current master.
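If you're not sure your checkout is current, here's a minimal sketch of the update-and-rebuild step, assuming a git checkout of llama.cpp built with make:

```sh
# Assumes you are inside an existing llama.cpp source checkout.
git pull origin master   # pick up the recent GQA conversion fix
make clean && make       # rebuild the binaries from the updated source
```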
It works for me too.
There are also prequantized GGUF files available here: https://huggingface.co/TheBloke/Synthia-7B-v1.3-GGUF/tree/main
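If you go that route, a rough sketch of downloading one of those files and running it; the exact filename is an assumption based on TheBloke's usual naming, so check the repo's file list for the real quant names:

```sh
# Hypothetical filename; substitute whichever quant you actually download.
wget https://huggingface.co/TheBloke/Synthia-7B-v1.3-GGUF/resolve/main/synthia-7b-v1.3.Q4_0.gguf
# Run a short generation against the downloaded model.
./main -m synthia-7b-v1.3.Q4_0.gguf -p "Hello" -n 64
```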
As mentioned, I'm trying to run this model. More info on it here.
I've managed to convert and quantize it without errors, but when I run it I get this:
I converted it with

```sh
convert.py model_dir --ctx 4096
```
and then quantized to q4_0.
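For reference, a minimal sketch of that full pipeline; the output filenames are assumptions (convert.py picks its own default name, commonly ggml-model-f16.gguf), so adjust the paths to whatever you actually get:

```sh
# Convert the HF checkpoint to GGUF (f16 by default), recording a 4096 training context.
python3 convert.py model_dir --ctx 4096
# Quantize the f16 GGUF down to q4_0; input/output paths here are assumptions.
./quantize model_dir/ggml-model-f16.gguf model_dir/ggml-model-q4_0.gguf q4_0
```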