'llama.cpp/quantize' does not exist. #13205
Unanswered
LMseventeen asked this question in Q&A
Replies: 1 comment 1 reply
```
Unsloth: Installing llama.cpp. This might take 3 minutes...
Traceback (most recent call last):
  File "/root/autodl-tmp/unsloth-custom/main.py", line 115, in <module>
    model.push_to_hub_gguf("Wuhao17/testserver", tokenizer, token = "hf_***")
  File "/root/autodl-tmp/unsloth-custom/.venv/lib/python3.10/site-packages/unsloth/save.py", line 2034, in unsloth_push_to_hub_gguf
    all_file_locations, want_full_precision = save_to_gguf(
  File "/root/autodl-tmp/unsloth-custom/.venv/lib/python3.10/site-packages/unsloth/save.py", line 1085, in save_to_gguf
    raise RuntimeError(
RuntimeError: Unsloth: The file ('llama.cpp/llama-quantize' or 'llama.cpp/llama-quantize.exe' if you are on Windows WSL) or 'llama.cpp/quantize' does not exist.
But we expect this file to exist! Maybe the llama.cpp developers changed the name or check extension of the llama-quantize file.
```
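The traceback shows Unsloth failing because no quantize binary exists under the `llama.cpp` directory. A minimal sketch of that lookup (the function name, variable names, and the `build/bin` candidate are my own assumptions, not Unsloth's actual code) can be run locally to see which of the paths from the error message are missing:

```python
import os

def find_quantize_binary(root="llama.cpp"):
    """Return the first existing quantize binary under `root`, else None.

    Candidate paths mirror the ones Unsloth's error message lists;
    the build/bin entry is an assumption about CMake-built llama.cpp.
    """
    candidates = [
        os.path.join(root, "llama-quantize"),        # current binary name
        os.path.join(root, "llama-quantize.exe"),    # Windows / WSL
        os.path.join(root, "quantize"),              # pre-rename llama.cpp builds
        os.path.join(root, "build", "bin", "llama-quantize"),  # assumed CMake output dir
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    return None

print(find_quantize_binary() or "no quantize binary found -- build llama.cpp first")
```

If none of the candidates exist, one common remedy is to build llama.cpp manually (for example `cmake llama.cpp -B llama.cpp/build` then `cmake --build llama.cpp/build --target llama-quantize`) and copy the resulting binary out of `llama.cpp/build/bin/` to the path Unsloth checks; newer llama.cpp releases renamed the `quantize` tool to `llama-quantize`, which is what the error message hints at.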