Merging #7568 with #7430 (Implementing LLaMA 3 torch to gguf conversion) #7651
Conversation
I thought the only supported script now is convert-hf-to-gguf.py, according to #7658 (comment)?
I mean yeah, I would assume you would receive more support if you were to use that and something breaks while doing so, but you COULD still use this script if you really want to convert from the pytorch files; you just might not get much help if you encounter an issue.
hi can someone merge pls 🥺
I appreciate the effort, but I'm not sure about extending the legacy convert script to support Llama-3, because it would need to be kept in sync with how it's handled by convert_hf_to_gguf.py.
If you really want to make the legacy convert script also work with the newer Llama-3.1 models, then you'll also need to handle the RoPE freqs (hence the reduction in maintenance burden by only using convert_hf_to_gguf.py).
but you COULD still use this script if you really want to convert from the pytorch files, you just might not get much help if you encounter an issue
Which pytorch files for Llama-3? AFAIK the official weights are in the safetensors format. And convert_hf_to_gguf.py also supports converting from pytorch_model.bin files. However, the legacy convert script is the only one which also supports converting from .pth files (because Llama-1 and Llama-2 used that when they were released, I think).
I'm not sure I understand the use-case to allow using the legacy convert script with the Llama-3* models. I'd like that use-case to be more clearly explained.
#TODO: add more bandaids for llama 3 detection
if is_llama3_tokenizer(os.path.join(args.model, "tokenizer.model")):
    global is_llama3_model
    write_llama3_tokenizer(args.model, os.path.join(args.model, "tokenizer.model"))
What if there's already a tokenizer.model there? Should it be overwritten or used as-is?
The convert scripts don't normally modify the model files, so I think there should be a presence check.
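A minimal sketch of the kind of presence check being suggested, reusing is_llama3_tokenizer, write_llama3_tokenizer and args from the diff above; the assumption that the converted output lands in tokenizer.json is mine and may not match the PR exactly:

import os

tokenizer_model_path = os.path.join(args.model, "tokenizer.model")
converted_path = os.path.join(args.model, "tokenizer.json")  # assumed output location

if is_llama3_tokenizer(tokenizer_model_path):
    is_llama3_model = True
    if not os.path.exists(converted_path):
        write_llama3_tokenizer(args.model, tokenizer_model_path)
    # otherwise the existing converted tokenizer is used as-is, so the
    # convert script never overwrites files that are already there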
from transformers import PreTrainedTokenizerFast
from transformers.convert_slow_tokenizer import TikTokenConverter
convert-legacy-llama.py did not depend on transformers before, I think (I might be mistaken).
Is there a way this could be imported only when a Llama 3 model is detected? Maybe by defining the Llama3Converter class in write_llama3_tokenizer, and importing at the start of that function?
Otherwise a comment saying these are for Llama 3 might be helpful.
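As a sketch of that deferred-import idea (the function name and signature are taken from this PR's diff, the body is only illustrative):

def write_llama3_tokenizer(model_dir: str, tokenizer_model_path: str) -> None:
    # Import transformers only on the Llama 3 code path, so the legacy convert
    # script keeps working when transformers is not installed.
    from transformers import PreTrainedTokenizerFast
    from transformers.convert_slow_tokenizer import TikTokenConverter

    class Llama3Converter(TikTokenConverter):
        # Defined inside the function so the class (and its transformers base)
        # only exists when this path is taken; the real body is elided here.
        pass

    # The rest of the conversion (building the PreTrainedTokenizerFast from the
    # converter and saving it into model_dir) would follow here, as in the PR.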
chat_template = (
    "{% set loop_messages = messages %}"
    "{% for message in loop_messages %}"
    "{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}"
    "{% if loop.index0 == 0 %}"
    "{% set content = bos_token + content %}"
    "{% endif %}"
    "{{ content }}"
    "{% endfor %}"
    "{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}"
)
num_reserved_special_tokens = 256
special_tokens = [
    "<|begin_of_text|>",
    "<|end_of_text|>",
    "<|reserved_special_token_0|>",
    "<|reserved_special_token_1|>",
    "<|reserved_special_token_2|>",
    "<|reserved_special_token_3|>",
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|reserved_special_token_4|>",
    "<|eot_id|>",  # end of turn
] + [f"<|reserved_special_token_{i}|>" for i in range(5, num_reserved_special_tokens - 5)]
tokenizer.add_special_tokens(special_tokens)
Not sure about hardcoding the special tokens. These should be read from the added_tokens field in tokenizer.json. Maybe PreTrainedTokenizerFast or TikTokenConverter makes that information available somewhere.
It should also be possible to handle the chat template, bos and eos tokens simply and more generally with gguf.SpecialVocab from gguf-py/gguf/vocab.py.
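For reference, a minimal sketch of that gguf.SpecialVocab approach; the writer setup is only for context and the path/arguments are illustrative, not taken from this PR:

from pathlib import Path
import gguf

model_dir = Path("Meta-Llama-3-8B")            # hypothetical model directory
writer = gguf.GGUFWriter("model.gguf", "llama")

# SpecialVocab reads the added/special tokens, BOS/EOS ids and the chat template
# from tokenizer.json / tokenizer_config.json and writes them into the GGUF,
# instead of hardcoding them in the convert script.
special_vocab = gguf.SpecialVocab(model_dir, load_merges=True)
special_vocab.add_to_gguf(writer)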
Apologies for making another PR for this, but I messed up the fork merging in my repo :(
Original PR
Vocab stuff probably still works; tested the unquantized 8B model with simple math problems: