quantize: Handle user-defined quantization levels for additional tensors #12511
Conversation
That's an excellent idea, and it'll allow adding all supported tensor types (50+) without creating a mess of parameters. Plus, it will give me something to do over the weekend 😆
Yeah, I think this is definitely the way to go - the regex support of that PR gives really good flexibility.
I started a discussion thread if anyone's interested, so we don't clog this PR: #12741
I'll add the results of my weekend testing there as well.
FWIW, I've been doing this with ik_llama.cpp's llama-quantize --custom-q feature with good success, just in case there is any desire to keep this PR's syntax compatible with it (or not). Specifying exact quants per tensor becomes more important now, as I have a couple of custom quants up on Hugging Face (ubergarm/DeepSeek-V3-0324-GGUF) that use this technique. Here is an example bash script recipe for an experimental CPU-only speed blend (a CPU-only quant performance blend for V3-0324). Note: mainline llama.cpp doesn't have all these quant types, but you can see how the regex tensor<->quant mappings are specified:

```bash
#!/usr/bin/env bash
# CPU only inference blend
# Notes:
# https://github.com/ikawrakow/ik_llama.cpp/issues/296#issuecomment-2765210993
# https://github.com/ikawrakow/ik_llama.cpp/issues/296#issuecomment-2768567062
custom="
# Token embedding and output tensors
# note token_embd cannot be repacked quant type
token_embd\.weight=iq6_k
output\.weight=iq5_k_r4
output_norm\.weight=iq5_k_r4
# First 3 dense layers (0-3)
blk\.[0-2]\.attn_k_b.*=q6_0_r4
blk\.[0-2]\.attn_.*=iq5_k_r4
blk\.[0-2]\..*=iq5_k_r4
# All attention, norm weights, and bias tensors for MoE layers (3-60)
# Except blk.*.attn_k_b.weight is not divisible by 256 and no iq6_k so go with q6_0_r4 for CPU only speed boost
blk\.[3-9]\.attn_k_b.*=q6_0_r4
blk\.[1-5][0-9]\.attn_k_b.*=q6_0_r4
blk\.60\.attn_k_b.*=q6_0_r4
blk\.[3-9]\.attn_.*=iq5_k_r4
blk\.[1-5][0-9]\.attn_.*=iq5_k_r4
blk\.60\.attn_.*=iq5_k_r4
blk\.[3-9]\.ffn_norm\.weight=iq5_k_r4
blk\.[1-5][0-9]\.ffn_norm\.weight=iq5_k_r4
blk\.60\.ffn_norm\.weight=iq5_k_r4
blk\.[3-9]\.exp_probs_b\.bias=iq5_k_r4
blk\.[1-5][0-9]\.exp_probs_b\.bias=iq5_k_r4
blk\.60\.exp_probs_b\.bias=iq5_k_r4
# Shared Experts (3-60)
blk\.[3-9]\.ffn_down_shexp\.weight=iq5_k_r4
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq5_k_r4
blk\.60\.ffn_down_shexp\.weight=iq5_k_r4
blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq5_k_r4
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq5_k_r4
blk\.60\.ffn_(gate|up)_shexp\.weight=iq5_k_r4
# Routed Experts (3-60)
# First 16 layers are more sensitive so keep larger
blk\.[3-9]\.ffn_down_exps\.weight=iq5_k_r4
blk\.[1][0-9]\.ffn_down_exps\.weight=iq5_k_r4
blk\.[2-5][0-9]\.ffn_down_exps\.weight=iq4_k_r4
blk\.60\.ffn_down_exps\.weight=iq4_k_r4
blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq4_k_r4
blk\.[1][0-9]\.ffn_(gate|up)_exps\.weight=iq4_k_r4
blk\.[2-5][0-9]\.ffn_(gate|up)_exps\.weight=iq3_k_r4
blk\.60\.ffn_(gate|up)_exps\.weight=iq3_k_r4
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
./build/bin/llama-quantize \
--imatrix /mnt/raid/models/ubergarm/DeepSeek-V3-0324-GGUF/DeepSeek-V3-0324.imatrix \
--token-embedding-type iq6_k \
--output-tensor-type iq5_k_r4 \
--custom-q "$custom" \
/mnt/raid/models/deepseek-ai/DeepSeek-V3-0324-bf16-GGUF/DeepSeek-256x21B-V3-0324-BF16-00001-of-00030.gguf \
/mnt/raid/models/ubergarm/DeepSeek-V3-0324-GGUF/DeepSeek-V3-0324-CPU-IQ3_K.gguf \
IQ3_K \
24
```
Super stoked about this, especially the option to adjust quantization for the "shared expert" weights.
TL;DR: A combination of Tensor-Wise Quantization (TWQ) and Layer-Wise Quantization (LWQ) is useful for generating custom models. Using DeepSeek-R1-Distill-Llama-8B-Q4_K_M as an example, LWQ yields a 10.4% smaller model with only a 0.83% ρPPL penalty compared to the naive model. More info here.
@EAddario |
```c
void * kv_overrides;  // pointer to vector containing overrides
void * tensor_types;  // pointer to vector containing tensor types
} llama_model_quantize_params;
```
This changes the public interface, so add a comment in #9289.
Note that passing C++ objects here is not correct and we eventually have to fix this API to not do that. It hasn't become a problem yet because the quantization functions are likely not used frequently by 3rd party applications.
@EAddario If you are interested, you can give it a shot in another PR and fix these structs to become C compatible.
Thanks @ggerganov, happy to
This is a bit too hacky for my preference, but I suppose if people are already creating custom mixes by modifying the code, it is better to at least have a tool to do it.
I would prefer if the allowed-tensor check were removed; it doesn't really work as a reliable check, and it will prevent some legitimate uses.
Thanks for approving, @slaren. Any particular use case you have in mind that it would prevent? Maybe I can work it into the logic.
Got a better-quality LWQ mix using the stats from the modified llama-imatrix. More info here.
For example, using
I see what you mean. The choice of approach was a trade-off between ensuring the program continues to work exactly as before (backwards compatibility), not introducing new options that duplicate what is already available (--pure, --output-tensor-type and --token-embedding-type), and adding new capabilities in a way that's consistent with the existing error-checking logic. By restricting the tensors, users won't be able to do things that clearly are not useful, like trying to quantize norms, lerps, ropes, etc., but you're right that users wanting to quantize all attn tensors would need to pass three options (--tensor-type attn_q=q4_k --tensor-type attn_k=q4_k --tensor-type attn_v=q4_k) instead of just one (--tensor-type attn=q4_k). Once the changes are merged, I'll open a new PR to address this within the tensor-checking logic, to avoid matching instances like attn_norm, ffn_norm, etc., plus implementing @ggerganov's recommendation to make the struct C compatible.
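To make the trade-off above concrete, here is a minimal sketch of how the new per-tensor overrides could be combined in one invocation. The flag spellings follow the examples quoted in the comment; the file paths, imatrix name, quant choices, and base type are illustrative placeholders rather than a prescribed recipe:

```bash
# Sketch only: quantize the three attention projections to Q4_K while the rest
# of the model follows the base Q4_K_M recipe. Paths are placeholders.
./build/bin/llama-quantize \
    --imatrix ./model.imatrix \
    --tensor-type attn_q=q4_k \
    --tensor-type attn_k=q4_k \
    --tensor-type attn_v=q4_k \
    ./model-bf16.gguf \
    ./model-custom.gguf \
    Q4_K_M
```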
Late to this conversation, but isn't this case already handled by a regex that uses grouping?
Not quite, @acbits. For the reasons described above, the program requires the full tensor name, with the regex applying only to the preceding characters. I'll improve this behaviour in the next PR.
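As a sketch of that behaviour (the exact accepted pattern syntax is described in the PR's usage text, so treat this as an assumption rather than a reference): the regex part can narrow the layer prefix, while the tensor name itself is still written out in full.

```bash
# Sketch only: target attn_q in layers 0-5 of a hypothetical model. The regex
# covers the "blk.N." prefix; "attn_q" must still be spelled out in full, so
# grouping such as attn_(q|k|v) would not match under the current check.
./build/bin/llama-quantize \
    --tensor-type "blk\.[0-5]\.attn_q=q5_k" \
    ./model-bf16.gguf \
    ./model-custom.gguf \
    Q4_K_M
```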
@EAddario Congrats! 🚀
Really late on that, and nice PR. One idea: could we define all the logic in a more general way, for example using a JSON format, and possibly read it from a file for the most advanced cases?
Thanks @Djip007. @ngxson had a similar suggestion, and it's on my to-do list. The way I'm thinking about it is for llama-imatrix (#12718) to generate a file with "recommended" quants, based on relevant statistics, which can then be processed by llama-quantize. The file can of course be changed/created by hand. I don't know exactly what "recommended" means yet, so I'm open to suggestions.
I'll think about it... if I have any ideas, I'll try to share them.
Feel free to comment on #12718.
quantize: Handle user-defined quantization levels for additional tensors (ggml-org#12511)
* Add llama_model_quantize_params parameters
* Add new quantize parameters parsing and validation
* Update usage
* Add new parameters defaults
* Add new quantization parameters logic
* Minor refactoring as per the contributors' coding guidelines
* Update descriptions to match existing style
* Minor refactoring as per the contributors' guidelines
* Implement general --tensor-type instead of tensor-specific command option
* Fix implied type bug
* Restore missing #includes
* Add regex capability for tensor selection
* Refactor function name and update ALLOWED_TENSOR_TYPE
* Add missing #include
* Handle edge case when tensor name is cls.output
* Minor logging improvement
This PR adds the ability to quantize other tensors, beyond token-embedding and output-tensor. It handles most of the supported architectures, except Mamba, RWKV6, RWKV6QWEN2 and T5 (excluded to avoid having too many command options), but those can be added as well if maintainers request it. For full background on the PR, please see: Squeezing Tensor Bits: the quest for smaller LLMs