
Conversation

@kylesayrs
Collaborator

Purpose

Changes

  • Pass model.thinker to oneshot, since model does not implement a forward method (the thinker module is a PreTrainedModel that contains all of the parameters worth quantizing)
  • Patch the fast_pos_embed_interpolate to support accelerate offloading
  • Patch the data collator to squeeze image_grid_thw while leaving pixel_values and other inputs unsqueezed (see the sketch after this list)
  • Save example audio generation
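As a rough illustration of the collator change above (a minimal sketch based on this description, not the merged example verbatim), the idea is to drop the extra batch dimension only from image_grid_thw:

import torch

def data_collator(batch):
    # Calibration runs with batch size 1; squeeze the batch dim off
    # image_grid_thw only, and keep pixel_values and the rest as produced.
    assert len(batch) == 1
    collated = {}
    for key, value in batch[0].items():
        tensor = torch.tensor(value)
        if key == "image_grid_thw":
            tensor = tensor.squeeze(0)
        collated[key] = tensor
    return collated

# Shape check with dummy values (hypothetical data, just to show the effect):
sample = {
    "input_ids": [[1, 2, 3]],
    "pixel_values": [[0.0] * 8],
    "image_grid_thw": [[[1, 2, 2]]],
}
out = data_collator([sample])
print(out["image_grid_thw"].shape)  # torch.Size([1, 3]) after the squeeze
print(out["pixel_values"].shape)    # torch.Size([1, 8]) left untouched

Per the first bullet, the quantizable parameters live on model.thinker, so that submodule (rather than the top-level wrapper, which has no forward method) is what gets passed to oneshot.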

Testing

  • Ran example and produced sane outputs
    output.wav

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the Qwen3OmniMoeForConditionalGeneration model by providing a comprehensive example for its quantization using llmcompressor. It addresses specific technical requirements such as patching the position embedding interpolation for offloading compatibility and adjusting data collation for multimodal inputs, ultimately enabling efficient and functional use of this advanced model.

Highlights

  • Qwen3OmniMoeForConditionalGeneration Support: This pull request introduces support for the Qwen3OmniMoeForConditionalGeneration model, enabling its use within the system.
  • New Example Script: A new example script, qwen3_omni_example.py, has been added to demonstrate the quantization of the Qwen3OmniMoeForConditionalGeneration model using llmcompressor's oneshot functionality.
  • Position Embedding Patch: The fast_pos_embed_interpolate function has been patched to correctly handle cases where pos_embed.weight may be offloaded, ensuring compatibility with accelerate offloading strategies (a toy sketch of the idea follows this list).
  • Custom Data Collator: A custom data_collator is implemented to properly squeeze the image_grid_thw tensor while leaving other inputs unsqueezed, which is crucial for correct data processing during calibration.
  • Audio Generation Example: The example now includes saving generated audio output to sample_output.wav, showcasing the model's multimodal capabilities.
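The patch file itself is not shown in this thread; as a toy sketch of the idea behind the position-embedding highlight (an assumption about the general approach, not the actual fast_pos_embed_interpolate implementation), fetching rows by calling the embedding module lets accelerate's offloading hooks onload the weight, whereas slicing pos_embed.weight directly can touch an offloaded tensor:

import torch
import torch.nn as nn
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed: nn.Embedding, target: int) -> torch.Tensor:
    # Call the module (so any accelerate pre-forward hooks run) instead of
    # indexing pos_embed.weight, then bilinearly resize to a target x target grid.
    num_pos, dim = pos_embed.num_embeddings, pos_embed.embedding_dim
    side = int(num_pos ** 0.5)
    idx = torch.arange(num_pos)
    grid = pos_embed(idx).reshape(1, side, side, dim).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(target, target), mode="bilinear", align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(target * target, dim)

# Toy check: a 4x4 position table interpolated to a 7x7 grid.
embed = nn.Embedding(16, 32)
print(interpolate_pos_embed(embed, 7).shape)  # torch.Size([49, 32])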

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a new example for Qwen3OmniMoeForConditionalGeneration, including a patch to support accelerated offloading. The example script demonstrates how to perform one-shot quantization with GPTQ and generate sample outputs. The changes are well-structured and the example is clear. My review includes a suggestion to improve the performance of the patch file by using more efficient tensor operations, and a comment on improving the clarity of the example script's save directory naming.

@brian-dellabetta
Collaborator

Related to #1673 as well

@allerou4

allerou4 commented Dec 17, 2025

Hi, I pass model.thinker to oneshot for quantization, but model.save_pretrained saves a full bf16 model.
If I save model.thinker only, the result is correct.

@Sekri0

Sekri0 commented Dec 17, 2025

Hi, I pass model.thinker to oneshot for quantization, but model.save_pretrained saves a full bf16 model. If I save model.thinker only, the result is correct.

This is because oneshot's preprocessing wraps model.save_pretrained so that it saves a compressed model. When you pass model.thinker to oneshot, only the thinker's save_pretrained gets wrapped, not the top-level model's. Adding the code below before you save the model fixes the problem.

Add the following code before saving:

from llmcompressor.transformers.compression.compressed_tensors_utils import modify_save_pretrained
modify_save_pretrained(model)


@allerou4


Why does it save nothing?

# Apply AWQ quantization.
oneshot(
    model=model.thinker,
    processor=processor,
    recipe=recipe,
    dataset=ds,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    data_collator=data_collator,
)

# calibrated_model.save_pretrained(SAVE_DIR, save_compressed=True)

print("========== SAMPLE GENERATION ==============")
# dispatch_for_generation(model)
modify_save_pretrained(model)
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-W4A16-G128"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)

@Sekri0

Sekri0 commented Dec 17, 2025

Why does it save nothing?

I tried data-free quantization (no calibration) and successfully saved a compressed model.

CODE

import requests
import soundfile as sf
from PIL import Image
from qwen3_omni_patch import fast_pos_embed_interpolate
from transformers import (
    AutoProcessor,
    Qwen3OmniMoeForConditionalGeneration,
    default_data_collator,
)

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier, QuantizationModifier
from llmcompressor.utils import dispatch_for_generation
from llmcompressor.transformers.compression.compressed_tensors_utils import modify_save_pretrained

model_id = "/mnt/home/model/Qwen3-Omni-30B-A3B-Instruct"
model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

scheme = "W4A16"

recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme=scheme,
        ignore=[
            r"re:.*lm_head.*",
            r"re:.*talker.*",
            r"re:.*code2wav.*",
            r"re:.*mlp.gate.*",
            r"re:.*audio_tower.*",
            r"re:.*visual.*",
        ],
    ),
]

oneshot(
    model=model.thinker,
    processor=processor,
    recipe=recipe,
)

modify_save_pretrained(model)

SAVE_DIR = model_id.rstrip("/").split("/")[-1] + "-thinker-" + scheme
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)

@allerou4

@Sekri0 Thanks, it worked

@kylesayrs
Collaborator Author

kylesayrs commented Dec 17, 2025

Awesome catch, thank you @Sekri0 @allerou4 !

Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>


Development

Successfully merging this pull request may close these issues.

[Bug]: get_input_embeddings not auto‑handled for Qwen3OmniMoeForConditionalGeneration
