
Conversation

@ZewenShen-Cohere

This PR addresses the following issues:

  1. _accumulate_mean produces incorrect output on its first run (it returns the raw sum instead of the mean).

  2. cache_smooth_activations_hook previously performed the averaging computation on the CPU. When both the hidden dimension and the sequence length are large, this makes AWQ calibration CPU-bound; the slowdown is especially severe when multiple AWQ quantization jobs run concurrently. A sketch of both fixes follows at the end of this description.

  3. The AWQ calibration grid search lacked informative logging; this PR adds it, including per-mapping JSON logs.

This PR is a subset of #2158
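
For reference, a minimal sketch of the intended behavior of both fixes. The surrounding llm-compressor code is not reproduced here, so treat this as illustrative; the names inp, prev_mean_and_count, sum_added, and num_added come from the patch shown later in this conversation.

    import torch

    def _accumulate_mean(
        inp: torch.Tensor,
        prev_mean_and_count: tuple[torch.Tensor, int] | None,
    ) -> tuple[torch.Tensor, int]:
        # Reduce on inp.device (typically the GPU) instead of moving the
        # activations to the CPU first, so calibration is not CPU-bound
        # when hidden_dim * seq_len is large.
        sum_added = inp.sum(dim=0)
        num_added = inp.size(0)
        if prev_mean_and_count is None:
            # First call: return a mean, not the raw sum (the bug fixed here).
            return sum_added / num_added, num_added
        prev_mean, prev_count = prev_mean_and_count
        new_count = prev_count + num_added
        # Weighted combination of the previous running mean and the new batch.
        return (prev_mean * prev_count + sum_added) / new_count, new_count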

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite. Please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @ZewenShen-Cohere, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the robustness, performance, and observability of the AWQ (Activation-aware Weight Quantization) calibration process. It addresses a critical bug in the mean accumulation logic, optimizes activation averaging to leverage GPU resources more effectively, and introduces comprehensive logging capabilities to provide better insights into the calibration grid search and its outcomes.

Highlights

  • _accumulate_mean Bug Fix: Corrected an issue where the _accumulate_mean function produced incorrect output on its initial run by ensuring it returns the mean instead of the sum, resolving a critical calculation error.
  • AWQ Activation Averaging Optimization: Optimized the AWQ activation averaging process by moving the computation off the CPU to the original device (typically GPU). This significantly improves performance and prevents CPU-bound bottlenecks during calibration, especially with large models and concurrent quantization jobs.
  • Enhanced AWQ Calibration Logging and Metrics: Implemented more detailed logging for the AWQ calibration grid search, including a tqdm progress bar for better visibility, real-time updates on the best error found, and the ability to save per-layer error metrics (initial error, best error, reduction rate) to a JSON file for comprehensive analysis (a sketch of this logging pattern follows this list).
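
For the third highlight, a minimal sketch of what such a grid-search logging loop can look like; evaluate_candidate and n_grid here stand in for llm-compressor internals and are illustrative assumptions, not the actual code.

    from tqdm import tqdm

    best_error = float("inf")
    for step in tqdm(range(n_grid), desc="AWQ grid search"):
        # Hypothetical per-candidate reconstruction loss for this grid point.
        error = evaluate_candidate(step)
        if error < best_error:
            best_error = error
            # tqdm.write logs the update without breaking the progress bar.
            tqdm.write(f"new best error {best_error:.6f} at grid point {step}")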



@ZewenShen-Cohere
Author

@brian-dellabetta

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces several valuable improvements to the AWQ modifier. It correctly fixes a bug in _accumulate_mean that caused incorrect output on its first run. Moving the activation averaging computation off the CPU is a significant performance enhancement, especially for large models. The addition of more informative logging, including a progress bar and JSON output for error metrics, greatly improves the usability and observability of the AWQ calibration process. The code is well-structured, but I have a couple of suggestions to enhance robustness and simplify the implementation.

Comment on lines 719 to 745
        # Use the provided log_output_path or generate a default filename
        if self.log_output_path:
            output_path = Path(self.log_output_path)
            # If it's a directory, create a filename with timestamp
            if output_path.is_dir():
                timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
                filename = output_path / f"awq_error_metrics_{timestamp}.json"
            else:
                filename = output_path
        else:
            # Default filename with timestamp
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = Path(f"awq_error_metrics_{timestamp}.json")

        # Ensure parent directory exists
        filename.parent.mkdir(parents=True, exist_ok=True)

        # Prepare data for saving
        metrics_data = {
            'timestamp': datetime.now().isoformat(),
            'quantization_config': {
                'duo_scaling': self.duo_scaling,
                'n_grid': self.n_grid,
            },
            'total_layers': len(self._error_metrics),
            'metrics': self._error_metrics
        }
Contributor


Severity: medium

This section can be simplified and made more robust.

  1. datetime.now() is called multiple times (lines 724, 730, 738), which could lead to slight timestamp inconsistencies between the filename and the file content. It's better to call it once at the beginning of the function and reuse the value.
  2. The else block (lines 728-731) is unreachable. The calling function on_finalize already ensures that self.log_output_path is not None before invoking _save_error_metrics.

The suggested change addresses both points by capturing datetime.now() once and removing the redundant code path.

        now = datetime.now()

        # Use the provided log_output_path.
        # The calling context ensures `self.log_output_path` is not None,
        # so the `else` branch is unreachable and has been removed.
        output_path = Path(self.log_output_path)
        # If it's a directory, create a filename with timestamp
        if output_path.is_dir():
            timestamp = now.strftime("%Y%m%d_%H%M%S")
            filename = output_path / f"awq_error_metrics_{timestamp}.json"
        else:
            filename = output_path
        
        # Ensure parent directory exists
        filename.parent.mkdir(parents=True, exist_ok=True)
        
        # Prepare data for saving.
        # Use the `now` object captured at the start for a consistent timestamp.
        metrics_data = {
            'timestamp': now.isoformat(),
            'quantization_config': {
                'duo_scaling': self.duo_scaling,
                'n_grid': self.n_grid,
            },
            'total_layers': len(self._error_metrics),
            'metrics': self._error_metrics
        }

Collaborator

@brian-dellabetta left a comment


Thanks for creating this! The fixes look good. I just have a few questions on the logging part; hopefully we can streamline some more of it.

offload_device: torch.device | None = None
duo_scaling: bool | Literal["both"] = True
n_grid: int = 20
log_output_path: str | None = None
Collaborator


Users are able to configure either a log file or a log dir at a higher scope, see the code here. So anything logged with loguru.logger should also be captured automatically if those fields are set as inputs to oneshot. Have you tried that?
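
(For context, a minimal sketch of such a higher-scope sink with loguru; the file path is illustrative, not the actual oneshot wiring:)

    from loguru import logger

    # Illustrative only: a file sink configured once at a higher scope
    # captures everything logged through loguru.logger, including the
    # per-mapping AWQ metrics added in this PR.
    logger.add("awq_calibration.log", level="DEBUG")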

Comment on lines 272 to 273
        if self.log_output_path is not None and len(self._error_metrics) > 0:
            self._save_error_metrics()
Collaborator


Rather than relying on the log dir to determine whether something should be logged, I think it is preferred to use the log level. You can log these metrics at debug level so the user will only see them if the log level is debug. WDYT?
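
A minimal sketch of this approach, reusing the field names from the snippet above (the exact call site inside the modifier is an assumption):

    from loguru import logger

    # Emit the metrics at DEBUG level instead of gating on log_output_path;
    # they surface only when the user runs with log level DEBUG.
    logger.debug(
        "AWQ error metrics: {}",
        {
            "quantization_config": {
                "duo_scaling": self.duo_scaling,
                "n_grid": self.n_grid,
            },
            "total_layers": len(self._error_metrics),
            "metrics": self._error_metrics,
        },
    )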

Author


That's a good idea. Will make the change!

     num_added = inp.size(0)
     if prev_mean_and_count is None:
-        return sum_added, num_added
+        return sum_added / num_added, num_added
Collaborator


Ah, this was the error you were talking about! Yeah, this makes sense. Nice catch.
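
For concreteness, a quick check of the fixed behavior (a sketch, using the _accumulate_mean signature from the PR description above):

    import torch

    # First call: there is no previous state, so the result must already
    # be a mean, not a sum.
    batch = torch.ones(4, 8)                     # 4 rows, hidden dim 8
    mean, count = _accumulate_mean(batch, None)
    assert count == 4
    assert torch.allclose(mean, torch.ones(8))   # the old code returned 4s here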

ZewenShen-Cohere and others added 2 commits December 19, 2025 14:07
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: ZewenShen-Cohere <[email protected]>
@ZewenShen-Cohere
Author

Thank you for the review, @brian-dellabetta. I've switched to using the logger.

@dsikka added the ready (When a PR is ready for review) label on Dec 20, 2025
@dsikka
Collaborator

dsikka commented Dec 20, 2025

Can you address the quality issues?

You can do this by running these steps in the root dir:

pip install -e .[dev]
make style
make quality

@ZewenShen-Cohere
Author

> Can you address the quality issues?
>
> You can do this by running these steps in the root dir:
>
> pip install -e .[dev]
> make style
> make quality

Done

@dsikka added the awq (For any issue / PR related to AWQ support) label on Dec 20, 2025