
Conversation

@kylesayrs
Collaborator

Purpose

  • Remove unnecessary code now that we are no longer using accelerate offloading

Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a significant code cleanup by removing deprecated accelerate offloading mechanisms from the TorchOffloader component. The changes streamline model handling, simplify modifier initialization logic by deferring certain setup tasks, and consolidate model dispatch functionality using a more generic utility. This refactoring reduces technical debt and improves the maintainability of the codebase by eliminating unused and redundant code.

Highlights

  • Removal of Accelerate Offloading: Eliminated all code paths and dependencies related to accelerate offloading, simplifying the codebase and removing unused functionality.
  • Refactored Modifier Initialization: Moved module mapping and target-layer inference logic from on_initialize to on_start in several modifiers (AWQ, SparseGPT, GPTQ, SmoothQuant) so that these operations run only when the modifier actually starts, improving efficiency and clarity (see the sketch after this list).
  • Simplified Model Dispatch: Replaced custom sequential dispatch logic with a more general dispatch_model utility from compressed_tensors.offload, streamlining model handling and reducing custom implementation overhead.
  • Cleanup of fuse.py: Removed explicit device alignment and dtype handling in center_embeddings and fuse_norm_linears, relying on the updated update_offload_parameter or simplified assumptions due to the removal of accelerate offloading.
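
To make the lifecycle change concrete, here is a minimal, hypothetical sketch in Python. The on_initialize / on_start hook names come from the highlight above, but the ExampleMappingModifier and State classes below are simplified stand-ins for illustration, not the actual llm-compressor APIs.

# Hypothetical sketch only; the real Modifier/State classes in llm-compressor differ.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class State:
    model: Any = None

class ExampleMappingModifier:
    def on_initialize(self, state: State) -> bool:
        # Before this PR (per the summary): module mappings and target layers
        # were resolved here, touching the model as soon as the recipe loaded.
        return True

    def on_start(self, state: State, event: Any = None) -> None:
        # After: model-dependent setup is deferred until the modifier starts.
        self.mappings = self._resolve_mappings(state.model)

    def _resolve_mappings(self, model: Any) -> Dict[str, Any]:
        # Placeholder for the real mapping / target-layer inference.
        return {}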

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review of the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request focuses on code cleanup by removing the dependency on accelerate for offloading and refactoring the modifier lifecycle. The changes are generally good, moving model-dependent initializations to the on_start hook for better consistency. However, I've identified a couple of issues. A potential bug in fuse.py omits casting tensors back to their original data type, which could cause dtype mismatches and increased memory usage downstream. Additionally, the removal of XPU support in the sequential pipeline is a regression that should be addressed. My detailed comments and suggestions are provided below.

Comment on lines +21 to +23
weight = embedding.weight.to(PRECISION)
weight = weight - weight.mean(dim=-1, keepdim=True)
update_offload_parameter(embedding, "weight", weight)

Severity: high

The weight tensor is not cast back to its original data type after the centering operation. It remains in PRECISION (torch.float64), which could lead to increased memory usage and potential dtype mismatches in subsequent operations. It's recommended to cast the result back to the original dtype before updating the parameter.

Suggested change (removed lines marked with -, proposed lines with +):
- weight = embedding.weight.to(PRECISION)
- weight = weight - weight.mean(dim=-1, keepdim=True)
- update_offload_parameter(embedding, "weight", weight)
+ weight_dtype = embedding.weight.dtype
+ weight = embedding.weight.to(PRECISION)
+ weight = weight - weight.mean(dim=-1, keepdim=True)
+ update_offload_parameter(embedding, "weight", weight.to(weight_dtype))
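
For context, here is a self-contained sketch of how the centering helper could look with the cast restored. It reuses update_offload_parameter as in the snippet above; the import path, the PRECISION value, and the function signature are assumptions for illustration.

import torch
from compressed_tensors.utils import update_offload_parameter  # assumed import path

PRECISION = torch.float64  # assumed to match the constant referenced above

def center_embeddings(embedding: torch.nn.Embedding) -> None:
    # Remember the original dtype so the stored parameter keeps its type.
    weight_dtype = embedding.weight.dtype
    # Center in high precision to limit rounding error.
    weight = embedding.weight.to(PRECISION)
    weight = weight - weight.mean(dim=-1, keepdim=True)
    # Cast back before writing the (possibly offloaded) parameter in place.
    update_offload_parameter(embedding, "weight", weight.to(weight_dtype))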

Comment on lines +41 to +42
linear_weight = linear.weight.to(PRECISION) * norm.weight.to(PRECISION)
update_offload_parameter(linear, "weight", linear_weight)

Severity: high

Similar to the center_embeddings function, the linear_weight is calculated using PRECISION (torch.float64) but is not cast back to the original linear.weight.dtype. This could lead to unintended dtype changes, increased memory usage, and potential errors. The weight should be cast back to its original data type.

Suggested change (removed lines marked with -, proposed lines with +):
- linear_weight = linear.weight.to(PRECISION) * norm.weight.to(PRECISION)
- update_offload_parameter(linear, "weight", linear_weight)
+ weight_dtype = linear.weight.dtype
+ linear_weight = linear.weight.to(PRECISION) * norm.weight.to(PRECISION)
+ update_offload_parameter(linear, "weight", linear_weight.to(weight_dtype))
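
Likewise, a sketch of the norm-into-linear fusion with the original dtype restored; the fuse_norm_linears name comes from this review, while its exact signature and loop structure here are assumptions.

import torch
from compressed_tensors.utils import update_offload_parameter  # assumed import path

PRECISION = torch.float64  # assumed working precision

def fuse_norm_linears(norm: torch.nn.Module, linears: list) -> None:
    for linear in linears:
        weight_dtype = linear.weight.dtype
        # Fold the norm scale into each linear weight in high precision,
        # then cast back so the parameter keeps its original dtype.
        linear_weight = linear.weight.to(PRECISION) * norm.weight.to(PRECISION)
        update_offload_parameter(linear, "weight", linear_weight.to(weight_dtype))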

Comment on lines +73 to +74
model_device = "cuda" if torch.cuda.is_available() else "cpu"
dispatch_model(model, model_device)

Severity: medium

The previous implementation in dispatch_for_sequential included a check for torch.xpu.is_available() to support Intel XPU devices. This change removes that check, which is a regression in functionality. It would be beneficial to reintroduce XPU support to maintain broader hardware compatibility.

Suggested change (removed lines marked with -, proposed lines with +):
- model_device = "cuda" if torch.cuda.is_available() else "cpu"
- dispatch_model(model, model_device)
+ if torch.cuda.is_available():
+     model_device = "cuda"
+ elif hasattr(torch, "xpu") and torch.xpu.is_available():
+     model_device = "xpu"
+ else:
+     model_device = "cpu"
+ dispatch_model(model, model_device)
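
One way to avoid repeating the device probe at every call site is a small helper like the hypothetical one below; the hasattr guard covers PyTorch builds that ship without the torch.xpu backend.

import torch

def infer_dispatch_device() -> str:
    # Hypothetical helper: prefer CUDA, then Intel XPU, then fall back to CPU.
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"

# Usage sketch: dispatch_model(model, infer_dispatch_device())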
