Avoid overwriting vllm_compile_cache.py #17418

Merged: 3 commits merged into vllm-project:main on May 1, 2025

Conversation

@youngkent (Contributor) commented on Apr 29, 2025

When vLLM reuses previously compiled torch.compile cache files, it should not modify or overwrite them, since nothing is expected to change under the same hash.

After the change, I verified that the cache directory is no longer modified when an existing cache file is found.
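
A minimal sketch of the resulting behavior, assuming a method shaped like the one in the diff below (the surrounding class and serialization logic are elided):

    import os

    def save_to_file(self):
        # Skip the write when caching is disabled, or when a cache file for
        # this hash already exists: an existing file is assumed to be
        # reusable as-is and is never modified or overwritten.
        if self.disable_cache or os.path.exists(self.cache_file_path):
            return
        ...  # otherwise serialize the in-memory entries and write them out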

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small but essential subset of tests to quickly catch errors. You can run additional CI tests on top of that by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@simon-mo requested a review from youkaichao on April 29, 2025 at 22:38
@@ -66,7 +66,7 @@ def initialize_cache(self, cache_dir: str, disable_cache: bool = False):
                              disable_cache=disable_cache)

     def save_to_file(self):
-        if self.disable_cache:
+        if self.disable_cache or os.path.exists(self.cache_file_path):
Collaborator

if cache file path exists, how do we ensure it's valid?

Contributor Author

vLLM already computes a hash based on the vLLM config, compiler, and environment for each compiled cache. If the hash matches, we assume the cache is reusable across multiple runs. This is already true for the rest of the files in the same cache directory; we should make this file consistent as well.
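
A rough illustration of that keying scheme (the function and argument names here are made up for this sketch, not vLLM's actual internals): hash everything that can affect compilation and use the digest to select the cache directory.

    import hashlib
    import json
    import os

    def compute_cache_dir(vllm_config: dict, compiler_name: str, env: dict) -> str:
        # Anything that can change the compiled artifacts goes into the key,
        # so identical inputs always map to the same cache directory.
        key = json.dumps(
            {"config": vllm_config, "compiler": compiler_name, "env": env},
            sort_keys=True,
        )
        digest = hashlib.sha256(key.encode()).hexdigest()[:10]
        return os.path.join(os.path.expanduser("~/.cache/vllm"), digest)

With the same digest, a later run lands in the same directory and can reuse whatever was compiled there.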

Collaborator

vllm_compile_cache can change, even for the same hash.

The situation is:

  • we don't put compile_sizes into the cache key. We allow the users to tweak this configuration without requiring a recompile.
  • If the user goes from [1] compile_sizes to [1, 2] compile_sizes, we end up reusing the same cache directory, compiling one additional graph (for size 2), and then updating the cache_file_path.

Instead we should check to see if the contents of the file were modified, and if they were not, then skip writing to disk.
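
A minimal sketch of that compare-before-write approach (the _serialize_cache helper is hypothetical; only the idea of skipping unchanged writes comes from the comment above):

    import os

    def save_to_file(self):
        if self.disable_cache:
            return
        new_contents = self._serialize_cache()  # hypothetical helper
        # Only write when the on-disk contents actually differ, e.g. because
        # an extra compile size produced a new entry in the cache.
        if os.path.exists(self.cache_file_path):
            with open(self.cache_file_path) as f:
                if f.read() == new_contents:
                    return
        with open(self.cache_file_path, "w") as f:
            f.write(new_contents)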

@houseroad requested a review from zou3519 on April 29, 2025 at 23:28
@WoosukKwon (Collaborator) left a comment

LGTM, but it'd be nice if we could get a stamp from @zou3519

@WoosukKwon added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Apr 30, 2025
@zou3519 (Collaborator) left a comment

The cache can change even with the same hash (it can grow larger); we should check whether that is the case before skipping the write.

@WoosukKwon enabled auto-merge (squash) on April 30, 2025 at 18:06
@WoosukKwon merged commit 26bc4bb into vllm-project:main on May 1, 2025
47 checks passed
radeksm pushed a commit to radeksm/vllm that referenced this pull request May 2, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
4 participants