Avoid overwriting vllm_compile_cache.py #17418
Conversation
Signed-off-by: Keyun Tong <[email protected]>
vllm/compilation/backends.py (outdated)

@@ -66,7 +66,7 @@ def initialize_cache(self, cache_dir: str, disable_cache: bool = False):
             disable_cache=disable_cache)

     def save_to_file(self):
-        if self.disable_cache:
+        if self.disable_cache or os.path.exists(self.cache_file_path):
If the cache file path exists, how do we ensure it's valid?
vLLM already computes a hash from the vLLM config, the compiler, and the environment for each compiled cache. If the hash matches, we assume the cache is reusable across multiple runs. This is already true for the rest of the files in the same cache directory; we should make this file consistent as well.
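The hashing scheme described above can be sketched as follows. This is a hypothetical illustration, not vLLM's actual code: the function name `compute_cache_key` and its inputs are assumptions. The point is that any input affecting compilation output must feed the hash, while inputs deliberately left out (such as `compile_sizes`, discussed below) will not invalidate the cache when they change.

```python
import hashlib


def compute_cache_key(vllm_config: str, compiler_version: str,
                      env_vars: dict) -> str:
    # Hash every input that affects the compiled output; a matching
    # key implies the cached artifacts should be reusable.
    hasher = hashlib.sha256()
    hasher.update(vllm_config.encode())
    hasher.update(compiler_version.encode())
    # Sort env vars so the hash is deterministic across runs.
    for key in sorted(env_vars):
        hasher.update(f"{key}={env_vars[key]}".encode())
    return hasher.hexdigest()[:10]


key = compute_cache_key("model=llama", "torch-2.6",
                        {"CUDA_HOME": "/usr/local/cuda"})
```

Two runs with identical inputs land in the same cache directory; changing any hashed input produces a different key and a fresh directory.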
vllm_compile_cache can change, even for the same hash.
The situation is:
- We don't put compile_sizes into the cache key; we allow users to tweak this configuration without requiring a recompile.
- If the user goes from compile_sizes of [1] to [1, 2], we end up reusing the same cache directory, compiling one additional graph (for size 2), and then updating the cache_file_path.
Instead, we should check whether the contents of the file were modified, and skip writing to disk if they were not.
LGTM, but it'd be nice if we can get a stamp from @zou3519
The cache can change even with the same hash (it can grow larger); we should check whether that is the case before skipping the write.
Signed-off-by: Keyun Tong <[email protected]> Signed-off-by: Mu Huai <[email protected]>
Signed-off-by: Keyun Tong <[email protected]> Signed-off-by: Yuqi Zhang <[email protected]>
Signed-off-by: Keyun Tong <[email protected]> Signed-off-by: minpeter <[email protected]>
When vLLM reuses previously compiled torch.compile cache files, it should not modify or overwrite them, since nothing is expected to change under the same hash.
After the change, verified that the cache directory is no longer modified when an existing cache file is found.
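The verification described above could be done by snapshotting file modification times under the cache directory before and after a second run and comparing. A hypothetical helper (not from the PR itself):

```python
import os


def snapshot_mtimes(cache_dir: str) -> dict:
    """Map every file under cache_dir to its modification time."""
    result = {}
    for root, _, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            result[path] = os.path.getmtime(path)
    return result
```

Taking a snapshot before and after the second run and asserting equality confirms no cache file was rewritten.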