demo : per-layer KV / partial offloading of KV cache #3457
Closed
Currently, the entire KV cache is allocated as a single tensor for all the layers. As a consequence, the KV cache is either fully on the CPU, or fully offloaded to the GPU.
With this change, the KV cache is allocated as a separate tensor per layer. The result is more granular control over which parts of the KV cache are offloaded to the GPU.
In this demo, when partially offloading a model, the KV cache corresponding to the offloaded layers is also offloaded. This increases performance at the expense of more VRAM.
Is it worth it compared to just offloading more layers? I am not sure, but it probably wouldn't hurt to have more flexibility.
Note: only implemented for llama models. CUDA only.
Edit: removed a few unnecessary copies that caused performance to degrade.
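For illustration, here is a minimal C++ sketch of the per-layer allocation and placement policy described above. The types and helpers (`kv_layer`, `backend_for_layer`, `alloc_kv_cache`) are hypothetical stand-ins, not the actual llama.cpp API; the real patch creates ggml tensors per layer and places the ones belonging to offloaded layers in CUDA buffers.

```cpp
// Sketch of per-layer KV cache allocation with a simple placement policy:
// layers that are offloaded to the GPU also get their KV cache on the GPU
// (the policy described in this PR). All names here are illustrative.
#include <cstddef>
#include <cstdio>
#include <vector>

enum class kv_backend { CPU, GPU };

struct kv_layer {
    std::vector<unsigned char> k; // stand-ins for per-layer K and V tensors
    std::vector<unsigned char> v;
    kv_backend backend;           // where this layer's KV data would live
};

// Layers [n_layer - n_gpu_layers, n_layer) are offloaded, so their KV cache
// is placed on the GPU as well.
static kv_backend backend_for_layer(int il, int n_layer, int n_gpu_layers) {
    return il >= n_layer - n_gpu_layers ? kv_backend::GPU : kv_backend::CPU;
}

static std::vector<kv_layer> alloc_kv_cache(int n_layer, int n_gpu_layers,
                                            size_t bytes_per_layer) {
    std::vector<kv_layer> cache(n_layer);
    for (int il = 0; il < n_layer; ++il) {
        cache[il].backend = backend_for_layer(il, n_layer, n_gpu_layers);
        // In the real implementation this would create a ggml tensor in a CPU
        // or CUDA buffer; here host memory is allocated just to show the split.
        cache[il].k.resize(bytes_per_layer);
        cache[il].v.resize(bytes_per_layer);
    }
    return cache;
}

int main() {
    // e.g. an 80-layer model with 40 layers offloaded: half of the KV cache
    // is marked for the GPU, the rest stays on the CPU.
    auto cache = alloc_kv_cache(/*n_layer=*/80, /*n_gpu_layers=*/40,
                                /*bytes_per_layer=*/1 << 20);
    int n_gpu = 0;
    for (const auto & l : cache) {
        if (l.backend == kv_backend::GPU) n_gpu++;
    }
    std::printf("%d of %zu KV layers placed on the GPU\n", n_gpu, cache.size());
    return 0;
}
```

The key difference from the current code is that placement is decided per layer instead of once for the whole cache, which is what makes partial KV offloading possible.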
Llama2 70B on a single 24 GB GPU:
Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6
v1