
context : allow cache-less context for embeddings #13108


Open. Wants to merge 7 commits into master from gg/embeddings-no-kv.

Conversation

ggerganov (Member) commented Apr 25, 2025

target #12799

There is no need to create a KV cache when using embedding models such as BERT. This saves memory compared to master.

API Changes

  • The llama_encode() function is now the recommended way to compute embeddings and rerank scores.
  • llama_decode() can still be used to compute embeddings as before.
  • For embedding models such as BERT, llama_decode() falls back to llama_encode() and prints a warning.

In short, use llama_encode() whenever the KV cache is not needed; otherwise, use llama_decode(). The changes are backwards compatible.
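The recommended flow can be sketched roughly as below. This is a minimal, hedged sketch of computing a pooled embedding via llama_encode(): the exact helper names (llama_model_load_from_file, llama_init_from_model, llama_get_embeddings_seq, llama_batch_get_one) have shifted between llama.cpp versions, and the model path and tokenization step here are placeholders, so treat this as illustrative rather than a verbatim API reference.

```cpp
// Sketch: embeddings with llama_encode() - no KV cache is allocated for
// encoder-only models such as BERT (per this PR). Not self-contained:
// requires llama.h, the llama library, and a real GGUF model file.
#include "llama.h"
#include <cstdio>
#include <vector>

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    // "bert-model.gguf" is a hypothetical path - substitute your own model
    llama_model * model = llama_model_load_from_file("bert-model.gguf", mparams);

    llama_context_params cparams = llama_context_default_params();
    cparams.embeddings   = true;                      // request embedding output
    cparams.pooling_type = LLAMA_POOLING_TYPE_MEAN;   // pool token embeddings per sequence
    llama_context * ctx = llama_init_from_model(model, cparams);

    // tokenize the input text with the model's vocab (details omitted here)
    std::vector<llama_token> tokens; // = tokenize("hello world", ...);

    llama_batch batch = llama_batch_get_one(tokens.data(), (int32_t) tokens.size());

    // with this PR: prefer llama_encode() for embedding models
    if (llama_encode(ctx, batch) != 0) {
        fprintf(stderr, "encode failed\n");
        return 1;
    }

    // pooled embedding vector for sequence 0, length n_embd
    const int     n_embd = llama_model_n_embd(model);
    const float * emb    = llama_get_embeddings_seq(ctx, 0);
    printf("got %d-dim embedding, first value %f\n", n_embd, emb ? emb[0] : 0.0f);

    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```

Calling llama_decode() on such a model would still work, but only via the fallback described above.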

@ggerganov ggerganov force-pushed the gg/llama-kv-cache-v6 branch 5 times, most recently from 58115a2 to 7e79a42 Compare May 2, 2025 13:02
Base automatically changed from gg/llama-kv-cache-v6 to master May 2, 2025 14:48
ggerganov (Member, Author) commented:

I'll work on rebasing and merging this next - it should be a good improvement for embedding models by reducing the allocated memory during inference.

@ggerganov ggerganov force-pushed the gg/embeddings-no-kv branch from 4f0ea9b to c14ee72 Compare May 3, 2025 08:23
@ggerganov ggerganov marked this pull request as ready for review May 3, 2025 15:23
@ggerganov ggerganov requested a review from ngxson as a code owner May 3, 2025 15:23
aviallon (Contributor) commented May 4, 2025

Thanks for your awesome work, Georgi.
Is there a donation / sponsoring page, by the way?


3 participants