Better handling of Sagemaker models #11410
Conversation
    if isinstance(response, list):
        embeddings = response
    elif isinstance(response, dict):
        embeddings = response["embedding"]
If 'embedding' does not exist, can we raise a helpful error (maybe with the dict keys received)?
For sure! How about this?
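A rough sketch of what that could look like (the exact error type and message in the PR may differ):

```python
if isinstance(response, list):
    # Some models return the embeddings directly as a list
    embeddings = response
elif isinstance(response, dict):
    if "embedding" not in response:
        raise ValueError(
            "Unexpected response from SageMaker endpoint: missing 'embedding' key. "
            f"Received keys: {list(response.keys())}"
        )
    embeddings = response["embedding"]
else:
    raise ValueError(
        f"Unexpected response type from SageMaker endpoint: {type(response)}"
    )
```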
There is also still the verbose print just above, which will log the entire response; that can also help during debugging 🎉
Trying to understand the router logic, and if I have understood it correctly, maybe the model_list:

  - litellm_params:
      sagemaker_input_key: inputs
      ...

Is that correctly understood?
Yes, that's right @Jacobh2
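For reference, a fuller proxy config entry might then look something like this (the model alias and endpoint name are placeholders; `sagemaker_input_key` is the option added in this PR):

```yaml
model_list:
  - model_name: nomic-embed                  # placeholder alias used by clients
    litellm_params:
      model: sagemaker/my-nomic-endpoint     # placeholder SageMaker endpoint
      sagemaker_input_key: inputs            # key used for the request body payload
```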
Tested this, and the call returned the following EmbeddingResponse:

    EmbeddingResponse(
        model="my-test-endpoint",
        data=[{"object": "embedding", "index": 0, "embedding": [-0.005336614, ..., -0.019391568]}],
        object="list",
        usage=Usage(
            completion_tokens=0, prompt_tokens=4, total_tokens=4,
            completion_tokens_details=None, prompt_tokens_details=None
        ),
    )
Force-pushed from fc45c46 to f6a1aec
@krrishdholakia anything you'd like to change, or could we take this in?
@Jacobh2 Can you please add a unit test under test_litellm/? This will prevent future regressions.
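A rough sketch of such a test (illustrative only: the parsing logic is inlined here so the expected behaviour is explicit; the real test should import the helper actually added by this PR instead):

```python
# test_litellm/test_sagemaker_embedding_parsing.py
import pytest


def parse_sagemaker_embedding_response(response):
    """Stand-in for the logic under review: the endpoint may return the
    embeddings directly as a list, or wrapped in a dict under 'embedding'."""
    if isinstance(response, list):
        return response
    if isinstance(response, dict):
        if "embedding" not in response:
            raise ValueError(
                f"No 'embedding' key in SageMaker response. Keys: {list(response.keys())}"
            )
        return response["embedding"]
    raise ValueError(f"Unexpected SageMaker response type: {type(response)}")


def test_list_response_is_used_directly():
    assert parse_sagemaker_embedding_response([[0.1, 0.2]]) == [[0.1, 0.2]]


def test_dict_response_uses_embedding_key():
    assert parse_sagemaker_embedding_response({"embedding": [[0.1, 0.2]]}) == [[0.1, 0.2]]


def test_missing_embedding_key_raises_helpful_error():
    with pytest.raises(ValueError, match="vectors"):
        parse_sagemaker_embedding_response({"vectors": [[0.1, 0.2]]})
```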
Dynamic key for the request body, and handling of responses that are embeddings directly
First stab at fixing #11019 as well as better handling of the response format from Sagemaker.
This makes it possible to dynamically select the key used in the request body for embedding models. It defaults to the existing value for backwards compatibility.
Also handles the response a bit better by allowing the embeddings to be the response directly (which is the case for some models, e.g. https://huggingface.co/nomic-ai/nomic-embed-text-v1), and not only a dict with an "embedding" key.
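For illustration, assuming the new option is accepted as a keyword argument (as litellm_params from a router config would be passed through), a call might look like this; the endpoint name is a placeholder:

```python
import litellm

# "my-nomic-endpoint" is a placeholder SageMaker endpoint name.
# sagemaker_input_key is the option added in this PR (assumed here to be
# forwarded so the request body uses "inputs" instead of the default key).
response = litellm.embedding(
    model="sagemaker/my-nomic-endpoint",
    input=["hello world"],
    sagemaker_input_key="inputs",
)
print(response.data[0]["embedding"][:3])
```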
Not included yet: this only handles the LiteLLM CLI part, but I'd also like this to be settable on a per-model basis in the proxy. Any hints on how to do this would be highly appreciated!
Relevant issues
#11019
Type
🆕 New Feature
🐛 Bug Fix