Support Ollama API in langfuse playground (and LLM API / Gateway) #5589
LiveOverflow started this conversation in Ideas
Replies: 2 comments · 11 replies
This works: you can just select the OpenAI adapter when adding a model and point it to your Ollama endpoint. Let me know in case you run into issues while setting this up.
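For reference, a minimal sketch of what "pointing the OpenAI adapter at Ollama" looks like, assuming Ollama's default OpenAI-compatible endpoint at `http://localhost:11434/v1` and a locally pulled model such as `llama3` (adjust both to your setup). Verifying the endpoint with the plain OpenAI client first makes it easier to rule out Langfuse-side misconfiguration; the same base URL and a placeholder API key should then work in the Langfuse model settings when using the OpenAI adapter:

```python
# Minimal sketch: verify that an OpenAI-compatible client can reach a local
# Ollama server before wiring the same values into the Langfuse model settings.
# Assumptions: Ollama runs on its default port and `llama3` has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3",  # any model available locally in Ollama
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```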
@ianferreira Could you please share what you did to solve this? I am facing the same problem. I want to locally evaluate MLLMs using Langfuse, Ollama, and Docker. I can successfully trace the generations, but nothing more.
Describe the feature or potential improvement
I would like to see the Ollama API added to the supported LLM API / Gateway options. This would allow users to use the langfuse playground and LLM-as-a-judge with locally hosted models.
Business case:
I am currently experimenting with sensitive user data and confidential documents, so I don't want to use cloud services at the moment. Once going to production, GDPR-compliant DPAs can be set up. But during local experimentation and local evaluation of a use case, it would be great if langfuse supported locally hosted models.
Why Ollama?
From what I can tell, Ollama is one of the most common ways to run local models, and it is largely compatible with the OpenAI API. The other option is LM Studio, which would also be a good choice and also offers OpenAI API compatibility.
Given this OpenAI API compatibility, I would expect the integration could be based on the existing OpenAI code.
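To illustrate the point, here is a hedged sketch of how the existing OpenAI integration path can already trace Ollama generations, assuming the Langfuse Python SDK's OpenAI drop-in (`langfuse.openai`), Langfuse credentials in the environment, and a pulled `llama3` model. The playground and LLM-as-a-judge support requested here would ideally reuse the same adapter and connection settings:

```python
# Sketch only: trace a local Ollama generation through Langfuse's existing
# OpenAI drop-in integration. Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY,
# and LANGFUSE_HOST are set in the environment, and that Ollama serves `llama3`.
from langfuse.openai import OpenAI  # drop-in replacement for the openai client

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; Ollama does not check it
)

completion = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize this confidential note locally."}],
)
print(completion.choices[0].message.content)
```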
Additional information
No response