Being able to use local Llama.cpp models #12299
Magnarks started this conversation in Feature Requests
Replies: 0
Currently, if I want to use the Llama.cpp provider, it only lets me select from a fixed list of models; it doesn't let me use the ones I have already downloaded. This is unlike the Ollama provider, which automatically detects locally installed models.
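A minimal sketch of what such detection could look like, assuming the provider is pointed at a user-configured folder of downloaded GGUF files (the directory path and function name here are illustrative, not part of any existing API; llama.cpp has no single standard download location):

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Hypothetical helper: list *.gguf files in a user-configured models
// directory so they could be offered in the provider's model picker,
// similar to how Ollama exposes its locally pulled models.
async function listLocalGgufModels(modelsDir: string): Promise<string[]> {
  const entries = await fs.readdir(modelsDir, { withFileTypes: true });
  return entries
    .filter((e) => e.isFile() && e.name.toLowerCase().endsWith(".gguf"))
    .map((e) => path.join(modelsDir, e.name));
}

// Example usage (the path is an assumption for illustration only):
listLocalGgufModels(path.join(process.env.HOME ?? "", "models"))
  .then((models) => console.log("Detected local models:", models))
  .catch(() => console.log("No local models directory found"));
```

Alternatively, if the extension talks to a running llama.cpp `llama-server`, it could query the server's OpenAI-compatible `GET /v1/models` endpoint to list whatever model(s) the server currently has loaded.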