
Which llama.cpp Windows package to choose for my hardware #13120

Answered by AbraxasVi
domasofan asked this question in Q&A

You can try the SYCL, OpenBLAS, or Vulkan builds. The SYCL and Vulkan builds can use your GPU to run llama.cpp, while the OpenBLAS build accelerates the CPU path; any of them may improve performance/speed.
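
If you pick one of the GPU builds, a quick way to check that it actually uses your hardware is to run a short prompt with GPU offload enabled. A minimal sketch is shown below, assuming you unzipped a recent Windows release (where the main binary is `llama-cli.exe`; older packages called it `main.exe`) and already have a GGUF model; the model path and layer count are placeholders to adjust for your setup:

```bat
:: Run from the folder where you unzipped the Vulkan or SYCL package.
:: -m    path to your GGUF model file (placeholder path below)
:: -ngl  number of model layers to offload to the GPU; lower it if you run out of VRAM
:: -p    a short prompt just to verify the build works
llama-cli.exe -m C:\models\your-model.gguf -ngl 99 -p "Hello"
```

If the startup log reports layers being offloaded to your GPU, the package matches your hardware; if it fails to detect a device, fall back to the plain CPU or OpenBLAS build.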

Answer selected by domasofan