Hi all, I have been working with llama.cpp for a few weeks now. Greetings and thanks!
-
You can try the SYCL, OpenBLAS, or Vulkan builds. The SYCL and Vulkan versions can offload work to the GPU, and OpenBLAS accelerates the BLAS routines on the CPU, which may help improve llama.cpp's performance/speed.
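For reference, a minimal sketch of how those builds are enabled with CMake. Flag names are the `GGML_*` options used in recent llama.cpp versions (older releases used a `LLAMA_*` prefix), so check the docs for the revision you are on:

```shell
# Vulkan build (requires the Vulkan SDK installed)
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release

# OpenBLAS build (CPU BLAS acceleration; requires libopenblas-dev or similar)
cmake -B build-blas -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build-blas --config Release

# SYCL build (Intel GPUs; requires the oneAPI toolkit, built with its icx/icpx compilers)
cmake -B build-sycl -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build-sycl --config Release
```

Each variant produces its own `llama-cli`/`llama-server` binaries in the respective build directory; benchmarking the same model and prompt across them is the easiest way to see which backend helps on your hardware.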