Replies: 3 comments 3 replies
-
I also tried the … Yet this one looks like it's telling me something I don't understand…
-
The docker-compose file is correct. There is currently a bug in LocalAI such that the Intel GPU is not identified correctly. This is being fixed in #5945, but meanwhile, as a temporary workaround, you can set the … This is documented in https://localai.io/features/gpu-acceleration/#automatic-backend-detection.
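The name of the workaround variable did not survive the page export. Going by the linked gpu-acceleration page, the override is an environment variable that forces the backend capability instead of relying on auto-detection. A compose fragment sketch, where both the variable name and its value are assumptions taken from my reading of that docs page (verify against the current documentation):

```yaml
services:
  local-ai:
    image: localai/localai:latest-gpu-intel   # image tag is an assumption
    environment:
      # Assumed variable from the gpu-acceleration docs page linked above;
      # it forces the detected backend capability. The value "intel" is
      # also an assumption — check the accepted values in the docs.
      - LOCALAI_FORCE_META_BACKEND_CAPABILITY=intel
```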
-
I can confirm that LocalAI uses the NVIDIA GPU if configured correctly. My setup: …

Notice 1: my old video card requires CUDA 11. While nvidia-smi reports CUDA 13 installed, CUDA 11 is what actually supports the card. That part is very confusing, but it has nothing to do with the LocalAI project.

Notice 2: one has to give up trying to run LocalAI on Fedora 43 directly. That is only possible for binaries that statically compile CUDA support into the executable, and LocalAI does not do that. It is nearly impossible to configure Fedora 43 with CUDA 11, because the CUDA 11 nvcc requires an older gcc, which can only be set globally. So it is the container route for Fedora 36+ users; CUDA 11 was supported on Fedora 35.
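The container route above can be sketched with a plain docker run. The image tag here is an assumption about LocalAI's tag naming for CUDA-11 builds; check the tags the project actually publishes for your CUDA major version:

```shell
# Assumed tag naming — LocalAI publishes separate images per CUDA version.
# --gpus all requires the NVIDIA Container Toolkit on the host.
docker run -ti --gpus all -p 8080:8080 \
  localai/localai:latest-gpu-nvidia-cuda-11
```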
-
Hello,
I am looking for assistance because I don't know what I am doing wrong.
I am using the following system: …
I am running LocalAI through Docker with the following docker-compose file: …
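The compose file itself did not survive the page export. For reference, a minimal sketch of an Intel-GPU compose file, assuming the `latest-gpu-intel` image tag, the standard `/dev/dri` render device, and a local `./models` directory (all assumptions), would look like:

```yaml
services:
  local-ai:
    image: localai/localai:latest-gpu-intel  # tag is an assumption
    ports:
      - "8080:8080"
    devices:
      # Pass the Intel GPU/render device nodes into the container
      - /dev/dri:/dev/dri
    volumes:
      - ./models:/models
```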
My GPU seems to be accessible from the container: …
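One quick way to check device visibility, assuming an Intel GPU and a compose service named `local-ai` (both assumptions), is:

```shell
# On the host: an Intel GPU should expose render nodes here
ls -l /dev/dri
# Inside the running container, the same nodes should appear:
docker compose exec local-ai ls -l /dev/dri
```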
I tried the following images: …
And yet, every time, the CPU is doing all the work.
Here is something the logs keep telling me: …
I really need some help here, because I must be doing something wrong, but I do not know what.