Bug: GGML can no longer be statically linked to llama.cpp due to the source code reorganization #8166
Comments
It should be linked statically without the BUILD_SHARED_LIBS option.
Thanks a lot for the reply @slaren! That flag indeed links GGML statically into llama.cpp. However, in this configuration, llama.cpp is also built as a static library, and I'm trying to compile llama.cpp as a dynamic library. For me, the ideal option would be to have separate GGML_BUILD_SHARED_LIBS and LLAMA_BUILD_SHARED_LIBS flags.
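For illustration, a sketch of what that wished-for invocation could look like; note that these two flags are hypothetical, not options that exist in the current CMakeLists.txt:

cmake .. -DGGML_BUILD_SHARED_LIBS=OFF -DLLAMA_BUILD_SHARED_LIBS=ON   # hypothetical flags, not real options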
I'm hitting the same issue here, though it's messier in my case because I'm also linking gRPC. Before the mentioned PR, this is what linking looked like on the resulting binaries, and everything was working fine:
Now, with shared mode on, more shared libraries are used, and that badly breaks my link step:
And yeah, the symbol is hidden in libprotobuf.a; the pity is that I can't always control how libprotobuf is built (indeed, it now fails on macOS with Homebrew).
Update: Thanks for the hint @slaren! Specifying -DBUILD_SHARED_LIBS=OFF explicitly gets things building as before.
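For anyone hitting the same wall: one way to confirm the visibility problem is to inspect the static archive directly. A rough sketch, where the grep pattern is a placeholder rather than a specific protobuf symbol:

# list symbols with hidden visibility recorded in the archive's members
readelf -sW libprotobuf.a | grep -i hidden | head
# or search for the specific symbol the linker complains about (placeholder name)
nm -C libprotobuf.a | grep 'descriptor_table'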
Some features of llama.cpp require using ggml directly. If llama.cpp is built as a shared library that links ggml statically, then applications cannot use these features without linking their own copy of ggml, which may not work at all. Is it really that much of a problem to bundle the ggml shared library alongside llama.cpp? @mudler
In my case it is, because I'm using llama.cpp with gRPC, which is static by default, and mixing static gRPC with shared ggml and llama.cpp breaks the link, as some of the protobuf symbols are hidden when linking from a DSO.
Gotcha, thanks @slaren! It clicked for me now thanks to your previous comment. I thought that flag was defaulting to ON before as well when I looked at the PR diff, but it looks like it wasn't (at least here), so now I had to disable it explicitly to get things going. I have yet to see what LocalAI's CI says, but locally at least it now builds fine, as before.
Thanks folks for what you've shared.
I don't believe that was the case unless I missed something. I used to use pretty much all of the llama.cpp API by compiling llama.cpp as a shared library, statically linking ggml, at least on Windows, macOS, and Linux systems.
Good point. I would not argue that it is a real problem. Plus, I understand the willingness to keep ggml as a separate binary in the long run and for building apps that use llama.cpp, whisper.cpp, and probably other upcoming projects. I'm reviewing my compilation and deployment pipelines to follow this approach. If you don't mind, I'm marking this issue as resolved.
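For anyone following the same path, a rough sketch of the bundling approach on Linux; the build output paths and the install location are assumptions that depend on your generator and project layout:

# build llama.cpp and ggml as shared libraries
cmake .. -DBUILD_SHARED_LIBS=ON
cmake --build . --config Release
# ship both libraries side by side and have libllama.so search next to itself first
cp src/libllama.so ggml/src/libggml.so /opt/myapp/lib/
patchelf --set-rpath '$ORIGIN' /opt/myapp/lib/libllama.so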
I'm having the same issue. I tried with both GCC and Clang in MSYS2. LLAMA_STATIC from before was working and doing what we needed. What are the flags now to get llama-server statically linked to libc++, etc.?
Yes, I agree with you. I got libc++ linking statically, but gomp and OpenBLAS are still always dynamically linked. You can re-close this if you want.
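In case it helps others, a sketch of the flags to start from, assuming GCC in MSYS2: -static-libgcc and -static-libstdc++ are standard GCC driver flags, while libgomp and OpenBLAS have no equivalent convenience flag, so their static archives have to be named explicitly (the /mingw64 paths are placeholders):

cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_EXE_LINKER_FLAGS="-static-libgcc -static-libstdc++"
# for libgomp/OpenBLAS, pass the static archives instead of -lgomp/-lopenblas, e.g.:
#   /mingw64/lib/libgomp.a /mingw64/lib/libopenblas.a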
What happened?
Since commit #8006, GGML is compiled as a dynamic library (versus a static library before).
I can't find any option to reintroduce the previous mode. There is a GGML_STATIC option in the CMakeLists.txt of the ggml project, but it seems to do nothing.
Is there a way to reintroduce the static compilation mode?
Thanks a lot!
Loïc
Name and Version
latest.
cmake .. -DGGML_NATIVE=OFF -DLLAMA_BUILD_TESTS=OFF -DLLAMA_BUILD_EXAMPLES=OFF -DLLAMA_BUILD_SERVER=OFF -DBUILD_SHARED_LIBS=ON -DGGML_AVX2=ON -DGGML_AVX512=OFF -DGGML_STATIC=ON
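Note that this invocation combines -DBUILD_SHARED_LIBS=ON with -DGGML_STATIC=ON, which pull in opposite directions if GGML_STATIC (like the old LLAMA_STATIC) controls static linking of the runtime rather than how the ggml library itself is built. Per the comments above, a sketch of the configuration that links ggml statically, assuming BUILD_SHARED_LIBS is the deciding flag:

cmake .. -DGGML_NATIVE=OFF -DLLAMA_BUILD_TESTS=OFF -DLLAMA_BUILD_EXAMPLES=OFF -DLLAMA_BUILD_SERVER=OFF -DBUILD_SHARED_LIBS=OFF -DGGML_AVX2=ON -DGGML_AVX512=OFF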
What operating system are you seeing the problem on?
No response
Relevant log output
No response