Build failed with cuBLAS #205
It seems it cannot detect the CUDA arch, but I installed CUDA as follows, and the CUDA toolkit is already found by CMake above:

```shell
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda
```

My display card:

```
*-display
     description: VGA compatible controller
     product: GA106M [GeForce RTX 3060 Mobile / Max-Q]
     vendor: NVIDIA Corporation
     physical id: 0
     bus info: pci@0000:03:00.0
     logical name: /dev/fb0
     version: a1
     width: 64 bits
     clock: 33MHz
     capabilities: pm msi pciexpress vga_controller bus_master cap_list rom fb
     configuration: depth=32 driver=nouveau latency=0 mode=1920x1080 resolution=1920,1080 visual=truecolor xres=1920 yres=1080
     resources: iomemory:383c0-383bf iomemory:383e0-383df irq:33 memory:fa000000-faffffff memory:383c00000000-383dffffffff memory:383e00000000-383e01ffffff ioport:e000(size=128) memory:fb000000-fb07ffff
```
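As a hedged sanity check for an install like the one above (not from the thread itself): after `apt-get install cuda`, the toolkit compiler and the kernel driver are separate components, and either can be missing independently.

```shell
# Hypothetical post-install check, assuming a standard apt install of CUDA:
# nvcc confirms the toolkit is visible, nvidia-smi confirms the NVIDIA
# kernel driver is loaded (it fails under the open-source nouveau driver).
nvcc --version 2>/dev/null || echo "toolkit (nvcc) not visible"
nvidia-smi 2>/dev/null || echo "NVIDIA driver not loaded"
```

Each line falls back to a message instead of aborting, so the check reports both components in one pass.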
Did you install CUDA?
Yes, I already installed CUDA as mentioned above. @gjmulder
Apologies, so you did. I saw this error with cmake builds a while back. Can you try building from source as per my attached Dockerfile? It works with the latest code.
I think this is related to a warning/issue I ran into building vanilla llama.cpp with cmake. I fixed it by manually specifying the CUDA architecture: https://cmake.org/cmake/help/latest/variable/CMAKE_CUDA_ARCHITECTURES.html
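A sketch of that workaround (not copied from the thread): the architecture value `86` is an assumption matching the RTX 3060's compute capability 8.6, and `LLAMA_CUBLAS` is assumed to be the cuBLAS flag used by the build at the time.

```shell
# Hypothetical: pin the CUDA architecture explicitly instead of letting
# CMake's auto-detection fail.
cmake -B build -DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config Release

# For the Python bindings, CMake flags can be forwarded via CMAKE_ARGS:
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=86" pip install llama-cpp-python
```

Pinning the architecture also avoids building PTX for every supported arch, which shortens compile times.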
@AlphaAtlas I can add it here temporarily, but do you mind reporting this issue in llama.cpp, as that's the more appropriate place to set that flag?
This will potentially need a lot of regression testing against different CUDA installs, as I do not have this problem.

EDIT: Try running:

I suspect the problem may be that the build can't find
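The component the build can't find is elided above; `nvcc` is a common culprit for failed CUDA arch detection and is purely an assumption here. A minimal check:

```shell
# Hypothetical check: CMake's CUDA architecture detection compiles a test
# program with nvcc, so confirm nvcc is reachable on PATH first.
if command -v nvcc >/dev/null; then
    nvcc --version
else
    echo "nvcc not found on PATH"
    # The toolkit usually installs under /usr/local/cuda:
    export PATH=/usr/local/cuda/bin:$PATH
fi
```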
Closing. Reopen if the issue is not resolved upstream in |