When using the GPU, llama-cpp-python[server] keeps returning # (#5014)

Closed
felix1982 opened this issue Jan 18, 2024 · 5 comments

Comments

@felix1982

I start the llama-cpp-python server with the command:

sudo python3 -m llama_cpp.server --model ./codellama-7b-instruct.Q5_K_S.gguf --n_gpu_layers 35 --host 0.0.0.0 --port 8085

The server starts successfully. Output:
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from ./codellama-7b-instruct.Q5_K_S.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = codellama_codellama-7b-instruct-hf
llama_model_loader: - kv 2: llama.context_length u32 = 16384
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 16
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32016] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32016] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32016] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q5_K: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 264/32016 vs 259/32016 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32016
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q5_K - Small
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 4.33 GiB (5.52 BPW)
llm_load_print_meta: general.name = codellama_codellama-7b-instruct-hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 85.98 MiB
llm_load_tensors: CUDA0 buffer size = 4349.61 MiB
warning: failed to mlock 90902528-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MLOCK ('ulimit -l' as root).
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: graph splits (measure): 66
llama_new_context_with_model: CUDA0 compute buffer size = 188.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 28.00 MiB
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 1 | FMA = 1 | NEON =

But when I post a prompt to the server, it returns:

{
  "id": "cmpl-1f427aeb-7a8e-4816-b987-fb64bcd9667c",
  "object": "text_completion",
  "created": 1705555806,
  "model": "./codellama-7b-instruct.Q5_K_S.gguf",
  "choices": [
    {
      "text": "",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 3,
    "total_tokens": 28
  }
}

When I start the server without --n_gpu_layers, it works fine!
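For reference, a completion request can be posted to the server's OpenAI-compatible endpoint roughly like this (a minimal sketch; the prompt text and max_tokens values are illustrative placeholders, not the exact request from this report):

```sh
# Hypothetical request; adjust the prompt to match your own test case.
curl http://localhost:8085/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "### Instruction: Write a Python function that reverses a string.\n### Response:",
        "max_tokens": 128
      }'
```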

@themanyone

It could be related to #5046

Try running main -m llama_cpp.server --model ./codellama-7b-instruct.Q5_K_S.gguf --n_gpu_layers 35 from the command line. If it says the GPU architecture is unsupported, you may have to look up your card's compute capability here and add it to the compile line. My card is compute_50 (compute capability 5.0); yours will probably be different. This hack will go away once the build can properly determine what arch= to use. Alternatively, you can use git checkout to pull an earlier version that works.

CUDA_DOCKER_ARCH=compute_50 sh mymake LLAMA_FAST=1
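If it helps, a quick way to look up the card's compute capability locally (assuming an NVIDIA driver recent enough to expose the compute_cap query field) is:

```sh
# Prints the GPU name and its compute capability, e.g. "5.0" for compute_50.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```

Older drivers may not support this query; in that case the capability can be looked up from NVIDIA's published tables instead.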

@iamlemec
Collaborator

If it's returning a bunch of repeating #'s, this looks a lot like abetlen/llama-cpp-python#1089, which was fixed in 0.2.30. Upgrade to the latest llama-cpp-python or pass --offload_kqv True to the server command.
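A rough sketch of both options (the CMAKE_ARGS value assumes a cuBLAS-based CUDA build, as documented for llama-cpp-python around that release; adjust to your setup):

```sh
# Option 1: upgrade to >= 0.2.30, which contains the fix
# (assumption: cuBLAS build flags per the project's install docs at the time).
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python

# Option 2: keep the current version and explicitly offload the KQV cache.
python3 -m llama_cpp.server --model ./codellama-7b-instruct.Q5_K_S.gguf \
  --n_gpu_layers 35 --offload_kqv True --host 0.0.0.0 --port 8085
```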

@themanyone

I was able to reproduce the repeating #'s by copying the model to another computer via a thumb drive (exFAT). The model may have become corrupt or truncated with zeros. I have since reformatted the USB stick because it had other errors.
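One way to rule out a corrupt or truncated copy is to compare checksums of the file on both machines (a generic sketch; the file name matches the model used above):

```sh
# Run on the source machine and again on the target machine after copying;
# the two hashes must match, otherwise the copy is corrupt or truncated.
sha256sum codellama-7b-instruct.Q5_K_S.gguf
```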

@github-actions
Contributor

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 18, 2024

github-actions bot commented Apr 3, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 3, 2024