Commit fa2a422

NeoZhangJianyu, arthw, and airMeng authored and committed
[SYCL]set context default value to avoid memory issue, update guide (ggml-org#9476)
* set context default to avoid memory issue, update guide

* Update docs/backend/SYCL.md

Co-authored-by: Meng, Hengyu <[email protected]>

---------

Co-authored-by: arthw <[email protected]>
Co-authored-by: Meng, Hengyu <[email protected]>
1 parent 1ddb77d commit fa2a422

File tree: 2 files changed (+12, -3 lines)


docs/backend/SYCL.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -636,6 +636,14 @@ use 1 SYCL GPUs: [0] with Max compute units:512
 
 It's same for other projects including llama.cpp SYCL backend.
 
+- Meet issue: `Native API failed. Native API returns: -6 (PI_ERROR_OUT_OF_HOST_MEMORY) -6 (PI_ERROR_OUT_OF_HOST_MEMORY) -999 (UNKNOWN PI error)` or `failed to allocate SYCL0 buffer`
+
+  Device Memory is not enough.
+
+  |Reason|Solution|
+  |-|-|
+  |Default Context is too big. It leads to more memory usage.|Set `-c 8192` or smaller value.|
+  |Model is big and require more memory than device's.|Choose smaller quantized model, like Q5 -> Q4;<br>Use more than one devices to load model.|
 
 ### **GitHub contribution**:
 Please add the **[SYCL]** prefix/tag in issues/PRs titles to help the SYCL-team check/address them without delay.
```
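
The advice in the added troubleshooting entry boils down to passing a smaller context size to the CLI so the KV cache fits in device memory. Below is a minimal sketch, not part of the commit, assuming llama.cpp was built with the SYCL backend and a Q4_0 model exists at `models/llama-2-7b.Q4_0.gguf`; the flags mirror the ones used in `examples/sycl/run-llama2.sh`.

```bash
# Hypothetical invocation illustrating the table above.
# -c 8192 caps the context (and thus the KV cache); lower it further, or pick a
# smaller quantization (e.g. Q4 instead of Q5), if allocation still fails.
ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli \
    -m models/llama-2-7b.Q4_0.gguf \
    -p "Building a website can be done in 10 simple steps:\nStep 1:" \
    -ngl 33 \
    -c 8192
```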

examples/sycl/run-llama2.sh

Lines changed: 4 additions & 3 deletions
```diff
@@ -11,16 +11,17 @@ source /opt/intel/oneapi/setvars.sh
 #ZES_ENABLE_SYSMAN=1, Support to get free memory of GPU by sycl::aspect::ext_intel_free_memory. Recommended to use when --split-mode = layer.
 
 INPUT_PROMPT="Building a website can be done in 10 simple steps:\nStep 1:"
-MODEL_FILE=llama-2-7b.Q4_0.gguf
+MODEL_FILE=models/llama-2-7b.Q4_0.gguf
 NGL=33
+CONEXT=8192
 
 if [ $# -gt 0 ]; then
     GGML_SYCL_DEVICE=$1
     echo "use $GGML_SYCL_DEVICE as main GPU"
     #use signle GPU only
-    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m models/${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0 -mg $GGML_SYCL_DEVICE -sm none
+    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m ${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0 -c ${CONEXT} -mg $GGML_SYCL_DEVICE -sm none
 
 else
     #use multiple GPUs with same max compute units
-    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m models/${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0
+    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m ${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0 -c ${CONEXT}
 fi
```
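
For reference, a hedged usage sketch of the updated script, assuming it is run from the llama.cpp repository root after a SYCL build and that the model file exists at the path set in `MODEL_FILE`; the device index `0` below is only illustrative.

```bash
# Multi-GPU path (no argument): the script splits the model across devices with
# matching max compute units, now with the 8192-token context from ${CONEXT}.
./examples/sycl/run-llama2.sh

# Single-GPU path: the first argument becomes GGML_SYCL_DEVICE and is forwarded
# to llama-cli via "-mg <device> -sm none".
./examples/sycl/run-llama2.sh 0
```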
