
Huge perplexity score generated by CLBLAST based on GPU of Android phone? #2133


Closed
hchenphd opened this issue Jul 7, 2023 · 9 comments

@hchenphd

hchenphd commented Jul 7, 2023

I tried to run the perplexity tool with CLBlast on the GPU of a Xiaomi phone. The same model was already tested on a MacBook, but on the phone the perplexity scores are really huge, like [1]2717.5986,[2]3794.2774.

I am confused. Can anyone give some hints about this issue?

./bin/perplexity -m ../../storage/downloads/llm-models/ggml-model-q4_0.bin -f ../../storage/downloads/wikitext-2-raw/wiki.test.raw

main: build = 787 (7f0e9a7)
main: seed = 1688739070
ggml_opencl: selecting platform: 'QUALCOMM Snapdragon(TM)'
ggml_opencl: selecting device: 'QUALCOMM Adreno(TM)'
ggml_opencl: device FP16 support: true
llama.cpp: loading model from ../../storage/downloads/llm-models/ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: using OpenCL for GPU acceleration
llama_model_load_internal: mem required = 5407.72 MB (+ 1026.00 MB per state)
llama_model_load_internal: offloading 0 repeating layers to GPU
llama_model_load_internal: offloaded 0/35 layers to GPU
llama_model_load_internal: total VRAM used: 0 MB
llama_new_context_with_model: kv self size = 256.00 MB

system_info: n_threads = 4 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
perplexity: calculating perplexity over 160 chunks, batch_size=512
perplexity: 78.23 seconds per pass - ETA 3 hours 28 minutes
[1]2717.5986,[2]3794.2774,

@ggerganov
Member

How are the values with CPU-only?

> perplexity: calculating perplexity over 160 chunks, batch_size=512

This is unexpected: wiki.test.raw should have 655 chunks. Double-check the contents of the file.
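For reference, the chunk count is just the tokenized text length divided by the context size, so a truncated file directly shrinks the number of chunks. A minimal sketch of that relationship (the token counts below are back-derived from the chunk counts, not measured; the exact number depends on the tokenizer):

```python
# How the perplexity tool derives its chunk count: integer division of
# the total token count by the context window.
n_ctx = 512  # context size used by ./bin/perplexity in this run

def n_chunks(n_tokens: int, n_ctx: int) -> int:
    return n_tokens // n_ctx

# Assumed approximate token counts, back-derived from the reported chunks.
n_tokens_full = 655 * n_ctx       # full wiki.test.raw, ~335k tokens
n_tokens_truncated = 160 * n_ctx  # the manually trimmed file

print(n_chunks(n_tokens_full, n_ctx))       # 655
print(n_chunks(n_tokens_truncated, n_ctx))  # 160
```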

@hchenphd
Author

hchenphd commented Jul 8, 2023

Thanks for your reply.

> Double check the contents of the file

Sorry for my carelessness. I had deleted some contents of wiki.test.raw earlier (I thought the file was too large for a perplexity calculation on a mobile device), so the chunk count was wrong (160 chunks instead of 655).

However, I then used the original wiki.test.raw file (I can confirm this file is complete) to calculate perplexity. The perplexity is still very huge, like [1]2717.5986,[2]3794.2774.

~/llama.cpp/build-test $ ./bin/perplexity -m ../../storage/downloads/llm-models/ggml-model-q4_0.bin -f ../../storage/downloads/wikitext-2-raw/wiki.test.raw -t 8
main: build = 787 (7f0e9a7)
main: seed = 1688745959
ggml_opencl: selecting platform: 'QUALCOMM Snapdragon(TM)'
ggml_opencl: selecting device: 'QUALCOMM Adreno(TM)'
ggml_opencl: device FP16 support: true
llama.cpp: loading model from ../../storage/downloads/llm-models/ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: using OpenCL for GPU acceleration
llama_model_load_internal: mem required = 5407.72 MB (+ 1026.00 MB per state)
llama_model_load_internal: offloading 0 repeating layers to GPU
llama_model_load_internal: offloaded 0/35 layers to GPU
llama_model_load_internal: total VRAM used: 0 MB
llama_new_context_with_model: kv self size = 256.00 MB

system_info: n_threads = 8 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
perplexity: calculating perplexity over 655 chunks, batch_size=512
perplexity: 87.65 seconds per pass - ETA 15 hours 56 minutes
[1]2717.5986,[2]3794.2774

> How are the values with CPU-only?

Here are the results with CPU only; the perplexity scores are [1]4.4625 and [2]4.9528.

/bin/perplexity -m ../../storage/downloads/llm-models/ggml-model-q4_0.bin -f ../../storage/downloads/wikitext-2-raw/wiki.test.raw -t 8
main: build = 787 (7f0e9a7)
main: seed = 1688800782
llama.cpp: loading model from ../../storage/downloads/llm-models/ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: mem required = 5407.72 MB (+ 1026.00 MB per state)
llama_new_context_with_model: kv self size = 256.00 MB

system_info: n_threads = 8 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
perplexity: calculating perplexity over 655 chunks, batch_size=512
perplexity: 2202.55 seconds per pass - ETA 400 hours 44 minutes
[1]4.4625,[2]4.9528,
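As a sanity check on what these numbers mean: perplexity is the exponential of the mean per-token negative log-likelihood, so a score near 5 means an effective 1-in-5 choice per token, while ~2700 is not far from the 32000 you would get from uniform guessing over the vocabulary, i.e. the GPU path is producing near-garbage logits rather than slightly noisy ones. A minimal sketch (the probabilities are made-up illustrative values, not actual model output):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-likelihood over the tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

n_vocab = 32000
# A healthy 7B model: each target token gets decent probability.
good = [0.2] * 100
# Broken output: predictions only ~12x better than uniform over 32000.
broken = [12 / n_vocab] * 100

print(round(perplexity(good), 4))    # 5.0
print(round(perplexity(broken), 1))  # 2666.7, the same ballpark as the GPU runs
```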

@ghost

ghost commented Jul 8, 2023

> /bin/perplexity -m ../../storage/downloads/llm-models/ggml-model-q4_0.bin -f ../../storage/downloads/wikitext-2-raw/wiki.test.raw

Hi, it may not be related, but llama.cpp performance improves by loading models from the $HOME path.

Move the model to $HOME:
mv ~/storage/downloads/llm-models/ggml-model-q4_0.bin ~/

Edit: It's not related. I found the same error.

Here's CPU working as expected:

~/ollama (master) [1]> ./perplexity -m ~/wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin -f ~/storage/downloads/wiki.test.raw
main: build = 796 (31cfbb1)
main: seed  = 1688821396
llama.cpp: loading model from /data/data/com.termux/files/home/wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: mem required  = 5407.72 MB (+ 1026.00 MB per state)
llama_new_context_with_model: kv self size  =  256.00 MB

system_info: n_threads = 4 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
perplexity: calculating perplexity over 655 chunks, batch_size=512
perplexity: 155.99 seconds per pass - ETA 28 hours 22 minutes
[1]5.2248,[2]5.6920

Here's GPU with high perplexity:

./perplexity -m ~/wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin -f ~/storage/downloads/wiki.test.raw
main: build = 796 (31cfbb1)
main: seed  = 1688821757
ggml_opencl: selecting platform: 'QUALCOMM Snapdragon(TM)'
ggml_opencl: selecting device: 'QUALCOMM Adreno(TM)'
ggml_opencl: device FP16 support: true
llama.cpp: loading model from /data/data/com.termux/files/home/wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: using OpenCL for GPU acceleration
llama_model_load_internal: mem required  = 5407.72 MB (+ 1026.00 MB per state)
llama_model_load_internal: offloading 0 repeating layers to GPU
llama_model_load_internal: offloaded 0/33 layers to GPU
llama_model_load_internal: total VRAM used: 0 MB
llama_new_context_with_model: kv self size  =  256.00 MB

system_info: n_threads = 4 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
perplexity: calculating perplexity over 655 chunks, batch_size=512
perplexity: 90.42 seconds per pass - ETA 16 hours 27 minutes
[1]2713.9373,[2]3749.2572

GPU perplexity >2500!

Here's my device specs:

lscpu:

Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 8
  On-line CPU(s) list:  0-7
Vendor ID:              Qualcomm
  Model name:           Kryo-4XX-Silver
    Model:              14
    Thread(s) per core: 1
    Core(s) per socket: 4
    Socket(s):          1
    Stepping:           0xd
    CPU(s) scaling MHz: 62%
    CPU max MHz:        1785.6000
    CPU min MHz:        300.0000
    BogoMIPS:           38.40
    Flags:              fp asimd evtstrm aes pmull
                         sha1 sha2 crc32 atomics f
                        php asimdhp cpuid asimdrdm
                         lrcpc dcpop asimddp
  Model name:           Kryo-4XX-Gold
    Model:              14
    Thread(s) per core: 1
    Core(s) per socket: 2
    Socket(s):          2
    Stepping:           0xd
    CPU(s) scaling MHz: 69%
    CPU max MHz:        2841.6001
    CPU min MHz:        710.4000
    BogoMIPS:           38.40
    Flags:              fp asimd evtstrm aes pmull
                         sha1 sha2 crc32 atomics f
                        php asimdhp cpuid asimdrdm
                         lrcpc dcpop asimddp
Vulnerabilities:
  Itlb multihit:        Not affected
  L1tf:                 Not affected
  Mds:                  Not affected
  Meltdown:             Vulnerable
  Spec store bypass:    Vulnerable
  Spectre v1:           Mitigation; __user pointer
                         sanitization
  Spectre v2:           Mitigation; Branch predict
                        or hardening
  Srbds:                Not affected
  Tsx async abort:      Not affected

uname -a

Linux localhost 4.14.190-23725627-abG975WVLS8IWD1 #2 SMP PREEMPT Mon Apr 10 18:16:39 KST 2023 aarch64 Android

OpenCL:

u0_a1282@localhost ~/o/b/bin (master)> LD_LIBRARY_PATH=/vendor/lib64 clinfo                         
Number of platforms                               1
  Platform Name                                   QUALCOMM Snapdragon(TM)
  Platform Vendor                                 QUALCOMM
  Platform Version                                OpenCL 2.0 QUALCOMM build: commit #3dad7f8ed7 changeid #I593c16c433 Date: 10/01/21 Fri Local Branch:  Remote Branch: refs/tags/AU_LINUX_ANDROID_LA.UM.9.1.R1.11.00.00.604.073
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             

  Platform Name                                   QUALCOMM Snapdragon(TM)
Number of devices                                 1
  Device Name                                     QUALCOMM Adreno(TM)
  Device Vendor                                   QUALCOMM
  Device Vendor ID                                0x5143
  Device Version                                  OpenCL 2.0 Adreno(TM) 640
  Driver Version                                  OpenCL 2.0 QUALCOMM build: commit #3dad7f8ed7 changeid #I593c16c433 Date: 10/01/21 Fri Local Branch:  Remote Branch: refs/tags/AU_LINUX_ANDROID_LA.UM.9.1.R1.11.00.00.604.073 Compiler E031.37.12.01
  Device OpenCL C Version                         OpenCL C 2.0 Adreno(TM) 640
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               2
  Max clock frequency                             1MHz
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             1024
  Preferred work group size multiple (kernel)     128
  Preferred / native vector sizes
    char                                                 1 / 1
    short                                                1 / 1
    int                                                  1 / 1
    long                                                 1 / 0
    half                                                 1 / 1        (cl_khr_fp16)
    float                                                1 / 1
    double                                               0 / 0        (n/a)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (n/a)
  Address bits                                    64, Little-Endian
  Global memory size                              3911956480 (3.643GiB)
  Error Correction support                        No
  Max memory allocation                           977989120 (932.7MiB)
  Unified memory for Host and Device              Yes
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   Yes
    Fine-grained system sharing                   No
    Atomics                                       Yes
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Page size (QCOM)                                4096 bytes
  External memory padding (QCOM)                  0 bytes
  Preferred alignment for atomics
    SVM                                           128 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    65536 (64KiB)
  Preferred total size of global vars             1048576 (1024KiB)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        131072 (128KiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   64 bytes
    Pitch alignment for 2D image buffers          64 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             16384x16384x2048 pixels
    Max number of read image args                 128
    Max number of write image args                64
    Max number of read/write image args           64
  Max number of pipe args                         16
  Max active pipe reservations                    7680
  Max pipe packet size                            1024
  Local memory type                               Local
  Local memory size                               32768 (32KiB)
  Max number of constant args                     8
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     1024
  Queue properties (on host)
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Queue properties (on device)
    Out-of-order execution                        Yes
    Profiling                                     Yes
    Preferred size                                655376 (640KiB)
    Max size                                      655376 (640KiB)
  Max queues on device                            1
  Max events on device                            1024
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_3d_image_writes cl_img_egl_image cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_egl_event cl_khr_egl_image cl_khr_fp16 cl_khr_gl_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_image2d_from_buffer cl_khr_mipmap_image cl_khr_srgb_image_writes cl_khr_subgroups cl_qcom_create_buffer_from_image cl_qcom_ext_host_ptr cl_qcom_ion_host_ptr cl_qcom_perf_hint cl_qcom_other_image cl_qcom_subgroup_shuffle cl_qcom_vector_image_ops cl_qcom_extract_image_plane cl_qcom_android_native_buffer_host_ptr cl_qcom_protected_context cl_qcom_priority_hint cl_qcom_compressed_yuv_image_read cl_qcom_compressed_image cl_qcom_ext_host_ptr_iocoherent cl_qcom_accelerated_image_ops cl_qcom_ml_ops

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [P0]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
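One generic way to localize this kind of divergence (a sketch, not something I have run here) is to dump the logits of the first evaluated batch from a CPU run and a GPU run as raw float32, then diff them offline; the dump step itself is hypothetical (e.g. a temporary fwrite in the eval code, not an existing llama.cpp flag). With perplexity this high, the top-1 token should disagree almost everywhere:

```python
import numpy as np

def compare_logits(cpu_path, gpu_path, n_vocab=32000):
    """Compare two raw float32 logits dumps of shape (n_tokens, n_vocab)."""
    cpu = np.fromfile(cpu_path, dtype=np.float32).reshape(-1, n_vocab)
    gpu = np.fromfile(gpu_path, dtype=np.float32).reshape(-1, n_vocab)
    diff = np.abs(cpu - gpu)
    return {
        "max_abs_diff": float(diff.max()),
        "mean_abs_diff": float(diff.mean()),
        # fraction of positions where the predicted top-1 token disagrees
        "argmax_mismatch": float((cpu.argmax(1) != gpu.argmax(1)).mean()),
    }
```

A small rounding difference between BLAS backends would show up as a tiny `mean_abs_diff` and zero argmax mismatches; a broken OpenCL matmul would show large diffs on nearly every row.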

@hchenphd hchenphd changed the title Huge perplexity score generated by CLBLAST based on GPU of Andorid phone? Huge perplexity score generated by CLBLAST based on GPU of Android phone? Jul 8, 2023
@hchenphd
Author

hchenphd commented Jul 8, 2023

Yep, I can reproduce the bug.


NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [P0]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)

@ggerganov
Member

Since the CPU results look OK, this means that for some reason the OpenCL implementation fails on these devices.
Could be some memory alignment / padding issue
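One way an alignment/padding bug like this shows up: ggml's Q4_0 blocks are 18 bytes (a 2-byte fp16 scale followed by 32 packed 4-bit quants), so if the host and the OpenCL kernel disagree about the stride between blocks, every block after the first is read from the wrong offset and the dequantized weights become garbage. A toy Python illustration (the 18-byte layout matches Q4_0; the 20-byte "padded" stride is hypothetical, just to show the effect):

```python
BLOCK_BYTES = 18   # Q4_0: 2-byte fp16 scale + 16 bytes of 4-bit quants
N_BLOCKS = 4

# A buffer of blocks laid out contiguously, as the host writes them.
# Each block is filled with its own index so corruption is visible.
buf = b"".join(bytes([i]) * BLOCK_BYTES for i in range(N_BLOCKS))

def read_block(data, idx, stride):
    """Read block idx assuming a given stride between blocks."""
    return data[idx * stride : idx * stride + BLOCK_BYTES]

# Reading with the correct stride recovers each block intact...
ok = read_block(buf, 1, BLOCK_BYTES)
# ...but a reader assuming a (hypothetical) padded 20-byte stride
# crosses a block boundary and mixes bytes from two blocks.
bad = read_block(buf, 1, 20)

print(ok)   # 18 bytes of 0x01
print(bad)  # a mix of 0x01 and 0x02 bytes
```

Once that happens, the scales and quants no longer line up, which is consistent with the model still "running" but producing nonsense logits.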

@hchenphd
Author

hchenphd commented Jul 17, 2023

Since the CPU results look OK, this means that for some reason the OpenCL implementation fails on these devices.
Could be some memory alignment / padding issue

The model was quantized on a MacBook but inference runs on Android. Is it possible that endianness differs when reading or writing the model files?

ggml does not respect host endianness when reading or writing the files. Try to create the f16 and the quantized qx_y on the system itself. (Not sure if your SoC is big-endian.) Might be something else entirely, too.
#1455 (comment)
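For what it's worth, both Macs (x86 and Apple Silicon) and Snapdragon-based Android devices run little-endian, so a byte-order mismatch is unlikely here, but the failure mode is easy to demonstrate. A minimal sketch of what reading a float with the wrong byte order does:

```python
import struct

# Pack a float32 weight in little-endian order, the byte order a
# little-endian host would use when writing a ggml file.
weight = 1.5
data = struct.pack("<f", weight)

# Reading with the matching byte order recovers the value...
same = struct.unpack("<f", data)[0]
# ...while a big-endian reader interprets the same four bytes as a
# tiny denormal, nothing like the original weight.
swapped = struct.unpack(">f", data)[0]

print(same)     # 1.5
print(swapped)  # garbage
```

Weights corrupted this way would break output on the CPU as well, though, which is another reason to doubt endianness as the cause here.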

@ghost

ghost commented Aug 22, 2023

new info, same result:

/perplexity -m ~/llama-2-7b-chat.ggmlv3.q4_0.gguf -f ~/storage/downloads/wiki.test.raw         

main: build = 1015 (226255b)
main: seed  = 1692670820
ggml_opencl: selecting platform: 'QUALCOMM Snapdragon(TM)'
ggml_opencl: selecting device: 'QUALCOMM Adreno(TM)'
ggml_opencl: device FP16 support: true
llama_model_loader: loaded meta data with 15 key-value pairs and 291 tensors from /data/data/com.termux/files/home/llama-2-7b-chat.ggmlv3.q4_0.gguf (version GGUF V1 (latest))
..
perplexity: calculating perplexity over 655 chunks, batch_size=512                                                   
perplexity: 105.44 seconds per pass - ETA 19 hours 11.03 minutes                                                     
 [1]7800.5490,[2]10828.8153,^C

OpenCL produces enormous perplexity. CPU working as expected.

@hchenphd
Author

OpenCL produces enormous perplexity. CPU working as expected.

New observation: if you try a larger quantization (like Q4_1 or Q5_0), the perplexity score becomes infinite, too large for the system to represent.
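An infinite score is consistent with the earlier huge values: perplexity is the exponential of the mean negative log-likelihood over the evaluated tokens, so once the (broken) GPU output assigns vanishing probability to the correct tokens, the exponential overflows. A toy sketch with made-up probabilities (not llama.cpp's actual code):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-likelihood; inf on overflow."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    try:
        return math.exp(nll)
    except OverflowError:
        return math.inf

# A healthy model assigns reasonable probability to correct tokens.
print(perplexity([0.3, 0.2, 0.4]))     # a small single-digit value

# Broken logits assign near-zero probability to correct tokens.
print(perplexity([1e-320, 1e-320]))    # inf
```

So "infinite" here just means the mean NLL exceeded what `exp` can represent in a float64, not a qualitatively different bug than the [1]7800-range scores.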

Contributor

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.
