Could this be a bug in llama.cpp? Please forgive me if this is not a proper issue report.